mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-01-13 22:30:37 +08:00
如何记录你在终端中执行的所有操作
======

![](https://www.ostechnix.com/wp-content/uploads/2017/03/Record-Everything-You-Do-In-Terminal-720x340.png)

几天前,我们发布了一个解释如何[保存终端中的命令并按需使用][1]的指南。对于那些不想记忆冗长的 Linux 命令的人来说,这非常有用。今天,在本指南中,我们将看到如何使用 `script` 命令记录你在终端中执行的所有操作。你可能已经在终端中运行了一个命令,或创建了一个目录,或者安装了一个程序。`script` 命令会保存你在终端中执行的任何操作。如果你想知道你几小时或几天前做了什么,那么你可以查看它们。我知道,我知道,我们可以使用上/下箭头或 `history` 命令查看以前运行的命令。但是,你无法查看这些命令的输出。而 `script` 命令会记录并显示完整的终端会话活动。
`script` 命令会在终端中创建你所做的所有事情的记录。无论你是安装程序、创建目录/文件还是删除文件夹,一切都会被记录下来,包括命令和相应的输出。这个命令对那些想要一份交互式会话拷贝作为作业证明的人很有用。无论是学生还是导师,你都可以将所有在终端中执行的操作和所有输出复制一份。

### 在 Linux 中使用 script 命令记录终端中的所有内容

`script` 命令已预先安装在大多数现代 Linux 操作系统上,所以我们不用担心安装问题。

让我们继续看看如何实时使用它。

运行以下命令启动终端会话记录:
```
$ script -a my_terminal_activities
```

其中,`-a` 标志用于将输出追加到文件(记录)中,并保留以前的内容。上述命令会记录你在终端中执行的所有操作,并将输出追加到名为 `my_terminal_activities` 的文件中,保存在当前工作目录下。
示例输出:

```
Script started, file is my_terminal_activities
```

现在,在终端中运行一些随机的 Linux 命令。

```
$ mkdir ostechnix
$ cd ostechnix/
$ touch hello_world.txt
$ cd ..
$ uname -r
```

运行所有命令后,使用以下命令结束 `script` 命令的会话:

```
$ exit
```

示例输出:

```
exit
Script done, file is my_terminal_activities
```
如你所见,终端活动已存储在名为 `my_terminal_activities` 的文件中,保存在当前工作目录下。

要查看你的终端活动,只需在任何编辑器中打开此文件,或者使用 `cat` 命令直接显示它:

```
$ cat my_terminal_activities
```
示例输出:

```
Script started on Thu 09 Mar 2017 03:33:44 PM IST
[sk@sk]: ~>$ mkdir ostechnix
[sk@sk]: ~>$ cd ostechnix/
[sk@sk]: ~/ostechnix>$ touch hello_world.txt
[sk@sk]: ~/ostechnix>$ cd ..
[sk@sk]: ~>$ uname -r
4.9.11-1-ARCH
[sk@sk]: ~>$ exit
exit

Script done on Thu 09 Mar 2017 03:37:49 PM IST
```

正如你在上面的输出中看到的,`script` 命令记录了我所有的终端活动,包括 `script` 命令的开始和结束时间。很棒,不是吗?使用 `script` 命令的原因不仅仅是记录命令,还有命令的输出。简单地说,`script` 命令会记录你在终端上执行的所有操作。
### 结论

就像我说的那样,`script` 命令对于想要保留其终端活动记录的学生、教师和 Linux 用户非常有用。尽管有很多 CLI 和 GUI 工具可用来执行此操作,但 `script` 命令是记录终端会话活动的最简单快捷的方式。

就是这些,希望这对你有帮助。如果你发现本指南有用,请在你的社交、专业网络上分享,并支持我们。

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/record-everything-terminal/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/save-commands-terminal-use-demand/
云计算的成本
============================================================

### 两个开发团队的一天

> 两个开发团队的一天

![](https://cdn-images-1.medium.com/max/2000/1*nBZJgNXl54jzFKa91s1KfQ.png)

这两个团队被要求为一家全球化企业开发一个新的服务,该企业目前为全球数百万消费者提供服务。要开发的这项新服务需要满足以下基本需求:

1. 能够随时**扩展**以满足弹性需求
2. 具备应对数据中心故障的**弹性**
3. 确保数据**安全**以及数据受到保护
4. 为排错提供深入的**调试**功能
5. 项目必须能**迅速分发**
6. 服务构建和维护的**性价比**要高

就新服务来说,这看起来是非常标准的需求 — 从本质上看,传统专用基础设施上没有什么东西可以超越公共云了。
![](https://cdn-images-1.medium.com/max/1600/1*DgnAPA6P5R0yQiV8n6siJw.png)

* * *

#### 1 — 扩展以满足客户需求

当说到可扩展性时,这个新服务需要去满足客户变化无常的需求。我们构建的服务不可以拒绝任何请求,以防让公司遭受损失或者声誉受到影响。

**传统团队**

使用的是专用基础设施,架构体系的计算能力需要与峰值数据需求相匹配。对于负载变化无常的服务来说,大量昂贵的计算能力在低利用率时被浪费掉。

这是一种很浪费的方法 — 并且大量的资本支出会侵蚀掉你的利润。另外,这些未充分利用的庞大的服务器资源的维护也是一项很大的运营成本。这是一项你无法忽略的成本 — 我不得不再强调一下,为支持一个单一服务去维护一机柜的服务器是多么的浪费时间和金钱。

**云团队**

使用的是基于云的自动伸缩解决方案,应用会按需要进行自动扩展和收缩。也就是说你只需要支付你所消费的计算资源的费用。

一个架构良好的基于云的应用可以实现无缝地伸缩 — 并且还是自动进行的。开发团队只需要定义好自动伸缩的资源组即可,即当你的应用 CPU 利用率达到某个高位、或者每秒有多大请求数时启动多少实例,并且你可以根据你的意愿去定制这些规则。
* * *

#### 2 — 应对故障的弹性

当说到弹性时,将托管服务的基础设施放在同一个房间里并不是一个好的选择。如果你的应用托管在一个单一的数据中心 — (不是如果)发生某些失败时(LCTT 译注:指坍塌、地震、洪灾等),你的所有的东西都被埋了。

**传统团队**

满足这种基本需求的标准解决方案是,为实现局部弹性建立至少两个服务器 — 在地理上冗余的数据中心之间实施秒级复制。

开发团队需要一个负载均衡解决方案,以便于在发生饱和或者故障等事件时将流量转向到另一个节点 — 并且还要确保镜像节点之间,整个栈是持续完全同步的。

**云团队**

在 AWS 全球 50 个地区中,他们都提供多个_可用区_。每个区域由多个容错数据中心组成 — 通过自动故障切换功能,AWS 可以将服务无缝地转移到该地区的其它区中。

在一个 `CloudFormation` 模板中定义你的_基础设施即代码_,确保你的基础设施在自动伸缩事件中跨区保持一致 — 而对于流量的流向管理,AWS 负载均衡服务仅需要做很少的配置即可。
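作为示意,下面是一个 CloudFormation 模板中基于目标跟踪的自动伸缩策略片段(其中的资源名 `WebServerGroup` 和 70% 的阈值都是假设的示例,并非文中团队的真实配置):

```yaml
# 目标跟踪伸缩策略:让资源组的平均 CPU 利用率维持在 70% 左右
WebServerScalingPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup   # 假设已在同一模板中定义了该资源组
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ASGAverageCPUUtilization
      TargetValue: 70.0
```

有了这样的策略,AWS 会根据实际负载自动增减实例数量,而无需人工干预。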
* * *

#### 3 — 安全和数据保护

安全是一个组织中任何一个系统的基本要求。我想你肯定不想成为那些不幸遭遇安全问题的公司之一。

**传统团队**

为保证运行他们服务的基础服务器安全,他们不得不持续投入成本。这意味着将需要投资一个团队,以监视和识别安全威胁,并用来自不同数据源的跨多个供应商解决方案打上补丁。

**云团队**

使用公共云并不能免除来自安全方面的责任。云团队仍然需要提高警惕,但是并不需要去担心为底层基础设施打补丁的问题。AWS 将积极地对付各种零日漏洞 — 最近的一次是 Spectre 和 Meltdown。

利用来自 AWS 的身份管理和加密安全服务,可以让云团队专注于他们的应用 — 而不是无差别的安全管理。使用 CloudTrail 对 API 到 AWS 服务的调用做全面审计,可以实现透明的监视。

* * *
#### 4 — 监视和日志

任何基础设施和部署为服务的应用都需要严密监视实时数据。团队应该有一个可以访问的仪表板,当超过指标阈值时仪表板会显示警报,并能够在排错时提供与事件相关的日志。

**传统团队**

对于传统基础设施,将不得不在跨不同供应商和“雪花状”的解决方案上配置监视和报告解决方案。配置这些“见鬼的”解决方案将花费你大量的时间和精力 — 并且能够正确地实现你的目的是相当困难的。

对于大多数部署在专用基础设施上的应用来说,为了搞清楚你的应用为什么崩溃,你可以通过搜索保存在你的服务器文件系统上的日志文件来找到答案。为此你的团队需要通过 SSH 进入服务器,导航到日志文件所在的目录,然后浪费大量的时间,通过 `grep` 在成百上千的日志文件中寻找。如果你在一个横跨 60 台服务器上部署的应用中这么做 — 我能负责任地告诉你,这是一个极差的解决方案。

**云团队**

利用原生的 AWS 服务,如 CloudWatch 和 CloudTrail,来做云应用程序的监视是非常容易的。不需要很多的配置,开发团队就可以监视部署的服务上的各种指标 — 问题的排除过程也不再是个噩梦了。

对于传统的基础设施,团队需要构建自己的解决方案,配置他们的 REST API 或者服务去推送日志到一个聚合器。而得到这个“开箱即用”的解决方案将对生产力有极大的提升。
* * *

#### 5 — 加速开发进程

现在的商业环境中,快速上市的能力越来越重要。由于实施的延误所失去的机会成本,可能成为影响最终利润的一个主要因素。

**传统团队**

对于大多数组织,他们需要在新项目所需要的硬件采购、配置和部署上花费很长的时间 — 并且由于预测能力差,提前获得的额外的性能将造成大量的浪费。

而且还有可能的是,传统的开发团队在无数的“筒仓”中穿梭以及在移交创建的服务上花费数月的时间。项目的每一步都需要在数据库、系统、安全、以及网络管理方面进行独立的工作。

**云团队**

云团队开发新特性时,拥有大量的随时可投入生产系统的服务套件可供使用。这是开发者的天堂。每个 AWS 服务一般都有非常好的文档,并且可以通过你选择的语言以编程的方式去访问。

使用新的云架构,例如无服务器,开发团队可以在最小化冲突的前提下构建和部署一个可扩展的解决方案。比如,只需要几天时间就可以建立一个 [Imgur 的无服务器克隆][4],它具有图像识别的特性,内置一个产品级的监视/日志解决方案,并且它的弹性极好。

![](https://cdn-images-1.medium.com/max/1600/1*jHmtrp1OKM4mZVn-gSNoQg.png)

*如何建立一个 Imgur 的无服务器克隆*

如果必须要我亲自去设计弹性和可伸缩性,我可以向你保证,我会陷在这个项目的开发里 — 而且最终的产品将远不如目前的这个好。

从我实践的情况来看,使用无服务器架构的交付时间远小于在大多数公司中提供硬件所花费的时间。我只是简单地将一系列 AWS 服务和 Lambda 函数耦合到一起 — 然后,大功告成!我只专注于开发解决方案,而无差别的可伸缩性和弹性是由 AWS 为我处理的。

* * *
#### 关于云计算成本的结论

就弹性而言,云计算团队的按需扩展是当之无愧的赢家 — 因为他们仅为需要的计算能力埋单,而不需要为维护和给底层的物理基础设施打补丁付出相应的资源。

云计算也为开发团队提供一个可使用多个可用区的弹性架构、为每个服务构建的安全特性、持续的日志和监视工具、随用随付的服务、以及低成本的加速分发实践。

大多数情况下,云计算的成本要远低于为你的应用运行所需要的购买、支持、维护和设计按需基础架构的成本 — 并且云计算的麻烦事更少。

也有一些云计算比传统基础设施更昂贵的例子,比如在周末忘记关闭一些运行着的极其昂贵的测试机器的情况。

[Dropbox 在决定推出自己的基础设施并减少对 AWS 服务的依赖之后,在两年的时间内节省近 7500 万美元的费用,Dropbox…——www.geekwire.com][5][][6]

即便如此,这样的案例仍然是非常少见的。更不用说当初 Dropbox 也是从 AWS 上开始它的业务的 — 并且当它的业务达到一个临界点时,才决定离开这个平台。即便到现在,他们也已经进入到云计算的领域了,并且还在 AWS 和 GCP 上保留了 40% 的基础设施。

将云服务与基于单一“成本”指标(LCTT 译注:此处的“成本”仅指物理基础设施的购置成本)的传统基础设施比较的想法是极其幼稚的 — 公然无视云为开发团队和你的业务带来的一些主要的优势。

在极少数的情况下,云服务比传统基础设施产生更多的绝对成本 — 但它在开发团队的生产力、速度和创新方面仍然贡献着更好的价值。

![](https://cdn-images-1.medium.com/max/1600/1*IlrOdfYiujggbsYynTzzEQ.png)

*客户才不在乎你的数据中心呢*

_我非常乐意倾听你在云中开发的真实成本相关的经验和反馈!请在下面的评论区、Twitter [@Elliot_F][7] 上、或者直接在 [LinkedIn][8] 上联系我。_

--------------------------------------------------------------------------------

via: https://read.acloud.guru/the-true-cost-of-cloud-a-comparison-of-two-develop

作者:[Elliot Forbes][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -4,11 +4,12 @@
|
||||
> 这是许多事情的第一步
|
||||
|
||||
![women programming](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G "women programming")
|
||||
|
||||
图片提供 : [WOCinTech Chat][16]. 图片修改 : Opensource.com. [CC BY-SA 4.0][17]
|
||||
|
||||
有一个普遍的误解,那就是对开源做出贡献是一件很难的事。你可能会想,“有时我甚至不能理解我自己的代码;那我怎么可能理解别人的?”
|
||||
|
||||
放轻松。直到去年,我都以为是这样。阅读和理解他人的代码,然后把你自己的写在顶上,这是一件令人气馁的任务;但如果有合适的资源,这不像你想象的那么糟。
|
||||
放轻松。直到去年,我都以为是这样。阅读和理解他人的代码,然后在他们的基础上写上你自己的代码,这是一件令人气馁的任务;但如果有合适的资源,这不像你想象的那么糟。
|
||||
|
||||
第一步要做的是选择一个项目。这个决定是可能是一个菜鸟转变成一个老练的开源贡献者的关键一步。
|
||||
|
||||
@ -16,7 +17,7 @@
|
||||
|
||||
### 理解产品

在开始贡献之前,你需要理解项目是怎么工作的。为了理解这一点,你需要自己来尝试。如果你发现这个产品很有趣并且有用,它就值得你来做贡献。

初学者常常选择参与贡献那些他们没有使用过的软件。他们会失望,并且最终放弃贡献。如果你没有用过这个软件,你不会理解它是怎么工作的。如果你不理解它是怎么工作的,你怎么能解决 bug 或添加新特性呢?

这里介绍了怎么确认一个项目是否还是活跃的:

* **贡献者数量:** 不断增加的贡献者数量表明开发者社区乐于接受新的贡献者。
* **<ruby>提交<rt>commit</rt></ruby>频率:** 查看最近的提交时间。如果是一周之内,甚至是一两个月内,这个项目应该是定期维护的。
* **维护者数量:** 维护者的数量越多,你越可能得到指导。
* **聊天室或 IRC 活跃度:** 一个繁忙的聊天室意味着你的问题可以更快得到回复。
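如果想把上面的活跃度判断自动化,可以用一小段脚本根据最近一次提交的时间做粗略筛选。下面是一个简单的示意(其中 60 天的阈值只是示例值,并非通用标准):

```python
from datetime import datetime, timedelta

def is_actively_maintained(last_commit, now, max_age_days=60):
    """根据最近一次提交的时间,粗略判断项目是否仍在定期维护。"""
    return (now - last_commit) <= timedelta(days=max_age_days)

# 用法示例:以 2018-06-01 作为“当前时间”
now = datetime(2018, 6, 1)
print(is_actively_maintained(datetime(2018, 5, 10), now))  # True:三周前有提交
print(is_actively_maintained(datetime(2017, 12, 1), now))  # False:半年没有提交了
```

实际使用时,最近一次提交的时间可以从 `git log -1` 或代码托管平台的 API 中获得。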
### 新手资源

Coala 是一个开源项目的例子。它有自己的教程和文档,让你可以了解它的 API(每一个类和方法)。这个网站还设计了一个有吸引力的界面,让你有阅读的兴趣。

**文档:** 不管哪种水平的开发者都需要可靠的、被很好地维护的文档,来理解项目的细节。找找在 [GitHub][19](或者放在其它位置)或者类似于 [Read the Docs][20] 之类的独立站点上提供了完善文档的项目,这样可以帮助你深入了解代码。

![Coala Newcomers' Guide screen](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala-newcomers_guide.png?itok=G7mfPbXN "Coala Newcomers' Guide screen")

**教程:** 教程会给新手解释如何在项目里添加特性(然而,你不是在每个项目中都能找到它)。例如,Coala 提供了 [小熊编写指南][21](进行代码分析的<ruby>代码格式化<rt>linting</rt></ruby>工具的 Python 包装器)。

![Coala UI](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala_ui.png?itok=LR02629W "Coala User Interface screenshot")

**分类的<ruby>讨论点<rt>issue</rt></ruby>:** 对刚刚想明白如何选择第一个项目的初学者来说,选择一个讨论点是一个更加困难的任务。标签被设为“难度/低”、“难度/新手”、“利于初学者”,以及“<ruby>触手可及<rt>low-hanging fruit</rt></ruby>”都表明是对新手友好的。

![Coala labeled issues](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/coala_labeled_issues.png?itok=74qSjG_T "Coala labeled issues")

### 其他因素

![CI user pipeline log](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/ci_logs.png?itok=J3V8gbc7 "CI user pipeline log")
* **维护者对新的贡献者的态度:** 从我的经验来看,大部分开源贡献者都很乐于帮助他们项目里的新手。然而,当你问问题时,你也有可能遇到一些不太友好的人(甚至可能有点粗鲁)。不要因为这些人失去信心。他们只是因为在比他们经验更丰富的人那儿得不到发泄的机会而已。还有很多其他人愿意提供帮助。
* **审阅过程/机制:** 你的拉取请求将经历几遍你的同伴和有经验的开发者的查看和更改——这就是你学习软件开发最主要的方式。一个具有严格审阅过程的项目使您在编写生产级代码的过程中成长。
* **一个稳健的<ruby>持续集成<rt>continuous integration</rt></ruby>管道:** 开源项目会向新手们介绍持续集成和部署服务。一个稳健的 CI 管道将帮助你学习阅读和理解 CI 日志。它也将带给你处理失败的测试用例和代码覆盖率问题的经验。
* **参加编程项目(例如 [Google Summer Of Code][1]):** 参加组织证明了你乐于对一个项目的长期发展做贡献。他们也会给新手提供一个机会来获得现实世界中的开发经验,从而获得报酬。大多数参加这些项目的组织都欢迎新人加入。

### 7 个对新手友好的组织

* [coala (Python)][7]
* [oppia (Python, Django)][8]
* [DuckDuckGo (Perl, JavaScript)][9]
* [OpenGenus (JavaScript)][10]
* [Kinto (Python, JavaScript)][11]
* [FOSSASIA (Python, JavaScript)][12]
* [Kubernetes (Go)][13]
### 关于作者

[![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/img_20180309_001440858.jpg?itok=tG8yvrJF)][22]

Palash Nigam - 我是一个印度计算机科学专业本科生,十分乐于参与开源软件的开发,我在 GitHub 上花费了大部分的时间。我现在的兴趣包括 web 后端开发、区块链,以及一切与 Python 相关的东西。[更多关于我][14]

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/get-started-open-source-project

作者:[Palash Nigam][a]
译者:[lonaparte](https://github.com/lonaparte)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
3 个 Python 命令行工具
======

> 用 Click、Docopt 和 Fire 库写你自己的命令行应用。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-tool-box.png?itok=NrJYb417)

有时对于某项工作来说一个命令行工具就足以胜任。命令行工具是一种从你的 shell 或者终端之类的地方交互或运行的程序。[Git][2] 和 [Curl][3] 就是两个你也许已经很熟悉的命令行工具。

当你有一小段代码需要在一行中执行多次或者经常性地被执行,命令行工具就会很有用。Django 开发者执行 `./manage.py runserver` 命令来启动他们的网络服务器;Docker 开发者执行 `docker-compose up` 来启动他们的容器。你想要写一个命令行工具的原因可能和你一开始想写代码的原因有很大不同。

对于这个月的 Python 专栏,我们有 3 个库想介绍给希望为自己编写命令行工具的 Python 使用者。

### Click

[Click][4] 是我们最爱的用来开发命令行工具的 Python 包。其:

* 有一个富含例子的出色文档
* 包含了如何将命令行工具打包成一个更易于执行的 Python 应用程序的说明
* 自动生成实用的帮助文本
* 使你能够叠加使用可选和必要参数,甚至是 [多个命令][5]
* 有一个 Django 版本([`django-click`][6])用来编写管理命令

Click 使用 `@click.command()` 去声明一个函数作为命令,同时可以指定必要和可选参数。
```
# hello.py
import click


@click.command()
@click.option('--name', default='', help='Your name')
def say_hello(name):
    click.echo("Hello {}!".format(name))


if __name__ == '__main__':
    say_hello()
```

`@click.option()` 修饰器声明了一个 [可选参数][7],而 `@click.argument()` 修饰器声明了一个 [必要参数][8]。你可以通过叠加修饰器来组合可选和必要参数。`echo()` 方法将结果打印到控制台。

```
$ python hello.py --name='Lacey'
Hello Lacey!
```
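在上面例子的基础上,下面是一个同时使用 `@click.argument()`(必要参数)和 `@click.option()`(可选参数)的小示意,并用 Click 自带的 `CliRunner` 在 Python 里直接调用它(假设你已经安装了 Click;`greet`、`--greeting` 等名称均为示例):

```python
import click
from click.testing import CliRunner


@click.command()
@click.argument('name')                                       # 必要参数
@click.option('--greeting', default='Hello', help='问候语')   # 可选参数
def greet(name, greeting):
    click.echo("{} {}!".format(greeting, name))


# CliRunner 可以在不启动子进程的情况下调用命令,常用于测试
runner = CliRunner()
print(runner.invoke(greet, ['Lacey']).output)                      # Hello Lacey!
print(runner.invoke(greet, ['Lacey', '--greeting', 'Hi']).output)  # Hi Lacey!
```

如果不提供必要参数 `name`,Click 会自动报错并打印用法说明。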
### Docopt

[Docopt][9] 是一个命令行工具的解析器,类似于命令行工具的 Markdown。如果你喜欢流畅地编写应用文档,在本文推荐的库中 Docopt 有着最好的格式化帮助文本。它不是我们最爱的命令行工具开发包的原因是它的文档犹如把人扔进深渊,使你开始使用时会有一些小困难。然而,它仍是一个轻量级的、广受欢迎的库,特别是当一个漂亮的说明文档对你来说很重要的时候。

Docopt 对于如何格式化文章开头的 docstring 是很特别的。在工具名称后面的 docstring 中,顶部元素必须是 `Usage:`,并且需要列出你希望命令被调用的方式(比如:自身调用、使用参数等等)。`Usage:` 需要包含 `help` 和 `version` 参数。

docstring 中的第二个元素是 `Options:`,对于在 `Usage:` 中提及的可选项和参数,它应当提供更多的信息。你的 docstring 的内容变成了你帮助文本的内容。

```
"""HELLO CLI

Usage:
    hello.py
    hello.py <name>
    hello.py -h|--help
    hello.py -v|--version

Options:
    <name>  Optional name argument.
    -h --help  Show this screen.
    -v --version  Show version.
"""

from docopt import docopt


def say_hello(name):
    return("Hello {}!".format(name))


if __name__ == '__main__':
    arguments = docopt(__doc__, version='DEMO 1.0')
    if arguments['<name>']:
        print(say_hello(arguments['<name>']))
    else:
        print(arguments)
```

在最基本的层面,Docopt 被设计用来返回你的参数键值对。如果我不指定上述的 `name` 就调用上面的命令,我会得到一个字典的返回值:

```
$ python hello.py
{'--help': False,
 '--version': False,
 '<name>': None}
```

这里可以看到,我没有输入 `help` 和 `version` 标记,并且 `name` 参数是 `None`。

但是如果我带着一个 `name` 参数调用,`say_hello` 函数就会执行了。

```
$ python hello.py Jeff
Hello Jeff!
```

Docopt 允许同时指定必要和可选参数,且各自有着不同的语法约定。必要参数需要用全大写(`ALLCAPS`)或尖括号(`<carets>`)表示,而可选参数需要用单横杠或双横杠表示,就像 `--like`。更多内容可以阅读 Docopt 有关 [模式][10] 的文档。
### Fire

[Fire][11] 是谷歌的一个命令行工具开发库。尤其令人喜欢的是当你的命令需要更多复杂参数或者处理 Python 对象时,它会聪明地尝试解析你的参数类型。

Fire 的 [文档][12] 包括了海量的样例,但是我希望这些文档能被更好地组织。Fire 能够处理 [同一个文件中的多条命令][13]、使用 [对象][14] 的方法作为命令和 [分组][15] 命令。

它的弱点在于输出到控制台的文档。命令行中的 docstring 不会出现在帮助文本中,并且帮助文本也不一定标识出参数。

```
import fire


def say_hello(name=''):
    return 'Hello {}!'.format(name)


if __name__ == '__main__':
    fire.Fire()
```

参数是必要还是可选取决于你是否在函数或者方法定义中为其指定了一个默认值。要调用命令,你必须指定文件名和函数名,比较类似 Click 的语法:

```
$ python hello.py say_hello Rikki
Hello Rikki!
```

你还可以像标记一样传参,比如 `--name=Rikki`。
### 额外赠送:打包!

Click 包含了使用 `setuptools` [打包][16] 命令行工具的使用说明(强烈推荐按照说明操作)。

要打包我们第一个例子中的命令行工具,将以下内容加到你的 `setup.py` 文件里:

```
from setuptools import setup

setup(
    name='hello',
    version='0.1',
    py_modules=['hello'],
    install_requires=[
        'Click',
    ],
    entry_points='''
        [console_scripts]
        hello=hello:say_hello
    ''',
)
```

任何你看见 `hello` 的地方,使用你自己的模块名称替换掉,但是要记得忽略 `.py` 后缀名。将 `say_hello` 替换成你的函数名称。

然后,执行 `pip install --editable .` 来使你的命令在命令行中可用。

现在你可以调用你的命令,就像这样:

```
$ hello --name='Jeff'
Hello Jeff!
```

通过打包你的命令,你可以省掉在控制台键入 `python hello.py --name='Jeff'` 这种额外的步骤以减少键盘敲击。这些指令也很可能可以在我们提到的其他库中使用。
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/3-python-command-line-tools

作者:[Jeff Triplett][a],[Lacey Williams Hensche][1]
选题:[lujun9972](https://github.com/lujun9972)
译者:[hoppipolla-](https://github.com/hoppipolla-)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/laceynwilliams
[1]:https://opensource.com/users/laceynwilliams
[2]:https://git-scm.com/
[3]:https://curl.haxx.se/
[4]:http://click.pocoo.org/5/
[5]:http://click.pocoo.org/5/commands/
[6]:https://github.com/GaretJax/django-click
[7]:http://click.pocoo.org/5/options/
[8]:http://click.pocoo.org/5/arguments/
[9]:http://docopt.org/
[10]:https://github.com/docopt/docopt#usage-pattern-format
[11]:https://github.com/google/python-fire
[12]:https://github.com/google/python-fire/blob/master/docs/guide.md
[13]:https://github.com/google/python-fire/blob/master/docs/guide.md#exposing-multiple-commands
[14]:https://github.com/google/python-fire/blob/master/docs/guide.md#version-3-firefireobject
[15]:https://github.com/google/python-fire/blob/master/docs/guide.md#grouping-commands
[16]:http://click.pocoo.org/5/setuptools/
Intel 和 AMD 透露新的处理器设计
======

> Whiskey Lake U 系列和 Amber Lake Y 系列的酷睿芯片将会在今年秋季开始出现在超过 70 款笔记本以及 2 合 1 机型中。

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/whiskey-lake.jpg?itok=b1yuW71L)

根据最近的台北国际电脑展(Computex 2018)以及最近其它的消息,处理器成为科技新闻圈中最前沿的话题。Intel 发布了一些公告,涉及从新的酷睿处理器到延长电池续航的尖端技术。与此同时,AMD 亮相了第二代 32 核心的高端游戏处理器线程撕裂者(Threadripper)以及一些适合嵌入式的新型号锐龙 Ryzen 处理器。

以下是对 Intel 和 AMD 主要公布产品的快速浏览,针对那些嵌入式 Linux 开发者最感兴趣的处理器。

在四月份,Intel 已经宣布量产 10nm 制程的 Cannon Lake 系列酷睿处理器将会延期到 2019 年,这件事引起了人们对摩尔定律最终走上正轨的议论。然而,在 Intel 的 [Computex 展区][1] 中有着众多让人欣慰的消息。Intel 展示了两款节能的第八代 14nm 酷睿家族产品,同时也是 Intel 首款 5GHz 的设计。

Whiskey Lake U 系列和 Amber Lake Y 系列的酷睿芯片将会在今年秋季开始出现在超过 70 款笔记本以及 2 合 1 机型中。Intel 表示,这些芯片相较于第七代的 Kaby Lake 酷睿系列处理器会带来两倍的性能提升。新的产品家族将会相比于目前出现的搭载 [Coffee Lake][2] 芯片的产品更加节能。

Whiskey Lake 和 Amber Lake 两者将会配备 Intel 高性能千兆 WiFi(Intel 9560 AC),该网卡同样出现在 [Gemini Lake][3] 架构的奔腾银牌和赛扬处理器,随之出现在 Apollo Lake 一代。千兆 WiFi 本质上就是 Intel 将 2×2 MU-MIMO 和 160MHz 信道技术与 802.11ac 结合。

Intel 的 Whiskey Lake 将作为第七代和第八代 Skylake U 系列处理器

[PC World][6] 报导称,Amber Lake Y 系列芯片主要目标定位是 2 合 1 机型。就像双核的 [Kaby Lake Y 系列][5] 芯片,Amber Lake 将会支持 4.5W TDP。

为了庆祝 Intel 即将到来的 50 周年庆典,同样也是作为世界上第一款 8086 处理器的 40 周年庆典,Intel 将推出一款限量版、时钟频率 4GHz 的第八代 [酷睿 i7-8086K][7] CPU。这款 64 位限量版产品将会是第一块单核睿频加速可达 5GHz 的处理器,并且是首款带有集成显卡的 6 核 12 线程处理器。Intel 将会于 6 月 7 日开始 [赠送][8] 8086 块超频酷睿 i7-8086K 芯片。

Intel 也展示了计划于今年年底推出的新的高端 Core X 系列,拥有更高的核心数和线程数。[AnandTech 预测][9] 可能会使用类似于 Xeon 的 Cascade Lake 架构。今年晚些时候,Intel 将会公布新的酷睿 S 系列型号,AnandTech 预测它可能会是八核心的 Coffee Lake 芯片。

Intel 也表示第一款疾速傲腾 SSD —— 一个 M.2 接口产品被称作

### AMD 继续翻身

在展会中,AMD 亮相了第二代拥有 32 核 64 线程的线程撕裂者(Threadripper)CPU。为了走在 Intel 尚未命名的 28 核怪兽之前,这款高端游戏处理器将会在第三季度推出。根据 [Engadget][11] 的消息,新的线程撕裂者同样采用了被用在锐龙 Ryzen 芯片上的 12nm Zen+ 架构。

[WCCFTech][12] 报导,AMD 也展示了其 7nm Vega Instinct GPU(为拥有 32GB 昂贵的 HBM2 显存而不是 GDDR5X 或 GDDR6 的显卡而设计)。这款 Vega Instinct 将提供相比现今 14nm Vega GPU 高出 35% 的性能和两倍的功效效率。新的渲染能力将会帮助它同 Nvidia 启用 CUDA 技术的 GPU 在光线追踪中竞争。

一些新的 Ryzen 2000 系列处理器近期出现在一个 ASRock CPU 聊天室,它们拥有比主流的 Ryzen 芯片更低的功耗。[AnandTech][13] 详细介绍说,2.8GHz、8 核心、16 线程的 Ryzen 7 2700E 和 3.4GHz/3.9GHz、六核、12 线程的 Ryzen 5 2600E 都将拥有 45W TDP。这比 12-54W TDP 的 [Ryzen Embedded V1000][2] 处理器更高,但低于 65W 甚至更高的主流 Ryzen 芯片。新的 Ryzen-E 型号是针对 SFF(<ruby>小外形<rt>small form factor</rt></ruby>)和无风扇系统的。

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/intel-amd-and-arm-reveal-new-processor-de

作者:[Eric Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[softpaopao](https://github.com/softpaopao)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
4 种用于构建嵌入式 Linux 系统的工具
======

> 了解 Yocto、Buildroot、OpenWRT 和改造过的桌面发行版,以确定哪种方式最适合你的项目。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)

Linux 被部署到的设备,远比 Linus Torvalds 当年在他的宿舍里开发时所预期的更广泛。对各种芯片的支持之广令人震惊,使得 Linux 可以应用于大大小小的设备上:从 [IBM 的巨型机][1] 到不如其连接的端口大的 [微型设备][2],以及各种大小的设备。它被用于大型企业数据中心、互联网基础设施设备和个人的开发系统,还为消费类电子产品、移动电话和许多物联网设备提供了动力。

在为桌面和企业级设备构建 Linux 软件时,开发者通常在他们的构建机器上使用桌面发行版,如 [Ubuntu][3],以便尽可能与被部署的机器相似。如 [VirtualBox][4] 和 [Docker][5] 这样的工具使得开发、测试和生产环境更好地保持了一致。

### 什么是嵌入式系统?

维基百科将 [嵌入式系统][6] 定义为:“在更大的机械或电气系统中具有专用功能的计算机系统,往往伴随着实时计算限制。”
我觉得可以很简单地说,嵌入式系统是大多数人不认为是计算机的计算机。它的主要作用是作为某种设备,而不被视为通用计算平台。

嵌入式系统编程的开发环境通常与测试和生产环境大不相同。它们可能会使用不同的芯片架构、软件堆栈甚至操作系统。开发工作流程对于嵌入式开发人员与桌面和 Web 开发人员来说是非常不同的。通常,其构建后的输出将包含目标设备的整个软件映像,包括内核、设备驱动程序、库和应用程序软件(有时也包括引导加载程序)。

在本文中,我将对构建嵌入式 Linux 系统的四种常用方式进行纵览。我将介绍一下每种方式的工作原理,并提供足够的信息来帮助读者确定使用哪种工具进行设计。我不会教你如何使用它们中的任何一个;一旦缩小了选择范围,就有大量深入的在线学习资源。没有任何一种选择适用于所有情况,我希望提供足够的细节来指导您的决定。

### Yocto

[Yocto][7] 项目的 [定义][8] 是“一个开源协作项目,提供模板、工具和方法,帮助您为嵌入式产品创建定制的基于 Linux 的系统,而不管硬件架构如何。”它是用于创建定制的 Linux 运行时映像的配方、配置值和依赖关系的集合,可根据您的特定需求进行定制。

完全公开:我在嵌入式 Linux 中的大部分工作都集中在 Yocto 项目上,我对这个系统的认识和偏好可能很明显。

Yocto 使用 [OpenEmbedded][9] 作为其构建系统。从技术上讲,这两个是独立的项目;然而,在实践中,用户不需要了解区别,项目名称经常可以互换使用。

Yocto 项目的输出大致由三部分组成:

* **目标运行时二进制文件:**这些包括引导加载程序、内核、内核模块、根文件系统映像,以及将 Linux 部署到目标平台所需的任何其他辅助文件。
* **软件包流:**这是可以安装在目标上的软件包集合。您可以根据需要选择软件包格式(例如 deb、rpm、ipk)。其中一些可能预先安装在目标运行时二进制文件中,也可以构建软件包用于安装到已部署的系统上。
* **目标 SDK:**这些是安装在目标平台上的软件的库和头文件的集合。应用程序开发人员在构建代码时使用它们,以确保它们与适当的库链接。

#### 优点

Yocto 项目在行业中得到广泛应用,并得到许多有影响力的公司的支持。此外,它还拥有一个庞大且充满活力的开发人员 [社区][10] 和 [生态系统][11]。开源爱好者和企业赞助商相结合的方式有助于推动 Yocto 项目。

获得 Yocto 的支持有很多选择。如果您想自己动手,有书籍和其他培训材料。如果您想获得专业知识,有许多有 Yocto 经验的工程师。而且许多商业组织可以为您的设计提供基于 Yocto 的一站式产品,或基于服务的实施和定制。

Yocto 项目很容易通过 [层][12] 进行扩展。层可以独立发布,以添加额外的功能、支持项目发布时尚不可用的平台,或用于保存系统特有的定制功能。层可以添加到你的配置中,以添加未特别包含在市面上版本中的独特功能;例如,“[meta-browser][13]” 层包含 Web 浏览器的清单,可以轻松为您的系统进行构建。因为它们是独立维护的,所以层可以按不同的时间发布(根据层的开发速度),而不是跟着标准的 Yocto 版本发布。
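把一个层加入构建,通常只需要把它的路径加到 `conf/bblayers.conf` 里。下面是一个简单的示意片段(其中的路径均为假设的示例):

```
# conf/bblayers.conf 片段:把 meta-browser 层加入构建
BBLAYERS ?= " \
  /home/user/poky/meta \
  /home/user/poky/meta-poky \
  /home/user/poky/meta-yocto-bsp \
  /home/user/meta-browser \
  "
```

加入之后,该层提供的清单(例如浏览器)就可以像自带清单一样用 `bitbake` 构建了。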
Yocto 可以说是本文讨论的方式中设备支持最广泛的。由于许多半导体和电路板制造商的支持,Yocto 很可能能够支持您选择的任何目标平台。Yocto 主 [分支][14] 仅支持少数几块主板(以便达成合理的测试和发布周期),但是,标准工作模式是使用外部主板支持层。

最后,Yocto 非常灵活和可定制。您的特定应用程序的自定义可以存储在一个层进行封装和隔离,通常将要素层特有的自定义项存储为层本身的一部分,这可以将相同的设置同时应用于多个系统配置。Yocto 还提供了一个定义良好的层优先级和覆盖功能。这使您可以定义层应用和搜索元数据的顺序。它还使您可以用具有更高优先级的层来覆盖设置;例如,现有清单的许多自定义功能都将保留。

#### 缺点

Yocto 项目最大的缺点是学习曲线陡峭。学习该系统并真正理解它需要花费大量的时间和精力。根据您的需求,这可能是对您的应用程序并不重要的技术和能力的过大投入。在这种情况下,与一家商业供应商合作可能是一个不错的选择。

Yocto 项目的开发时间和资源消耗相当高。需要构建的包(包括工具链、内核和所有目标运行时组件)的数量相当多。Yocto 开发人员的开发工作站往往是大型系统,不建议使用小型笔记本电脑。这可以通过使用许多提供商提供的基于云的构建服务器来缓解。另外,Yocto 有一个内置的缓存机制,当它确定用于构建特定包的参数没有改变时,允许它重新使用先前构建的组件。

#### 建议

为您的下一个嵌入式 Linux 设计使用 Yocto 项目是一个强有力的选择。在这里介绍的选项中,无论您的目标用例如何,它都是最广泛适用的。广泛的行业支持、活跃的社区和广泛的平台支持,使其成为设计师的不错选择。
### Buildroot

[Buildroot][15] 项目定义为“通过交叉编译生成嵌入式 Linux 系统的简单、高效且易于使用的工具。”它与 Yocto 项目具有许多相同的目标,但它注重简单性和简约性。一般来说,Buildroot 会禁用所有软件包的所有可选编译时设置(有一些值得注意的例外),从而生成尽可能小的系统。系统设计人员需要启用适用于给定设备的设置。

Buildroot 从源代码构建所有组件,但不支持在目标上进行软件包管理。因此,它有时被称为固件生成器,因为镜像在构建时就大部分固定了。应用程序可以更新目标文件系统,但是没有机制将新软件包安装到正在运行的系统中。

Buildroot 的输出主要由三部分组成:

* 将 Linux 部署到目标平台所需的根文件系统映像和任何其他辅助文件
* 适用于目标硬件的内核、引导加载程序和内核模块
* 用于构建所有目标二进制文件的工具链

#### 优点

Buildroot 对简单性的关注意味着,一般来说,它比 Yocto 更容易学习。核心构建系统用 Make 编写,并且足够短,以便开发人员了解整个系统,同时又足够可扩展,能满足嵌入式 Linux 开发人员的需求。Buildroot 核心通常只处理常见用例,但它可以通过脚本进行扩展。

Buildroot 系统使用普通的 Makefile 和 Kconfig 语言来进行配置。Kconfig 由 Linux 内核社区开发,广泛用于开源项目,使得许多开发人员都熟悉它。
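Buildroot 的配置最终会保存为一个 defconfig 文件,其中每一行都是一个 Kconfig 选项。下面是一个简单的示意片段(具体选项名以你所用的 Buildroot 版本为准):

```
# 一个最小化 Buildroot 配置的片段示例
# 目标架构与工具链
BR2_aarch64=y
BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
# 基础用户空间工具和轻量级 SSH 服务
BR2_PACKAGE_BUSYBOX=y
BR2_PACKAGE_DROPBEAR=y
# 生成 ext2 根文件系统镜像
BR2_TARGET_ROOTFS_EXT2=y
```

这些选项通常通过 `make menuconfig` 交互式启用,再用 `make savedefconfig` 保存,以便在版本控制中跟踪。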
由于禁用所有可选的构建时设置这一设计目标,Buildroot 通常会使用开箱即用的配置生成尽可能小的镜像。一般来说,构建时间和对构建主机资源的要求都比 Yocto 项目的更小。

#### 缺点

对简单性和最小化启用构建的关注意味着,您可能需要执行大量的自定义来为应用程序配置 Buildroot 构建。此外,所有配置选项都存储在单个文件中,这意味着如果您有多个硬件平台,则需要为每个平台分别做每个定制更改。

对系统配置文件的任何更改都需要重新构建所有软件包。与 Yocto 相比,这个问题因最小的镜像大小和构建时间而得到了一定的缓解,但在你调整配置时可能会导致构建时间过长。

中间软件包状态缓存默认情况下未启用,并且不像 Yocto 的实现那么彻底。这意味着,虽然第一次构建可能比等效的 Yocto 构建短,但后续构建可能需要重新构建许多组件。

#### 建议

对于大多数应用程序,使用 Buildroot 进行下一个嵌入式 Linux 设计是一个不错的选择。如果您的设计需要多种硬件类型或其他差异,由于同步多个配置的复杂性,您可能需要重新考虑;但对于由单一配置组成的系统,Buildroot 可能很适合您。
### OpenWRT/LEDE

[OpenWRT][16] 项目开始为消费类路由器开发定制固件。您当地零售商提供的许多低成本路由器都可以运行 Linux 系统,但可能无法开箱即用。这些路由器的制造商可能无法提供频繁的更新来解决新的威胁,即使他们这样做,安装更新镜像的机制也很困难且容易出错。OpenWRT 项目为许多已被其制造商放弃的设备生成更新的固件镜像,让这些设备焕发新生。

OpenWRT 项目的主要交付物是可用于大量商业设备的二进制镜像。它有网络可访问的软件包存储库,允许设备最终用户将新软件添加到他们的系统中。OpenWRT 构建系统是一个通用构建系统,它允许开发人员创建自定义版本以满足他们自己的需求并添加新软件包,但其主要重点是目标二进制文件。

#### 优点

如果您正在为商业设备寻找替代固件,则 OpenWRT 应位于您的选项列表中。它的维护良好,可以保护您免受制造商固件无法解决的问题。您也可以添加额外的功能,使您的设备更有用。

如果您的嵌入式设计专注于网络,则 OpenWRT 是一个不错的选择。网络应用程序是 OpenWRT 的主要用例,您可能会发现许多可用的软件包。

#### 缺点

OpenWRT 对您的设计限制很多(与 Yocto 和 Buildroot 相比)。如果这些决定不符合您的设计目标,则可能需要进行大量的修改。

在部署的设备中允许基于软件包的更新是很难管理的。按照其定义,这会导致与您的 QA 团队测试的软件负载不同。此外,很难保证大多数软件包管理器的原子安装,以及错误的电源循环可能会使您的设备处于不可预知的状态。

#### 建议

OpenWRT 是爱好者项目或商用硬件再利用的不错选择。它也是网络应用程序的不错选择。如果您需要从默认设置进行大量定制,您可能更喜欢 Buildroot 或 Yocto。
### 桌面发行版
|
||||
|
||||
设计嵌入式 Linux 系统的一种常见方法是从桌面发行版开始,例如 [Debian][17] 或 [Red Hat][18],并删除不需要的组件,直到安装的镜像符合目标设备的占用空间。这是 [Raspberry Pi][20] 平台流行的 [Raspbian][19]发行版的方法。
#### 优点

这种方法的主要优点是熟悉。通常,嵌入式 Linux 开发人员同时也是桌面 Linux 用户,并且精通自己选用的发行版。在目标设备上使用类似的环境可以让开发人员更快上手。根据所选的发行版,还可以使用 apt 和 yum 等标准打包工具安装许多其他工具。

可以将显示器和键盘连接到目标设备,直接在设备上进行所有开发。对于初涉嵌入式领域的开发人员来说,这可能是一个更为熟悉的环境,无需配置和使用棘手的交叉开发环境。

大多数桌面发行版可用的软件包数量通常多于前面讨论的那些嵌入式专用构建系统。由于用户群更大、用例更广,您很可能找到应用程序所需的所有运行时软件包,它们已经构建好并随时可用。

#### 缺点

将目标平台作为您的主要开发环境可能会很慢。运行编译器是一项资源密集型操作,取决于要构建的代码量,这可能会严重拖累您的开发效率。

除了少数例外,桌面发行版在设计上并不适合低资源系统,可能难以把目标镜像裁剪得足够小。同样,桌面环境中预设的工作流程对大多数嵌入式设计来说也并不理想。以这种方式获得可复现的环境很困难:手动添加和删除软件包很容易出错。这一过程可以用特定于发行版的工具进行脚本化,例如基于 Debian 系统的 [debootstrap][21]。为了进一步提高可复现性,您还可以使用配置管理工具,例如 [CFEngine][22](完整披露:该工具出自我的雇主 [Mender.io][23])。但是,您仍然受制于发行版提供商,他们更新软件包是为了满足他们的需求,而不是您的需求。
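以 Debian 为例,把“生成最小根文件系统”脚本化的起点大致如下(目标目录与镜像地址仅为示例,命令需要 root 权限和网络):

```
# 用 debootstrap 在指定目录中安装一个最小(minbase 变体)的 Debian 根文件系统
$ sudo debootstrap --variant=minbase stable /srv/target-rootfs http://deb.debian.org/debian
```

把这一步连同后续的软件包增删写进脚本或配置管理工具,会比手动操作更容易复现出同一份镜像。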
#### 建议

对于您打算推向市场的产品,请谨慎使用此方法。对于爱好者应用来说,这是一个很好的模型;但对于需要长期支持的产品,这种方法很可能会带来麻烦。虽然起步可能更快,但从长远来看,它可能会耗费您更多的时间和精力。
### 其他考虑

这里的讨论集中在构建系统的功能上,但通常还有一些非功能性需求会影响您的决策。如果您已经选定了片上系统(SoC)或电路板,那么您的选择很可能由供应商决定。如果您的供应商为特定系统提供了板级支持包(BSP),使用它通常会节省相当多的时间,但请先考察 BSP 的质量,以免在开发周期后期出现问题。

如果您的预算允许,您可能需要考虑为目标操作系统选用商业供应商。有些公司会为这里讨论的许多选项提供经过验证和支持的配置。除非您自己拥有嵌入式 Linux 构建系统方面的专业知识,否则这是一个不错的选择,可以让您专注于自己的核心能力。

作为替代,您可以考虑为您的开发人员购买商业培训。这可能比商业操作系统供应商便宜,并能让您更加自给自足,也是帮助团队快速掌握所选构建系统的基础知识、缩短学习曲线的途径。

最后,您可能已经有一些开发人员拥有其中一个或多个系统的经验。如果您的工程师有所偏好,在做决定时绝对值得把这一点考虑进去。

### 总结

构建嵌入式 Linux 系统有多种选择,每种都有优缺点。把这部分设计工作放在优先位置至关重要,因为在后期切换系统的成本非常高。除了这些选择之外,还有新的系统在不断涌现。希望这次讨论能够为评估新的系统(以及这里提到的系统)提供一些背景,并帮助您为下一个项目做出坚实的决定。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/embedded-linux-build-tools

作者:[Drew Moseley][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[LHRChina](https://github.com/LHRChina)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/drewmoseley
[1]:https://en.wikipedia.org/wiki/Linux_on_z_Systems
[2]:http://www.picotux.com/
[3]:https://www.ubuntu.com/
[4]:https://www.virtualbox.org/
[5]:https://www.docker.com/
[6]:https://en.wikipedia.org/wiki/Embedded_system
[7]:https://yoctoproject.org/
[8]:https://www.yoctoproject.org/about/
[9]:https://www.openembedded.org/
[10]:https://www.yoctoproject.org/community/
[11]:https://www.yoctoproject.org/ecosystem/participants/
[12]:https://layers.openembedded.org/layerindex/branch/master/layers/
[13]:https://layers.openembedded.org/layerindex/branch/master/layer/meta-browser/
[14]:https://yoctoproject.org/downloads
[15]:https://buildroot.org/
[16]:https://openwrt.org/
[17]:https://www.debian.org/
[18]:https://www.redhat.com/
[19]:https://www.raspbian.org/
[20]:https://www.raspberrypi.org/
[21]:https://wiki.debian.org/Debootstrap
[22]:https://cfengine.com/
[23]:http://Mender.io
@ -0,0 +1,83 @@

不要再手动合并你的拉取请求(PR)
======

![](https://julien.danjou.info/content/images/2018/06/github-branching.png)

如果说有什么事是我讨厌的,那就是明知道可以自动化,却还在手动操作。只有我是这样么?我觉得不是。

尽管如此,每天都有数千名使用 [GitHub][1] 的开发人员一遍又一遍地做着同样的事情:点击这个按钮:

![Screen-Shot-2018-06-19-at-18.12.39][2]

这没有任何意义。

不要误解我的意思。合并拉取请求是有意义的,只是每次都要点击这个该死的按钮没有意义。

之所以没有意义,是因为世界上每个开发团队在合并拉取请求之前都有一份已知的先决条件清单。这些要求几乎总是相同的,大致是这样:

* 是否通过测试?
* 文档是否更新了?
* 是否遵循我们的代码风格指南?
* 是否有若干位开发人员对此进行了审查?

随着此列表变长,合并过程变得更容易出错。“糟糕,John 在还没有足够的开发人员审查补丁时就点了合并按钮。”要拉响警报么?

我的团队和其他所有团队一样:我们清楚把代码合并到仓库的标准是什么。这就是为什么我们建立了持续集成系统,每当有人创建拉取请求时就运行我们的测试;我们还要求代码在获得批准之前由团队的 2 名成员进行审查。

当这些条件全部满足时,我希望代码被合并。

而不用点击某个按钮。

这正是创办 [Mergify][3] 的原因。

![github-branching-1][4]

[Mergify][3] 是一个替你按下合并按钮的服务。你可以在仓库的 `.mergify.yml` 中定义规则,当规则满足时,Mergify 就会合并该拉取请求。

无需按任何按钮。

随便挑一个拉取请求,比如这个:

![Screen-Shot-2018-06-20-at-17.12.11][5]

这来自一个小项目,没有用很多持续集成服务,只有 Travis。在这个拉取请求中,一切都是绿色的:其中一位所有者审查了代码,并且测试通过。因此,这些代码应该被合并:但它却还挂在那里,等着某人哪天去按下合并按钮。

使用 [Mergify][3] 后,你只需将一份 `.mergify.yml` 放在仓库的根目录即可:

```
rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1
```

通过这样的配置,[Mergify][3] 便实现了所需的约束:Travis 通过,并且至少有一名项目成员审阅了代码。一旦满足这些条件,拉取请求就会被自动合并。
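这套规则可以按需加严。例如(仅作示意,沿用上面配置中已有的键;其中 `ci/circleci` 这个状态名是假设的,请换成你仓库中实际上报的 CI 状态):

```
rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
          - ci/circleci
      required_pull_request_reviews:
        required_approving_review_count: 2
```

这样,只有两个 CI 状态都变绿、且有两名成员批准之后,拉取请求才会被自动合并。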
我们将 [Mergify][3] 打造成了 **一个对开源项目免费的服务**。[提供服务的引擎][6]也是开源的。

现在就去[试试它][3],不要让你的拉取请求再多挂起哪怕一秒钟。合并它们!

如果你有任何问题,请随时在下面向我们提问或写下评论!并且敬请期待,因为 Mergify 还提供了其他一些我迫不及待想要介绍的功能!

--------------------------------------------------------------------------------

via: https://julien.danjou.info/stop-merging-your-pull-request-manually/

作者:[Julien Danjou][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://julien.danjou.info/author/jd/
[1]:https://github.com
[2]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-19-at-18.12.39.png
[3]:https://mergify.io
[4]:https://julien.danjou.info/content/images/2018/06/github-branching-1.png
[5]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-20-at-17.12.11.png
[6]:https://github.com/mergifyio/mergify-engine
94
published/20180626 5 open source puzzle games for Linux.md
Normal file
@ -0,0 +1,94 @@

Linux 上的五个开源益智游戏
======

> 用这些有趣好玩的游戏来测试你的战略能力。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle-pieces.jpg?itok=YHIN4_0L)

游戏一直是 Linux 的弱项之一。得益于 Steam、GOG 以及其他将商业游戏带到多种操作系统上的努力,这种情况近年来有所改观,但这些游戏通常不是开源的。当然,这些游戏可以在开源操作系统上玩,但对于纯粹的开源主义者来说,这还不够好。

那么,一个只使用开源软件的人,能否找到打磨得足够好的游戏,在不违背其开源理念的前提下获得可靠的游戏体验呢?当然可以。虽然开源游戏历来不太可能与某些投入巨额预算开发的 AAA 商业游戏相匹敌,但在许多类型中都有很多好玩的开源游戏,而且大多可以从主流 Linux 发行版的仓库中安装。即使某个游戏没有为特定发行版打包,通常也很容易从项目网站下载、安装并游玩。

这篇文章着眼于益智游戏。我之前已经写过[街机风格游戏][1]和[桌面与卡牌游戏][2]。在之后的文章中,我计划涉及赛车、角色扮演以及战略和模拟经营类游戏。

### Atomix

![](https://opensource.com/sites/default/files/uploads/atomix.png)

[Atomix][3] 是 1990 年在 Amiga、Commodore 64、MS-DOS 和其他平台发布的益智游戏 [Atomix][4] 的开源克隆。Atomix 的目标是通过连接原子来拼出分子。单个原子可以向上、下、左、右移动,并会一直朝该方向移动,直到撞上障碍物:关卡的墙壁或另一个原子。这意味着需要规划好在关卡中的什么位置拼装分子,以及按什么顺序移动各个原子。第一关是一个简单的水分子,由两个氢原子和一个氧原子组成,但之后的关卡则是更复杂的分子。

要安装 Atomix,请运行以下命令:

* 在 Fedora 上:`dnf install atomix`
* 在 Debian/Ubuntu 上:`apt install atomix`

### Fish Fillets - Next Generation

![](https://opensource.com/sites/default/files/uploads/fish_fillets.png)

[Fish Fillets - Next Generation][5] 是游戏 Fish Fillets 的 Linux 移植版,后者于 1998 年在 Windows 上发布,源代码则在 2004 年以 GPL 许可证发布。游戏中,两条鱼要把挡路的物体移开,以逃出各个关卡。这两条鱼属性不同,所以玩家需要为每个任务挑选合适的鱼。较大的鱼可以移动较重的物体,但体型更大,无法通过较小的缝隙;较小的鱼能钻过那些小缝隙,但推不动重物。如果物体从上方砸下来,两条鱼都会被压死,所以玩家在挪动物体时要格外小心。

要安装 Fish Fillets - Next Generation,请运行以下命令:

* 在 Fedora 上:`dnf install fillets-ng`
* 在 Debian/Ubuntu 上:`apt install fillets-ng`

### Frozen Bubble

![](https://opensource.com/sites/default/files/uploads/frozen-bubble.png)

[Frozen Bubble][6] 是一款街机风格的益智游戏,玩家从屏幕底部向屏幕顶部的一堆泡泡发射泡泡。三个相同颜色的泡泡连在一起时会从屏幕上消除;挂在被消除泡泡下方、且不与其他任何东西相连的泡泡也会一并消除。在解谜模式下,关卡设计是固定的,玩家只需在泡泡落到屏幕底部的线以下之前,把它们全部从游戏区域中消除。游戏的街机模式和多人模式遵循同样的基本规则,但也有所差异,这增加了游戏的多样性。Frozen Bubble 是一款标志性的开源游戏,如果你以前没玩过,不妨一试。

要安装 Frozen Bubble,请运行以下命令:

* 在 Fedora 上:`dnf install frozen-bubble`
* 在 Debian/Ubuntu 上:`apt install frozen-bubble`

### Hex-a-hop

![](https://opensource.com/sites/default/files/uploads/hex-a-hop.png)

[Hex-a-hop][7] 是一款基于六角形瓦片的益智游戏,玩家需要移除关卡中所有的绿色瓦片。瓦片在被踩过之后就会消失,因此必须规划出穿越关卡的最佳路径,才能在不被困住的情况下移除所有瓦片。不过,如果玩家走了一条不够优的路径,也有撤销功能可用。之后的关卡会增加额外的复杂性,例如需要多次经过的瓦片,以及会让玩家跳过一定数量六角格的弹跳瓦片。

要安装 Hex-a-hop,请运行以下命令:

* 在 Fedora 上:`dnf install hex-a-hop`
* 在 Debian/Ubuntu 上:`apt install hex-a-hop`

### Pingus

![](https://opensource.com/sites/default/files/uploads/pingus.png)

[Pingus][8] 是 [Lemmings][9] 的开源克隆。它不是一个完全照搬的克隆,但玩法非常相似。小动物(Lemmings 里是旅鼠,Pingus 里是企鹅)从关卡入口进入,开始沿直线行走。玩家需要使用特殊技能,让小动物们在不被困住、不坠落悬崖的情况下抵达关卡出口,这些技能包括挖掘、架桥等。如果有足够数量的小动物抵达出口,该关卡即告成功,玩家可以进入下一关。Pingus 在标准的 Lemmings 玩法之上增加了一些额外特性,包括一张世界地图和一些原版游戏中没有的技能,但经典 Lemmings 游戏的粉丝在这个开源变体中仍会感到得心应手。

要安装 Pingus,请运行以下命令:

* 在 Fedora 上:`dnf install pingus`
* 在 Debian/Ubuntu 上:`apt install pingus`

我是否漏掉了你最喜欢的开源益智游戏?请在下面的评论中分享。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/puzzle-games-linux

作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ZenMoore](https://github.com/ZenMoore)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://wiki.gnome.org/action/raw/Apps/Atomix
[4]:https://en.wikipedia.org/w/index.php?title=Atomix_(video_game)
[5]:http://fillets.sourceforge.net/index.php
[6]:http://www.frozen-bubble.org/home/
[7]:http://hexahop.sourceforge.net/index.html
[8]:https://pingus.seul.org/index.html
[9]:http://en.wikipedia.org/wiki/Lemmings
@ -0,0 +1,86 @@

协同编辑器的历史性清单
======

按时间顺序快速列出主要协同编辑器的演变。

正如任何这样的清单一样,它必定要从被誉为“<ruby>[所有演示之母][25]<rt>the mother of all demos</rt></ruby>”的演示开始。在这个演示里,<ruby>[道格·恩格尔巴特][26]<rt>Doug Engelbart</rt></ruby>早在 1968 年就描绘了几乎所有软件的详尽图景。这不仅包括协同编辑器,还包括图形、编程和数学编辑器。

一切都始于那个演示,只不过软件的实现一直跟不上硬件的发展罢了。

> 软件变慢的速度,比硬件变快的速度更快。——沃斯定律

闲话少说,下面是我找到的值得一提的协同编辑器清单。所谓“值得一提”,是指它们具有值得注意的特性或实现细节。

| 项目 | 日期 | 平台 | 说明 |
| --- | --- | --- | --- |
| [SubEthaEdit][1] | 2003-2015? | 仅 Mac | 我能找到的首个协同的、实时的、多光标的编辑器。[曾有人尝试在 Emacs 上对其逆向工程][2],但没有什么结果。 |
| [DocSynch][3] | 2004-2007 | ? | 构建于 IRC 之上! |
| [Gobby][4] | 2005 至今 | C,多平台 | 首个开源、稳定可靠的实现,至今仍然存在!众所周知,[libinfinoted][5] 协议很难移植到其他编辑器中(例如 [Rudel][6] 就未能在 Emacs 上实现此协议)。2017 年 1 月发布的 0.7 版本添加了 Python 绑定,或许可以改善这种状况。值得注意的插件:自动保存到磁盘。 |
| [Ethercalc][27] | 2005 至今 | Web,JavaScript | 首个协同电子表格,与 [Google Docs][28] 同期出现。 |
| [moonedit][7] | 2005-2008? | ? | 原网站已关闭。可以看到其他用户的光标,还会模拟击键的声音。内置一个计算器和音乐定序器。 |
| [synchroedit][8] | 2006-2007 | ? | 首个 Web 应用。 |
| [Inkscape][29] | 2007-2011 | C++ | 首个具备协同功能的图形编辑器,其背后的“whiteboard”插件构建于 Jabber 之上,现已停摆。 |
| [Abiword][30] | 2008 至今 | C++ | 首个协同文字处理器。 |
| [Etherpad][9] | 2008 至今 | Web | 首款稳定的 Web 应用。2008 年最初开发时是一款笨重的 Java 应用,2009 年被谷歌收购并开源,2011 年用 Node.js 重写。使用广泛。 |
| [Wave][31] | 2009-2010 | Web,Java | 一次失败的大一统协议尝试。 |
| [CRDT][10] | 2011 | 特定平台 | 一类用于在不同计算机之间可靠地复制文件内容的数据结构。 |
| [Operational transform][11] | 2013 | 特定平台 | 与 CRDT 类似,但确切地说,两者并不相同。 |
| [Floobits][12] | 2013 至今 | ? | 商业软件,但为各种编辑器提供开源插件。 |
| [LibreOffice Online][32] | 2015 至今 | Web | 免费的 Google Docs 替代品,现已集成到 [Nextcloud][33]。 |
| [HackMD][13] | 2015 至今 | ? | 商业软件,[已开源][14]。灵感来自(被 Dropbox 收购的)Hackpad。 |
| [Cryptpad][15] | 2016 至今 | Web? | XWiki 的副产品。服务器端加密的“零知识”产品。 |
| [Prosemirror][16] | 2016 至今 | Web,Node.js | “试图架起一座桥梁,消除 Markdown 文本编辑和传统所见即所得编辑器之间的隔阂。”它不是一个完整的编辑器,而是一种可以用来构建编辑器的工具。 |
| [Quill][17] | 2013 至今 | Web,Node.js | 富文本编辑器,同样用 JavaScript 编写。不确定是否支持协同编辑。 |
| [Teletype][19] | 2017 至今 | WebRTC,Node.js | 为 GitHub 的 [Atom 编辑器][20]引入了“门户”的思路,使得访客可以跨多个文档跟随主人的操作。在通过中介服务器建立连接后,使用点对点(P2P)技术进行实时通信,基于 CRDT。 |
| [Tandem][21] | 2018 至今 | Node.js? | Atom、Vim、Neovim、Sublime 等的插件。使用中继服务器建立基于 CRDT 的 P2P 连接。多亏 Debian 开发者的参与,[可疑证书问题][22]已被解决,这使它成为一个未来很有希望被广泛采用的竞争者。 |
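上表中多次提到 CRDT。作为示意,下面用 Python 写一个最小的 G-Counter(只增计数器)CRDT,展示“任意顺序合并后必然收敛”这一核心性质(此示例为本文说明而加,并非上述任何编辑器的实际实现):

```python
# G-Counter:每个副本只递增自己的槽位;
# 合并时逐槽位取最大值,因此合并满足交换律、结合律和幂等性,
# 无论以何种顺序同步,所有副本最终都收敛到同一个值。
class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.i = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        # 本地操作只修改自己的槽位,无需与其他副本协调
        self.counts[self.i] += 1

    def merge(self, other):
        # 逐槽位取最大值,吸收对方已知的所有递增
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)
```

例如两个副本各自递增、再互相合并后,`value()` 在两边都会得到同一个总数。协同编辑器中使用的序列 CRDT 要复杂得多,但依赖的是同样的收敛思想。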
### 其他清单

* [Emacs 维基][23]
* [维基百科][24]

--------------------------------------------------------------------------------

via: https://anarc.at/blog/2018-06-26-collaborative-editors-history/

作者:[Anarcat][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ZenMoore](https://github.com/ZenMoore)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://anarc.at
[1]:https://www.codingmonkeys.de/subethaedit/
[2]:https://www.emacswiki.org/emacs/SubEthaEmacs
[3]:http://docsynch.sourceforge.net/
[4]:https://gobby.github.io/
[5]:http://infinote.0x539.de/libinfinity/API/libinfinity/
[6]:https://www.emacswiki.org/emacs/Rudel
[7]:https://web.archive.org/web/20060423192346/http://www.moonedit.com:80/
[8]:http://www.synchroedit.com/
[9]:http://etherpad.org/
[10]:https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type
[11]:http://operational-transformation.github.io/
[12]:https://floobits.com/
[13]:https://hackmd.io/
[14]:https://github.com/hackmdio/hackmd
[15]:https://cryptpad.fr/
[16]:https://prosemirror.net/
[17]:https://quilljs.com/
[18]:https://nextcloud.com/collaboraonline/
[19]:https://teletype.atom.io/
[20]:https://atom.io
[21]:http://typeintandem.com/
[22]:https://github.com/typeintandem/tandem/issues/131
[23]:https://www.emacswiki.org/emacs/CollaborativeEditing
[24]:https://en.wikipedia.org/wiki/Collaborative_real-time_editor
[25]:https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
[26]:https://en.wikipedia.org/wiki/Douglas_Engelbart
[27]:https://ethercalc.net/
[28]:https://en.wikipedia.org/wiki/Google_docs
[29]:http://wiki.inkscape.org/wiki/index.php/WhiteBoard
[30]:https://en.wikipedia.org/wiki/AbiWord
[31]:https://en.wikipedia.org/wiki/Apache_Wave
[32]:https://wiki.documentfoundation.org/Development/LibreOffice_Online
[33]:https://nextcloud.com/collaboraonline/
79
published/20180627 World Cup football on the command line.md
Normal file
@ -0,0 +1,79 @@

命令行中的世界杯
======

![](https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc2018.jpg?resize=700%2C450&ssl=1)

足球始终在我们身边。即使我们国家的队伍已经出局(LCTT 译注:显然这不是指我们国家,因为我们根本没有入局……),我还是想知道球赛比分。目前,国际足联世界杯是世界上最大的足球锦标赛,2018 届由俄罗斯主办。每届世界杯都有一些足球强国未能取得参赛资格(LCTT 译注:我要吐槽么?),这一次意大利和荷兰就无缘本届世界杯。但即使在未参赛的国家,追踪最新比分也成了一种仪式。我希望能及时了解世界各大联赛的最新比分,而不必去搜索不同的网站。

如果你很喜欢命令行,那么用一个小巧的命令行程序来追踪最新的世界杯比分和积分榜,还有比这更好的方式吗?让我们看一看目前最热门的足球命令行工具之一,它叫作 football-cli。

football-cli 并不是一个开创性的应用。这些年来,已经有许多命令行工具可以让你了解最新的比分和联赛排名。例如,我就是 soccer-cli(Python 编写)和 App-Football(Perl 编写)的重度用户。但我总是在留意新潮的应用,而 football-cli 在某些方面脱颖而出。

football-cli 用 JavaScript 开发,作者是 Manraj Singh。它是开源软件,基于 MIT 许可证发布,用 npm(JavaScript 包管理器)安装十分简单。那么,让我们直接上手吧!
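安装命令大致如下(假设全局安装;npm 上的具体包名以项目 README 为准):

```
$ npm install -g footballcli
```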
该程序提供的命令可以获取过去和正在进行的比赛比分,查看某个联赛或球队已进行和将要进行的赛程,还能显示特定联赛的积分榜。另有一条命令可以列出程序支持的各项赛事。我们不妨就从最后这条命令开始。

在 shell 提示符下:

```
luke@ganges:~$ football lists
```

![球赛列表][3]

世界杯列在最下方。我错过了昨天的比赛,所以为了了解比分,我在 shell 提示符下输入:

```
luke@ganges:~$ football scores
```

![football-wc-22][4]

现在,我想看看目前的世界杯小组积分榜。很简单:

```
luke@ganges:~$ football standings -l WC
```

下面是输出的一个片段:

![football-wc-biaoge][5]

眼尖的读者可能会注意到这里有一个错误:比如比利时看上去领先于 G 组,但这是不正确的,比利时和英格兰(截稿时)积分相同。在这种情况下,纪律分更好的队伍排名更高:英格兰吃到两张黄牌,而比利时吃到三张,因此英格兰应当名列榜首。

假设我想知道利物浦在过去 90 天内的英超联赛赛果,可以输入:

```
luke@ganges:~$ football fixtures -l PL -d 90 -t "Liverpool"
```

![足球-利物浦][6]

我发现这个程序非常方便:它用一种清晰、整洁而有吸引力的方式显示比分和积分榜。等欧洲各大联赛重新开赛,它会更有用武之地。(事实上,2018-19 赛季的冠军联赛已经在进行中了!)

这几个示例让大家对 football-cli 的实用性有了更直观的体会。想要了解更多,请访问开发者的 [GitHub 页面][7]。足球 + 命令行 = football-cli。

如同许多类似的工具一样,该软件从 football-data.org 获取数据。这项服务以机器可读的方式提供所有欧洲主要联赛的数据,包括赛程、球队、球员、赛果等。所有这些信息都通过一个易用的 RESTful API 以 JSON 形式提供。

--------------------------------------------------------------------------------

via: https://www.linuxlinks.com/football-cli-world-cup-football-on-the-command-line/

作者:[Luke Baker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ZenMoore](https://github.com/ZenMoore)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linuxlinks.com/author/luke-baker/
[1]:https://www.linuxlinks.com/wp-content/plugins/jetpack/modules/lazy-images/images/1x1.trans.gif
[2]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2017/12/CLI.png?resize=195%2C171&ssl=1
[3]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-lists.png?resize=595%2C696&ssl=1
[4]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-22.png?resize=634%2C75&ssl=1
[5]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-table.png?resize=750%2C581&ssl=1
[6]:https://i1.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-Liverpool.png?resize=749%2C131&ssl=1
[7]:https://github.com/ManrajGrover/football-cli
[8]:https://www.linuxlinks.com/links/Software/
[9]:https://discord.gg/uN8Rqex
@ -1,90 +0,0 @@

翻译中 by ZenMoore

5 open source puzzle games for Linux
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle-pieces.jpg?itok=YHIN4_0L)

Gaming has traditionally been one of Linux's weak points. That has changed somewhat in recent years thanks to Steam, GOG, and other efforts to bring commercial games to multiple operating systems, but those games are often not open source. Sure, the games can be played on an open source operating system, but that is not good enough for an open source purist.

So, can someone who only uses free and open source software find games that are polished enough to present a solid gaming experience without compromising their open source ideals? Absolutely. While open source games are unlikely ever to rival some of the AAA commercial games developed with massive budgets, there are plenty of open source games, in many genres, that are fun to play and can be installed from the repositories of most major Linux distributions. Even if a particular game is not packaged for a particular distribution, it is usually easy to download the game from the project's website in order to install and play it.

This article looks at puzzle games. I have already written about [arcade-style games][1] and [board and card games][2]. In future articles, I plan to cover racing, role-playing, and strategy & simulation games.

### Atomix

![](https://opensource.com/sites/default/files/uploads/atomix.png)

[Atomix][3] is an open source clone of the [Atomix][4] puzzle game released in 1990 for Amiga, Commodore 64, MS-DOS, and other platforms. The goal of Atomix is to construct atomic molecules by connecting atoms. Individual atoms can be moved up, down, left, or right and will keep moving in that direction until the atom hits an obstacle—either the level's walls or another atom. This means that planning is needed to figure out where in the level to construct the molecule and in what order to move the individual pieces. The first level features a simple water molecule, which is made up of two hydrogen atoms and one oxygen atom, but later levels feature more complex molecules.

To install Atomix, run the following command:

* On Fedora: `dnf install atomix`
* On Debian/Ubuntu: `apt install atomix`

### Fish Fillets - Next Generation

![](https://opensource.com/sites/default/files/uploads/fish_fillets.png)

[Fish Fillets - Next Generation][5] is a Linux port of the game Fish Fillets, which was released in 1998 for Windows, and the source code was released under the GPL in 2004. The game involves two fish trying to escape various levels by moving objects out of their way. The two fish have different attributes, so the player needs to pick the right fish for each task. The larger fish can move heavier objects but it is bigger, which means it cannot fit in smaller gaps. The smaller fish can fit in those smaller gaps, but it cannot move the heavier objects. Both fish will be crushed if an object is dropped on them from above, so the player needs to be careful when moving pieces.

To install Fish Fillets, run the following command:

* On Fedora: `dnf install fillets-ng`
* On Debian/Ubuntu: `apt install fillets-ng`

### Frozen Bubble

![](https://opensource.com/sites/default/files/uploads/frozen-bubble.png)

[Frozen Bubble][6] is an arcade-style puzzle game that involves shooting bubbles from the bottom of the screen toward a collection of bubbles at the top of the screen. If three bubbles of the same color connect, they are removed from the screen. Any other bubbles that were connected below the removed bubbles but that were not connected to anything else are also removed. In puzzle mode, the design of the levels is fixed, and the player simply needs to remove the bubbles from the play area before the bubbles drop below a line near the bottom of the screen. The game's arcade mode and multiplayer modes follow the same basic rules but provide some differences, which adds to the variety. Frozen Bubble is one of the iconic open source games, so if you have not played it before, check it out.

To install Frozen Bubble, run the following command:

* On Fedora: `dnf install frozen-bubble`
* On Debian/Ubuntu: `apt install frozen-bubble`

### Hex-a-hop

![](https://opensource.com/sites/default/files/uploads/hex-a-hop.png)

[Hex-a-hop][7] is a hexagonal tile-based puzzle game in which the player needs to remove all the green tiles from the level. Tiles are removed by moving over them. Since tiles disappear after they are moved over, it is imperative to plan the optimal path through the level to remove all the tiles without getting stuck. However, there is an undo feature if the player uses a sub-optimal path. Later levels add extra complexity by including tiles that need to be crossed over multiple times and bouncing tiles that cause the player to jump over a certain number of hexes.

To install Hex-a-hop, run the following command:

* On Fedora: `dnf install hex-a-hop`
* On Debian/Ubuntu: `apt install hex-a-hop`

### Pingus

![](https://opensource.com/sites/default/files/uploads/pingus.png)

[Pingus][8] is an open source clone of [Lemmings][9]. It is not an exact clone, but the game-play is very similar. Small creatures (lemmings in Lemmings, penguins in Pingus) enter the level through the level's entrance and start walking in a straight line. The player needs to use special abilities to make it so that the creatures can reach the level's exit without getting trapped or falling off a cliff. These abilities include things like digging or building a bridge. If a sufficient number of creatures make it to the exit, the level is successfully solved and the player can advance to the next level. Pingus adds a few extra features to the standard Lemmings features, including a world map and a few abilities not found in the original game, but fans of the classic Lemmings game should feel right at home in this open source variant.

To install Pingus, run the following command:

* On Fedora: `dnf install pingus`
* On Debian/Ubuntu: `apt install pingus`

Did I miss one of your favorite open source puzzle games? Share it in the comments below.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/puzzle-games-linux

作者:[Joshua Allen Holm][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/holmja
[1]:https://opensource.com/article/18/1/arcade-games-linux
[2]:https://opensource.com/article/18/3/card-board-games-linux
[3]:https://wiki.gnome.org/action/raw/Apps/Atomix
[4]:https://en.wikipedia.org/w/index.php?title=Atomix_(video_game)
[5]:http://fillets.sourceforge.net/index.php
[6]:http://www.frozen-bubble.org/home/
[7]:http://hexahop.sourceforge.net/index.html
[8]:https://pingus.seul.org/index.html
[9]:http://en.wikipedia.org/wiki/Lemmings
@ -0,0 +1,52 @@

CIP: Keeping the Lights On with Linux
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cip-lights.jpg?itok=6LAUoIzt)

Modern civil infrastructure is all around us -- in power plants, radar systems, traffic lights, dams, weather systems, and so on. Many of these infrastructure projects exist for decades, if not longer, so security and longevity are paramount.

And, many of these systems are powered by Linux, which offers technology providers more control over these issues. However, if every provider is building their own solution, this can lead to fragmentation and duplication of effort. Thus, the primary goal of the [Civil Infrastructure Platform (CIP)][1] is to create an open source base layer for industrial use cases in these systems, such as embedded controllers and gateway devices.

“We have a very conservative culture in this area because once we create a system, it has to be supported for more than ten years; in some cases for over 60 years. That’s why this project was created, because every player in this industry had the same issue of being able to use Linux for a long time,” says Yoshitake Kobayashi, Technical Steering Committee Chair of CIP.

CIP’s concept is to create a very fundamental system to use open source software on controllers. This base layer comprises the Linux kernel and a small set of common open source software like libc, busybox, and so on. Because longevity of software is a primary concern, CIP chose Linux kernel 4.4, an LTS release of the kernel maintained by Greg Kroah-Hartman.

### Collaboration

Since CIP has an upstream-first policy, the code that they want in the project must be in the upstream kernel. To create a proactive feedback loop with the kernel community, CIP hired Ben Hutchings as the official maintainer of CIP. Hutchings is known for the work he has done on Debian LTS releases, which also led to an official collaboration between CIP and the Debian project.

Under the newly forged collaboration, CIP will use Debian LTS to build the platform. CIP will also help Debian Long Term Support (LTS) to extend the lifetime of all Debian stable releases. CIP will work closely with Freexian, a company that offers commercial services around Debian LTS. The two organizations will focus on interoperability, security, and support for open source software for embedded systems. CIP will also provide funding for some of the Debian LTS activities.

“We are excited about this collaboration as well as the CIP’s support of the Debian LTS project, which aims to extend the support lifetime to more than five years. Together, we are committed to long-term support for our users and laying the ‘foundation’ for the cities of the future,” said Chris Lamb, Debian Project Leader.

### Security

Security is the biggest concern, said Kobayashi. Although most civil infrastructure is not connected to the Internet for obvious security reasons (you definitely don’t want a nuclear power plant to be connected to the Internet), there are many other risks.

Just because the system itself is not connected to the Internet, that doesn’t mean it’s immune to all threats. Other systems -- like users’ laptops -- may connect to the Internet and then be plugged into the local systems. If someone receives a malicious file as an email attachment, it can “contaminate” the internal infrastructure.

Thus, it’s critical to keep all software running on such controllers up to date and fully patched. To ensure security, CIP has also backported many components of the Kernel Self Protection Project. CIP also follows one of the strictest cybersecurity standards -- IEC 62443 -- which defines processes and tests to ensure the system is more secure.

### Going forward

As CIP matures, it is extending its collaboration with providers of Linux. In addition to collaborating with Debian and Freexian, CIP recently added Cybertrust Japan Co., Ltd., a supplier of enterprise Linux operating systems, as a new Silver member.

Cybertrust joins other industry leaders, such as Siemens, Toshiba, Codethink, Hitachi, Moxa, Plat’Home, and Renesas, in their work to create a reliable and secure Linux-based embedded software platform that is sustainable for decades to come.

The ongoing work of these companies under the umbrella of CIP will ensure the integrity of the civil infrastructure that runs our modern society.

Learn more at the [Civil Infrastructure Platform][1] website.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/cip-keeping-lights-linux

作者:[Swapnil Bhartiya][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/arnieswap
[1]:https://www.cip-project.org/
Translation in progress by ZenMoore

World Cup football on the command line
======

![](https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc2018.jpg?resize=700%2C450&ssl=1)

Football is around us constantly. Even when the domestic leagues have finished, there's always a football score I want to know. Right now it's the biggest football tournament in the world, the FIFA World Cup 2018, hosted in Russia. Every World Cup, there are great football nations that don't manage to qualify for the tournament; this time around, the Italians and the Dutch missed out. But even in non-participating countries, keeping track of the latest scores is a rite of passage. I also like to keep abreast of the latest scores from the major leagues around the world without having to search different websites.

![Command-Line Interface][2]

If you're a big fan of the command line, what better way to keep track of the latest World Cup scores and standings than with a small command-line utility? Let's take a look at one of the hottest trending football utilities available. It goes by the name football-cli.

football-cli is not a groundbreaking app. Over the years, there has been a raft of command-line tools that keep you up to date with the latest football scores and league standings. For example, I am a heavy user of soccer-cli, a Python-based tool, and App-Football, written in Perl. But I'm always on the lookout for trending apps, and football-cli stands out from the crowd in a few ways.

football-cli is developed in JavaScript and written by Manraj Singh. It's open source software, published under the MIT license. Installation is trivial with npm (the package manager for JavaScript), so let's get straight into the action.

The utility offers commands that give the scores of past and live fixtures, show upcoming and past fixtures for a league or team, and display the standings of a particular league. There's also a command that lists the various supported competitions. Let's start with the last one.

At a shell prompt:

`luke@ganges:~$ football lists`

![football-lists][3]

The World Cup is listed at the bottom. I missed yesterday's games, so to catch up on the scores, I type at a shell prompt:

`luke@ganges:~$ football scores`

![football-wc-22][4]

Now I want to see the current World Cup group standings. That's easy:

`luke@ganges:~$ football standings -l WC`

Here's an excerpt of the output:

![football-wc-table][5]

The eagle-eyed among you may notice a bug here. Belgium is showing as the leader of Group G, but this is not correct. Belgium and England are (at the time of writing) tied on points, goal difference, and goals scored. In this situation, the team with the better disciplinary record is ranked higher. England and Belgium have received 2 and 3 yellow cards respectively, so England top the group.

Suppose I want to find out Liverpool's results in the Premiership going back 90 days from today:

`luke@ganges:~$ football fixtures -l PL -d 90 -t "Liverpool"`

![football-Liverpool][6]

I'm finding the utility really handy; it displays the scores and standings in a clear, uncluttered, and attractive way. When the European domestic games start up again, it'll get heavy usage. (Actually, the 2018-19 Champions League is already underway!)

These few examples give a taste of the functionality available with football-cli. Read more about the utility on the developer's **[GitHub page][7]**. Football + command line = football-cli.

Like similar tools, the software retrieves its football data from football-data.org. This service provides football data for all major European leagues in a machine-readable way, including fixtures, teams, players, results, and more. All this information is provided via an easy-to-use RESTful API in JSON representation.
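To give a feel for what working with such a JSON payload looks like, here is a minimal Python sketch. The payload shape and field names below are illustrative assumptions, not the exact football-data.org schema:

```python
import json

# A hypothetical fixture payload, shaped like the JSON a football REST API
# might return; the real football-data.org schema may differ.
sample = '''
{
  "fixtures": [
    {"homeTeamName": "Belgium", "awayTeamName": "England",
     "result": {"goalsHomeTeam": 0, "goalsAwayTeam": 1}}
  ]
}
'''

def format_scores(payload):
    """Render each fixture as a 'Home 0 - 1 Away' line."""
    lines = []
    for fixture in json.loads(payload)["fixtures"]:
        result = fixture["result"]
        lines.append(f'{fixture["homeTeamName"]} {result["goalsHomeTeam"]} - '
                     f'{result["goalsAwayTeam"]} {fixture["awayTeamName"]}')
    return lines

print(format_scores(sample))
```

Tools like football-cli are essentially doing this: fetch the JSON, then format it for the terminal.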
--------------------------------------------------------------------------------

via: https://www.linuxlinks.com/football-cli-world-cup-football-on-the-command-line/

Author: [Luke Baker][a]

Topic selected by: [lujun9972](https://github.com/lujun9972)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.linuxlinks.com/author/luke-baker/
[1]:https://www.linuxlinks.com/wp-content/plugins/jetpack/modules/lazy-images/images/1x1.trans.gif
[2]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2017/12/CLI.png?resize=195%2C171&ssl=1
[3]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-lists.png?resize=595%2C696&ssl=1
[4]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-22.png?resize=634%2C75&ssl=1
[5]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-table.png?resize=750%2C581&ssl=1
[6]:https://i1.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-Liverpool.png?resize=749%2C131&ssl=1
[7]:https://github.com/ManrajGrover/football-cli
[8]:https://www.linuxlinks.com/links/Software/
[9]:https://discord.gg/uN8Rqex
sources/talk/20180702 My first sysadmin mistake.md
My first sysadmin mistake
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_mistakes.png?itok=dN0OoIl5)

If you work in IT, you know that things never go completely as you think they will. At some point, you'll hit an error or something will go wrong, and you'll end up having to fix things. That's the job of a systems administrator.

As humans, we all make mistakes. Sometimes, we are the error in the process, or we are what went wrong, and we end up having to fix our own mistakes. It happens: we all make typos and errors.

As a young systems administrator, I learned this lesson the hard way. I made a huge blunder. But thanks to some coaching from my supervisor, I learned not to dwell on my errors, but to create a "mistake strategy" to set things right. Learn from your mistakes. Get over it, and move on.

My first job was as a Unix systems administrator for a small company. Really, I was a junior sysadmin, but I worked alone most of the time. We were a small IT team, just the three of us. I was the only sysadmin for 20 or 30 Unix workstations and servers; the other two supported the Windows servers and desktops.

Any systems administrators reading this probably won't be surprised to learn that, as an unseasoned, junior sysadmin, I eventually ran the `rm` command in the wrong directory. As root. I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all the files in the `/etc` directory by mistake. Ouch.

My clue that I'd done something wrong was an error message that `rm` couldn't delete certain subdirectories. But the cache directory should have contained only files! I immediately stopped the `rm` command and looked at what I'd done. And then I panicked. All at once, a million thoughts ran through my head. Did I just destroy an important server? What was going to happen to the system? Would I get fired?

Fortunately, I'd run `rm *` and not `rm -rf *`, so I'd deleted only files. The subdirectories were still there. But that didn't make me feel any better.

Immediately, I went to my supervisor and told her what I'd done. She saw that I felt really dumb about my mistake, but I owned it. Despite the urgency, she took a few minutes to do some coaching with me. "You're not the first person to do this," she said. "What would someone else do in your situation?" That helped me calm down and focus. I started to think less about the stupid thing I had just done, and more about what I was going to do next.

I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the `/etc` directory.

Once I had my plan of action, the rest was easy. It was just a matter of running the right commands to copy the `/etc` files from another server and edit the configuration so it matched the system. Thanks to my practice of documenting everything, I used my existing documentation to make any final adjustments. I avoided having to completely restore the server, which would have meant a huge disruption.

To be sure, I learned from that mistake. For the rest of my years as a systems administrator, I always confirmed what directory I was in before running any command.
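That habit can even be turned into a guard rail. Here is a small, hypothetical Python sketch (my illustration, not something from the story) that refuses to delete files unless it is run from the directory the caller intended:

```python
import os

def guarded_remove(expected_dir, filenames):
    """Delete the named files, but only when the current working
    directory really is the one the caller intended to clean."""
    cwd = os.path.realpath(os.getcwd())
    if cwd != os.path.realpath(expected_dir):
        # Refuse loudly rather than repeat the /etc mistake.
        raise RuntimeError(f"refusing to delete: in {cwd}, expected {expected_dir}")
    for name in filenames:
        os.remove(name)
```

The point is the check, not the wrapper: make the destructive step verify your assumption about where you are before it runs.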
I also learned the value of building a "mistake strategy." When things go wrong, it's natural to panic and think about all the bad things that might happen next. That's human nature. But creating a "mistake strategy" helps me stop worrying about what just went wrong and focus on making things better. I may still think about it, but knowing my next steps allows me to "get over it."

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/my-first-sysadmin-mistake

Author: [Jim Hall][a]

Topic selected by: [lujun9972](https://github.com/lujun9972)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/jim-hall
Translating by vk

How to make a career move from proprietary to open source technology
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration_0.png?itok=YEl_GXbv)

I started my journey as a software engineer at Northern Telecom, where I developed proprietary software for carrier-grade telephone switches. Although I learned Pascal while in college, at Northern Telecom I was trained in a proprietary programming language based on C. I also used a proprietary operating system and proprietary version-control software.

I enjoyed working in the proprietary environment and had opportunities to do some interesting work. Then came a turning point in my career that made me think about things. It happened at a career fair. I was invited to speak on a STEM career panel at a local middle school. I shared with the students my day-to-day responsibilities as a software engineer, and one of the students asked me a question: "Is this really what you always wanted to do in life? Do you enjoy and love what you are doing?"

Whenever my manager asked me this question, I would safely answer, "Yes, of course, I do!" But I had never been asked this by an innocent 6th grader interested in STEM. My response to the student was the same: "Of course I do!"

The truth was I did enjoy my career, but that student had me thinking… I had to reassess where I was in my career. I thought about the proprietary environment. I was an expert in my specialty, but that was one of the downsides: I was only modifying my own area of code. Was I learning about different types of technology in a closed system? Was my skillset still marketable? Was I just going through the motions? Was this what I really wanted to continue doing?

I thought about all of those things, and I wondered: Was the challenge and creativity still there?

Life went on, and I had major life changes. I left Nortel Networks and took a career break to focus on my family.

When I was ready to re-enter the workforce, that 6th grader's questions lingered in my mind. Is this what I've always wanted to do? I applied for several jobs that appeared to be a good match, but the feedback I received from recruiters was that they were looking for people with five or more years of Java and Python skills. It seemed that the skills and knowledge I had acquired over the course of my 15-year career at Nortel were no longer in demand or in use.

### Challenges

My first challenge was figuring out how to leverage the skills I had gained while working at a proprietary company. I noticed there had been a huge shift in IT from proprietary to open source. I decided to teach myself Python because it was the most in-demand language. Once I started to learn Python, I realized I needed a project to gain experience and make myself more marketable.

The next challenge was figuring out how to gain project experience with my new knowledge of Python. Former colleagues and my husband directed me toward open source software. When I googled "open source project," I discovered there were hundreds of open source projects, ranging from very small ones with a single contributor, to communities of fewer than 50 people, to huge projects with hundreds of contributors all over the world.

I did a keyword search in GitHub for technical terms that fit my skillset and found several projects that matched. I decided to leverage my interests and networking background to make my first contribution to OpenStack. I also discovered the [Outreachy][1] program, which offers three-month paid internships to people who are under-represented in tech.

### Lessons learned

One of the first things I learned is that I could contribute in many different ways. I could contribute to documentation and user design. I could also contribute by writing test cases. These are skillsets I had developed over my career, and I didn't need five years of experience to contribute. All I needed was the commitment and drive to make a contribution.

After my first contribution to OpenStack was merged into the release, I was accepted into the Outreachy program. One of the best things about Outreachy is the mentor I was assigned to help me navigate the open source world.

Here are three other valuable lessons I learned that might help others who are interested in breaking into the open source world:

**Be persistent.** Be persistent in finding the right open source projects. Look for projects that match your core skillset. Also, look for ones that have a code of conduct and are welcoming to newcomers—especially those with a getting-started guide. Be persistent in engaging with the community.

**Be patient.** Adjusting to open source takes time. Engaging with the community takes time. Giving thoughtful and meaningful feedback takes time, and reading and considering the feedback you receive takes time.

**Participate in the community.** You don't need permission to work on a certain technology or in a certain area. You can decide what you would like to work on and dive in.

Petra Sargent will present [You Can Teach an Old Dog New Tricks: Moving From Proprietary to Open Source][2] at the 20th annual [OSCON][3] event, July 16-19 in Portland, Oregon.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/career-move

Author: [Petra Sargent][a]

Topic selected by: [lujun9972](https://github.com/lujun9972)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/psargent
[1]:https://www.outreachy.org/
[2]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/307631
[3]:https://conferences.oreilly.com/oscon/oscon-or
What Game of Thrones can teach us about open innovation
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_thinklaterally_520x292.jpg?itok=JkbRl5KU)

You might think the only synergy one can find in Game of Thrones is that between Jaime Lannister and his sister, Cersei. Characters in the show's rotating cast don't see many long-term relationships, as they're killed off, betrayed, and otherwise trading loyalty in an effort to stay alive. Even the Stark children, siblings suffering from the deaths of their parents, don't really get along most of the time.

But there's something about the chaotic free-for-all of constantly shifting loyalties in Game of Thrones that lends itself to a thought exercise: How can we always be aligning disparate positions in order to innovate?

Here are three ways Game of Thrones illustrates behaviors that lead to innovation.

### Join forces

Arya Stark has no loyalties. Through the death of her parents and separation from her siblings, young Arya demonstrates courage in pursuing an education with the faceless man. And she's rewarded for her courage with the development of seemingly supernatural abilities.

Arya's hate for the people on her list has her innovating left and right in an attempt to get closer to them. As the audience, we're on Arya's side; despite her violent and deadly methods, we identify with her attempts to overcome hardship. Her determination makes us loyal fans, and in an open organization, courage and determination like hers would be rewarded with some well-deserved influence.

Being loyal and helpful to driven people like Arya will help you and (by extension) your organization innovate. Passion is infectious.

### Be nimble

The Lannisters represent a traditional management structure that forcibly resists innovation. Their resistance is usually the result of their fear of change.

Without a doubt, change is scary—especially to people who wield power in an organization. Losing status causes us fear, because in our evolutionary and social history, losing status could mean being unable to survive. But look to Tyrion as an example of how to thrive once status is lost.

Tyrion is cast out (demoted) by his family (the senior executive team). Instead of lamenting his loss of power, he seeks out a community (by the side of Daenerys) that values (and can utilize) his unique skills, connections, and influence. His resilience in the face of being cast out of Casterly Rock is the perfect metaphor for how innovation occurs: It's iterative and never straightforward. It requires resilience. A more open source way to say this would be: "fail forward," or "release early, release often."

### Score resources

Daenerys Targaryen embodies all the traits necessary for successful innovation. She can be seen as a model for the kind of employee who thrives in an open organization. What the Mother of Dragons needs, the Mother of Dragons gets, and she doesn't compromise her ideals to do it.

Whether freeing slaves (and then asking for their help) or forming alliances to acquire transport vehicles she's never seen before, Daenerys is resourceful. In an open organization, a staff member needs the wherewithal to get things done. Colleagues (even the entire organization) may not always share your priorities, but innovation happens when people take risks. By becoming a savvy negotiator like Khaleesi, and developing a willingness to trade a lot for a little (she's been known to do favors for the mere promise of loyalty), you can get things done, fail forward, and innovate.

Courage, resilience, and resourcefulness are necessary traits for innovating in an open organization. What else can Game of Thrones teach us about working—and succeeding—openly?

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/18/7/open-innovation-lessons-game-of-thrones

Author: [Laura Hilliger][a]

Topic selected by: [lujun9972](https://github.com/lujun9972)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/laurahilliger
Comparing Twine and Ren'Py for creating interactive fiction
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/book_list_fiction_sand_vacation_read.jpg?itok=IViIZu8J)

Any experienced technology educator knows engagement and motivation are key to a student's learning. Of the many techniques for stimulating engagement and motivation among learners, storytelling and game creation have good track records of success, and writing interactive fiction is a great way to combine both of those techniques.

Interactive fiction has a respectable history in computing, stretching back to the text-only adventure games of the early 1980s, and it has enjoyed a new popularity recently. There are many technology tools that can be used for writing interactive fiction, but the two considered here, [Twine][1] and [Ren'Py][2], are ideal for the task. Each has different strengths that make it more attractive for particular types of projects.

### Twine

![Twine 2.0][4]

Twine is a popular cross-platform open source interactive fiction system that developed out of the HTML- and JavaScript-based [TiddlyWiki][6]. If you're not familiar with Twine, multimedia artist and Opensource.com contributor Seth Kenlon's article on how he [uses Twine to create interactive adventure games][7] is a great introduction to the tool.

One of Twine's advantages is that it produces a single, compiled HTML file, which makes it easy to distribute and play an interactive fiction work on any system with a reasonably modern web browser. But this comes at a cost: While it supports graphics, sound files, and embedded video, Twine is somewhat limited by its roots as a primarily text-based system (even though it has developed a lot over the years).

This is very appealing to new learners, who can rapidly produce something that looks good and is fun to play. However, when they want to add visual effects, graphics, and multimedia, learners can get lost among the different creative ways to do this and the maze of different Twine program versions and story formats. Even so, there's an impressive amount of resources available on how to use Twine.

Educators often hope learners will take the skills they have gained using one tool and build on them, but this isn't a strength for Twine. While Twine is great for developing literacy and creative writing skills, the coding and programming side is weaker. The story-format scripting language has what you would expect: logic commands, conditional statements, arrays/lists, and loops. But it is not closely related to any popular programming language.

### Ren'Py

![Ren'Py 7.0][9]

Ren'Py approaches interactive fiction from a different angle; [Wikipedia][10] describes it as a "visual novel engine." This means the integration of graphics and other multimedia elements is a lot smoother than in Twine. In addition, as Opensource.com contributor Joshua Allen Holm explained, [you don't need much coding experience][11] to use Ren'Py.

Ren'Py can export finished work for Android, Linux, Mac, and Windows, which is messier than the "one file for all systems" approach of Twine, particularly if you get into the complexity of making builds for mobile devices. Bear in mind, too, that finished Ren'Py projects, with their multimedia elements, are a lot bigger than Twine projects.

The ease of downloading graphics and multimedia files from the internet for Ren'Py projects also provides a great opportunity to teach learners about the complexities of copyright and to advocate (as everyone should!) for [Creative Commons][12] licenses.

As its name suggests, Ren'Py's scripting language is a mix of true Python and Python-like additions. This will be very attractive to educators who want learners to progress to Python programming. Python's syntactic rules and strict enforcement of indentation are more intimidating than the scripting options in Twine, but the long-term gains are worth it.
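To show the kind of logic interactive fiction scripting involves (choices, conditionals, state), here is a toy branching scene in plain Python. This is an illustration only, not Twine or Ren'Py syntax:

```python
# A minimal branching scene: the player's choice and inventory
# determine which passage of the story comes next.
def crossroads(choice, inventory):
    if choice == "left" and "lamp" in inventory:
        return "You light the lamp and take the left path."
    elif choice == "left":
        return "Too dark to go left without a lamp."
    else:
        return "You take the sunny right path."

print(crossroads("left", ["lamp"]))
```

Both tools express the same idea in their own syntax; learners who can follow this in Python already have the core concepts.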
### Comparing Twine and Ren'Py

There are various reasons why Twine has become so successful, but one that will appeal to open source enthusiasts is that anyone can take a compiled Twine story or game and import it back into Twine. This means that if you come across a compiled Twine story or game with a neat feature, you can look at the source code and find out how it was done. Ren'Py allows a level of obfuscation that prevents low-level attempts at hacking.

When it comes to my work helping people with visual impairments use technology, Ren'Py is superior to Twine. Despite claims to the contrary, Twine's HTML files can be used by screen reader users—but only with difficulty. In contrast, Ren'Py has built-in self-voicing capabilities, something I am very pleased to see, although Linux users may need to add the [eSpeak package][13] to support it.

Ren'Py and Twine can be used for similar purposes. Text-based projects tend to be simpler and quicker to create than ones that require creating or sourcing graphics and multimedia elements. If your projects will be mostly text-based, Twine might be the best choice. And if your projects make extensive use of graphics and multimedia elements, Ren'Py might suit you better.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/twine-vs-renpy-interactive-fiction

Author: [Peter Cheer][a]

Topic selected by: [lujun9972](https://github.com/lujun9972)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://opensource.com/users/petercheer
[1]:http://twinery.org/
[2]:https://www.renpy.org/
[3]:/file/402696
[4]:https://opensource.com/sites/default/files/uploads/twine2.png (Twine 2.0)
[6]:https://tiddlywiki.com/
[7]:https://opensource.com/article/18/2/twine-gaming
[8]:/file/402701
[9]:https://opensource.com/sites/default/files/uploads/renpy.png (Ren'Py 7.0)
[10]:https://en.wikipedia.org/wiki/Ren%27Py
[11]:https://opensource.com/life/13/8/gaming-renpy
[12]:https://creativecommons.org/
[13]:http://espeak.sourceforge.net/
New Training Options Address Demand for Blockchain Skills
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/blockchain-301.png?itok=1EA-Ob6F)

Blockchain technology is transforming industries and bringing new levels of trust to contracts, payment processing, asset protection, and supply chain management. Blockchain-related jobs are the second-fastest growing in today's labor market, [according to TechCrunch][1]. But, as in the rapidly expanding field of artificial intelligence, there is a pronounced blockchain skills gap and a need for expert training resources.

### Blockchain for Business

A new training option was recently announced by The Linux Foundation. Enrollment is now open for a free training course called [Blockchain: Understanding Its Uses and Implications][2], as well as a [Blockchain for Business][2] professional certificate program. Delivered through the edX training platform, the new course and program provide a way to learn about the impact of blockchain technologies and a means to demonstrate that knowledge. Certification, in particular, can make a difference for anyone looking to work in the blockchain arena.

"In the span of only a year or two, blockchain has gone from something seen only as related to cryptocurrencies to a necessity for businesses across a wide variety of industries," [said][3] Clyde Seepersad, General Manager of Training & Certification at The Linux Foundation. "Providing a free introductory course designed not only for technical staff but also business professionals will help improve understanding of this important technology, while offering a certificate program through edX will enable professionals from all over the world to clearly demonstrate their expertise."

TechCrunch [also reports][4] that venture capital is rapidly flowing toward blockchain-focused startups. And this new program is designed for business professionals who need to understand the potential – or threat – of blockchain to their company and industry.

"Professional Certificate programs on edX deliver career-relevant education in a flexible, affordable way, by focusing on the critical skills industry leaders and successful professionals are seeking today," said Anant Agarwal, edX CEO and MIT professor.

### Hyperledger Fabric

The Linux Foundation is steward to many valuable blockchain resources and includes some notable community members. In fact, a recent New York Times article — "[The People Leading the Blockchain Revolution][5]" — named Brian Behlendorf, Executive Director of The Linux Foundation's [Hyperledger Project][6], one of the [top influential voices][7] in the blockchain world.

Hyperledger offers proven paths for gaining credibility and skills in the blockchain space. For example, the project offers a free course titled Introduction to Hyperledger Fabric for Developers. Fabric has emerged as a key open source toolset in the blockchain world. Through the Hyperledger project, you can also take the B9-lab Certified Hyperledger Fabric Developer course. More information on both courses is available [here][8].

"As you can imagine, someone needs to do the actual coding when companies move to experiment and replace their legacy systems with blockchain implementations," states the Hyperledger website. "With training, you could gain serious first-mover advantage."

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/7/new-training-options-address-demand-blockchain-skills

Author: [Sam Dean][a]

Topic selected by: [lujun9972](https://github.com/lujun9972)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.linux.com/users/sam-dean
[1]:https://techcrunch.com/2018/02/14/blockchain-engineers-are-in-demand/
[2]:https://www.edx.org/course/understanding-blockchain-and-its-implications
[3]:https://www.linuxfoundation.org/press-release/as-demand-skyrockets-for-blockchain-expertise-the-linux-foundation-and-edx-offer-new-introductory-blockchain-course-and-blockchain-for-business-professional-certificate-program/
[4]:https://techcrunch.com/2018/05/20/with-at-least-1-3-billion-invested-globally-in-2018-vc-funding-for-blockchain-blows-past-2017-totals/
[5]:https://www.nytimes.com/2018/06/27/business/dealbook/blockchain-stars.html
[6]:https://www.hyperledger.org/
[7]:https://www.linuxfoundation.org/blog/hyperledgers-brian-behlendorf-named-as-top-blockchain-influencer-by-new-york-times/
[8]:https://www.hyperledger.org/resources/training
@ -0,0 +1,77 @@
translating---geekpi

Keeping (financial) score with Ledger
======
I’ve used [Ledger CLI][1] to keep track of my finances since 2005, when I moved to Canada. I like the plain-text approach, and its support for virtual envelopes means that I can reconcile both my bank account balances and my virtual allocations to different categories. Here’s how we use those virtual envelopes to manage our finances separately.

Every month, I have an entry that moves things from my buffer of living expenses to various categories, including an allocation for household expenses. W- doesn’t ask for a lot, so I take care to be frugal with the difference between that and the cost of, say, living on my own. The way we handle it is that I cover a fixed amount, and this is credited by whatever I pay for groceries. Since our grocery total is usually less than the amount I budget for household expenses, any difference just stays on the tab. I used to write him cheques to even it out, but lately I just pay for the occasional additional large expense.

Here’s a sample envelope allocation:
```
2014.10.01 * Budget
    [Envelopes:Living]
    [Envelopes:Household]  $500
    ;; More lines go here
```

Here’s one of the envelope rules set up. This one encourages me to classify expenses properly. All expenses are taken out of my “Play” envelope.
```
= /^Expenses/
    (Envelopes:Play)  -1.0
```

This one reimburses the “Play” envelope for household expenses, moving the amount from the “Household” envelope into the “Play” one.
```
= /^Expenses:House$/
    (Envelopes:Play)  1.0
    (Envelopes:Household)  -1.0
```

I have a regular set of expenses that simulate the household expenses coming out of my budget. For example, here’s the one for October.
```
2014.10.1 * House
    Expenses:House
    Assets:Household  $-500
```

And this is what a grocery transaction looks like:
```
2014.09.28 * No Frills
    Assets:Household:Groceries  $70.45
    Liabilities:MBNA:September  $-70.45
```

Then `ledger bal Assets:Household` will tell me if I owe him money (negative balance) or not. If I pay for something large (ex: plane tickets, plumbing), the regular household expense budget gradually reduces that balance.
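To make that sign convention concrete, here is a tiny Python sketch of the running tab. The $500 budget entry and the $70.45 grocery run come from the entries above; the other amounts are hypothetical, purely for illustration:

```python
# Postings to Assets:Household, in order: the monthly budget entry pushes
# the balance down by $500; each grocery purchase posts a positive amount back.
# $70.45 matches the No Frills transaction above; the other amounts are made up.
postings = [-500.00, 70.45, 112.30, 64.25]

balance = round(sum(postings), 2)
print(balance)  # -253.0: a negative balance means money is still owed on the tab
```

In Ledger itself, `ledger bal Assets:Household` computes the same running sum over all postings to that account.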
I picked up the trick of adding a month label to my credit card transactions from W-, who also uses Ledger to track his transactions. It lets me double-check the balance of a statement and see if the previous statement has been properly cleared.

It’s a bit of a weird use of the assets category, but it works out for me mentally.

Using Ledger to track it in this way lets me keep track of our grocery expenses and the difference between what I’ve actually paid and what I’ve budgeted for. If I end up spending more than I expected, I can move virtual money from more discretionary envelopes, so my budget always stays balanced.

Ledger’s a powerful tool. Pretty geeky, but maybe more descriptions of workflow might help people who are figuring things out!

--------------------------------------------------------------------------------

via: http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/

作者:[Sacha Chua][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://sachachua.com
[1]:http://www.ledger-cli.org/
[2]:http://sachachua.com/blog/category/finance/
[3]:http://sachachua.com/blog/tag/ledger/
[4]:http://pages.sachachua.com/sharing/blog.html?url=http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/
[5]:http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/#comments
translating by wenwensnow
An Advanced System Configuration Utility For Ubuntu Power Users
======
Translating by qhwdw
Operating a Kubernetes network
============================================================

I’ve been working on Kubernetes networking a lot recently. One thing I’ve noticed is, while there’s a reasonable amount written about how to **set up** your Kubernetes network, I haven’t seen much about how to **operate** your network and be confident that it won’t create a lot of production incidents for you down the line.

In this post I’m going to try to convince you of three things (all of which I think are pretty reasonable :)):

* Avoiding networking outages in production is important

* Operating networking software is hard

* It’s worth thinking critically about major changes to your networking infrastructure and the impact that will have on your reliability, even if very fancy Googlers say “this is what we do at Google”. (Google engineers are doing great work on Kubernetes!! But I think it’s important to still look at the architecture and make sure it makes sense for your organization.)

I’m definitely not a Kubernetes networking expert by any means, but I have run into a few issues while setting things up and definitely know a LOT more about Kubernetes networking than I used to.

### Operating networking software is hard

Here I’m not talking about operating physical networks (I don’t know anything about that), but instead about keeping software like DNS servers & load balancers & proxies working correctly.

I have been working on a team that’s responsible for a lot of networking infrastructure for a year, and I have learned a few things about operating networking infrastructure! (though I still have a lot to learn obviously). 3 overall thoughts before we start:

* Networking software often relies very heavily on the Linux kernel. So in addition to configuring the software correctly you also need to make sure that a bunch of different sysctls are set correctly, and a misconfigured sysctl can easily be the difference between “everything is 100% fine” and “everything is on fire”.

* Networking requirements change over time (for example maybe you’re doing 5x more DNS lookups than you were last year! Maybe your DNS server suddenly started returning TCP DNS responses instead of UDP, which is a totally different kernel workload!). This means software that was working fine before can suddenly start having issues.

* To fix a production networking issue you often need a lot of expertise. (for example see this [great post by Sophie Haskins on debugging a kube-dns issue][1]) I’m a lot better at debugging networking issues than I was, but that’s only after spending a huge amount of time investing in my knowledge of Linux networking.

I am still far from an expert at networking operations but I think it seems important to:

1. Very rarely make major changes to the production networking infrastructure (because it’s super disruptive)

2. When you _are_ making major changes, think really carefully about what the failure modes for the new network architecture are

3. Have multiple people who are able to understand your networking setup

Switching to Kubernetes is obviously a pretty major networking change! So let’s talk about what some of the things that can go wrong are!

### Kubernetes networking components

The Kubernetes networking components we’re going to talk about in this post are:

* Your overlay network backend (like flannel/calico/weave net/romana)

* `kube-dns`

* `kube-proxy`

* Ingress controllers / load balancers

* The `kubelet`

If you’re going to set up HTTP services you probably need all of these. I’m not using most of these components yet but I’m trying to understand them, so that’s what this post is about.
### The simplest way: Use host networking for all your containers

Let’s start with the simplest possible thing you can do. This won’t let you run HTTP services in Kubernetes. I think it’s pretty safe because there are fewer moving parts.

If you use host networking for all your containers I think all you need to do is:

1. Configure the kubelet to configure DNS correctly inside your containers

2. That’s it

If you use host networking for literally every pod you don’t need kube-dns or kube-proxy. You don’t even need a working overlay network.

In this setup your pods can connect to the outside world (the same way any process on your hosts would talk to the outside world) but the outside world can’t connect to your pods.

This isn’t super important (I think most people want to run HTTP services inside Kubernetes and actually communicate with those services) but I do think it’s interesting to realize that at some level all of this networking complexity isn’t strictly required and sometimes you can get away without using it. Avoiding networking complexity seems like a good idea to me if you can.

### Operating an overlay network

The first networking component we’re going to talk about is your overlay network. Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”).

All other Kubernetes networking stuff relies on the overlay networking working correctly. You can read more about the [kubernetes networking model here][10].

The way Kelsey Hightower describes in [kubernetes the hard way][11] seems pretty good but it’s not really viable on AWS for clusters of more than 50 nodes or so, so I’m not going to talk about that.

There are a lot of overlay network backends (calico, flannel, weaveworks, romana) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities:

1. Make sure your pods can send network requests outside your cluster

2. Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed.

Okay! So! What can go wrong with your overlay network?

* The overlay network is responsible for setting up iptables rules (basically `iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE`) to ensure that containers can make network requests outside Kubernetes. If something goes wrong with this rule then your containers can’t connect to the external network. This isn’t that hard (it’s just a few iptables rules) but it is important. I made a [pull request][2] because I wanted to make sure this was resilient

* Something can go wrong with adding or deleting nodes. We’re using the flannel hostgw backend and at the time we started using it, node deletion [did not work][3].

* Your overlay network is probably dependent on a distributed database (etcd). If that database has an incident, this can cause issues. For example [https://github.com/coreos/flannel/issues/610][4] says that if you have data loss in your flannel etcd cluster it can result in containers losing network connectivity. (this has now been fixed)

* You upgrade Docker and everything breaks

* Probably more things!

I’m mostly talking about past issues in Flannel here but I promise I’m not picking on Flannel – I actually really **like** Flannel because I feel like it’s relatively simple (for instance the [vxlan backend part of it][12] is like 500 lines of code) and I feel like it’s possible for me to reason through any issues with it. And it’s obviously continuously improving. They’ve been great about reviewing pull requests.

My approach to operating an overlay network so far has been:
* Learn how it works in detail and how to debug it (for example the hostgw network backend for Flannel works by creating routes, so you mostly just need to do `sudo ip route list` to see whether it’s doing the correct thing)

* Maintain an internal build so it’s easy to patch it if needed

* When there are issues, contribute patches upstream

I think it’s actually really useful to go through the list of merged PRs and see bugs that have been fixed in the past – it’s a bit time consuming but is a great way to get a concrete list of kinds of issues other people have run into.

It’s possible that for other people their overlay networks just work but that hasn’t been my experience and I’ve heard other folks report similar issues. If you have an overlay network setup that is a) on AWS and b) works on a cluster of more than 50-100 nodes where you feel more confident about operating it I would like to know.

### Operating kube-proxy and kube-dns?

Now that we have some thoughts about operating overlay networks, let’s talk about operating kube-proxy and kube-dns.

There’s a question mark next to this one because I haven’t done this. Here I have more questions than answers.

Here’s how Kubernetes services work! A service is a collection of pods, which each have their own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6)

1. Every Kubernetes service gets an IP address (like 10.23.1.2)

2. `kube-dns` resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2)

3. `kube-proxy` sets up iptables rules in order to do random load balancing between them. Kube-proxy also has a userspace round-robin load balancer but my impression is that they don’t recommend using it.

So when you make a request to `my-svc.my-namespace.svc.cluster.local`, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random.

Some things that I can imagine going wrong with this:

* `kube-dns` is misconfigured

* `kube-proxy` dies and your iptables rules don’t get updated

* Some issue related to maintaining a large number of iptables rules

Let’s talk about the iptables rules a bit, since doing load balancing by creating a bajillion iptables rules is something I had never heard of before!

kube-proxy creates one iptables rule per target host like this: (these rules are from [this github issue][13])

```
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y
```
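Those `--probability` values aren't arbitrary: with N endpoints, rule i (counting from 0) fires with probability 1/(N-i), and the last rule matches unconditionally, which gives every endpoint a uniform 1/N share overall. Here is a small Python sketch checking that arithmetic (my own illustration of the statistics, not code from kube-proxy):

```python
from fractions import Fraction

def rule_probabilities(n):
    """Match probability for each of n rules; the last one (1/1) always matches."""
    return [Fraction(1, n - i) for i in range(n)]

def endpoint_shares(probs):
    """Chance each endpoint is selected, with rules tried top to bottom."""
    remaining = Fraction(1)  # probability that no earlier rule matched
    shares = []
    for p in probs:
        shares.append(remaining * p)
        remaining *= 1 - p
    return shares

probs = rule_probabilities(5)
print([float(p) for p in probs])  # 1/5, 1/4, 1/3, 1/2, 1 -- matching the rules above
print(endpoint_shares(probs))     # five equal shares of 1/5 each
```

That matches the five rules above for a service with five endpoints: 0.2, 0.25, 0.333…, 0.5, and one unconditional catch-all.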
So kube-proxy creates a **lot** of iptables rules. What does that mean? What are the implications of that for my network? There’s a great talk from Huawei called [Scale Kubernetes to Support 50,000 services][14] that says if you have 5,000 services in your kubernetes cluster, it takes **11 minutes** to add a new rule. If that happened to your real cluster I think it would be very bad.

I definitely don’t have 5,000 services in my cluster, but 5,000 isn’t such a big number. The proposal they give to solve this problem is to replace the iptables backend for kube-proxy with IPVS, which is a load balancer that lives in the Linux kernel.

It seems like kube-proxy is going in the direction of various Linux kernel based load balancers. I think this is partly because they support UDP load balancing, and other load balancers (like HAProxy) don’t support UDP load balancing.

But I feel comfortable with HAProxy! Is it possible to replace kube-proxy with HAProxy? I googled this and I found this [thread on kubernetes-sig-network][15] saying:

> kube-proxy is so awesome, we have used in production for almost a year, it works well most of time, but as we have more and more services in our cluster, we found it was getting hard to debug and maintain. There is no iptables expert in our team, we do have HAProxy&LVS experts, as we have used these for several years, so we decided to replace this distributed proxy with a centralized HAProxy. I think this maybe useful for some other people who are considering using HAProxy with kubernetes, so we just update this project and make it open source: [https://github.com/AdoHe/kube2haproxy][5]. If you found it’s useful , please take a look and give a try.

So that’s an interesting option! I definitely don’t have answers here, but, some thoughts:

* Load balancers are complicated

* DNS is also complicated

* If you already have a lot of experience operating one kind of load balancer (like HAProxy), it might make sense to do some extra work to use that instead of starting to use an entirely new kind of load balancer (like kube-proxy)

* I’ve been thinking about whether we want to be using kube-proxy or kube-dns at all – I think instead it might be better to just invest in Envoy and rely entirely on Envoy for all load balancing & service discovery. So then you just need to be good at operating Envoy.

As you can see my thoughts on how to operate your Kubernetes internal proxies are still pretty confused and I’m still not super experienced with them. It’s totally possible that kube-proxy and kube-dns are fine and that they will just work fine but I still find it helpful to think through what some of the implications of using them are (for example “you can’t have 5,000 Kubernetes services”).

### Ingress

If you’re running a Kubernetes cluster, it’s pretty likely that you actually need HTTP requests to get into your cluster. This blog post is already too long and I don’t know much about ingress yet so we’re not going to talk about that.

### Useful links

A couple of useful links, to summarize:

* [The Kubernetes networking model][6]

* How GKE networking works: [https://www.youtube.com/watch?v=y2bhV81MfKQ][7]

* The aforementioned talk on `kube-proxy` performance: [https://www.youtube.com/watch?v=4-pawkiazEg][8]

### I think networking operations is important

My sense of all this Kubernetes networking software is that it’s all still quite new and I’m not sure we (as a community) really know how to operate all of it well. This makes me worried as an operator because I really want my network to keep working! :) Also I feel like as an organization running your own Kubernetes cluster you need to make a pretty large investment into making sure you understand all the pieces so that you can fix things when they break. Which isn’t a bad thing, it’s just a thing.

My plan right now is just to keep learning about how things work and reduce the number of moving parts I need to worry about as much as possible.

As usual I hope this was helpful and I would very much like to know what I got wrong in this post!

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/

作者:[Julia Evans][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jvns.ca/about
[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/
[2]:https://github.com/coreos/flannel/pull/808
[3]:https://github.com/coreos/flannel/pull/803
[4]:https://github.com/coreos/flannel/issues/610
[5]:https://github.com/AdoHe/kube2haproxy
[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ
[8]:https://www.youtube.com/watch?v=4-pawkiazEg
[9]:https://jvns.ca/categories/kubernetes
[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md
[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan
[13]:https://github.com/kubernetes/kubernetes/issues/37932
[14]:https://www.youtube.com/watch?v=4-pawkiazEg
[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0
Translating by qhwdw
# [Google launches TensorFlow-based vision recognition kit for RPi Zero W][26]

Translating by qhwdw
Running a Python application on Kubernetes
============================================================

sources/tech/20180424 A gentle introduction to FreeDOS.md
A gentle introduction to FreeDOS
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos-fish-laptop-color.png?itok=vfv_Lpph)

FreeDOS is an old operating system, but it is new to many people. In 1994, several developers and I came together to [create FreeDOS][1]—a complete, free, DOS-compatible operating system you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.

In 1994, FreeDOS was immediately familiar to anyone who had used Microsoft's proprietary MS-DOS. And that was by design; FreeDOS intended to mimic MS-DOS as much as possible. As a result, DOS users in the 1990s were able to jump right into FreeDOS. But times have changed. Today, open source developers are more familiar with the Linux command line or they may prefer a graphical desktop like [GNOME][2], making the FreeDOS command line seem alien at first.

New users often ask, "I [installed FreeDOS][3], but how do I use it?" If you haven't used DOS before, the blinking `C:\>` DOS prompt can seem a little unfriendly. And maybe scary. This gentle introduction to FreeDOS should get you started. It offers just the basics: how to get around and how to look at files. If you want to learn more than what's offered here, visit the [FreeDOS wiki][4].

### The DOS prompt

First, let's look at the empty prompt and what it means.

![](https://opensource.com/sites/default/files/u128651/0-prompt.png)

DOS is a "disk operating system" created when personal computers ran from floppy disks. Even when computers supported hard drives, it was common in the 1980s and 1990s to switch frequently between the different drives. For example, you might make a backup copy of your most important files to a floppy disk.

DOS referenced each drive by a letter. Early PCs could have only two floppy drives, which were assigned as the `A:` and `B:` drives. The first partition on the first hard drive was the `C:` drive, and so on for other drives. The `C:` in the prompt means you are using the first partition on the first hard drive.

Starting with PC-DOS 2.0 in 1983, DOS also supported directories and subdirectories, much like the directories and subdirectories on Linux filesystems. But unlike Linux, DOS directory names are delimited by `\` instead of `/`. Putting that together with the drive letter, the `C:\` in the prompt means you are in the top, or "root," directory of the `C:` drive.

The `>` is the literal prompt where you type your DOS commands, like the `$` prompt on many Linux shells. The part before the `>` tells you the current working directory, and you type commands at the `>` prompt.

### Finding your way around in DOS

The basics of navigating through directories in DOS are very similar to the steps you'd use on the Linux command line. You need to remember only a few commands.

#### Displaying a directory

When you want to see the contents of the current directory, use the `DIR` command. Since DOS commands are not case-sensitive, you could also type `dir`. By default, DOS displays the details of every file and subdirectory, including the name, extension, size, and last modified date and time.

![](https://opensource.com/sites/default/files/u128651/1-dir.png)

If you don't want the extra details about individual file sizes, you can display a "wide" directory by using the `/w` option with the `DIR` command. Note that Linux uses the hyphen (`-`) or double-hyphen (`--`) to start command-line options, but DOS uses the slash character (`/`).

![](https://opensource.com/sites/default/files/u128651/2-dirw.png)

You can look inside a specific subdirectory by passing the pathname as a parameter to `DIR`. Again, another difference from Linux is that Linux files and directories are case-sensitive, but DOS names are case-insensitive. DOS will usually display files and directories in all uppercase, but you can equally reference them in lowercase.

![](https://opensource.com/sites/default/files/u128651/3-dir-fdos.png)

#### Changing the working directory

Once you can see the contents of a directory, you can "move into" any other directory. On DOS, you change your working directory with the `CHDIR` command, also abbreviated as `CD`. You can change into a subdirectory with a command like `CD CHOICE` or into a new path with `CD \FDOS\DOC\CHOICE`.

![](https://opensource.com/sites/default/files/u128651/5-dir-choice.png)

Just like on the Linux command line, DOS uses `.` to represent the current directory, and `..` for the parent directory (one level "up" from the current directory). You can combine these. For example, `CD ..` changes to the parent directory, and `CD ..\..` moves you two levels "up" from the current directory.

![](https://opensource.com/sites/default/files/u128651/11-cd.png)

FreeDOS also borrows a feature from Linux: You can use `CD -` to jump back to your previous working directory. That is handy after you change into a new path to do one thing and want to go back to your previous work.

#### Changing the working drive

Under Linux, the concept of a "drive" is hidden. In Linux and other Unix systems, you "mount" a drive to a directory path, such as `/backup`, or the system does it for you automatically, such as `/var/run/media/user/flashdrive`. But DOS is a much simpler system. With DOS, you must change the working drive by yourself.

Remember that DOS assigns the first partition on the first hard drive as the `C:` drive, and so on for other drive letters. On modern systems, people rarely divide a hard drive with multiple DOS partitions; they simply use the whole disk—or as much of it as they can assign to DOS. Today, `C:` is usually the first hard drive, and `D:` is usually another hard drive or the CD-ROM drive. Other network drives can be mapped to other letters, such as `E:` or `Z:` or however you want to organize them.

Changing drives is easy under DOS. Just type the drive letter followed by a colon (`:`) on the command line, and DOS will change to that working drive. For example, on my [QEMU][5] system, I set my `D:` drive to a shared directory in my Linux home directory, where I keep installers for various DOS applications and games I want to test.

![](https://opensource.com/sites/default/files/u128651/8-d-dirw.png)

Be careful that you don't try to change to a drive that doesn't exist. DOS may set the working drive, but if you try to do anything there you'll get the somewhat infamous "Abort, Retry, Fail" DOS error message.

![](https://opensource.com/sites/default/files/u128651/9-e-fail.png)

### Other things to try

With the `CD` and `DIR` commands, you have the basics of DOS navigation. These commands allow you to find your way around DOS directories and see what other subdirectories and files exist. Once you are comfortable with basic navigation, you might also try these other basic DOS commands:

  * `MKDIR` or `MD` to create new directories
  * `RMDIR` or `RD` to remove directories
  * `TREE` to view a list of directories and subdirectories in a tree-like format
  * `TYPE` and `MORE` to display file contents
  * `RENAME` or `REN` to rename files
  * `DEL` or `ERASE` to delete files
  * `EDIT` to edit files
  * `CLS` to clear the screen

If those aren't enough, you can find a list of [all DOS commands][6] on the FreeDOS wiki.

In FreeDOS, you can use the `/?` parameter to get brief instructions to use each command. For example, `EDIT /?` will show you the usage and options for the editor. Or you can type `HELP` to use an interactive help system.

Like any DOS, FreeDOS is meant to be a simple operating system. The DOS filesystem is pretty simple to navigate with only a few basic commands. So fire up a QEMU session, install FreeDOS, and experiment with the DOS command line. Maybe now it won't seem so scary.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/gentle-introduction-freedos

作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/jim-hall
[1]:https://opensource.com/article/17/10/freedos
[2]:https://opensource.com/article/17/8/gnome-20-anniversary
[3]:http://www.freedos.org/
[4]:http://wiki.freedos.org/
[5]:https://www.qemu.org/
[6]:http://wiki.freedos.org/wiki/index.php/Dos_commands
sources/tech/20180425 JavaScript Router.md

Translating by qhwdw
JavaScript Router
======
There are a lot of frameworks/libraries to build single page applications, but I wanted something more minimal. I’ve come up with a solution and I just wanted to share it 🙂
```
class Router {
  constructor() {
    this.routes = []
  }

  handle(pattern, handler) {
    this.routes.push({ pattern, handler })
  }

  exec(pathname) {
    for (const route of this.routes) {
      if (typeof route.pattern === 'string') {
        if (route.pattern === pathname) {
          return route.handler()
        }
      } else if (route.pattern instanceof RegExp) {
        const result = pathname.match(route.pattern)
        if (result !== null) {
          const params = result.slice(1).map(decodeURIComponent)
          return route.handler(...params)
        }
      }
    }
  }
}

const router = new Router()

router.handle('/', homePage)
router.handle(/^\/users\/([^\/]+)$/, userPage)
router.handle(/^\//, notFoundPage)

function homePage() {
  return 'home page'
}

function userPage(username) {
  return `${username}'s page`
}

function notFoundPage() {
  return 'not found page'
}

console.log(router.exec('/')) // home page
console.log(router.exec('/users/john')) // john's page
console.log(router.exec('/foo')) // not found page
```
To use it, you add handlers for a URL pattern. This pattern can be a simple string or a regular expression. Using a string will match exactly that, while a regular expression allows you to do fancy things like capturing parts of the URL, as seen with the user page, or matching any URL, as seen with the not-found page.

Let me explain what that `exec` method does. As I said, the URL pattern can be a string or a regular expression, so it first checks for a string. In case the pattern is equal to the given pathname, it returns the result of running the handler. If it is a regular expression, we match it against the given pathname. In case it matches, it returns the result of running the handler, passing it the captured parameters.

### Working Example

That example just logs to the console. Let's try to integrate it into a page and see something.
```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Router Demo</title>
    <link rel="shortcut icon" href="data:,">
    <script src="/main.js" type="module"></script>
</head>
<body>
    <header>
        <a href="/">Home</a>
        <a href="/users/john_doe">Profile</a>
    </header>
    <main></main>
</body>
</html>
```
This is the `index.html`. For single-page applications, you must do special work on the server side: all unknown paths should return this `index.html`. For development, I'm using an npm tool called [serve][1]. This tool serves static content. With the flag `-s`/`--single` you can serve single-page applications.

With [Node.js][2] and npm (which comes with Node) installed, run:
```
npm i -g serve
serve -s
```
That HTML file loads the script `main.js` as a module. It has a simple `<header>` and a `<main>` element in which we'll render the corresponding page.

Inside the `main.js` file:
```
const main = document.querySelector('main')
const result = router.exec(location.pathname)
main.innerHTML = result
```
We call `router.exec()` passing the current pathname and set the result as HTML in the main element.

If you go to localhost and play with it, you'll see that it works, but not as you'd expect from a SPA. Single-page applications shouldn't refresh when you click on links.

We'll have to attach event listeners to each anchor link click, prevent the default behavior, and do the correct rendering. Because a single-page application is something dynamic, you expect anchor links to be created on the fly, so to add the event listeners I'll use a technique called [event delegation][3].

I'll attach a click event listener to the whole document and check if that click was on an anchor link (or inside one).

In the `Router` class I'll have a method that registers a callback that will run every time we click on a link or a `popstate` event occurs. The `popstate` event is dispatched every time you use the browser's back or forward buttons.

For convenience, we'll pass the callback the result of that same `router.exec(location.pathname)` call.
```
class Router {
  // ...
  install(callback) {
    const execCallback = () => {
      callback(this.exec(location.pathname))
    }

    document.addEventListener('click', ev => {
      if (ev.defaultPrevented
        || ev.button !== 0
        || ev.ctrlKey
        || ev.shiftKey
        || ev.altKey
        || ev.metaKey) {
        return
      }

      const a = ev.target.closest('a')

      if (a === null
        || (a.target !== '' && a.target !== '_self')
        || a.hostname !== location.hostname) {
        return
      }

      ev.preventDefault()

      if (a.href !== location.href) {
        history.pushState(history.state, document.title, a.href)
        execCallback()
      }
    })

    addEventListener('popstate', execCallback)
    execCallback()
  }
}
```
For link clicks, besides calling the callback, we update the URL with `history.pushState()`.

We'll move the render we previously did in the main element into the install callback.
```
router.install(result => {
  main.innerHTML = result
})
```
#### DOM

The handlers you pass to the router don't need to return a `string`. If you need more power you can return actual DOM. For example:
```
const homeTmpl = document.createElement('template')
homeTmpl.innerHTML = `
  <div class="container">
    <h1>Home Page</h1>
  </div>
`

function homePage() {
  const page = homeTmpl.content.cloneNode(true)
  // You can do `page.querySelector()` here...
  return page
}
```
And now in the install callback you can check if the result is a `string` or a `Node`:
```
router.install(result => {
  if (typeof result === 'string') {
    main.innerHTML = result
  } else if (result instanceof Node) {
    main.innerHTML = ''
    main.appendChild(result)
  }
})
```
That covers the basic features. I wanted to share this because I'll use this router in upcoming blog posts.

I've published it as an [npm package][4].
--------------------------------------------------------------------------------

via: https://nicolasparada.netlify.com/posts/js-router/

作者:[Nicolás Parada][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://nicolasparada.netlify.com/
[1]:https://npm.im/serve
[2]:https://nodejs.org/
[3]:https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Events#Event_delegation
[4]:https://www.npmjs.com/package/@nicolasparada/router
@ -0,0 +1,58 @@
Everything old is new again: Microservices – DXC Blogs
======

![](https://csccommunity.files.wordpress.com/2018/05/old-building-with-modern-addition.jpg?w=610)

If I told you about a software architecture in which components of an application provided services to other components via a communications protocol over a network, you would say it was…

Well, it depends. If you got your start programming in the '90s, you'd say I just defined a [Service-Oriented Architecture (SOA)][1]. But if you're younger and cut your developer teeth on the cloud, you'd say: "Oh, you're talking about [microservices][2]."

You'd both be right. To really understand the differences, you need to dive deeper into these architectures.

In SOA, a service is a function that is well-defined, self-contained, and doesn't depend on the context or state of other services. There are two kinds of services: a service consumer, which requests a service from the other kind, a service provider. An SOA service can play both roles.

SOA services can exchange data with each other. Two or more services can also coordinate with each other. These services carry out basic jobs such as creating a user account, providing login functionality, or validating a payment.

SOA isn't so much about modularizing an application as it is about composing an application by integrating distributed, separately maintained and deployed components. These components run on servers.

Early versions of SOA used object-oriented protocols to communicate. For example, Microsoft's [Distributed Component Object Model (DCOM)][3] and [Object Request Brokers (ORBs)][4] based on the [Common Object Request Broker Architecture (CORBA)][5] specification.

Later versions used messaging services such as [Java Message Service (JMS)][6] or [Advanced Message Queuing Protocol (AMQP)][7]. These service connections are called Enterprise Service Buses (ESBs). Over these buses, data, almost always in eXtensible Markup Language (XML) format, is transmitted and received.

[Microservices][2] is an architectural style in which applications are made up of loosely coupled services or modules. It lends itself to the Continuous Integration/Continuous Deployment (CI/CD) model of developing large, complex applications. An application is the sum of its modules.

Each microservice provides an application programming interface (API) endpoint. These are connected by lightweight protocols such as [REpresentational State Transfer (REST)][8] or [gRPC][9]. Data tends to be represented in [JavaScript Object Notation (JSON)][10] or [Protobuf][11].
Both architectures stand as an alternative to the older, monolithic style of architecture in which applications are built as single, autonomous units. For example, in a client-server model, a typical Linux, Apache, MySQL, PHP/Python/Perl (LAMP) server-side application would handle HTTP requests, run sub-programs, and retrieve/update data in the underlying MySQL database. These are all tied closely together. When you change anything, you must build and deploy a new version.

With SOA, you may need to change several components, but never the entire application. With microservices, though, you can make changes one service at a time. With microservices, you're working with a truly decoupled architecture.

Microservices are also lighter than SOA. While SOA services are deployed to servers and virtual machines (VMs), microservices are deployed in containers. The protocols are lighter, too. This makes microservices more flexible than SOA. Hence, they work better with Agile shops.

So what does this mean? The long and short of it is that microservices are an SOA variation for container and cloud computing.

Old-style SOA isn't going away, but as we continue to move applications to containers, the microservice architecture will only grow more popular.
--------------------------------------------------------------------------------

via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/

作者:[Cloudy Weather][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
[2]:http://microservices.io/
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
[5]:http://www.corba.org/
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
[7]:https://www.amqp.org/
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
[9]:https://grpc.io/
[10]:https://www.json.org/
[11]:https://github.com/google/protobuf/
74
sources/tech/20180516 How Graphics Cards Work.md
Normal file
@ -0,0 +1,74 @@
How Graphics Cards Work
======

![AMD-Polaris][1]

Ever since 3dfx debuted the original Voodoo accelerator, no single piece of equipment in a PC has had as much impact on whether your machine could game as the humble graphics card. While other components absolutely matter, a top-end PC with 32GB of RAM, a $500 CPU, and PCIe-based storage will choke and die if asked to run modern AAA titles on a ten-year-old card at modern resolutions and detail levels. Graphics cards (also commonly referred to as GPUs, or graphics processing units) are critical to game performance and we cover them extensively. But we don't often dive into what makes a GPU tick and how the cards function.

By necessity, this will be a high-level overview of GPU functionality, covering information common to AMD, Nvidia, and Intel's integrated GPUs, as well as any discrete cards Intel might build in the future. It should also be common to the mobile GPUs built by Apple, Imagination Technologies, Qualcomm, ARM, and other vendors.

### Why Don't We Run Rendering With CPUs?

The first point I want to address is why we don't use CPUs for rendering workloads in gaming in the first place. The honest answer is that you can run rendering workloads directly on a CPU, at least in theory. Early 3D games that predate the widespread availability of graphics cards, like Ultima Underworld, ran entirely on the CPU. UU is a useful reference case for multiple reasons — it had a more advanced rendering engine than games like Doom, with full support for looking up and down, as well as then-advanced features like texture mapping. But this kind of support came at a heavy price — many people lacked a PC that could actually run the game.

![](https://www.extremetech.com/wp-content/uploads/2018/05/UU.jpg)

In the early days of 3D gaming, many titles like Half-Life and Quake II featured a software renderer to allow players without 3D accelerators to play. But the reason we dropped this option from modern titles is simple: CPUs are designed to be general-purpose microprocessors, which is another way of saying they lack the specialized hardware and capabilities that GPUs offer. A modern CPU could easily handle titles that tended to stutter when run in software 18 years ago, but no CPU on Earth could easily handle a modern AAA game from today if run in that mode. Not, at least, without some drastic changes to the scene, resolution, and various visual effects.

### What's a GPU?

A GPU is a device with a set of specific hardware capabilities that are intended to map well to the way that various 3D engines execute their code, including geometry setup and execution, texture mapping, memory access, and shaders. There's a relationship between the way 3D engines function and the way GPU designers build hardware. Some of you may remember that AMD's HD 5000 family used a VLIW5 architecture, while certain high-end GPUs in the HD 6000 family used a VLIW4 architecture. With GCN, AMD changed its approach to parallelism, in the name of extracting more useful performance per clock cycle.

![](https://www.extremetech.com/wp-content/uploads/2018/05/GPU-Evolution.jpg)

Nvidia first coined the term "GPU" with the launch of the original GeForce 256 and its support for performing hardware transform and lighting calculations on the GPU (this corresponded, roughly, to the launch of Microsoft's DirectX 7). Integrating specialized capabilities directly into hardware was a hallmark of early GPU technology. Many of those specialized technologies are still employed (in very different forms), because it's more power-efficient and faster to have dedicated on-chip resources for handling specific types of workloads than it is to attempt to handle all of the work in a single array of programmable cores.

There are a number of differences between GPU and CPU cores, but at a high level, you can think about them like this. CPUs are typically designed to execute single-threaded code as quickly and efficiently as possible. Features like SMT / Hyper-Threading improve on this, but we scale multi-threaded performance by stacking more high-efficiency single-threaded cores side by side. AMD's 32-core / 64-thread Epyc CPUs are the largest you can buy today. To put that in perspective, the lowest-end Pascal GPU from Nvidia has 384 cores. A "core" in GPU parlance refers to a much smaller unit of processing capability than in a typical CPU.

**Note:** You cannot compare or estimate relative gaming performance between AMD and Nvidia simply by comparing the number of GPU cores. Within the same GPU family (for example, Nvidia's GeForce GTX 10 series, or AMD's RX 4xx or 5xx family), a higher GPU core count means that GPU is more powerful than a lower-end card.

The reason you can't draw immediate conclusions about GPU performance between manufacturers or core families based solely on core counts is that different architectures are more or less efficient. Unlike CPUs, GPUs are designed to work in parallel. Both AMD and Nvidia structure their cards into blocks of computing resources. Nvidia calls these blocks an SM (Streaming Multiprocessor), while AMD refers to them as a Compute Unit.

![](https://www.extremetech.com/wp-content/uploads/2018/05/PascalSM.png)

Each block contains a group of cores, a scheduler, a register file, an instruction cache, texture and L1 cache, and texture mapping units. The SM / CU can be thought of as the smallest functional block of the GPU. It doesn't contain literally everything — video decode engines, render outputs required for actually drawing an image on-screen, and the memory interfaces used to communicate with onboard VRAM are all outside its purview — but when AMD refers to an APU as having 8 or 11 Vega Compute Units, this is the (equivalent) block of silicon they're talking about. And if you look at a block diagram of a GPU, any GPU, you'll notice that it's the SM/CU that's duplicated a dozen or more times in the image.

![](https://www.extremetech.com/wp-content/uploads/2016/11/Pascal-Diagram.jpg)

The higher the number of SM/CU units in a GPU, the more work it can perform in parallel per clock cycle. Rendering is a type of problem that's sometimes referred to as "embarrassingly parallel," meaning it has the potential to scale upwards extremely well as core counts increase.

When we discuss GPU designs, we often use a format that looks something like this: 4096:160:64. The GPU core count is the first number. The larger it is, the faster the GPU, provided we're comparing within the same family (GTX 970 versus GTX 980 versus GTX 980 Ti, RX 560 versus RX 580, and so on).

### Texture Mapping and Render Outputs

There are two other major components of a GPU: texture mapping units and render outputs. The number of texture mapping units in a design dictates its maximum texel output and how quickly it can address and map textures onto objects. Early 3D games used very little texturing, because the job of drawing 3D polygonal shapes was difficult enough. Textures aren't actually required for 3D gaming, though the list of games that don't use them in the modern age is extremely small.

The number of texture mapping units in a GPU is signified by the second figure in the 4096:160:64 metric. AMD, Nvidia, and Intel typically shift these numbers equivalently as they scale a GPU family up and down. In other words, you won't really find a scenario where one GPU has a 4096:160:64 configuration while a GPU above or below it in the stack is a 4096:320:64 configuration. Texture mapping can absolutely be a bottleneck in games, but the next-highest GPU in the product stack will typically offer at least more GPU cores and texture mapping units (whether higher-end cards have more ROPs depends on the GPU family and the card configuration).

Render outputs (also sometimes called raster operations pipelines) are where the GPU's output is assembled into an image for display on a monitor or television. The number of render outputs multiplied by the clock speed of the GPU controls the pixel fill rate. A higher number of ROPs means that more pixels can be output simultaneously. ROPs also handle antialiasing, and enabling AA — especially supersampled AA — can result in a game that's fill-rate limited.
### Memory Bandwidth, Memory Capacity

The last components we'll discuss are memory bandwidth and memory capacity. Memory bandwidth refers to how much data can be copied to and from the GPU's dedicated VRAM buffer per second. Many advanced visual effects (and higher resolutions more generally) require more memory bandwidth to run at reasonable frame rates because they increase the total amount of data being copied into and out of the GPU core.

In some cases, a lack of memory bandwidth can be a substantial bottleneck for a GPU. AMD's APUs like the Ryzen 5 2400G are heavily bandwidth-limited, which means increasing your DDR4 clock rate can have a substantial impact on overall performance. The choice of game engine can also have a substantial impact on how much memory bandwidth a GPU needs to avoid this problem, as can a game's target resolution.

The total amount of on-board memory is another critical factor in GPUs. If the amount of VRAM needed to run at a given detail level or resolution exceeds available resources, the game will often still run, but it'll have to use the CPU's main memory for storing additional texture data — and it takes the GPU vastly longer to pull data out of DRAM as opposed to its onboard pool of dedicated VRAM. This leads to massive stuttering as the game staggers between pulling data from a quick pool of local memory and general system RAM.

One thing to be aware of is that GPU manufacturers will sometimes equip a low-end or midrange card with more VRAM than is otherwise standard as a way to charge a bit more for the product. We can't make an absolute prediction as to whether this makes the GPU more attractive because, honestly, the results vary depending on the GPU in question. What we can tell you is that in many cases, it isn't worth paying more for a card if the only difference is a larger RAM buffer. As a rule of thumb, lower-end GPUs tend to run into other bottlenecks before they're choked by limited available memory. When in doubt, check reviews of the card and look for comparisons of whether the 2GB version is outperformed by the 4GB flavor, or whatever the relevant amounts of RAM would be. More often than not, assuming all else is equal between the two solutions, you'll find the higher RAM loadout not worth paying for.

Check out our [ExtremeTech Explains][2] series for more in-depth coverage of today's hottest tech topics.
--------------------------------------------------------------------------------

via: https://www.extremetech.com/gaming/269335-how-graphics-cards-work

作者:[Joel Hruska][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.extremetech.com/author/jhruska
[1]:https://www.extremetech.com/wp-content/uploads/2016/07/AMD-Polaris-640x353.jpg
[2]:http://www.extremetech.com/tag/extremetech-explains
@ -0,0 +1,355 @@
How the Go runtime implements maps efficiently (without generics)
============================================================

This post discusses how maps are implemented in Go. It is based on a presentation I gave at the [GoCon Spring 2018][7] conference in Tokyo, Japan.

# What is a map function?

To understand how a map works, let's first talk about the idea of the _map function_. A map function maps one value to another. Given one value, called a _key_, it will return a second, the _value_.

```
map(key) → value
```

Now, a map isn't going to be very useful unless we can put some data in it. We'll need a function that adds data to the map

```
insert(map, key, value)
```

and a function that removes data from the map

```
delete(map, key)
```

There are other interesting properties of map implementations, like querying whether a key is present in the map, but they're outside the scope of what we're going to discuss today. Instead we're just going to focus on these properties of a map: insertion, deletion, and mapping keys to values.
# Go’s map is a hashmap

The specific map implementation I'm going to talk about is the _hashmap_, because this is the implementation that the Go runtime uses. A hashmap is a classic data structure offering O(1) lookups on average and O(n) in the worst case. That is, when things are working well, the time to execute the map function is near constant.

The size of this constant is part of the hashmap design, and the point at which the map moves from O(1) to O(n) access time is determined by its _hash function_.

### The hash function

What is a hash function? A hash function takes a key of an unknown length and returns a value with a fixed length.

```
hash(key) → integer
```

This _hash value_ is almost always an integer, for reasons that we'll see in a moment.

Hash and map functions are similar. They both take a key and return a value. However, in the case of the former, it returns a value _derived_ from the key, not the value _associated_ with the key.

### Important properties of a hash function

It's important to talk about the properties of a good hash function, as the quality of the hash function determines how likely the map function is to run near O(1).

When used with a hashmap, hash functions have two important properties. The first is _stability_. The hash function must be stable. Given the same key, your hash function must return the same answer. If it doesn't, you will not be able to find the things you put into the map.

The second property is _good distribution_. Given two nearly identical keys, the result should be wildly different. This is important for two reasons. Firstly, as we'll see, values in a hashmap should be distributed evenly across buckets, otherwise the access time is not O(1). Secondly, as the user can control some aspects of the input to the hash function, they may be able to control the output of the hash function, leading to poor distribution, which has been a DDoS vector for some languages. This property is also known as _collision resistance_.
### The hashmap data structure

The second part of a hashmap is the way data is stored.

![](https://dave.cheney.net/wp-content/uploads/2018/05/Gocon-2018-Maps.021-300x169.png)

The classical hashmap is an array of _buckets_, each of which contains a pointer to an array of key/value entries. In this case our hashmap has eight buckets (as this is the value that the Go implementation uses) and each bucket can hold up to eight entries (again drawn from the Go implementation). Using powers of two allows the use of cheap bit masks and shifts rather than expensive division.
As entries are added to a map, assuming a good hash function distribution, the buckets will fill at roughly the same rate. Once the number of entries across all buckets passes some percentage of their total size, known as the _load factor_, the map will grow by doubling the number of buckets and redistributing the entries across them.

With this data structure in mind, if we had a map of project names to GitHub stars, how would we go about inserting a value into the map?

![](https://dave.cheney.net/wp-content/uploads/2018/05/Screen-Shot-2018-05-20-at-20.25.36-300x169.png)

We start with the key, feed it through our hash function, then mask off the bottom few bits to get the correct offset into our bucket array. This is the bucket that will hold all the entries whose hash ends in three (011 in binary). Finally, we walk down the list of entries in the bucket until we find a free slot, and we insert our key and value there. If the key was already present, we'd just overwrite the value.

![](https://dave.cheney.net/wp-content/uploads/2018/05/Screen-Shot-2018-05-20-at-20.25.44-300x169.png)

Now, let's use the same diagram to look up a value in our map. The process is similar. We hash the key as before, then mask off the lower 3 bits, as our bucket array contains 8 entries, to navigate to the fifth bucket (101 in binary). If our hash function is correct, then the string `"moby/moby"` will always hash to the same value, so we know that the key will not be in any other bucket. Now it's a case of a linear search through the bucket, comparing the key provided with the one stored in each entry.

### Four properties of a hash map

That was a very high-level explanation of the classical hashmap. We've seen there are four things you need to implement a hashmap:

1. You need a hash function for the key.
2. You need an equality function to compare keys.
3. You need to know the size of the key, and
4. You need to know the size of the value, because these affect the size of the bucket structure, which the compiler needs to know, as you walk or insert into that structure, how far to advance in memory.
# Hashmaps in other languages
|
||||
|
||||
Before we talk about the way Go implements a hashmap, I wanted to give a brief overview of how two popular languages implement hashmaps. I’ve chosen these languages as both offer a single map type that works across a variety of key and values.
|
||||
|
||||
### C++

The first language we'll discuss is C++. The C++ Standard Template Library (STL) provides `std::unordered_map` which is usually implemented as a hashmap.

This is the declaration for `std::unordered_map`. It's a template, so the actual values of the parameters depend on how the template is instantiated.
```
template<
    class Key,                                   // the type of the key
    class T,                                     // the type of the value
    class Hash = std::hash<Key>,                 // the hash function
    class KeyEqual = std::equal_to<Key>,         // the key equality function
    class Allocator = std::allocator<std::pair<const Key, T>>
> class unordered_map;
```
There is a lot here, but the important things to take away are:

* The template takes the types of the key and value as parameters, so it knows their sizes.
* The template takes a `std::hash` function specialised on the key type, so it knows how to hash a key passed to it.
* And the template takes a `std::equal_to` function, also specialised on the key type, so it knows how to compare two keys.

Now that we know how the four properties of a hashmap are communicated to the compiler in C++'s `std::unordered_map`, let's look at how they work in practice.
![](https://dave.cheney.net/wp-content/uploads/2018/05/Gocon-2018-Maps.030-300x169.png)

First we take the key, pass it to the `std::hash` function to obtain the hash value of the key. We mask and index into the bucket array, then walk the entries in that bucket comparing the keys using the `std::equal_to` function.
### Java

The second language we'll discuss is Java. In Java the hashmap type is called, unsurprisingly, `java.util.HashMap`.

In Java, the `java.util.HashMap` type can only operate on objects, which is fine because in Java almost everything is a subclass of `java.lang.Object`. As every object in Java descends from `java.lang.Object`, it inherits, or overrides, a `hashCode` and an `equals` method.

However, you cannot directly store the eight primitive types (`boolean`, `int`, `short`, `long`, `byte`, `char`, `float`, and `double`), because they are not subclasses of `java.lang.Object`. You cannot use them as a key, and you cannot store them as a value. To work around this limitation, those types are silently converted into objects representing their primitive values. This is known as _boxing_.

Putting this limitation to one side for the moment, let's look at how a lookup in Java's hashmap would operate.
![](https://dave.cheney.net/wp-content/uploads/2018/05/Gocon-2018-Maps.034-300x169.png)

First we take the key and call its `hashCode` method to obtain the hash value of the key. We mask and index into the bucket array, which in Java is a pointer to an `Entry`, which holds a key and value, and a pointer to the next `Entry` in the bucket, forming a linked list of entries.
# Tradeoffs

Now that we've seen how C++ and Java implement a hashmap, let's compare their relative advantages and disadvantages.

### C++ templated `std::unordered_map`

### Advantages

* The sizes of the key and value types are known at compile time.
* Data structures are always exactly the right size; no need for boxing or indirection.
* As code is specialised at compile time, other compile-time optimisations like inlining, constant folding, and dead code elimination can come into play.

In a word, maps in C++ _can be_ as fast as hand-writing a custom map for each key/value combination, because that is what is happening.

### Disadvantages

* Code bloat. Each different map is a different type. For N map types in your source, you will have N copies of the map code in your binary.
* Compile-time bloat. Due to the way header files and templates work, for each file that mentions a `std::unordered_map` the source code for that implementation has to be generated, compiled, and optimised.
### Java `java.util.HashMap`

### Advantages

* One implementation of a map that works for any subclass of `java.lang.Object`. Only one copy of `java.util.HashMap` is compiled, and it's referenced from every single class.

### Disadvantages

* Everything must be an object, even things which are not objects. This means maps of primitive values must be converted to objects via boxing. This adds GC pressure for wrapper objects, and cache pressure because of additional pointer indirections (each object is effectively another pointer lookup).
* Buckets are stored as linked lists, not sequential arrays. This leads to lots of pointer chasing while comparing objects.
* Hash and equality functions are left as an exercise to the author of the class. Incorrect hash and equals functions can slow down maps using those types or, worse, fail to implement the map behaviour.
# Go's hashmap implementation

Now, let's talk about how the hashmap implementation in Go allows us to retain many of the benefits of the best map implementations we've seen, without paying for their disadvantages.

Just like C++ and just like Java, Go's hashmap is written _in Go_. But Go does not provide generic types, so how can we write a hashmap that works for (almost) any type in Go?

### Does the Go runtime use `interface{}`?

No, the Go runtime does not use `interface{}` to implement its hashmap. While we have the `container/{list,heap}` packages, which do use the empty interface, the runtime's map implementation does not use `interface{}`.
### Does the compiler use code generation?

No, there is only one copy of the map implementation in a Go binary. There is only one map implementation, and, unlike Java, it doesn't use `interface{}` boxing. So, how does it work?

There are two parts to the answer, and they both involve co-operation between the compiler and the runtime.

### Compile-time rewriting

The first part of the answer is to understand that map lookups, insertions, and removals are implemented in the runtime package. During compilation, map operations are rewritten into calls to the runtime. For example:
```
v := m["key"]     → runtime.mapaccess1(m, "key", &v)
v, ok := m["key"] → runtime.mapaccess2(m, "key", &v, &ok)
m["key"] = 9001   → runtime.mapinsert(m, "key", 9001)
delete(m, "key")  → runtime.mapdelete(m, "key")
```
It's also useful to note that the same thing happens with channels, but not with slices.

The reason for this is that channels are complicated data types. Send, receive, and select have complex interactions with the scheduler, so that's delegated to the runtime. By comparison, slices are much simpler data structures, so the compiler natively handles operations like slice access, `len`, and `cap`, while deferring complicated cases in `copy` and `append` to the runtime.
### Only one copy of the map code

Now we know that the compiler rewrites map operations into calls to the runtime. We also know that inside the runtime, because this is Go, there is only one function called `mapaccess1`, one function called `mapaccess2`, and so on.

So, how can the compiler rewrite this
```
v := m["key"]
```

into this

```
runtime.mapaccess1(m, "key", &v)
```
without using something like `interface{}`? The easiest way to explain how map types work in Go is to show you the actual signature of `runtime.mapaccess1`.

```
func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
```
Let's walk through the parameters.

* `key` is a pointer to the key; this is the value you provided as the key.
* `h` is a pointer to a `runtime.hmap` structure. `hmap` is the runtime's hashmap structure that holds the buckets and other housekeeping values.[1][1]
* `t` is a pointer to a `maptype`, which is odd.

Why do we need a `*maptype` if we already have a `*hmap`? `*maptype` is the special sauce that makes the generic `*hmap` work for (almost) any combination of key and value types. There is a `maptype` value for each unique map declaration in your program. There will be one that describes maps from `string`s to `int`s, one from `string`s to `http.Header`s, and so on.
Rather than having, as C++ has, a complete map _implementation_ for each unique map declaration, the Go compiler creates a `maptype` during compilation and uses that value when calling into the runtime's map functions.
```
type maptype struct {
        typ           _type
        key           *_type
        elem          *_type
        bucket        *_type // internal type representing a hash bucket
        hmap          *_type // internal type representing a hmap
        keysize       uint8  // size of key slot
        indirectkey   bool   // store ptr to key instead of key itself
        valuesize     uint8  // size of value slot
        indirectvalue bool   // store ptr to value instead of value itself
        bucketsize    uint16 // size of bucket
        reflexivekey  bool   // true if k==k for all keys
        needkeyupdate bool   // true if we need to update key on overwrite
}
```
Each `maptype` contains details about the properties of this kind of map, from key to elem. `maptype.key`, for example, contains information about the key we were passed a pointer to. We call these _type descriptors_.
```
type _type struct {
        size       uintptr
        ptrdata    uintptr // size of memory prefix holding all pointers
        hash       uint32
        tflag      tflag
        align      uint8
        fieldalign uint8
        kind       uint8
        alg        *typeAlg
        // gcdata stores the GC type data for the garbage collector.
        // If the KindGCProg bit is set in kind, gcdata is a GC program.
        // Otherwise it is a ptrmask bitmap. See mbitmap.go for details.
        gcdata    *byte
        str       nameOff
        ptrToThis typeOff
}
```
In the `_type` type, we have things like its size, which is important because we only have a pointer to the key value and need to know how large it is, and its kind: is it an integer, is it a struct, and so on. We also need to know how to compare values of this type and how to hash values of this type, and that is what the `_type.alg` field is for.
```
type typeAlg struct {
        // function for hashing objects of this type
        // (ptr to object, seed) -> hash
        hash func(unsafe.Pointer, uintptr) uintptr
        // function for comparing objects of this type
        // (ptr to object A, ptr to object B) -> ==?
        equal func(unsafe.Pointer, unsafe.Pointer) bool
}
```
There is one `typeAlg` value for each _type_ in your Go program.

Putting it all together, here is the (slightly edited for clarity) `runtime.mapaccess1` function.
```
// mapaccess1 returns a pointer to h[key]. Never returns nil, instead
// it will return a reference to the zero object for the value type if
// the key is not in the map.
func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer {
        if h == nil || h.count == 0 {
                return unsafe.Pointer(&zeroVal[0])
        }
        alg := t.key.alg
        hash := alg.hash(key, uintptr(h.hash0))
        m := bucketMask(h.B)
        b := (*bmap)(add(h.buckets, (hash&m)*uintptr(t.bucketsize)))
        // ...
```
One thing to note is the `h.hash0` parameter passed into `alg.hash`. `h.hash0` is a random seed generated when the map is created. It is how the Go runtime avoids hash collision attacks.

Anyone can read the Go source code, so they could come up with a set of values which, using the hash algorithm that Go uses, all hash to the same bucket. The seed value adds an amount of randomness to the hash function, providing some protection against collision attacks.
# Conclusion

I was inspired to give this presentation at GoCon because Go's map implementation is a delightful compromise between C++'s and Java's, taking most of the good without having to accommodate most of the bad.

Unlike Java, you can use scalar values like characters and integers without the overhead of boxing. Unlike C++, instead of _N_ `runtime.hashmap` implementations in the final binary, there are only _N_ `runtime.maptype` values, a substantial saving in program space and compile time.

Now, I want to be clear that I am not trying to tell you that Go should not have generics. My goal today was to describe the situation we have today in Go 1 and how the map type in Go works under the hood. The Go map implementation we have today is very fast and provides most of the benefits of templated types, without the downsides of code generation and compile-time bloat.

I see this as a case study in design that deserves recognition.
1. You can read more about the `runtime.hmap` structure here: <https://dave.cheney.net/2017/04/30/if-a-map-isnt-a-reference-variable-what-is-it>
### Related Posts:

1. [Are Go maps sensitive to data races?][2]
2. [Should Go 2.0 support generics?][3]
3. [Introducing gmx, runtime instrumentation for Go applications][4]
4. [If a map isn't a reference variable, what is it?][5]
--------------------------------------------------------------------------------

via: https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics

Author: [Dave Cheney][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://dave.cheney.net/
[1]:https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics#easy-footnote-bottom-1-3224
[2]:https://dave.cheney.net/2015/12/07/are-go-maps-sensitive-to-data-races
[3]:https://dave.cheney.net/2017/07/22/should-go-2-0-support-generics
[4]:https://dave.cheney.net/2012/02/05/introducing-gmx-runtime-instrumentation-for-go-applications
[5]:https://dave.cheney.net/2017/04/30/if-a-map-isnt-a-reference-variable-what-is-it
[6]:https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics#easy-footnote-1-3224
[7]:https://gocon.connpass.com/event/82515/
[8]:https://dave.cheney.net/category/golang
[9]:https://dave.cheney.net/category/programming-2
[10]:https://dave.cheney.net/tag/generics
[11]:https://dave.cheney.net/tag/hashmap
[12]:https://dave.cheney.net/tag/maps
[13]:https://dave.cheney.net/tag/runtime
[14]:https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics
[15]:https://dave.cheney.net/2018/01/16/containers-versus-operating-systems
3 Python command-line tools
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-tool-box.png?itok=NrJYb417)

This article was co-written with [Lacey Williams Henschel][1].

Sometimes the right tool for the job is a command-line application. A command-line application is a program that you interact with and run from something like your shell or Terminal. [Git][2] and [Curl][3] are examples of command-line applications that you might already be familiar with.

Command-line apps are useful when you have a bit of code you want to run several times in a row or on a regular basis. Django developers run commands like `./manage.py runserver` to start their web servers; Docker developers run `docker-compose up` to spin up their containers. The reasons you might want to write a command-line app are as varied as the reasons you might want to write code in the first place.

For this month's Python column, we have three libraries to recommend to Pythonistas looking to write their own command-line tools.
### Click

[Click][4] is our favorite Python package for command-line applications. It:

* Has great documentation filled with examples
* Includes instructions on packaging your app as a Python application so it's easier to run
* Automatically generates useful help text
* Lets you stack optional and required arguments and even [several commands][5]
* Has a Django version ([`django-click`][6]) for writing management commands
Click uses its `@click.command()` decorator to declare a function as a command and specify required or optional arguments.

```
# hello.py
import click


@click.command()
@click.option('--name', default='', help='Your name')
def say_hello(name):
    click.echo("Hello {}!".format(name))


if __name__ == '__main__':
    say_hello()
```
The `@click.option()` decorator declares an [optional argument][7], and the `@click.argument()` decorator declares a [required argument][8]. You can combine optional and required arguments by stacking the decorators. The `echo()` method prints results to the console.

```
$ python hello.py --name='Lacey'
Hello Lacey!
```
### Docopt

[Docopt][9] is a command-line application parser, sort of like Markdown for your command-line apps. If you like writing the documentation for your apps as you go, Docopt has by far the best-formatted help text of the options in this article. It isn't our favorite command-line app library because its documentation throws you into the deep end right away, which makes it a little more difficult to get started. Still, it's a lightweight library that is very popular, especially if exceptionally nice documentation is important to you.

Docopt is very particular about how you format the required docstring at the top of your file. The top element in your docstring after the name of your tool must be "Usage," and it should list the ways you expect your command to be called (e.g., by itself, with arguments, etc.). Usage should include `help` and `version` flags.

The second element in your docstring should be "Options," and it should provide more information about the options and arguments you identified in "Usage." The content of your docstring becomes the content of your help text.
```
"""HELLO CLI

Usage:
    hello.py
    hello.py <name>
    hello.py -h|--help
    hello.py -v|--version

Options:
    <name>        Optional name argument.
    -h --help     Show this screen.
    -v --version  Show version.
"""

from docopt import docopt


def say_hello(name):
    return("Hello {}!".format(name))


if __name__ == '__main__':
    arguments = docopt(__doc__, version='DEMO 1.0')
    if arguments['<name>']:
        print(say_hello(arguments['<name>']))
    else:
        print(arguments)
```
At its most basic level, Docopt is designed to return your arguments to the console as key-value pairs. If I call the above command without specifying a name, I get a dictionary back:

```
$ python hello.py
{'--help': False,
 '--version': False,
 '<name>': None}
```
This shows me I did not input the `help` or `version` flags, and the `name` argument is `None`.

But if I call it with a name, the `say_hello` function will execute.

```
$ python hello.py Jeff
Hello Jeff!
```
Docopt allows both required and optional arguments and has different syntax conventions for each. Required arguments should be represented in `ALLCAPS` or in `<carets>`, and options should be represented with double or single dashes, like `--name`. Read more about Docopt's [patterns][10] in the docs.
### Fire

[Fire][11] is a Google library for writing command-line apps. We especially like it when your command needs to take more complicated arguments or deal with Python objects, as it tries to handle parsing your argument types intelligently.

Fire's [docs][12] include a ton of examples, but I wish the docs were a bit better organized. Fire can handle [multiple commands in one file][13], commands as methods on [objects][14], and [grouping][15] commands.

Its weakness is the documentation it makes available to the console. Docstrings on your commands don't appear in the help text, and the help text doesn't necessarily identify arguments.
```
import fire


def say_hello(name=''):
    return 'Hello {}!'.format(name)


if __name__ == '__main__':
    fire.Fire()
```
Arguments are made required or optional depending on whether you specify a default value for them in your function or method definition. To call this command, you must specify the filename and the function name, more like Click's syntax:

```
$ python hello.py say_hello Rikki
Hello Rikki!
```

You can also pass arguments as flags, like `--name=Rikki`.
### Bonus: Packaging!

Click includes instructions (and highly recommends you follow them) for [packaging][16] your commands using `setuptools`.

To package our first example, add this content to your `setup.py` file:
```
from setuptools import setup


setup(
    name='hello',
    version='0.1',
    py_modules=['hello'],
    install_requires=[
        'Click',
    ],
    entry_points='''
        [console_scripts]
        hello=hello:say_hello
    ''',
)
```
Everywhere you see `hello`, substitute the name of your module but omit the `.py` extension. Where you see `say_hello`, substitute the name of your function.

Then, run `pip install --editable .` (note the trailing dot for the current directory) to make your command available to the command line.

You can now call your command like this:

```
$ hello --name='Jeff'
Hello Jeff!
```

By packaging your command, you omit the extra step in the console of having to type `python hello.py --name='Jeff'` and save yourself several keystrokes. These instructions will probably also work for the other libraries we mentioned.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/5/3-python-command-line-tools

Author: [Jeff Triplett][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/laceynwilliams
[1]:https://opensource.com/users/laceynwilliams
[2]:https://git-scm.com/
[3]:https://curl.haxx.se/
[4]:http://click.pocoo.org/5/
[5]:http://click.pocoo.org/5/commands/
[6]:https://github.com/GaretJax/django-click
[7]:http://click.pocoo.org/5/options/
[8]:http://click.pocoo.org/5/arguments/
[9]:http://docopt.org/
[10]:https://github.com/docopt/docopt#usage-pattern-format
[11]:https://github.com/google/python-fire
[12]:https://github.com/google/python-fire/blob/master/docs/guide.md
[13]:https://github.com/google/python-fire/blob/master/docs/guide.md#exposing-multiple-commands
[14]:https://github.com/google/python-fire/blob/master/docs/guide.md#version-3-firefireobject
[15]:https://github.com/google/python-fire/blob/master/docs/guide.md#grouping-commands
[16]:http://click.pocoo.org/5/setuptools/
6 Open Source AI Tools to Know
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1920.jpg?itok=HarDnwVX)

In open source, no matter how original your own idea seems, it is always wise to see if someone else has already executed the concept. For organizations and individuals interested in leveraging the growing power of artificial intelligence (AI), many of the best tools are not only free and open source, but, in many cases, have already been hardened and tested.

At leading companies and non-profit organizations, AI is a huge priority, and many of these companies and organizations are open sourcing valuable tools. Here is a sampling of free, open source AI tools available to anyone.
**Acumos.** [Acumos AI][1] is a platform and open source framework that makes it easy to build, share, and deploy AI apps. It standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies rather than endlessly customizing, modeling, and training an AI implementation.

Acumos is part of the [LF Deep Learning Foundation][2], an organization within The Linux Foundation that supports open source innovation in artificial intelligence, machine learning, and deep learning. The goal is to make these critical new technologies available to developers and data scientists, including those who may have limited experience with deep learning and AI. The LF Deep Learning Foundation just [recently approved a project lifecycle and contribution process][3] and is now accepting proposals for the contribution of projects.

**Facebook's Framework.** Facebook [has open sourced][4] its central machine learning system designed for artificial intelligence tasks at large scale, along with a series of other AI technologies. The tools are part of a proven platform in use at the company. Facebook has also open sourced a framework for deep learning and AI [called Caffe2][5].

**Speaking of Caffe.** Yahoo also released its key AI software under an open source license. The [CaffeOnSpark tool][6] is based on deep learning, a branch of artificial intelligence particularly useful in helping machines recognize human speech or the contents of a photo or video. Similarly, IBM's machine learning program known as [SystemML][7] is freely available to share and modify through the Apache Software Foundation.

**Google's Tools.** Google spent years developing its [TensorFlow][8] software framework to support its AI software and other predictive and analytics programs. TensorFlow is the engine behind several Google tools you may already use, including Google Photos and the speech recognition found in the Google app.

Two [AIY kits][9] open sourced by Google let individuals easily get hands-on with artificial intelligence. Focused on computer vision and voice assistants, the two kits come as small self-assembly cardboard boxes with all the components needed for use. The kits are currently available at Target in the United States, and are based on the open source Raspberry Pi platform, more evidence of how much is happening at the intersection of open source and AI.

**H2O.ai.** I [previously covered][10] H2O.ai, which has carved out a niche in the machine learning and artificial intelligence arena because its primary tools are free and open source. You can get the main H2O platform and Sparkling Water, which works with Apache Spark, simply by [downloading][11] them. These tools operate under the Apache 2.0 license, one of the most flexible open source licenses available, and you can even run them on clusters powered by Amazon Web Services (AWS) and others for just a few hundred dollars.

**Microsoft Onboard.** "Our goal is to democratize AI to empower every person and every organization to achieve more," Microsoft CEO Satya Nadella [has said][12]. With that in mind, Microsoft is continuing to iterate on its [Microsoft Cognitive Toolkit][13]. It's an open source software framework that competes with tools such as TensorFlow and Caffe. Cognitive Toolkit works with both Windows and Linux on 64-bit platforms.

"Cognitive Toolkit enables enterprise-ready, production-grade AI by allowing users to create, train, and evaluate their own neural networks that can then scale efficiently across multiple GPUs and multiple machines on massive data sets," reports the Cognitive Toolkit team.

Learn more about AI in this new ebook from The Linux Foundation. [Open Source AI: Projects, Insights, and Trends by Ibrahim Haddad][14] surveys 16 popular open source AI projects, looking in depth at their histories, codebases, and GitHub contributions. [Download the free ebook now.][14]
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/6-open-source-ai-tools-know

Author: [Sam Dean][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.acumos.org/
[2]:https://www.linuxfoundation.org/projects/deep-learning/
[3]:https://www.linuxfoundation.org/blog/lf-deep-learning-foundation-announces-project-contribution-process/
[4]:https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/
[5]:https://venturebeat.com/2017/04/18/facebook-open-sources-caffe2-a-new-deep-learning-framework/
[6]:http://yahoohadoop.tumblr.com/post/139916563586/caffeonspark-open-sourced-for-distributed-deep
[7]:https://systemml.apache.org/
[8]:https://www.tensorflow.org/
[9]:https://www.techradar.com/news/google-assistant-sweetens-raspberry-pi-with-ai-voice-control
[10]:https://www.linux.com/news/sparkling-water-bridging-open-source-machine-learning-and-apache-spark
[11]:http://www.h2o.ai/download
[12]:https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/02/10/microsoft-cognitive-toolkit-cntk/
[13]:https://www.microsoft.com/en-us/cognitive-toolkit/
[14]:https://www.linuxfoundation.org/publications/open-source-ai-projects-insights-and-trends/
translating---geekpi

3 journaling applications for the Linux desktop
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)

Keeping a journal, even irregularly, can have many benefits. It's not only therapeutic and cathartic, it's also a good record of where you are and where you've been. It can help show your progress in life and remind you of what you've done right and what you've done wrong.

No matter what your reasons are for keeping a journal or a diary, there are a variety of ways in which to do that. You could go old school and use pen and paper. You could use a web-based application. Or you could turn to the [humble text file][1].

Another option is to use a dedicated journaling application. There are several very flexible and very useful journaling tools for the Linux desktop. Let's take a look at three of them.
### RedNotebook
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/red-notebook.png)
|
||||
|
||||
Of the three journaling applications described here, [RedNotebook][2] is the most flexible. Much of that flexibility comes from its templates. Those templates let you record personal thoughts or meeting minutes, plan a journey, or log a phone call. You can also edit existing templates or create your own.
|
||||
|
||||
You format your journal entries using markup that's very much like Markdown. You can also add tags to your journal entries to make them easier to find. Just click or type a tag in the left pane of the application, and a list of corresponding journal entries appears in the right pane.
|
||||
|
||||
On top of that, you can export all or some or just one of your journal entries to plain text, HTML, LaTeX, or PDF. Before you do that, you can get an idea of how an entry will look as a PDF or HTML file by clicking the Preview button on the toolbar.
|
||||
|
||||
Overall, RedNotebook is an easy to use, yet flexible application. It does take a bit of getting used to, but once you do, it's a useful tool.
|
||||
|
||||
### Lifeograph
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/lifeograph.png)
|
||||
|
||||
[Lifeograph][3] has a similar look and feel to RedNotebook. It doesn't have as many features, but Lifeograph gets the job done.
|
||||
|
||||
The application makes journaling easy by keeping things simple and uncluttered. You have a large area in which to write, and you can add some basic formatting to your journal entries. That includes the usual bold and italics, along with bullets and highlighting. You can add tags to your journal entries to better organize and find them.
|
||||
|
||||
Lifeograph has a pair of features I find especially useful. First, you can create multiple journals—for example, a work journal and a personal journal. Second is the ability to password protect your journals. While the website states that Lifeograph uses "real encryption," there are no details about what that is. Still, setting a password will keep most snoopers at bay.
|
||||
|
||||
### Almanah Diary
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/almanah.png)
|
||||
|
||||
[Almanah Diary][4] is another very simple journaling tool. But don't let its lack of features put you off. It's simple, but it gets the job done.
|
||||
|
||||
How simple? It's pretty much an area for entering your journal entries and a calendar. You can do a bit more than that—like adding some basic formatting (bold, italics, and underline) and convert text to a hyperlink. Almanah also enables you to encrypt your journal.
|
||||
|
||||
While there is a feature to import plaintext files into the application, I couldn't get it working. Still, if you like your software simple and need a quick and dirty journal, then Almanah Diary is worth a look.
|
||||
|
||||
### What about the command line?
|
||||
|
||||
You don't have to go GUI if you don't want to. The command line is a great option for keeping a journal.
|
||||
|
||||
One that I've tried and liked is [jrnl][5]. Or you can use [this solution][6], which uses a command line alias to format and save your journal entries into a text file.
|
||||
|
||||
Do you have a favorite journaling application? Feel free to share it by leaving a comment.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/6/linux-journaling-applications
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/scottnesbitt
|
||||
[1]:https://plaintextproject.online/2017/07/19/journal.html
|
||||
[2]:http://rednotebook.sourceforge.net
|
||||
[3]:http://lifeograph.sourceforge.net/wiki/Main_Page
|
||||
[4]:https://wiki.gnome.org/Apps/Almanah_Diary
|
||||
[5]:http://maebert.github.com/jrnl/
|
||||
[6]:http://tamilinux.wordpress.com/2007/07/27/writing-short-notes-and-diaries-from-the-cli/
|
@ -1,66 +0,0 @@
Mesos and Kubernetes: It's Not a Competition
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)

The roots of Mesos can be traced back to 2009, when Ben Hindman was a PhD student at the University of California, Berkeley working on parallel programming. They were doing massive parallel computations on 128-core chips, trying to solve multiple problems such as making software and libraries run more efficiently on those chips. He started talking with fellow students to see if they could borrow ideas from parallel processing and multiple threads and apply them to cluster management.

“Initially, our focus was on Big Data,” said Hindman. Back then, Big Data was really hot and Hadoop was one of the hottest technologies. “We recognized that the way people were running things like Hadoop on clusters was similar to the way that people were running multiple threaded applications and parallel applications,” said Hindman.

However, it was not very efficient, so they started thinking about how it could be done better through cluster management and resource management. “We looked at many different technologies at that time,” Hindman recalled.

Hindman and his colleagues, however, decided to adopt a novel approach. “We decided to create a lower level of abstraction for resource management, and run other services on top of that to do scheduling and other things,” said Hindman. “That’s essentially the essence of Mesos -- to separate out the resource management part from the scheduling part.”

It worked, and Mesos has been going strong ever since.

### The project goes to Apache

The project was founded in 2009. In 2010, the team decided to donate the project to the Apache Software Foundation (ASF). It was incubated at Apache, and in 2013 it became a Top-Level Project (TLP).

There were many reasons why the Mesos community chose the Apache Software Foundation, such as the permissiveness of Apache licensing and the fact that Apache already had a vibrant community of other such projects.

It was also about influence. A lot of people working on Mesos were also involved with Apache, and many people were working on projects like Hadoop. At the same time, many folks from the Mesos community were working on other Big Data projects like Spark. This cross-pollination led all three projects -- Hadoop, Mesos, and Spark -- to become ASF projects.

It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of being a privately owned project.

### Who is using Mesos?

A better question would be, who isn’t? Everyone from Apple to Netflix is using Mesos. However, Mesos had its share of the challenges that any technology faces in its early days. “Initially, I had to convince people that there was this new technology called ‘containers’ that could be interesting as there is no need to use virtual machines,” said Hindman.

The industry has changed a great deal since then, and now every conversation around infrastructure starts with ‘containers’ -- thanks to the work done by Docker. Today convincing is not needed, but even in the early days of Mesos, companies like Apple, Netflix, and PayPal saw the potential. They knew they could take advantage of containerization technologies in lieu of virtual machines. “These companies understood the value of containers before it became a phenomenon,” said Hindman.

These companies saw that they could have a bunch of containers instead of virtual machines. All they needed was something to manage and run these containers, and they embraced Mesos. Some of the early users of Mesos included Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon.

“Most of these organizations are using Mesos for just running arbitrary services,” said Hindman, “But there are many that are using it for doing interesting things with data processing, streaming data, analytics workloads and applications.”

One of the reasons these companies adopted Mesos was the clear separation between the resource management layers. Mesos offers the flexibility that companies need when dealing with containerization.

“One of the things we tried to do with Mesos was to create a layering so that people could take advantage of our layer, but also build whatever they wanted to on top,” said Hindman. “I think that's worked really well for the big organizations like Netflix and Apple.”

However, not every company is a tech company; not every company has or should have this expertise. To help those organizations, Hindman co-founded Mesosphere to offer services and solutions around Mesos. “We ultimately decided to build DC/OS for those organizations which didn’t have the technical expertise or didn't want to spend their time building something like that on top.”

### Mesos vs. Kubernetes?

People often think in terms of x versus y, but it’s not always a question of one technology versus another. Most technologies overlap in some areas, and they can also be complementary. “I don't tend to see all these things as competition. I think some of them actually can work in complementary ways with one another,” said Hindman.

“In fact the name Mesos stands for ‘middle’; it’s kind of a middle OS,” said Hindman, “We have the notion of a container scheduler that can be run on top of something like Mesos. When Kubernetes first came out, we actually embraced it in the Mesos ecosystem and saw it as another way of running containers in DC/OS on top of Mesos.”

Mesos also resurrected a project called [Marathon][1] (a container orchestrator for Mesos and DC/OS), which they have made a first-class citizen in the Mesos ecosystem. However, Marathon does not really compare with Kubernetes. “Kubernetes does a lot more than what Marathon does, so you can’t swap them with each other,” said Hindman, “At the same time, we have done many things in Mesos that are not in Kubernetes. So, these technologies are complementary to each other.”

Instead of viewing such technologies as adversarial, they should be seen as beneficial to the industry. It’s not duplication of technologies; it’s diversity. According to Hindman, “it could be confusing for the end user in the open source space because it's hard to know which technologies are suitable for what kind of workload, but that’s the nature of the beast called Open Source.”

That just means there are more choices, and everybody wins.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition

作者:[Swapnil Bhartiya][a]

选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/arnieswap
[1]:https://mesosphere.github.io/marathon/
@ -0,0 +1,143 @@
Using Ledger for YNAB-like envelope budgeting
======

### Bye bye Elbank

I have to start this post with this: I will not be actively maintaining [Elbank][1] anymore, simply because I switched back to [Ledger][2]. If someone wants to take over, please contact me!

The main reason for switching is budgeting. While Elbank was a cool experiment, it is not accounting software, and inherently lacks support for powerful budgeting.

When I started working on Elbank as a replacement for Ledger, I was looking for a reporting tool within Emacs that would fetch bank transactions automatically, so I wouldn’t have to enter transactions by hand (this is a seriously tedious task; I grew tired of doing it after roughly two years, and finally gave up).

Since then, I learned about ledger-autosync and boobank, which I use to sync my bank statements with Ledger (more about that in another post).

### YNAB’s way of budgeting

I only came across [YNAB][3] recently. While I won’t use their software (being a non-free web application, and, you know… there’s no `M-x ynab`), I think that the principles behind it are really appealing for personal budgeting. I encourage you to [read more about it][4] (or grab a [copy of the book][5], it’s great), but here’s the idea.

1. **Budget every euro**: Quite simple once you get it. Every single euro you have should be in a budget envelope. You should assign a job to every euro you earn (that’s called a [zero-based][6], [envelope system][7]).
2. **Embrace your true expenses**: Plan for larger and less frequent expenses, so when a yearly bill arrives, or your car breaks down, you’ll be covered.
3. **Roll with the punches**: Address overspending as it happens by taking the overspent amount from another envelope. As long as you keep budgeting, you’re succeeding.
4. **Age your money**: Spend less than you earn, so your money stays in the bank account longer. As you do that, the age of your money will grow, and once you reach the goal of spending money that is at least one month old, you won’t worry about the next bill.

### Implementation in Ledger

I assume that you are familiar with Ledger, but if not I recommend reading its great [introduction][8] and [tutorial][9].

The implementation in Ledger uses plain double-entry accounting. I took most of it from [Sacha][10], with some minor differences.

#### Budgeting new money

After each income transaction, I budget the new money:

```
2018-06-12 Employer
    Assets:Bank:Checking               1600.00 EUR
    Income:Salary                     -1600.00 EUR

2018-06-12 Budget
    [Assets:Budget:Food]                400.00 EUR
    [Assets:Budget:Rent]                600.00 EUR
    [Assets:Budget:Utilities]           600.00 EUR
    [Equity:Budget]                   -1600.00 EUR
```

Did you notice the square brackets around the accounts of the budget transaction? It’s a feature Ledger calls [virtual postings][11]. These postings are not considered real, and won’t be present in any report that uses the `--real` flag. This is exactly what we want, since it’s a budget allocation and not a “real” transaction. Therefore we’ll use the `--real` flag for all reports except for our budget report.

#### Automatically crediting budget accounts when spending money

Next, we need to credit the budget accounts each time we spend money. Ledger has another neat feature called [automated transactions][12] for this:

```
= /Expenses/
    [Assets:Budget:Unbudgeted]          -1.0
    [Equity:Budget]                      1.0

= /Expenses:Food/
    [Assets:Budget:Food]                -1.0
    [Assets:Budget:Unbudgeted]           1.0

= /Expenses:Rent/
    [Assets:Budget:Rent]                -1.0
    [Assets:Budget:Unbudgeted]           1.0

= /Expenses:Utilities/
    [Assets:Budget:Utilities]           -1.0
    [Assets:Budget:Unbudgeted]           1.0
```

Every expense is taken out of the `Assets:Budget:Unbudgeted` account by default.

This forces me to budget properly, as `Assets:Budget:Unbudgeted` should always be 0 (if it is not, I immediately know that something is wrong).

All other automated transactions take money out of the `Assets:Budget:Unbudgeted` account instead of the `Equity:Budget` account.
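To see the net effect of these automated postings, here is a small Python sketch (purely illustrative, not part of Ledger) that mirrors the rules above using the budget figures from the earlier example: every expense first debits `Unbudgeted`, then the category-specific rule moves that debit into the matching envelope.

```python
# Envelope balances right after the budget allocation transaction.
envelopes = {"Food": 400.0, "Rent": 600.0, "Utilities": 600.0, "Unbudgeted": 0.0}

def spend(category: str, amount: float) -> None:
    """Mimic the automated transactions: the generic /Expenses/ rule
    debits Unbudgeted, then the category rule moves the debit over."""
    envelopes["Unbudgeted"] -= amount       # = /Expenses/ rule
    if category in envelopes:               # = /Expenses:<Category>/ rule
        envelopes[category] -= amount
        envelopes["Unbudgeted"] += amount

spend("Food", 123.0)
spend("Rent", 600.0)
spend("Utilities", 40.0)
print(envelopes)  # Food: 277.0, Rent: 0.0, Utilities: 560.0, Unbudgeted: 0.0
```

As long as every expense category has a matching rule, `Unbudgeted` stays at zero, which is exactly the invariant described above.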
#### A budget report

This is the final piece of the puzzle. Here’s the budget report command:

```
ledger --empty -S -T -f ledger.dat bal ^assets:budget
```

If we have the following transactions:

```
2018/06/12 Groceries store
    Expenses:Food                      123.00 EUR
    Assets:Bank:Checking

2018/06/12 Landlord
    Expenses:Rent                      600.00 EUR
    Assets:Bank:Checking

2018/06/12 Internet provider
    Expenses:Utilities:Internet         40.00 EUR
    Assets:Bank:Checking
```

Here’s what the report looks like:

```
          837.00 EUR  Assets:Budget
          560.00 EUR    Utilities
          277.00 EUR    Food
                   0    Rent
                   0    Unbudgeted
--------------------
          837.00 EUR
```

### Conclusion

Ledger is amazingly powerful and provides a great framework for YNAB-like budgeting. In a future post I’ll explain how I automatically import my bank transactions using a mix of `ledger-autosync` and `weboob`.

--------------------------------------------------------------------------------

via: https://emacs.cafe/ledger/emacs/ynab/budgeting/2018/06/12/elbank-ynab.html

作者:[Nicolas Petton][a]

选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://emacs.cafe/l
[1]:https://github.com/NicolasPetton/elbank
[2]:https://www.ledger-cli.org/
[3]:https://ynab.com
[4]:https://www.youneedabudget.com/method/
[5]:https://www.youneedabudget.com/book-order-now/
[6]:https://en.wikipedia.org/wiki/Zero-based_budgeting
[7]:https://en.wikipedia.org/wiki/Envelope_system
[8]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Introduction-to-Ledger
[9]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Ledger-Tutorial
[10]:http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/
[11]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Virtual-postings
[12]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Automated-Transactions
@ -1,3 +1,4 @@
Translating by qhwdw
Getting started with Open edX to host your course
======
@ -1,87 +0,0 @@
translating---geekpi

Stop merging your pull requests manually
======

![](https://julien.danjou.info/content/images/2018/06/github-branching.png)

If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt it.

Nevertheless, every day, there are thousands of developers using [GitHub][1] who are doing the same thing over and over again: they click on this button:

![Screen-Shot-2018-06-19-at-18.12.39][2]

This does not make any sense.

Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.

It does not make any sense because every development team in the world has a known list of prerequisites before they merge a pull request. Those requirements are almost always the same, and they look something like this:

* Is the test suite passing?
* Is the documentation up to date?
* Does this follow our code style guideline?
* Have N developers reviewed this?

As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while not enough developers had reviewed the patch." Ring a bell?

In my team, we're like every team out there. We know what our criteria for merging code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.

When those conditions are all met, I want the code to be merged.

Without clicking a single button.

That's exactly how [Mergify][3] started.

![github-branching-1][4]

[Mergify][3] is a service that pushes that merge button for you. You define rules in the `.mergify.yml` file of your repository, and when the rules are satisfied, Mergify merges the pull request.

No need to press any button.

Take a random pull request, like this one:

![Screen-Shot-2018-06-20-at-17.12.11][5]

This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should already be merged: but there it is, hanging, chilling, waiting for someone to push that merge button. Someday.

With [Mergify][3] enabled, you'd just have to put this `.mergify.yml` at the root of the repository:

```
rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1
```

With such a configuration, [Mergify][3] enforces the desired restrictions, i.e., Travis passes and at least one project member has reviewed the code. As soon as those conditions are met, the pull request is automatically merged.
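The gatekeeping logic behind that configuration is simple to state in code. A hypothetical Python sketch (not Mergify's actual engine, just an illustration of the two conditions in the `.mergify.yml` above):

```python
def ready_to_merge(pr: dict) -> bool:
    """A pull request is mergeable when the required CI status context
    succeeded and enough approving reviews are in (per the example config)."""
    ci_ok = pr["statuses"].get("continuous-integration/travis-ci") == "success"
    reviews_ok = pr["approving_reviews"] >= 1
    return ci_ok and reviews_ok

pr = {"statuses": {"continuous-integration/travis-ci": "success"},
      "approving_reviews": 1}
print(ready_to_merge(pr))  # True
```

The point of the service is that this check runs on every status and review event, so the merge happens the instant both conditions become true.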
We built [Mergify][3] as a **free service for open-source projects**. The [engine powering the service][6] is also open-source.

Now go [check it out][3] and stop letting those pull requests hang out one second more. Merge them!

If you have any questions, feel free to ask us or write a comment below! And stay tuned, as Mergify offers a few other features that I can't wait to talk about!

--------------------------------------------------------------------------------

via: https://julien.danjou.info/stop-merging-your-pull-request-manually/

作者:[Julien Danjou][a]

选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://julien.danjou.info/author/jd/
[1]:https://github.com
[2]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-19-at-18.12.39.png
[3]:https://mergify.io
[4]:https://julien.danjou.info/content/images/2018/06/github-branching-1.png
[5]:https://julien.danjou.info/content/images/2018/06/Screen-Shot-2018-06-20-at-17.12.11.png
[6]:https://github.com/mergifyio/mergify-engine
102
sources/tech/20180621 Bitcoin is a Cult - Adam Caudill.md
Normal file
@ -0,0 +1,102 @@
Bitcoin is a Cult — Adam Caudill
======

The Bitcoin community has changed greatly over the years; from technophiles who could explain a [Merkle tree][1] in their sleep, to speculators driven by the desire for a quick profit and blockchain startups seeking billion-dollar valuations led by people who don't even know what a Merkle tree is. As the years have gone on, a zealotry has been building around Bitcoin and other cryptocurrencies, driven by people who see them as something far grander than they actually are; people who believe that normal (or fiat) currencies are becoming a thing of the past, and that cryptocurrencies will fundamentally change the world's economy.

Every year, their ranks grow, and their perception of cryptocurrencies becomes more grandiose, even as [novel uses][2] of the technology bring it to its knees. While I'm a firm believer that a well-designed cryptocurrency could ease the flow of money across borders and provide a stable option in areas of mass inflation, the reality is that we aren't there yet. In fact, it's the substantial instability in value that allows speculators to make money. Those that preach that the US Dollar and Euro are on their deathbed have utterly abandoned an objective view of reality.

### A little background…

I read the Bitcoin white paper the day it was released – an interesting use of [Merkle trees][1] to create a public ledger and a fairly reasonable consensus protocol – and it got the attention of many in the cryptography sphere for its novel properties. In the years since that paper was released, Bitcoin has become rather valuable, and has attracted many who see it as an investment, as well as a loyal (and vocal) following of people who think it'll change everything. This discussion is about the latter.

Yesterday, someone on Twitter posted the hash of a recent Bitcoin block; the thousands of Tweets and other conversations that followed have convinced me that Bitcoin has crossed the line into true cult territory.

It all started with this Tweet by Mark Wilcox:

> #00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a
> — Mark Wilcox (@mwilcox) June 19, 2018

The value posted is the hash of [Bitcoin block #528249][3]. The leading zeros are a result of the mining process; to mine a block you combine the contents of the block with a nonce (and other data), hash it, and it has to have at least a certain number of leading zeros to be considered valid. If it doesn't have the correct number, you change the nonce and try again. Repeat this until the number of leading zeros is right, and you now have a valid block. The part that people got excited about is what follows the zeros: 21e800.
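That retry loop is easy to sketch. A toy Python illustration with a tiny fixed difficulty and a made-up header string (not Bitcoin's real block format, difficulty, or serialization):

```python
import hashlib
from itertools import count

def mine(block_data: bytes, leading_zero_hex: int):
    """Try nonces until the double-SHA-256 digest starts with the
    required number of leading zero hex digits (toy difficulty)."""
    target = "0" * leading_zero_hex
    for nonce in count():
        payload = block_data + str(nonce).encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine(b"toy block header", 4)
print(nonce, digest)  # the digest begins with "0000"
```

Each extra required hex zero multiplies the expected number of attempts by 16, which is why the 18 leading zeros in the real block above represent an enormous amount of hashing work.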
Some are claiming this is an intentional reference: that whoever mined this block actually went well beyond the current difficulty to bruteforce not just the leading zeros, but also the next 24 bits, which would require some serious computing power. If someone had the ability to bruteforce this, it could indicate something rather serious, such as a substantial breakthrough in computing or cryptography.

You must be asking yourself what's so important about 21e800 – a question you may well regret asking. Some are claiming it's a reference to [E8 Theory][4] (a widely criticized paper that presents a standard field theory), or to the 21,000,000 total Bitcoins that will eventually exist (despite the fact that `21 x 10^8` would be 2,100,000,000). There are others; they are just too crazy to write about. Another important fact: a block whose hash has 21e8 following the leading zeros is mined on average about once a year, and those were never seen as anything important.
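The once-a-year figure is a quick sanity check away: `21e8` is four hex digits, so the odds of any given block hash containing it right after the leading zeros are 1 in 16^4, and Bitcoin targets roughly one block every ten minutes. A back-of-the-envelope calculation:

```python
# Probability that the four hex digits after the leading zeros are "21e8".
p = 1 / 16**4                      # 1 / 65536

# Bitcoin targets one block roughly every 10 minutes.
blocks_per_year = 6 * 24 * 365     # 52560

expected_per_year = blocks_per_year * p
print(f"{expected_per_year:.2f}")  # roughly 0.8 such blocks per year
```

So a "21e8" block showing up is about as surprising as any other once-a-year coincidence.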
This leads to where things get fun: the [theories][5] circulating about how this happened.

* A quantum computer, somehow able to hash at unbelievable speed. This is despite the fact that there's no indication in theories around quantum computers that they'll be able to do this; hashing is one thing that's considered safe from quantum computers.
* Time travel. Yes, people are actually saying that someone came back from the future to mine this block. I think this is crazy enough that I don't need to get into why it's wrong.
* Satoshi Nakamoto is back. Despite the fact that there has been no activity with his private keys, some theorize that he has returned and is somehow able to do things that nobody else can. These theories don't explain how he could do it.

> So basically (as i understand) Satoshi, in order to have known and computed the things that he did, according to modern science he was either:
>
> A) Using a quantum computer
> B) Fom the future
> C) Both
>
> — Crypto Randy Marsh [REKT] (@nondualrandy) [June 21, 2018][6]

If all this sounds like [numerology][7] to you, you aren't alone.

All this discussion around special meaning in block hashes also reignited the discussion around something that is, at least somewhat, interesting. The Bitcoin genesis block, the first Bitcoin block, does have an unusual property: the early Bitcoin blocks required that the first 32 bits of the hash be zero; however, the genesis block had 43 leading zero bits. As the code that produced the genesis block was never released, it's not known how it was produced, nor what type of hardware was used to produce it. Satoshi had an academic background, so may have had access to more substantial computing power than was common at the time via a university. At this point, the oddities of the genesis block are a historical curiosity, nothing more.

### A brief digression on hashing

This hullabaloo started with the hash of a Bitcoin block, so it's important to understand just what a hash is, and to understand one very important property hashes have. A hash is a one-way cryptographic function that creates a pseudo-random output based on the data it's given.

What this means, for the purposes of this discussion, is that for each input you get a random-looking output. Random numbers have a way of sometimes looking interesting, simply as a result of being random and the human brain's affinity for finding order in everything. When you start looking for order in random data, you find interesting things that are nevertheless meaningless, as they are simply random. When people ascribe significant meaning to random data, it tells you far more about the mindset of those involved than about the data itself.
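A quick way to see that pseudo-randomness is to hash two nearly identical inputs and compare the outputs; the input strings here are arbitrary, chosen only for illustration:

```python
import hashlib

a = hashlib.sha256(b"bitcoin block 528249").hexdigest()
b = hashlib.sha256(b"bitcoin block 528250").hexdigest()

# Changing one character of the input flips roughly half of the output
# bits (the avalanche effect), so the two digests look unrelated.
differing = sum(x != y for x, y in zip(a, b))
print(a)
print(b)
print(differing, "of 64 hex digits differ")
```

With output that behaves like this, any "pattern" you spot in a particular digest is pattern-matching by the reader, not a message from the data.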
|
||||
|
||||
### Cult of the Coin
|
||||
|
||||
First, let us define a couple of terms:
|
||||
|
||||
* Cult: a system of religious veneration and devotion directed toward a particular figure or object.
|
||||
* Religion: a pursuit or interest to which someone ascribes supreme importance.
|
||||
|
||||
|
||||
|
||||
The Cult of the Coin has many saints, perhaps none greater than Satoshi Nakamoto, the pseudonym used by the person(s) that created Bitcoin. Vigorously defended, ascribed with ability and understanding far above that of a normal researcher, seen as a visionary beyond compare that is leading the world to a new economic order. When combined with Satoshi’s secretive nature and unknown true identify, adherents to the Cult view Satoshi as a truly venerated figure.
|
||||
|
||||
That is, of course, with the exception of adherents that follow a different saint, who is unquestionably correct, and any criticism is seen as not only an attack on their saint, but on themselves as well. Those that follow EOS for example, may see Satoshi has a hack that developed a failed project, yet will react fiercely to the slightest criticism of EOS, a reaction so strong that it’s reserved only for an attack on one’s deity. Those that follow IOTA react with equal fierceness; and there are many others.
|
||||
|
||||
These adherents have abandoned objectivity and reasonable discourse, and allowed their zealotry to cloud their vision. Any discussion of these projects and the people behind them that doesn’t include glowing praise inevitably ends with a level of vitriolic speech that is beyond reason for a discussion of technology.
|
||||
|
||||
This is dangerous, for many reasons:
|
||||
|
||||
* Developers & researchers are blinded to flaws. Due to the vast quantities of praise by adherents, those involved develop a grandiose view of their own abilities, and begin to view criticism as unjustified attacks – as they couldn’t possibly have been wrong.
|
||||
* Real problems are attacked. Instead of technical issues being seen as problems to be solved and opportunities to improve, they are seen as attacks from people who must be motivated to destroy the project.
|
||||
* One coin to rule them all. Adherents are often aligned to one, and only one, saint. Acknowledging the qualities of another project means acceptance of flaws or deficiencies in their own, which they will not do.
|
||||
  * Preventing real progress. Evolution is brutal: it requires death, it requires projects to fail, and it requires that the reasons for those failures be acknowledged. If lessons from failure are ignored, if things that should die aren’t allowed to, progress stalls.
|
||||
|
||||
|
||||
|
||||
Discussions around many of the cryptocurrencies and related blockchain projects are becoming more and more toxic, making it impossible for well-intentioned people to have real technical discussions without being attacked. Discussions of real flaws, flaws that would doom a design in any other environment, are now routinely treated as heretical without any analysis of the factual claims, so the cost for the well-intentioned to get involved has become extremely high. There are at least some people aware of significant security flaws who have opted to remain silent due to the highly toxic environment.
|
||||
|
||||
What was once driven by curiosity, a desire to learn and improve, to determine the viability of ideas, is now driven by blind greed, religious zealotry, self-righteousness, and self-aggrandizement.
|
||||
|
||||
I have precious little hope for the future of projects that inspire this type of zealotry, and its continued spread will likely harm real research in this area for many years to come. These are technical projects; some projects succeed, some fail; this is how technology evolves. Those designing these systems are human, just as flawed as the rest of us, and so too are the projects flawed. Some are well suited to certain use cases and not others, some aren’t suited to any use case, and none yet are suited to all. The discussions about these projects should be focused on the technical aspects, and done so to evolve this field of research; adding a religious element to these projects harms us all.
|
||||
|
||||
[Note: There are many examples of this behavior that could be cited, however in the interest of protecting those that have been targeted for criticizing projects, I have opted to minimize such examples. I have seen too many people who I respect, too many that I consider friends, being viciously attacked – I have no desire to draw attention to those attacks, and risk restarting them.]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://adamcaudill.com/2018/06/21/bitcoin-is-a-cult/
|
||||
|
||||
作者:[Adam Caudill][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://adamcaudill.com/author/adam/
|
||||
[1]:https://en.wikipedia.org/wiki/Merkle_tree
|
||||
[2]:https://hackernoon.com/how-crypto-kitties-disrupted-the-ethereum-network-845c22aa1e6e
|
||||
[3]:https://blockchain.info/block-height/528249
|
||||
[4]:https://en.wikipedia.org/wiki/An_Exceptionally_Simple_Theory_of_Everything
|
||||
[5]:https://medium.com/@coop__soup/00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a-cd4b67d446be
|
||||
[6]:https://twitter.com/nondualrandy/status/1009609117768605696?ref_src=twsrc%5Etfw
|
||||
[7]:https://en.wikipedia.org/wiki/Numerology
|
@ -1,85 +0,0 @@
|
||||
translating----geekpi
|
||||
|
||||
Automatically Change Wallpapers in Linux with Little Simple Wallpaper Changer
|
||||
======
|
||||
|
||||
**Brief: Here is a tiny script that automatically changes wallpaper at regular intervals in your Linux desktop.**
|
||||
|
||||
As the name suggests, LittleSimpleWallpaperChanger is a small script that changes the wallpapers randomly at intervals.
|
||||
|
||||
Now I know that there is a random wallpaper option in the ‘Appearance’ or the ‘Change desktop background’ settings. But that randomly changes the pre-installed wallpapers and not the wallpapers that you add.
|
||||
|
||||
So in this article, we’ll see how to set up a randomly changing desktop wallpaper made up of your own photos, using LittleSimpleWallpaperChanger.
|
||||
|
||||
### Little Simple Wallpaper Changer (LSWC)
|
||||
|
||||
[LittleSimpleWallpaperChanger][1] or LSWC is a very lightweight script that runs in the background, changing the wallpapers from the user-specified folder. The wallpapers change at a random interval between 1 to 5 minutes. The software is rather simple to set up, and once set up, the user can just forget about it.
|
||||
|
||||
![Little Simple Wallpaper Changer to change wallpapers in Linux][2]
|
||||
|
||||
#### Installing LSWC
|
||||
|
||||
Download LSWC by [clicking on this link.][3] The zipped file is around 15 KB in size.
|
||||
|
||||
* Browse to the download location.
|
||||
* Right click on the downloaded .zip file and select ‘extract here’.
|
||||
* Open the extracted folder, right click and select ‘Open in terminal’.
|
||||
  * Copy and paste the following command into the terminal and hit enter.
|
||||
`bash ./README_and_install.sh`
|
||||
* Now a dialogue box will pop up asking you to select the folder containing the wallpapers. Click on it and then select the folder that you’ve stored your wallpapers in.
|
||||
* That’s it. Reboot your computer.
|
||||
|
||||
|
||||
|
||||
![Little Simple Wallpaper Changer for Linux][4]
|
||||
|
||||
#### Using LSWC
|
||||
|
||||
On installation, LSWC asks you to select the folder containing your wallpapers. So I suggest you create a folder and move all the wallpapers you want to use into it before installing LSWC. Or you can just use the ‘Wallpapers’ folder inside the Pictures folder. **All the wallpapers need to be in .jpg format.**
|
||||
|
||||
You can add more wallpapers to your selected folder or delete the current ones. To change the wallpaper folder location, edit the path stored in the following file.
|
||||
```
|
||||
.config/lswc/homepath.conf
|
||||
|
||||
```
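As a sketch of that edit (the single-path file format is an assumption on my part; check the script's README before relying on it), repointing LSWC at a different folder could look like this:

```shell
# Assumed format: homepath.conf holds one line with the wallpaper folder path.
conf="$HOME/.config/lswc/homepath.conf"
mkdir -p "$(dirname "$conf")"
printf '%s\n' "$HOME/Pictures/Wallpapers" > "$conf"
cat "$conf"
```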
|
||||
|
||||
#### To remove LSWC
|
||||
|
||||
Open a terminal and run the command below to stop LSWC:
|
||||
```
|
||||
pkill lswc
|
||||
|
||||
```
|
||||
|
||||
Open home in your file manager and press ctrl+H to show hidden files, then delete the following files:
|
||||
|
||||
* ‘scripts’ folder from .local
|
||||
* ‘lswc’ folder from .config
|
||||
* ‘lswc.desktop’ file from .config/autostart
|
||||
|
||||
|
||||
|
||||
There you have it: your own desktop background slideshow. LSWC is really lightweight and simple to use. Install it and then forget about it.
|
||||
|
||||
LSWC is not very feature-rich, but that’s intentional. It does what it sets out to do, and that is to change wallpapers. If you want a tool that automatically downloads wallpapers, try [WallpaperDownloader][5].
|
||||
|
||||
Do share your thoughts on this nifty little software in the comments section below. Don’t forget to share this article. Cheers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/little-simple-wallpaper-changer/
|
||||
|
||||
作者:[Aquil Roshan][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/aquil/
|
||||
[1]:https://github.com/LittleSimpleWallpaperChanger/lswc
|
||||
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-2-800x450.jpg
|
||||
[3]:https://github.com/LittleSimpleWallpaperChanger/lswc/raw/master/Lswc.zip
|
||||
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-1-800x450.jpg
|
||||
[5]:https://itsfoss.com/wallpaperdownloader-linux/
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Intercepting and Emulating Linux System Calls with Ptrace « null program
|
||||
======
|
||||
|
||||
|
@ -1,3 +1,5 @@
|
||||
translating---geekpi
|
||||
|
||||
How to install Pipenv on Fedora
|
||||
======
|
||||
|
||||
|
@ -0,0 +1,79 @@
|
||||
translating---geekpi
|
||||
|
||||
TrueOS Doesn’t Want to Be ‘BSD for Desktop’ Anymore
|
||||
============================================================
|
||||
|
||||
|
||||
There are some really big changes on the horizon for [TrueOS][9]. Today, we will take a look at what is going on in the world of desktop BSD.
|
||||
|
||||
### The Announcement
|
||||
|
||||
![TrueOS: Core Operating System BSD](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/06/true-os-bsd-desktop.jpeg)
|
||||
|
||||
The team behind [TrueOS][10] [announced][11] that they would be changing the focus of the project. Up until this point, TrueOS has made it easy to install BSD with a graphical user interface out of the box. However, it will now become “a cutting-edge operating system that keeps all of the stability that you know and love from ZFS ([OpenZFS][12]) and [FreeBSD][13], and adds additional features to create a fresh, innovative operating system. Our goal is to create a core-centric operating system that is modular, functional, and perfect for do-it-yourselfers and advanced users alike.”
|
||||
|
||||
Essentially, TrueOS will become a downstream fork of FreeBSD. They will integrate newer software into the system, such as [OpenRC][14] and [LibreSSL][15]. They hope to stick to a 6-month release cycle.
|
||||
|
||||
The goal is to make TrueOS so it can be used as the base for other projects to build on. The graphical part will be missing to make it more distro-agnostic.
|
||||
|
||||
[Suggested read: Interview with MidnightBSD Founder and Lead Dev Lucas Holt][16]
|
||||
|
||||
### What about Desktop Users?
|
||||
|
||||
If you read my [review of TrueOS][17] and are interested in trying a desktop BSD or already use TrueOS, never fear (which is good advice for life too). All of the desktop elements of TrueOS will be spun off into [Project Trident][18]. Currently, the Project Trident website is very light on details. It seems as though they are still figuring out the logistics of the spin-off.
|
||||
|
||||
If you currently have TrueOS, you don’t have to worry about moving. The TrueOS team said that “there will be migration paths available for those that would like to move to other FreeBSD-based distributions like Project Trident or [GhostBSD][19].”
|
||||
|
||||
[Suggested read: Interview with FreeDOS Founder and Lead Dev Jim Hall][20]
|
||||
|
||||
### Thoughts
|
||||
|
||||
When I first read the announcement, I was frankly a little worried. Changing names can be a bad idea. Customers will be used to one name, but if the product name changes they could lose track of the project very easily. TrueOS already went through a name change. When the project was started in 2006 it was named PC-BSD, but in 2016 the name was changed to TrueOS. It kind of reminds me of the [ArchMerge and Arcolinux saga][21].
|
||||
|
||||
That being said, I think this will be a good thing for desktop users of BSD. One of the common criticisms that I heard about PC-BSD and TrueOS is that they weren’t very polished. Separating the two parts of the project will help sharpen the focus of the respective developers. The TrueOS team will be able to add newer features to the slow-moving FreeBSD base, and the Project Trident team will be able to improve users’ desktop experience.
|
||||
|
||||
I wish both teams well. Remember, people, when someone works on open source, we all benefit even if the work is done on something we don’t use.
|
||||
|
||||
What are your thoughts about the future of TrueOS and Project Trident? Please let us know in the comments below.
|
||||
|
||||
|
||||
------------------------------
|
||||
|
||||
About the author:
|
||||
|
||||
My name is John Paul Wohlscheid. I'm an aspiring mystery writer who loves to play with technology, especially Linux. You can catch up with me at [my personal website][23]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/trueos-plan-change/
|
||||
|
||||
作者:[John Paul Wohlscheid][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://itsfoss.com/author/john/
|
||||
[1]:https://itsfoss.com/author/john/
|
||||
[2]:https://itsfoss.com/trueos-plan-change/#comments
|
||||
[3]:https://itsfoss.com/category/bsd/
|
||||
[4]:https://itsfoss.com/category/news/
|
||||
[5]:https://itsfoss.com/tag/bsd/
|
||||
[6]:https://itsfoss.com/tag/freebsd/
|
||||
[7]:https://itsfoss.com/tag/project-trident/
|
||||
[8]:https://itsfoss.com/tag/trueos/
|
||||
[9]:https://www.trueos.org/
|
||||
[10]:https://www.trueos.org/
|
||||
[11]:https://www.trueos.org/blog/trueosdownstream/
|
||||
[12]:http://open-zfs.org/wiki/Main_Page
|
||||
[13]:https://www.freebsd.org/
|
||||
[14]:https://en.wikipedia.org/wiki/OpenRC
|
||||
[15]:http://www.libressl.org/
|
||||
[16]:https://itsfoss.com/midnightbsd-founder-lucas-holt/
|
||||
[17]:https://itsfoss.com/trueos-bsd-review/
|
||||
[18]:http://www.project-trident.org/
|
||||
[19]:https://www.ghostbsd.org/
|
||||
[20]:https://itsfoss.com/interview-freedos-jim-hall/
|
||||
[21]:https://itsfoss.com/archlabs-vs-archmerge/
|
||||
[22]:http://reddit.com/r/linuxusersgroup
|
||||
[23]:http://johnpaulwohlscheid.work/
|
@ -1,3 +1,4 @@
|
||||
Translating by qhwdw
|
||||
Blockchain evolution: A quick guide and why open source is at the heart of it
|
||||
======
|
||||
|
||||
|
@ -1,139 +0,0 @@
|
||||
Sosreport – A Tool To Collect System Logs And Diagnostic Information
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/06/sos-720x340.png)
|
||||
|
||||
If you’re working as an RHEL administrator, you have probably heard about **Sosreport** – an extensible, portable support data collection tool. It collects system configuration details and diagnostic information from a Unix-like operating system. When a user raises a support ticket, they run this tool and send the resulting report to the Red Hat support executive. The executive then performs an initial analysis based on the report and tries to find what the problem in the system is. And not just on RHEL systems – you can use it on any Unix-like operating system for collecting system logs and other debug information.
|
||||
|
||||
### Installing Sosreport
|
||||
|
||||
Sosreport is available in the official repositories of Red Hat-based systems, so you can install it using the Yum or DNF package managers as shown below.
|
||||
```
|
||||
$ sudo yum install sos
|
||||
|
||||
```
|
||||
|
||||
Or,
|
||||
```
|
||||
$ sudo dnf install sos
|
||||
|
||||
```
|
||||
|
||||
On Debian, Ubuntu and Linux Mint, run:
|
||||
```
|
||||
$ sudo apt install sosreport
|
||||
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
Once installed, run the following command to collect your system configuration details and other diagnostic information.
|
||||
```
|
||||
$ sudo sosreport
|
||||
|
||||
```
|
||||
|
||||
You will be asked to enter some details of your system, such as the system name, case id etc. Type the details accordingly and press the ENTER key to generate the report. If you don’t want to change anything and want to use the default values, simply press ENTER.
|
||||
|
||||
Sample output from my CentOS 7 server:
|
||||
```
|
||||
sosreport (version 3.5)
|
||||
|
||||
This command will collect diagnostic and configuration information from
|
||||
this CentOS Linux system and installed applications.
|
||||
|
||||
An archive containing the collected information will be generated in
|
||||
/var/tmp/sos.DiJXi7 and may be provided to a CentOS support
|
||||
representative.
|
||||
|
||||
Any information provided to CentOS will be treated in accordance with
|
||||
the published support policies at:
|
||||
|
||||
https://wiki.centos.org/
|
||||
|
||||
The generated archive may contain data considered sensitive and its
|
||||
content should be reviewed by the originating organization before being
|
||||
passed to any third party.
|
||||
|
||||
No changes will be made to system configuration.
|
||||
|
||||
Press ENTER to continue, or CTRL-C to quit.
|
||||
|
||||
Please enter your first initial and last name [server.ostechnix.local]:
|
||||
Please enter the case id that you are generating this report for []:
|
||||
|
||||
Setting up archive ...
|
||||
Setting up plugins ...
|
||||
Running plugins. Please wait ...
|
||||
|
||||
Running 73/73: yum...
|
||||
Creating compressed archive...
|
||||
|
||||
Your sosreport has been generated and saved in:
|
||||
/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
|
||||
|
||||
The checksum is: 8f08f99a1702184ec13a497eff5ce334
|
||||
|
||||
Please send this file to your support representative.
|
||||
|
||||
```
|
||||
|
||||
If you don’t want to be prompted for entering such details, simply use batch mode like below.
|
||||
```
|
||||
$ sudo sosreport --batch
|
||||
|
||||
```
|
||||
|
||||
As you can see in the above output, the collected report is compressed into an archive and saved under **/var/tmp/** (the run above used the working directory **/var/tmp/sos.DiJXi7**). In RHEL 6/CentOS 6, the report is generated in **/tmp** instead. You can now send this report to your support executive, so that they can do the initial analysis and find the problem.
|
||||
|
||||
You might be concerned, or simply curious, about what’s in the report. If so, you can list its contents by running the following command:
|
||||
```
|
||||
$ sudo tar -tf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
|
||||
|
||||
```
|
||||
|
||||
Or,
|
||||
```
|
||||
$ sudo vim /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
|
||||
|
||||
```
|
||||
|
||||
Please note that above commands will not extract the archive, but only display the list of files and folders in the archive. If you want to view the actual contents of the files in the archive, first extract the archive using command:
|
||||
```
|
||||
$ sudo tar -xf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
|
||||
|
||||
```
|
||||
|
||||
All the contents of the archive will be extracted in a directory named “sosreport-server.ostechnix.local-20180628171844/” in the current working directory. Go to the directory and view the contents of any file using cat command or any other text viewer:
|
||||
```
|
||||
$ cd sosreport-server.ostechnix.local-20180628171844/
|
||||
|
||||
$ cat uptime
|
||||
17:19:02 up 1:03, 2 users, load average: 0.50, 0.17, 0.10
|
||||
|
||||
```
|
||||
|
||||
For more details about Sosreport, refer to the man page.
|
||||
```
|
||||
$ man sosreport
|
||||
|
||||
```
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/sosreport-a-tool-to-collect-system-logs-and-diagnostic-information/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
@ -0,0 +1,113 @@
|
||||
How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers
|
||||
======
|
||||
**Some applications and games built with OpenGL support and packaged as Flatpak fail to start with proprietary Nvidia drivers. This article explains how to get such Flatpak applications or games to start, without installing the open source drivers (Nouveau).**
|
||||
|
||||
Here's an example. I'm using the proprietary Nvidia drivers on my Ubuntu 18.04 desktop (`nvidia-driver-390`), and when I try to launch the latest Krita Flatpak, it fails to start:
|
||||
```
|
||||
$ /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=krita --file-forwarding org.kde.krita
|
||||
Gtk-Message: Failed to load module "canberra-gtk-module"
|
||||
Gtk-Message: Failed to load module "canberra-gtk-module"
|
||||
libGL error: No matching fbConfigs or visuals found
|
||||
libGL error: failed to load driver: swrast
|
||||
Could not initialize GLX
|
||||
|
||||
```
|
||||
|
||||
To fix Flatpak games and applications not starting when using OpenGL with proprietary Nvidia graphics drivers, you'll need to install a runtime for your currently installed proprietary Nvidia drivers. Here's how to do this.
|
||||
|
||||
**1\. Add the FlatHub repository if you haven't already. You can find exact instructions for your Linux distribution [here][1].**
|
||||
|
||||
**2\. Now you'll need to figure out the exact version of the proprietary Nvidia drivers installed on your system.**
|
||||
|
||||
_This step is dependent on the Linux distribution you're using, and I can't cover all cases. The instructions below are oriented toward Ubuntu (and Ubuntu flavors), but hopefully you can figure out for yourself the Nvidia driver version installed on your system._
|
||||
|
||||
To do this in Ubuntu, open `Software & Updates`, switch to the `Additional Drivers` tab, and note the name of the Nvidia driver package.
|
||||
|
||||
As an example, this is `nvidia-driver-390` in my case, as you can see here:
|
||||
|
||||
![](https://1.bp.blogspot.com/-FAfjtGNeUJc/WzYXMYTFBcI/AAAAAAAAAx0/xUhIO83IAjMuK4Hn0jFUYKJhSKw8y559QCLcBGAs/s1600/additional-drivers-nvidia-ubuntu.png)
|
||||
|
||||
That's not all. So far we've only found the Nvidia drivers' major version, but we'll also need the minor version. To get the exact Nvidia driver version, which we'll need for the next step, run this command (it should work on any Debian-based Linux distribution, such as Ubuntu or Linux Mint):
|
||||
```
|
||||
apt-cache policy NVIDIA-PACKAGE-NAME
|
||||
|
||||
```
|
||||
|
||||
Where NVIDIA-PACKAGE-NAME is the Nvidia driver package name listed in `Software & Updates`. For example, to see the exact installed version of the `nvidia-driver-390` package, run this command:
|
||||
```
|
||||
$ apt-cache policy nvidia-driver-390
|
||||
nvidia-driver-390:
|
||||
Installed: 390.48-0ubuntu3
|
||||
Candidate: 390.48-0ubuntu3
|
||||
Version table:
|
||||
* 390.48-0ubuntu3 500
|
||||
500 http://ro.archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages
|
||||
100 /var/lib/dpkg/status
|
||||
|
||||
```
|
||||
|
||||
In this command's output, look for the `Installed` section and note the version numbers (excluding `-0ubuntu3` and anything similar). Now we know the exact version of the installed Nvidia drivers (`390.48` in my example). Remember this because we'll need it for the next step.
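That version string can also be turned into the runtime suffix with a little shell. This is a sketch only; the hardcoded `version` value below is the example from this article, and you would substitute the output of `apt-cache policy` on your own machine:

```shell
# Convert an installed driver version like "390.48-0ubuntu3" into the
# MAJOR-MINOR form used by the FlatHub runtime names (e.g. 390-48).
version="390.48-0ubuntu3"
suffix=$(printf '%s' "$version" | sed 's/-.*//; s/\./-/')
echo "org.freedesktop.Platform.GL.nvidia-$suffix"
# prints: org.freedesktop.Platform.GL.nvidia-390-48
```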
|
||||
|
||||
**3\. And finally, you can install the Nvidia runtime for your installed proprietary Nvidia graphics drivers from FlatHub.**
|
||||
|
||||
To list all the Nvidia runtime packages available on FlatHub, you can use this command:
|
||||
```
|
||||
flatpak remote-ls flathub | grep nvidia
|
||||
|
||||
```
|
||||
|
||||
Hopefully the runtime for your installed Nvidia drivers is available on FlatHub. You can now proceed to install the runtime by using this command:
|
||||
|
||||
* For 64bit systems:
|
||||
|
||||
|
||||
```
|
||||
flatpak install flathub org.freedesktop.Platform.GL.nvidia-MAJORVERSION-MINORVERSION
|
||||
|
||||
```
|
||||
|
||||
Replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and MINORVERSION with the minor version (48 in my example from step 2).
|
||||
|
||||
For example, to install the runtime for Nvidia graphics driver version 390.48, you'd have to use this command:
|
||||
```
|
||||
flatpak install flathub org.freedesktop.Platform.GL.nvidia-390-48
|
||||
|
||||
```
|
||||
|
||||
* For 32bit systems (or to be able to run 32bit applications or games on 64bit), install the 32bit runtime using:
|
||||
|
||||
|
||||
```
|
||||
flatpak install flathub org.freedesktop.Platform.GL32.nvidia-MAJORVERSION-MINORVERSION
|
||||
|
||||
```
|
||||
|
||||
Once again, replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and MINORVERSION with the minor version (48 in my example from step 2).
|
||||
|
||||
For example, to install the 32bit runtime for Nvidia graphics driver version 390.48, you'd have to use this command:
|
||||
```
|
||||
flatpak install flathub org.freedesktop.Platform.GL32.nvidia-390-48
|
||||
|
||||
```
|
||||
|
||||
That is all you need to do to get applications or games packaged as Flatpak that are built with OpenGL to run.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxuprising.com/2018/06/how-to-get-flatpak-apps-and-games-built.html
|
||||
|
||||
作者:[Logix][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/118280394805678839070
|
||||
[1]:https://flatpak.org/setup/
|
||||
[2]:https://www.linuxuprising.com/2018/06/free-painting-software-krita-410.html
|
||||
[3]:https://www.linuxuprising.com/2018/06/winepak-is-flatpak-repository-for.html
|
||||
[4]:https://github.com/winepak/applications/issues/23
|
||||
[5]:https://github.com/flatpak/flatpak/issues/138
|
@ -0,0 +1,148 @@
|
||||
Is implementing and managing Linux applications becoming a snap?
|
||||
======
|
||||
![](https://images.idgesg.net/images/article/2018/06/finger-snap-100761923-large.jpg)
|
||||
|
||||
Quick to install, safe to run, easy to update, and dramatically easier to maintain and support, snaps represent a big step forward in Linux software development and distribution. Starting with Ubuntu and now available for Arch Linux, Debian, Fedora, Gentoo Linux, and openSUSE, snaps offer a number of significant advantages over traditional application packaging.
|
||||
|
||||
Compared to traditional packages, snaps are:
|
||||
|
||||
* Easier for developers to build
|
||||
* Faster to install
|
||||
* Automatically updated
|
||||
* Autonomous
|
||||
* Isolated from other apps
|
||||
* More secure
|
||||
* Non-disruptive (they don't interfere with other applications)
|
||||
|
||||
|
||||
|
||||
### So, what are snaps?
|
||||
|
||||
Snaps were originally designed and built by Canonical for use on Ubuntu. The service might be referred to as “snappy,” the technology “snapcraft,” the daemon “snapd,” and the packages “snaps,” but they all refer to a new way that Linux apps are prepared and installed. Does the name “snap” imply some simplification of the development and installation process? You bet it does!
|
||||
|
||||
A snap is completely different than other Linux packages. Other packages are basically file archives that, on installation, place files in a number of directories (/usr/bin, /usr/lib, etc.). In addition, other tools and libraries that the packages depend on have to be installed or updated, as well — possibly interfering with older apps. A snap, on the other hand, will be installed as a single self-sufficient file, bundled with whatever libraries and other files it requires. It won’t interfere with other applications or change any of the resources that those other applications depend on.
|
||||
|
||||
When delivered as a snap, all of the application’s dependencies are included in that single file. The application is also isolated from the rest of the system, ensuring that changes to the snap don’t affect the rest of the system and making it harder for other applications to access the app's data.
|
||||
|
||||
Another important distinction is that snaps aren't included in distributions; they're selected and installed separately (more on this in just a bit).
|
||||
|
||||
Snaps began life as Click packages — a new packaging format built for Ubuntu Mobile — and evolved into snaps.
|
||||
|
||||
### How do snaps work?
|
||||
|
||||
Snaps work across a range of Linux distributions in a manner that is sometimes referred to as “distro-agnostic,” releasing developers from their concerns about compatibility with software and libraries previously installed on the systems. Snaps are packaged along with everything they require to run — compressed and ready for use. In fact, they stay that way. They remain compressed, using modest disk space in spite of their autonomous nature.
|
||||
|
||||
Snaps also maintain a relatively low profile. You could have snaps on your system without being aware of them, particularly if you are using a recent release of the distributions mentioned earlier.
|
||||
|
||||
If snaps are available on your system, you'll need to have **/snap/bin** on your search path to use them. For bash users, this should be added automatically.
|
||||
```
|
||||
$ echo $PATH
|
||||
/home/shs/bin:/usr/local/bin:/usr/sbin:/sbin:/bin:/usr/games:/snap/bin
|
||||
|
||||
```
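If `/snap/bin` is missing from your search path, a guarded append like this sketch (assuming bash or another POSIX shell, e.g. in your `~/.profile`) avoids adding it twice:

```shell
# Append /snap/bin to PATH only when it is not already present.
case ":$PATH:" in
  *:/snap/bin:*) ;;                        # already there, do nothing
  *) PATH="$PATH:/snap/bin"; export PATH ;;
esac
echo "$PATH"
```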
|
||||
|
||||
And even the automatic updates don't cause problems. A running snap continues to run even while it is being updated. The new version simply becomes active the next time it's used.
|
||||
|
||||
### Why are snaps more secure?
|
||||
|
||||
One reason for the improvement is that snaps have considerably more limited access to the OS than traditional packages. They are sandboxed and containerized and don’t have system-wide access.
|
||||
|
||||
### How do snaps help developers?
|
||||
|
||||
##### Easier to build
|
||||
|
||||
With snaps, developers no longer have to contemplate the huge variety of distributions and versions that their customers might be using. They package into the snap everything that is required for it to run.
|
||||
|
||||
##### Easing the slow production lines
|
||||
|
||||
From the developers’ perspective, it has long been hard to get apps into production; the open source community can only do so much while responding to pressure for fast releases. With snaps, developers can use the latest libraries without concern for whether the target distribution relies on older ones. And even developers who are new to snaps can get up to speed in under a week; I’ve been told that learning to build an application as a snap is significantly easier than learning a new language. And, of course, distro maintainers don’t have to funnel every app through their production processes. This is clearly a win-win.
|
||||
|
||||
For sysadmins, as well, the use of snaps avoids breaking systems and the need to chase down hairy support problems.
|
||||
|
||||
### Are snaps on your system?
|
||||
|
||||
|
||||
|
||||
To see if **snapd** is running:
|
||||
```
|
||||
$ ps -ef | grep snapd
|
||||
root 672 1 0 Jun22 ? 00:00:33 /usr/lib/snapd/snapd
|
||||
|
||||
```
|
||||
|
||||
If snap is installed, the command `which snap` should show you this:
|
||||
```
|
||||
$ which snap
|
||||
/usr/bin/snap
|
||||
|
||||
```
|
||||
|
||||
To see what snaps are installed, use the “snap list” command.
```
$ snap list
Name                 Version    Rev   Tracking  Developer     Notes
canonical-livepatch  8.0.2      41    stable    canonical     -
core                 16-2.32.8  4650  stable    canonical     core
minecraft            latest     11    stable    snapcrafters  -
```
### Where are snaps installed?

Snaps are delivered as .snap files and stored in **/var/lib/snapd/snaps**. You can **cd** over to that directory or search for files with the .snap extension.
```
$ sudo find / -name "*.snap"
/var/lib/snapd/snaps/canonical-livepatch_39.snap
/var/lib/snapd/snaps/canonical-livepatch_41.snap
/var/lib/snapd/snaps/core_4571.snap
/var/lib/snapd/snaps/minecraft_11.snap
/var/lib/snapd/snaps/core_4650.snap
```
Adding a snap is, well, a snap. Here’s a typical example of installing one. The snap being loaded here is a very simple “Hello, World” application, but the process is this simple regardless of the complexity of the snap:
```
$ sudo snap install hello
hello 2.10 from 'canonical' installed
$ which hello
/snap/bin/hello
$ hello
Hello, world!
```
The “snap list” command will then reflect the newly added snap.
```
$ snap list
Name                 Version    Rev   Tracking  Developer     Notes
canonical-livepatch  8.0.2      41    stable    canonical     -
core                 16-2.32.8  4650  stable    canonical     core
hello                2.10       20    stable    canonical     -
minecraft            latest     11    stable    snapcrafters  -
```
There are also commands for removing (snap remove), upgrading (snap refresh), and listing available snaps (snap find).
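These subcommands all take the snap name as an argument. A quick sketch, using the small `hello` snap from earlier purely as an example:

```shell
# Search the Snap store for packages matching a keyword
snap find hello

# Upgrade one snap (with no name given, all installed snaps are refreshed)
sudo snap refresh hello

# Remove a snap along with its data
sudo snap remove hello
```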
### A little history about snaps

The idea for snaps came from Mark Richard Shuttleworth, the founder and CEO of Canonical Ltd., the company behind the development of the Linux-based Ubuntu operating system, and from his decades of experience with Ubuntu. At least part of the motivation was removing the possibility of troublesome installation failures — starting with the phones on which they were first used. Easing production lines, simplifying support, and improving system security made the idea compelling.

For some additional history on snaps, check out this article on [CIO][1].
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3283337/linux/is-implementing-and-managing-linux-applications-becoming-a-snap.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.cio.com/article/3085079/linux/goodbye-rpm-and-deb-hello-snaps.html
12 Things to do After Installing Linux Mint 19
======

[Linux Mint][1] is one of the [best Linux distributions for new users][2]. It runs pretty well out of the box. Still, there are a few recommended things to do after [installing Linux Mint][3] for the first time.

In this article, I am going to share some basic yet effective tips that will make your Linux Mint experience even better. If you follow these best practices, you’ll have a more user-friendly system.
### Things to do after installing Linux Mint 19 Tara

![Things to do after installing Linux Mint 19][4]

I am using [Linux Mint][1] 19 Cinnamon edition while writing this article, so some of the points in this list are specific to Mint Cinnamon. But this doesn’t mean you can’t follow these suggestions on the Xfce or MATE editions.

Another disclaimer is that these are just recommendations from my point of view. Based on your interests and requirements, you would perhaps do a lot more than what I suggest here.

That said, let’s see the top things to do after installing Linux Mint 19.
#### 1\. Update your system

This is the first and foremost thing to do after a fresh install of Linux Mint or any Linux distribution. This ensures that your system has all the latest software and security updates. You can update Linux Mint by going to Menu->Update Manager.

You can also use a simple command to update your system:
```
sudo apt update && sudo apt upgrade -y
```
#### 2\. Create system snapshots

Linux Mint 19 recommends creating system snapshots using the Timeshift application, which is integrated with the Update Manager. This tool creates system snapshots, so if you want to restore Mint to a previous state, you can easily do so. It will help you in the unfortunate event of a broken system.

![Creating snapshots with Timeshift in Linux Mint 19][5]

It’s FOSS has a detailed article on [using Timeshift][6]. I recommend reading it to learn about Timeshift in detail.
#### 3\. Install codecs

Want to play MP3s, watch videos in MP4 and other formats, or play a DVD? You need to install the codecs. Linux Mint provides an easy way to install these codecs in a package called Mint Codecs.

You can install it from the Welcome Screen or from the Software Manager.

You can also use this command to install the media codecs in Linux Mint:
```
sudo apt install mint-meta-codecs
```
#### 4\. Install useful software

Once you have set up your system, it’s time to install some useful software for your daily usage. Linux Mint itself comes with a number of applications pre-installed, and hundreds or perhaps thousands of applications are available in the Software Manager. You just have to search for them.

In fact, I would recommend relying on the Software Manager for your application needs.

If you want to know what software you should install, I’ll recommend some [useful Linux applications][7]:

  * VLC for videos
  * Google Chrome for web browsing
  * Shutter for screenshots and quick editing
  * Spotify for streaming music
  * Skype for video communication
  * Dropbox for [cloud storage][8]
  * Atom for code editing
  * Kdenlive for [video editing on Linux][9]
  * Kazam [screen recorder][10]

For your information, not all of these recommended applications are open source.
#### 5\. Learn to use Snap [For intermediate to advanced users]

[Snap][11] is a universal packaging format from Ubuntu. You can easily install a number of applications via Snap packages. Though Linux Mint is based on Ubuntu, it doesn’t provide Snap support by default. Mint uses [Flatpak][12] instead, another universal packaging format, from Fedora.

While Flatpak is integrated into the Software Manager, you cannot use Snaps in the same manner; you must use Snap commands here. If you are comfortable with the command line, you will find it easy to use. With Snap, you can install some additional software that is not available in the Software Manager or in DEB format.

To [enable Snap support][13], use the command below:
```
sudo apt install snapd
```

You can refer to this article to know [how to use snap commands][14].
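Once snapd is installed, the basic workflow is the same as on Ubuntu. A minimal sketch, using the tiny `hello` demo snap purely as an example:

```shell
# Search the Snap store, install a package, then list installed snaps
snap find hello
sudo snap install hello
snap list
```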
#### 6\. Install KDE [Only for advanced users who like using KDE]

[Linux Mint 19 doesn’t have a KDE flavor][15]. If you are fond of using the [KDE desktop][16], you can install KDE in Linux Mint 19 and use it. If you don’t know what KDE is or have never used it, just ignore this part.

Before you install KDE, I recommend that you configure Timeshift and take system snapshots. Once that is in place, use the command below to install KDE and some recommended KDE components.
```
sudo apt install kubuntu-desktop konsole kscreen
```

After the installation, log out and switch the desktop environment from the login screen.
#### 7\. Change the Themes and icons [If you feel like it]

Linux Mint 19 itself has a nice look and feel, but this doesn’t mean you cannot change it. If you go to System Settings, you’ll find the option to change the icons and themes there. There are a few themes already available in this settings section that you can download and activate.

![Installing themes in Linux Mint is easy][17]

If you are looking for more eye candy, check out the [best icon themes for Ubuntu][18] and install them in Mint.
#### 8\. Protect your eyes at night with Redshift

Night Light is becoming a standard feature in operating systems and smartphones. This feature filters blue light at night and thus reduces the strain on your eyes.

Unfortunately, Linux Mint Cinnamon doesn’t have a built-in Night Light feature like GNOME. Therefore, Mint provides this feature [using the Redshift][19] application.

Redshift is installed by default in Mint 19, so all you have to do is start this application and set it to autostart. Now the app will automatically switch to a yellow light after sunset.

![Autostart Redshift for night light in Linux Mint][20]
#### 9\. Minor tweaks to your system

There is no end to tweaking your system, so I am not going to list out all the things you can do in Linux Mint. I’ll leave that up to you to explore. I’ll just mention a couple of tweaks I did.

##### Tweak 1: Display Battery percentage

I am used to keeping track of the battery life. Mint doesn’t show the battery percentage by default, but you can easily change this behavior.

Right click on the battery icon in the bottom panel and select Configure.

![Display battery percentage in Linux Mint 19][21]

In here, select the Show percentage option.

![Display battery percentage in Linux Mint 19][22]
##### Tweak 2: Set up the maximum volume

I also like that Mint allows setting the maximum volume anywhere between 0 and 150. You may find this tiny feature useful as well.

![Linux Mint 19 volume more than 100%][23]
#### 10\. Clean up your system

Keeping your system free of junk is important. I have discussed [cleaning up Linux Mint][24] in detail, so I am not going to repeat it here.

If you want a quick way to clean your system, I recommend using this single command from time to time:
```
sudo apt autoremove
```

This will help you get rid of unnecessary packages from your system.
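Beyond `autoremove`, apt can also clear its package download cache. A small sketch of a slightly fuller cleanup (both are standard apt subcommands):

```shell
# Remove packages that were pulled in as dependencies and are no longer needed
sudo apt autoremove

# Delete cached .deb files kept in /var/cache/apt/archives
sudo apt clean
```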
#### 11\. Set up a Firewall

Usually, when you are on your home network, you are already behind your router’s firewall. But when you connect to a public WiFi network, a firewall gives you an additional layer of security.

Now, setting up a firewall can be a complicated business, and hence Linux Mint comes pre-installed with Ufw (Uncomplicated Firewall). Just search for Firewall in the menu and enable it, at least for the Public mode.

![UFW Uncomplicated Firewall in Linux Mint 19][25]
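If you prefer the terminal, the same firewall can also be driven with the ufw command-line tool. A minimal sketch (requires root; note that enabling the firewall can interrupt existing connections):

```shell
# Sensible defaults: block unsolicited incoming traffic, allow outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Turn the firewall on and inspect the resulting state
sudo ufw enable
sudo ufw status verbose
```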
#### 12\. Fixes and workarounds for bugs

So far I have noticed a few issues in Mint 19. I’ll update this section as I find more bugs.

##### Issue 1: Error with Flatpaks in Software Manager

There is a major bug in the Software Manager. If you try to install a Flatpak application, you’ll encounter an error:

“An error occurred. Could not locate ‘runtime/org.freedesktop.Sdk/x86_64/1.6’ in any registered remotes”

![Flatpak install issue in Linux Mint 19][26]

There is nothing wrong with Flatpak itself; the Software Manager has a bug that results in this error. The bug has been fixed and should be included in future updates. Until then, you’ll have to [use Flatpak commands][27] in the terminal to install these Flatpak applications.

I advise going to the [Flathub website][28] and searching for the application you were trying to install. If you click the install button on the website, it downloads a .flatpakref file. Now all you need to do is start a terminal, go to the Downloads directory, and use the command in the following fashion:
```
flatpak install <name_of_flatpakref_file>
```
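If the Flathub remote is configured on your system, you can also skip the .flatpakref download and install directly by application ID. A sketch, with GIMP’s ID used purely as an example:

```shell
# Register the Flathub remote once (a no-op if it is already configured)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install an application by its ID
flatpak install flathub org.gimp.GIMP
```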
##### Issue 2: Edit option disabled in Shutter

Another bug is with the Shutter screenshot tool: you’ll find that the edit button has been disabled. It was the same case in Ubuntu 18.04. I have already written a [tutorial for the Shutter edit issue][29]. You can use the same steps for Mint 19.

#### What’s your suggestion?

This is my recommendation of things to do after installing Linux Mint 19. I’ll update this article as I explore Mint 19 and find interesting things to add to this list. Meanwhile, why don’t you share what you did after installing Linux Mint?
--------------------------------------------------------------------------------

via: https://itsfoss.com/things-to-do-after-installing-linux-mint-19/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]:https://linuxmint.com/
[2]:https://itsfoss.com/best-linux-beginners/
[3]:https://itsfoss.com/guide-install-linux-mint-16-dual-boot-windows/
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/things-to-do-after-installing-linux-mint-19.jpeg
[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/snapshot-timeshift-mint-19.jpeg
[6]:https://itsfoss.com/backup-restore-linux-timeshift/
[7]:https://itsfoss.com/essential-linux-applications/
[8]:https://itsfoss.com/cloud-services-linux/
[9]:https://itsfoss.com/best-video-editing-software-linux/
[10]:https://itsfoss.com/best-linux-screen-recorders/
[11]:https://snapcraft.io/
[12]:https://flatpak.org/
[13]:https://itsfoss.com/install-snap-linux/
[14]:https://itsfoss.com/use-snap-packages-ubuntu-16-04/
[15]:https://itsfoss.com/linux-mint-drops-kde/
[16]:https://www.kde.org/
[17]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/theme-setting-mint-19.png
[18]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[19]:https://itsfoss.com/install-redshift-linux-mint/
[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/autostart-redshift-mint.jpg
[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/configure-battery-linux-mint.jpeg
[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/display-battery-percentage-linux-mint-1.png
[23]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/01/linux-mint-volume-more-than-100.png
[24]:https://itsfoss.com/free-up-space-ubuntu-linux/
[25]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/firewall-mint.png
[26]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/flatpak-error-mint-19.png
[27]:https://itsfoss.com/flatpak-guide/
[28]:https://flathub.org/
[29]:https://itsfoss.com/shutter-edit-button-disabled/
How to migrate to the world of Linux from Windows
======

Installing Linux on a computer, once you know what you’re doing, really isn’t a difficult process. After getting accustomed to the ins and outs of downloading ISO images, creating bootable media, and installing your distribution (henceforth referred to as distro) of choice, you can convert a computer to Linux in no time at all. In fact, the time it takes to install Linux and get it updated with all the latest patches is so short that enthusiasts do the process over and over again to try out different distros; this process is called distro hopping.

With this guide, I want to target people who have never used Linux before. I’ll give an overview of some distros that are great for beginners, how to write or burn them to media, and how to install them. I’ll show you the installation process of Linux Mint, but the process is similar if you choose Ubuntu. For a distro such as Fedora, however, your experience will deviate quite a bit from what’s shown in this post. I’ll also touch on the sort of software available, and how to install additional software.

The command line will not be covered; despite what some people say, using the command line really is optional in distributions such as Linux Mint, which is aimed at beginners. Most distros come with update managers, software managers, and file managers with graphical interfaces, which largely do away with the need for a command line. Don’t get me wrong, the command line can be great – I do use it myself from time to time – but largely for convenience purposes.

This guide will also not touch on troubleshooting or dual booting. While Linux does generally support new hardware, there’s a slight chance that any cutting edge hardware you have might not yet be supported by Linux. Setting up a dual boot system is easy enough, though wiping the disk and doing a clean install is usually my preferred method. For this reason, if you intend to follow the guide, either use a virtual machine to install Linux or use a spare computer that you’ve got lying around.
The chief appeal for most Linux users is the customisability and the diverse array of Linux distributions or distros that are available. For the majority of people getting into Linux, the usual entry point is Ubuntu, which is backed by Canonical. Ubuntu was my gateway Linux distribution in 2008; although not my favourite, it’s certainly easy to begin using and is very polished.

Another beginner-friendly distribution is Linux Mint. It’s the distribution I use day-to-day on every one of my machines. It’s very easy to start using, is generally very stable, and the user interface (UI) doesn’t drastically change; anyone familiar with Windows XP or Windows Vista will be at home with the UI of Linux Mint. While everyone else went chasing the convergence dream of merging mobile and desktop together, Linux Mint staunchly maintained the position that a desktop operating system should be designed for the desktop, and it therefore avoids a mobile-friendly UI; desktops and laptops are front and centre.

For your first dive into Linux, I highly recommend the two mentioned above, simply because they’ve got huge communities and developers tending to them around the clock. With that said, several other operating systems such as elementary OS (based on Ubuntu) and Fedora (run by Red Hat) are also good ways to get started. Other users are fond of options such as Manjaro and Antergos which make the difficult-to-configure Arch Linux easy to use.
Now, we’re starting to get our hands dirty. For this guide, I will include screenshots of Linux Mint 18.3 Cinnamon edition. If you decide to go with Ubuntu or another version of Linux Mint, note that things may look slightly different. For example, when it comes to a distro that isn’t based on Ubuntu – like Fedora or Manjaro – things will look significantly different during installation, but not so much that you won’t be able to work the process out.

In order to download Linux Mint, head on over to the Linux Mint downloads page and select either the 32-bit version or 64-bit version of the Cinnamon edition. If you aren’t sure which version is needed for your computer, pick the 64-bit version; this tends to work on computers even from 2007, so it’s a safe bet. The only time I’d advise the 32-bit version is if you’re planning to install Linux on a netbook.

Once you’ve selected your version, you can either download the ISO image via one of the many mirrors, or as a torrent. It’s best to download it as a torrent because if your internet cuts out, you won’t have to restart the 1.9 GB download. Additionally, the downloaded ISO you receive via torrent will be signed with the correct keys, ensuring authenticity. If you download another distribution, you’ll be able to continue to the next step once you have an ISO file saved to your computer.

Note: If you’re using a virtual machine, you don’t need to write or burn the ISO to USB or DVD, just use the ISO to launch the distro on your chosen virtual machine.
Ten years ago when I started using Linux, you could fit an entire distribution onto a CD. Nowadays, you’ll need a DVD or a USB drive to boot the distro from.

To write the ISO to a USB device, I recommend downloading a tool called Rufus. Once it’s downloaded and installed, insert a USB stick that’s 4GB or larger. Be sure to back up the data on it, as the device will be erased.

Next, launch Rufus and select the device you want to write to; if you aren’t sure which is your USB device, unplug it, check the list, then plug it back in to work out which device you need to write to. Once you’ve worked out which USB drive to use, select ‘MBR Partition Scheme for BIOS or UEFI’ under ‘Partition scheme and target system type’. Then press the optical drive icon alongside the enabled ‘Create a bootable disk using’ field and navigate to the ISO file you just downloaded. Once Rufus finishes writing to the USB, you’ve got everything you need to boot into Linux.
If you’re on Windows 7 or above and want to burn the ISO to a DVD, simply insert a blank DVD into the computer, then right-click the ISO file and select ‘Burn disc image’. From the dialogue window which appears, select the drive where the DVD is located, tick ‘Verify disc after burning’, then hit Burn.

If you’re on Windows Vista, XP, or lower, download and install Infra Recorder and insert your blank DVD into your computer, selecting ‘Do nothing’ or ‘Cancel’ if any autorun windows pop up. Next, open Infra Recorder and select ‘Write Image’ on the main screen or go to Actions > Burn Image. From there, find the Linux ISO you want to burn and press ‘OK’ when prompted.

Once you’ve got your DVD or USB media ready, you’re ready to boot into Linux; doing so won’t harm your Windows install in any way.
Once you’ve got your installation media on hand, you’re ready to boot into the live environment. The operating system will load entirely from your DVD or USB device without making changes to your hard drive, meaning Windows will be left intact. The live environment is used to see whether your graphics card, wireless devices, and so on are compatible with Linux before you install it.

To boot into the live environment, you’re going to have to switch off the computer and boot it back up with your installation media already inserted. It’s also a must to ensure that your boot sequence is set to launch from USB or DVD before your current operating system boots from the hard drive. Configuring the boot sequence is beyond the scope of this guide, but if you can’t boot from the USB or DVD, I recommend doing a web search for how to access the BIOS to change the boot sequence order on your specific motherboard. Common keys to enter the BIOS or select the drive to boot from are F2, F10, and F11.

If your boot sequence is configured correctly, you should see a ten-second countdown that, when completed, will automatically boot Linux Mint.

![][1]

![][2]

Those who opted to try Linux Mint can let the countdown run to zero and the boot up will commence normally. On Ubuntu you’ll probably be prompted to choose a language, then press ‘Try Ubuntu without installing’, or the equivalent option on Linux Mint if you interrupted the automatic countdown by pressing the keyboard. If at any time you have the choice between trying or installing your Linux distribution of choice, always opt to try it, as the install option can cause irreversible damage to your Windows installation.
Hopefully, everything went according to plan, and you’ve made it through to the live environment. The first thing to do now is to check whether your Wi-Fi is available. To connect to Wi-Fi, press the icon to the left of the clock, where you should see the usual list of available networks; if this is the case, great! If not, don’t despair just yet: when the wireless card doesn’t seem to be working, either establish a wired connection via Ethernet or connect your phone to the computer – provided your handset supports tethering (via Wi-Fi, not data).

Once you’ve got some sort of internet connection via one of those methods, press ‘Menu’ and use the search box to look for ‘Driver Manager’. This usually requires an internet connection and may let you enable your wireless card driver. If that doesn’t work, you’re probably out of luck, but the vast majority of cards should work with Linux Mint.

For those who have a fancy graphics card, chances are that Linux is using an open source driver alternative instead of the proprietary driver you use on Windows. If you notice any issues pertaining to graphics, you can check the Driver Manager and see whether any proprietary drivers are available.

Once those two critical components are confirmed to be up and running, you may want to check printer and webcam compatibility. To test your printer, go to ‘Menu’ > ‘Office’ > ‘LibreOffice Writer’ and try printing a document. If it works, that’s great; if not, some printers may be made to work with some effort, but that’s outside the scope of this particular guide. I’d recommend searching for something like ‘Linux [your printer model]’, as there may be solutions available. As for your webcam, go to ‘Menu’ again and use the search box to look for ‘Software Manager’; this is the Microsoft Store equivalent on Linux Mint. Search for a program named ‘Cheese’ and install it. Once installed, open it up using the ‘Launch’ button in Software Manager, or have a look in ‘Menu’ and find it manually. If it detects a webcam, it means it’s compatible!

![][3]
By now, you’ve probably had a good look at Linux Mint or your distribution of choice and, hopefully, everything is working for you. If you’ve had enough and want to return to Windows, simply press Menu and then the power off button, which is located right above ‘Menu’, then press ‘Shut Down’ if a dialogue box pops up.

Given that you’re sticking with me and want to install Linux Mint on your computer, thus erasing Windows, ensure that you’ve backed up everything on your computer. Dual boot installations are available from the installer, but in this guide I’ll explain how to install Linux as the sole operating system. If you do decide to deviate and set up a dual boot system, make sure you still back up your files from Windows first, because things could potentially go wrong.

In order to do a clean install, close down any programs that you’ve got running in the live environment. On the desktop, you should see a disc icon labelled ‘Install Linux Mint’ – click that to continue.

![][4]
On the first screen of the installer, choose your language and press continue.

![][5]

On the second screen, most users will want to install third-party software to ensure hardware and codecs work.

![][6]

In the ‘Installation type’ section you can choose to erase your hard drive or dual boot. You can encrypt the entire drive if you check ‘Encrypt the new Linux Mint installation for security’ and ‘Use LVM with the new Linux Mint installation’. You can press ‘Something else’ for a specific custom set up. In order to set up a dual boot system, the hard drive which you’re installing to must already have Windows installed first.

![][7]

Now pick your location so that the operating system’s time can be set correctly, and press continue.

![][8]

Now set your keyboard’s language, and press continue.

![][9]

On the ‘Who are you’ screen, you’ll create a new user. Pop in your name, leave the computer’s name as default or enter a custom name, pick a username, and enter a password. You can choose to have the system log you in automatically or require a password. If you choose to require a password then you can also encrypt your home folder, which is different from encrypting your entire system. However, if you encrypt your entire system, there’s not a lot of point to encrypting your home folder too.
![][10]

Once you’ve completed the ‘Who are you’ screen, Linux Mint will begin installing. You’ll see a slideshow detailing what the operating system offers.

![][11]

Once the installation finishes, you’ll be prompted to restart. Go ahead and do so.

Now that you’ve restarted the computer and removed the Linux media, your computer should boot up straight to your new install. If everything has gone smoothly, you should arrive at the login screen where you just need to enter the password you created during the set up.

![][12]
Once you reach the desktop, the first thing you’ll want to do is apply all the system updates that are available. On Linux Mint you should see a shield icon with a blue logo in the bottom right-hand corner of the desktop near the clock; click on it to open the Update Manager.

![][13]

You should be prompted to pick an update policy; give them all a read over and apply whichever you think is most appropriate for you, then press ‘OK’.

![][14]

![][15]

You’ll probably be asked to pick a more local mirror too. This is optional, but could allow your updates to download quicker. Now, apply any updates offered until the shield icon has a green tick, indicating that all updates have been applied. In future, the Update Manager will continually check for new updates and alert you to them.
You’ve got all the necessary setup tasks out of the way, and you’re now free to start using the system for whatever you like. By default, Mozilla Firefox is installed, so if you’ve got a Sync account it’s probably a good idea to pull in all your passwords and bookmarks. If you’re a Chrome user, you can either run Chromium, which is in the Software Manager, or download Google Chrome from the internet. If you opt to get Chrome, you’ll be offered a .deb file which you should save to your system and then double-click to install. Installing .deb files is straightforward enough: just press ‘Install’ when prompted and the system will handle the rest; you’ll find the new software in ‘Menu’.

![][16]

Other pre-installed software includes LibreOffice, which has decent compatibility with Microsoft Office; Mozilla’s Thunderbird for managing your emails; GIMP for editing images; Transmission for torrenting files (it supports adding IP block lists too); and Pidgin and Hexchat for instant messaging and IRC respectively. As for media playback, you will find VLC and Rhythmbox under ‘Sound and Video’ to satisfy all your music and video needs. If you need any other software, check out the Software Manager; there are lots of popular packages, including Skype, Minecraft, Google Earth, Steam, and Private Internet Access Manager.

Throughout this guide, I’ve explained that it will not touch on troubleshooting problems. However, the Linux Mint community can help you overcome any complications. The first port of call is definitely a quick web search, as most problems have been resolved by others in the past and you might be able to find your solution online. If you’re still stuck, you can try the Linux Mint forums as well as the Linux Mint subreddit, both of which are oriented towards troubleshooting.

Linux definitely isn’t for everyone. It still falls short on the gaming front, despite the existence of Steam on Linux and a growing number of games. In addition, some commonly used software isn’t available on Linux, though there are usually alternatives. If, however, you have a computer lying around that isn’t powerful enough to run Windows any more, Linux could be a good option for you. Linux is also free to use, so it’s great for those who don’t want to spend money on a new copy of Windows.
loading...
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://infosurhoy.com/cocoon/saii/xhtml/en_GB/technology/how-to-migrate-to-the-world-of-linux-from-windows/
|
||||
|
||||
作者:[Marta Subat][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://infosurhoy.com/cocoon/saii/xhtml/en_GB/author/marta-subat/
|
||||
[1]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139198_autoboot_linux_mint.jpg
|
||||
[2]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139206_bootmenu_linux_mint.jpg
|
||||
[3]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139213_cheese_linux_mint.jpg
|
||||
[4]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139254_install_1_linux_mint.jpg
|
||||
[5]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139261_install_2_linux_mint.jpg
|
||||
[6]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139270_install_3_linux_mint.jpg
|
||||
[7]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139278_install_4_linux_mint.jpg
|
||||
[8]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139285_install_5_linux_mint.jpg
|
||||
[9]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139293_install_6_linux_mint.jpg
|
||||
[10]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139302_install_7_linux_mint.jpg
|
||||
[11]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139317_install_8_linux_mint.jpg
|
||||
[12]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139224_first_boot_1_linux_mint.jpg
|
||||
[13]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139232_first_boot_2_linux_mint.jpg
|
||||
[14]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139240_first_boot_3_linux_mint.jpg
|
||||
[15]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139248_first_boot_4_linux_mint.jpg
|
||||
[16]:https://cdn.neow.in/news/images/uploaded/2018/02/1519219725_software_1_linux_mint.jpg
101
sources/tech/20180702 5 open source alternatives to Skype.md
Normal file
@ -0,0 +1,101 @@
5 open source alternatives to Skype
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open-source-chat.png?itok=YnNoA9Kk)

If you've been a working adult for more than a decade, you probably remember the high cost and complexity of doing audio and video conferences. Conference calls were arranged through third-party vendors, and video conferences required dedicated rooms with expensive equipment at every endpoint.

That all started changing by the mid-2000s, as webcams became mainstream computer equipment and Skype and related services hit the market. The cost and complexity of video conferencing decreased rapidly, as nearly anyone with a webcam, a speedy internet connection, and inexpensive software could communicate with colleagues, friends, family members, even complete strangers, right from their home or office PC. Nowadays, your smartphone's video camera puts web conferencing in the palm of your hand anywhere you have a robust cellular or WiFi connection and the right software. But most of that software is proprietary.

Fortunately, there are a handful of powerful open source video-conferencing solutions that can replicate the features of Skype and similar applications. In this roundup, we've focused on applications that can accommodate multiple participants across various locations, although we do offer a couple of 1:1 communications solutions at the end that may meet your needs.

### Jitsi

[Jitsi][1]'s web conferencing solution stands out for its extreme ease of use: It runs directly in the browser with no download necessary. To set up a video-conferencing session, you just point your browser to [Jitsi Meet][2], enter a username (or select the random one that's offered), and click Go. Once you give Jitsi permission to use your webcam and microphone (sessions are [DTLS][3]/[SRTP][4]-encrypted), it generates a web link and a dial-in number others can use to join your session, and you can even add a conference password for an added layer of security.

While in a video-conferencing session, you can share your screen, a document, or a YouTube link and collaboratively edit documents with Etherpad. Android and iOS apps allow you to make and take Jitsi video conferences on the go, and you can host your own multi-user video-conference service by installing [Jitsi Videobridge][5] on your server.

Jitsi is written in Java and compatible with WebRTC standards, and the service touts its low latency, which it attributes to passing audio and video directly to participants (rather than mixing them, as other solutions do). Jitsi was acquired by Atlassian in 2015, but it remains an open source project under an [Apache 2.0][6] license. You can check out its source code on [GitHub][7], connect with its [community][8], or see some of the [other projects][9] built on the technology.

### Linphone

[Linphone][10] is a VoIP (voice over internet protocol) communications service that operates over the session initiation protocol (SIP). This means you need a SIP number to use the service, and Linphone limits you to contacting only other SIP numbers—not cellphones or landlines. Fortunately, it's easy to get a SIP number: many internet service providers include them with regular service, and Linphone also offers a free SIP service you can use.

With Linphone, you can make audio and HD video calls, do web conferencing, communicate with instant messenger, and share files and photos, but there are no screen-sharing or other collaboration features. It's available for Windows, MacOS, and Linux desktops and Android, iOS, Windows Mobile, and BlackBerry 10 mobile devices.

Linphone is dual-licensed; there's an open source [GPLv2][11] version as well as a closed-source version that can be embedded in other proprietary projects. You can get its source code from its [downloads][12] page; other resources on Linphone's website include a [user guide][13] and [technical documentation][14].

### Ring

If freedom, privacy, and the open source way are your main motivators, you'll want to check out [Ring][15]. It's an official GNU package, licensed under [GPLv3][16], and it takes its commitments to security and free and open source software very seriously. Communications are secured by end-to-end encryption with authentication using RSA/AES/DTLS/SRTP technologies and X.509 certificates.

Audio and video calls are made through the Ring app, which is available for GNU/Linux, Windows, and MacOS desktops and Android and iOS mobile devices. You can communicate using either a RingID (which the Ring app randomly generates the first time it's launched) or over SIP. You can run RingID and SIP in parallel, switching between protocols as needed, but you must register your RingID on the blockchain before it can be used to make or receive communications.

Ring's features include teleconferencing, media sharing, and text messaging. For more information about Ring, access its [source code][17] repository on GitLab; its [FAQ][18] answers many questions about using the system.

### Riot

[Riot][19] is not just a video-conferencing solution—it's team-management software with integrated group video/voice chat communications. Communication (including voice and video conferencing, file sharing, notifications, and project reminders) happens in dedicated "rooms" that can be organized by topic, team, event, etc. Anything shared in a room is persistently stored, with access governed by that room's confidentiality settings. A cool feature is that you can use Riot to communicate with people using other collaboration tools—including IRC, Slack, Twitter, SMS, and Gitter.

You can use Riot in your browser (Chrome and Firefox) or via its apps for MacOS, Windows, and Linux desktops and iOS and Android devices. In terms of infrastructure, Riot can be installed on your server, or you can run it on Riot's servers. It is based on the [Matrix][20] React SDK, so all files and data transferred over Riot are secured with Matrix's end-to-end encryption.

Riot is available under an [Apache 2.0][21] license, its [source code][22] is available on GitHub, and you can find [documentation][23], including how-to videos and FAQs, on its website.

### Wire

Developed by the audio engineers who created Skype, [Wire][24] enables up to 10 people to participate in an end-to-end encrypted audio conference call. Video conferencing (also encrypted) is currently limited to 1:1 communications, with group video capabilities on the app's roadmap. Other features include secure screen sharing, file sharing, and group chat; administrator management; and the ability to switch between accounts and profiles (e.g., work and personal) at will from within the app.

Wire is open source under the [GPL 3.0][25] license and is free to use if you [compile it from source][26] on your own server. A paid option is available starting at $5 per user per month (with large enterprise plans also available).

### Other options

If you need 1:1 communications, here are two other services that might interest you: Pidgin and Signal.

[Pidgin][27] is a one-stop shop for the multitude of chat networks you and your friends, family, and colleagues use. You can use Pidgin to chat with people who use AIM, Google Talk, ICQ, IRC, XMPP, and multiple other networks, all from the same interface. Check out Ray Shimko's article "[Get started with Pidgin][28]" on [Opensource.com][29] for more information.

This probably isn't the first time you've heard of [Signal][30]. The app transmits end-to-end encrypted voice, video, text, and photos, and it's been endorsed by security and cryptography experts including Edward Snowden and Bruce Schneier, as well as the Electronic Frontier Foundation.

The open source landscape is perpetually changing, so chances are some of you are using other open source video- and audio-conferencing solutions. If you have a favorite not listed here, please share it in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/alternatives/skype

作者:[Opensource.com][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com
[1]:https://jitsi.org/
[2]:https://meet.jit.si/
[3]:https://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security
[4]:https://en.wikipedia.org/wiki/Secure_Real-time_Transport_Protocol
[5]:https://jitsi.org/jitsi-videobridge/
[6]:https://github.com/jitsi/jitsi/blob/master/LICENSE
[7]:https://github.com/jitsi
[8]:https://jitsi.org/the-community/
[9]:https://jitsi.org/projects/
[10]:http://www.linphone.org/
[11]:https://www.gnu.org/licenses/gpl-2.0.html
[12]:http://www.linphone.org/technical-corner/linphone/downloads
[13]:http://www.linphone.org/user-guide.html
[14]:http://www.linphone.org/technical-corner/linphone/documentation
[15]:https://ring.cx/
[16]:https://www.gnu.org/licenses/gpl-3.0.en.html
[17]:https://gitlab.savoirfairelinux.com/groups/ring
[18]:https://ring.cx/en/documentation/faq
[19]:https://about.riot.im/
[20]:https://matrix.org/#about
[21]:https://github.com/vector-im/riot-web/blob/master/LICENSE
[22]:https://github.com/vector-im
[23]:https://about.riot.im/need-help/
[24]:https://wire.com/en/
[25]:https://github.com/wireapp/wire/blob/master/LICENSE
[26]:https://github.com/wireapp/wire
[27]:https://pidgin.im/
[28]:https://opensource.com/article/18/4/pidgin-open-source-replacement-skype-business
[29]:https://opensource.com/
[30]:https://signal.org/
@ -0,0 +1,91 @@
Digg's v4 launch: an optimism born of necessity.
============================================================

![](https://lethain.com/static/blog/heroes/digg-v4.jpg)

Digg was having a rough year. Our CEO left the day before I joined. Senior engineers ghosted out the door, dampening productivity and pulling their remaining friends with them. Fraudulent voting rings circumvented our algorithms, selling access to our front page and threatening our lives over modifications to prevent their abuse. Our provisioning tools for developer environments broke and no one knew how to fix them, so we reassigned new hires the zombie VMs of recently departed coworkers.

But today wasn't about any of that. Today was reserved for the reversal of the biggest problem that had haunted Digg for the last two years. We were launching a complete rewrite of Digg. We were committed to launching today. We had agreed against further postponing the launch. We were pretty sure the new version, version four, wasn't ready.

The day started. We were naive. Our education lay in wait.

If you'd been fortunate enough to be invited into our cavernous, converted warehouse of an office and felt the buzz, you'd probably guess a celebration was underway. The rewrite from Digg v3.5 to Digg v4 had marched haphazardly forward for nearly two years, and promised to move us from a monolithic, community-driven news aggregator to an infinitely personalized aggregator driven by blending your social graph, top influencers, and the global zeitgeist of news.

If our product requirements had continued to flux well into the preceding week, the path to Digg v4 had been clearly established several years earlier, when Digg had been devastated by [Google's Panda algorithm update][3]. As that search update took a leisurely month to soak into effect, our fortunes reversed like we'd spat on the gods: we fell from our first--and only--profitable month, and kept falling until our monthly traffic was cut in half. One month, a company culminating a five-year path to profitability; the next, a company in freefall, about to fundraise from a position of weakness.

Launching v4 was our chance to return to our rightful place among the giants of the internet, and the cavernous office, known by employees as "Murder Church," had been lovingly rearranged for the day. In the middle of the room, an immense wooden table had been positioned to serve as the "war room." It was framed by a ring of couches, where others would stand by to assist. Waiters in black-tie attire walked the room with trays of sushi, exquisite small bites, and chilled champagne. A bar had been erected, serving drinks of all shapes. Folks slipped upstairs to catch a few games of ping pong.

The problems started slowly.

At one point, an ebullient engineer had declared the entire rewrite could run on two servers, and despite our minimalist QA environment being much larger to the contrary, we got remarkably close to launching with two servers as our most accurate capacity estimate. The week before launch, the capacity planning project was shifted to Rich and me. We put on a brave farce of installing JMeter and generating as much performance data as we could against the complex, dense, and rapidly shifting sands that comprised the rewrite. It was not the least confident I've ever been in my work (in fourth grade, I can remember writing a book report on the bus to school about a book I never read), but the truth is we were launching without much sense of whether this was going to work.

We had the suspicion it wouldn't matter much anyway, because we weren't going to be able to order and install new hardware in our datacenters before the launch. Capacity would suffice because it was all we had.

Around 10:00 AM, someone asked when we were going to start the switch, and Mike chimed in helpfully: "We've already started reprovisioning the v3 servers." We had so little capacity that we had decided to reimage all our existing servers and then reprovision them in the new software stack. This was clever from the perspective of reducing our costs, but the optimism it entailed was tinged with madness.

As the flames of rebirth swallowed the previous infrastructure, something curious happened, or perhaps didn't happen: the new site didn't really come up. The operations team rushed out a maintenance page, and we collected ourselves around our handsome wooden table, expensive chairs, and gnawing sense of dread. This was _not_ going well. We didn't have a rollback plan. The random self-selection of engineers at the table decided our only possible option was to continue rolling forward, and we did. An hour later, the old infrastructure was entirely gone, replaced by Digg version four.

With servers reprovisioning and the maintenance page cajoling visitors, the office took on a "last days of Rome" atmosphere. The champagne and open bar flowed, the ping pong table was fully occupied, and the rest of the company looked on, unsure how to help and coming to terms with the likelihood that Digg's final hail mary had been fumbled. The framed Forbes cover in the lobby was now firmly a legacy, and assuredly not a harbinger.

The day stretched on, and folks began to leave, but for the engineers swarming the central table, there was much left to do. We had successfully provisioned the new site, but it was still staggering under load, with most pages failing to load. The primary bottleneck was our Cassandra cluster. Rich and I broke off to a conference room and expanded our use of memcache as a write-through cache shielding Cassandra; a few hours later, much of the site started to load for logged-out users.
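The write-through pattern we reached for can be sketched in a few lines of Python. This is an illustrative sketch only: `SlowStore` and the plain dicts below stand in for Cassandra and memcache, not Digg's actual clients.

```python
class SlowStore:
    """Stand-in for a Cassandra-like backing store; counts reads
    so we can see the cache absorbing load."""
    def __init__(self):
        self.data = {}
        self.reads = 0

    def get(self, key):
        self.reads += 1
        return self.data.get(key)

    def put(self, key, value):
        self.data[key] = value


class WriteThroughCache:
    """Writes go to both the cache and the store, so the cache is
    always warm for recently written keys; reads hit the cache first
    and fall back to the store only on a miss."""
    def __init__(self, store):
        self.store = store
        self.cache = {}  # stand-in for a memcache client

    def put(self, key, value):
        self.store.put(key, value)  # write through to the store
        self.cache[key] = value     # keep the cache consistent

    def get(self, key):
        if key in self.cache:       # cache hit: store is untouched
            return self.cache[key]
        value = self.store.get(key) # miss: read once, then backfill
        if value is not None:
            self.cache[key] = value
        return value
```

The point of the pattern is that every read after a write is served from memory, so the slow store only sees cache misses.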
Logged-in users, though, were still seeing error pages when they came to the site. The culprit was the rewrite's crown jewel, called MyNews, which provided social context on which of your friends had interacted with each article and merged all that activity together into a personalized news feed. Well, that is what was supposed to happen, anyway; at this point, what it actually did was throw tasteful "startup blue" error pages.

As the day ended, we changed the default page for users from MyNews to TopNews, the global view, which was still loading; that made it possible for users to log in and use the site. The MyNews page would still error out, but it was enough for us to go home, tipsy and defeated, survivors of our relaunch celebration.

Folks trickled into the office early the next day, and we regrouped. MyNews was thoroughly broken, the site was breaking like clockwork every four hours, and behind those core issues, dozens of smaller problems were cropping up as well. We'd learned we could fix the periodic breakage by restarting every single process; we hadn't been able to isolate which ones were the source, so we decided to focus on MyNews first.

Once again, Rich and I sequestered ourselves in a conference room, this time with the goal of rewriting our MyNews implementation from scratch. The current version wrote into Cassandra, and its load was crushing the clusters, breaking the social functionality, and degrading all other functionality around it. We decided to rewrite it to store the data in Redis, but there was too much data to store on any single server, so we would need to roll out a new implementation, a new sharding strategy, and the tooling to manage it.
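The core of such a sharding strategy is simple in principle: hash each key to deterministically pick a server. A minimal Python sketch follows; the dict-like shard clients, the `crc32` hash, and the modulo placement are assumptions for illustration, not Digg's actual scheme.

```python
import zlib


class ShardedClient:
    """Routes each key to one of several shard clients by hashing
    the key. The shards are dict-like stand-ins here; in a real
    deployment each would be a Redis connection."""
    def __init__(self, shards):
        self.shards = shards

    def _shard_for(self, key):
        # crc32 is stable across runs and processes, so every client
        # routes a given key to the same shard
        return self.shards[zlib.crc32(key.encode()) % len(self.shards)]

    def set(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)
```

A design note: modulo placement is the simplest strategy, but it reshuffles nearly every key when the shard count changes, which is one reason consistent hashing exists.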
And we did!

Over the next two days, we implemented a sharded Redis cluster and migrated over to it successfully. It had some bugs--for the rest of Digg's life, I would clandestinely delete large quantities of data from the MyNews cluster, because we couldn't afford to size it correctly to store the necessary data and we couldn't agree on what to do about it, so each time I ended up deleting the excess data in secret to keep the site running--but it worked, and our prized rewrite flew out the starting gate to begin limping down the track.

It really was limping, though, requiring manual restarts of every process every four hours. It took a month to track this bug down, and by the end only three people were left trying. I became so engrossed in understanding the problem, working with Jorge and Mike on the operations team, that I don't even know if anyone else came into the office that following month. Not understanding this breakage became an affront, and as most folks dropped off--presumably to start applying for jobs, because they had a lick of sense--I was possessed by the obsession to fix it.

And we did!

Our API server was a Python Tornado service that made API calls into our Python backend tier, known as Bobtail (the frontend was Bobcat), and one of the most frequently accessed endpoints was used to retrieve users by their name or id. Because it supported retrieval by either name or id, it set default values for both parameters as empty lists. This is a super reasonable thing to do! However, Python only initializes default parameters once, when the function definition is evaluated, which means that the same lists are reused for every call to the function. As a result, if you mutate those values, the mutations span across invocations.

In this case, user ids and names were appended to the default lists each time the endpoint was called. Over hours, those lists grew until each request was retrieving tens of thousands of users, overwhelming even the memcache clusters. This took so long to catch because we returned the values as a dictionary, and the dictionary always included the necessary values; it just happened to also include tens of thousands of extraneous values, so it never failed in an obvious way. The bug's impact was amplified because we assumed users wouldn't pass in duplicate ids, so we would cheerfully retrieve the same id repeatedly for a single request.
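The bug is easy to reproduce. The sketch below is a hypothetical reduction of the endpoint described above, not Digg's actual code; `lookup_buggy` shows the shared mutable default, and `lookup_fixed` shows the standard `None`-sentinel repair.

```python
def lookup_buggy(ids=[]):
    # The default list is created ONCE, when this `def` executes,
    # so every call that omits `ids` mutates the same object.
    ids.append(42)  # simulate appending a requested user id
    return ids


def lookup_fixed(ids=None):
    # The idiomatic fix: use None as the sentinel and build a
    # fresh list inside the function body on every call.
    if ids is None:
        ids = []
    ids.append(42)
    return ids
```

Calling `lookup_buggy()` twice returns `[42]` and then `[42, 42]`: the default list has silently grown across invocations, exactly the slow accumulation described above, while `lookup_fixed()` returns `[42]` every time.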
We rolled out that final critical fix, and Digg v4 was fully launched. A week later, our final CEO would join. A month later, we'd have our third round of layoffs. A year later, we would sell the company. But for that moment, we'd won.

I was about to hit my six-month anniversary.

* * *

Digg v4 is sometimes referenced as an example of a catastrophic launch, with an implied lesson that we shouldn't have launched it. At one point, I used to agree, but these days I think we made the right decision to launch. Our traffic was significantly down, we were losing a bunch of money each month, and we had recently raised money and knew we couldn't easily raise more. If we'd had the choice between launching something great and something awful, we'd have preferred to launch something great, but instead we had the choice of taking one last swing or turning in our bat quietly.

I'm glad we took the last swing; I'm proud we survived the rough launch.

On the other hand, I'm still shocked that we were so reckless in the launch itself. I remember the meeting where we decided to go ahead with the launch, with Mike vigorously protesting. To the best of my recollection, I remained silent. I hope that I grew from the experience, because even now I'm uncertain how such a talented group put on such a display of fuckery.

--------------------------------------------------------------------------------

作者简介:

Hi. I grew up in North Carolina, studied CS at Centre College in Kentucky, spent a year in Japan on the JET Program, and have been living in San Francisco since 2009 or so.

Since coming out here, I've gotten to work at some great companies, and some of them were even good when I worked there! Starting with Yahoo! BOSS, then Digg, SocialCode, Uber, and now Stripe.

A long time ago, I also cofounded a really misguided iOS gaming startup with Luke Hatcher. We made thousands of dollars over six months, and spent the next six years trying to figure out how to stop paying taxes. It was a bit of a missed opportunity.

The very first iteration of Irrational Exuberance was created the summer after I graduated from college, and I've been publishing to it off and on since. Early on there was a heavy focus on Django, Python, and Japan; lately it's more about infrastructure, architecture, and engineering management.

It's hard to predict what it'll look like in the future.

-----------------------------

via: https://lethain.com/digg-v4/

作者:[Will Larson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://lethain.com/about/
[1]:https://lethain.com/tags/stories/
[2]:https://lethain.com/tags/digg/
[3]:https://moz.com/learn/seo/google-panda
@ -0,0 +1,128 @@
How to edit Adobe InDesign files with Scribus and Gedit
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open-indesign-scribus-gedit-graphic.jpg?itok=OPJaGdA5)
|
||||
|
||||
To be a good graphic designer, you must be adept at using the profession's tools, which for most designers today are the ones in the proprietary Adobe Creative Suite.
|
||||
|
||||
However, there are times that open source tools will get you out of a jam. For example, imagine you're a commercial printer tasked with printing a file created in Adobe InDesign. You need to make a simple change (e.g., fixing a small typo) to the file, but you don't have immediate access to the Adobe suite. While these situations are admittedly rare, open source tools like desktop publishing software [Scribus][1] and text editor [Gedit][2] can save the day.
|
||||
|
||||
In this article, I'll show you how I edit Adobe InDesign files with Scribus and Gedit. Note that there are many open source graphic design solutions that can be used instead of or in conjunction with Adobe InDesign. For more on this subject, check out my articles: [Expensive tools aren't the only option for graphic design (and never were)][3] and [2 open][4][source][4][Adobe InDesign scripts][4].
|
||||
|
||||
When developing this solution, I read a few blogs on how to edit InDesign files with open source software but did not find what I was looking for. One suggestion I found was to create an EPS from InDesign and open it as an editable file in Scribus, but that did not work. Another suggestion was to create an IDML (an older InDesign file format) document from InDesign and open that in Scribus. That worked much better, so that's the workaround I used in the following examples.
|
||||
|
||||
### Editing a business card
|
||||
|
||||
Opening and editing my InDesign business card file in Scribus worked fairly well. The only issue I had was that the tracking (the space between letters) was a bit off and the upside-down "J" I used to create the lower-case "f" in "Jeff" was flipped. Otherwise, the styles and colors were all intact.
|
||||
|
||||
|
||||
![Business card in Adobe InDesign][6]
|
||||
|
||||
Business card designed in Adobe InDesign.
|
||||
|
||||
![InDesign IDML file opened in Scribus][8]
|
||||
|
||||
InDesign IDML file opened in Scribus.
|
||||
|
||||
### Deleting copy in a paginated book
|
||||
|
||||
The book conversion didn't go as well. The main body of the text was OK, but the table of contents and some of the drop caps and footers were messed up when I opened the InDesign file in Scribus. Still, it produced an editable document. One problem was some of my blockquotes defaulted to Arial font because a character style (apparently carried over from the original Word file) was on top of the paragraph style. This was simple to fix.
|
||||
|
||||
![Book layout in InDesign][10]
|
||||
|
||||
Book layout in InDesign.
|
||||
|
||||
![InDesign IDML file of book layout opened in Scribus][12]
|
||||
|
||||
InDesign IDML file of book layout opened in Scribus.
|
||||
|
||||
Trying to select and delete a page of text produced surprising results. I placed the cursor in the text and hit Command+A (the keyboard shortcut for "select all"). It looked like one page was highlighted. However, that wasn't really true.
|
||||
|
||||
![Selecting text in Scribus][14]
|
||||
|
||||
Selecting text in Scribus.
|
||||
|
||||
When I hit the Delete key, the entire text string (not just the highlighted page) disappeared.
|
||||
|
||||
![Both pages of text deleted in Scribus][16]
|
||||
|
||||
Both pages of text deleted in Scribus.
|
||||
|
||||
Then something even more interesting happened… I hit Command+Z to undo the deletion. When the text came back, the formatting was messed up.
|
||||
|
||||
![Undo delete restored the text, but with bad formatting.][18]
|
||||
|
||||
Command+Z (undo delete) restored the text, but the formatting was bad.
|
||||
|
||||
### Opening a design file in a text editor
|
||||
|
||||
If you open a Scribus file and an InDesign file in a standard text editor (e.g., TextEdit on a Mac), you will see that the Scribus file is very readable whereas the InDesign file is not.
|
||||
|
||||
You can use TextEdit to make changes to either type of file and save it, but the resulting file is useless. Here's the error I got when I tried re-opening the edited file in InDesign.
|
||||
|
||||
![InDesign error message][20]
|
||||
|
||||
InDesign error message.
|
||||
|
||||
I got much better results when I used Gedit on my Linux Ubuntu machine to edit the Scribus file. I launched Gedit from the command line and voilà, the Scribus file opened, and the changes I made in Gedit were retained.
![Editing Scribus file in Gedit][22]

Editing a Scribus file in Gedit.

![Result of the Gedit edit in Scribus][24]

Result of the Gedit edit opened in Scribus.
This could be very useful to a printer that receives a call from a client about a small typo in a project. Instead of waiting to get a new file, the printer could open the Scribus file in Gedit, make the change, and be good to go.
### Dropping images into a file
I converted an InDesign doc to an IDML file so I could try dropping in some PDFs using Scribus. It seems Scribus doesn't handle this as well as InDesign; the import failed. Instead, I converted my PDFs to JPGs and imported them into Scribus. That worked great. However, when I exported my document as a PDF, I found that the file size was rather large.

![Huge PDF file][26]

Exporting Scribus to PDF produced a huge file.
I'm not sure why this happened—I'll have to investigate it later.
Do you have any tips for using open source software to edit graphics files? If so, please share them in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/adobe-indesign-open-source-tools

作者:[Jeff Macharyas][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rikki-endsley
[1]:https://www.scribus.net/
[2]:https://wiki.gnome.org/Apps/Gedit
[3]:https://opensource.com/life/16/8/open-source-alternatives-graphic-design
[4]:https://opensource.com/article/17/3/scripts-adobe-indesign
[5]:/file/402516
[6]:https://opensource.com/sites/default/files/uploads/1-business_card_designed_in_adobe_indesign_cc.png (Business card in Adobe InDesign)
[7]:/file/402521
[8]:https://opensource.com/sites/default/files/uploads/2-indesign_.idml_file_opened_in_scribus.png (InDesign IDML file opened in Scribus)
[9]:/file/402531
[10]:https://opensource.com/sites/default/files/uploads/3-book_layout_in_indesign.png (Book layout in InDesign)
[11]:/file/402536
[12]:https://opensource.com/sites/default/files/uploads/4-indesign_.idml_file_of_book_opened_in_scribus.png (InDesign IDML file of book layout opened in Scribus)
[13]:/file/402541
[14]:https://opensource.com/sites/default/files/uploads/5-command-a_in_the_scribus_file.png (Selecting text in Scribus)
[15]:/file/402546
[16]:https://opensource.com/sites/default/files/uploads/6-deleted_text_in_scribus.png (Both pages of text deleted in Scribus)
[17]:/file/402551
[18]:https://opensource.com/sites/default/files/uploads/7-command-z_in_scribus.png (Undo delete restored the text, but with bad formatting.)
[19]:/file/402556
[20]:https://opensource.com/sites/default/files/uploads/8-indesign_error_message.png (InDesign error message)
[21]:/file/402561
[22]:https://opensource.com/sites/default/files/uploads/9-scribus_edited_in_gedit_on_linux.png (Editing Scribus file in Gedit)
[23]:/file/402566
[24]:https://opensource.com/sites/default/files/uploads/10-scribus_opens_after_gedit_changes.png (Result of the Gedit edit in Scribus)
[25]:/file/402571
[26]:https://opensource.com/sites/default/files/uploads/11-large_pdf_size.png (Huge PDF file)
View The Contents Of An Archive Or Compressed File Without Extracting It
======

![](https://www.ostechnix.com/wp-content/uploads/2018/07/View-The-Contents-Of-An-Archive-Or-Compressed-File-720x340.png)
In this tutorial, we are going to learn how to view the contents of an archive and/or a compressed file without actually extracting it in Unix-like operating systems. Before going further, let us be clear about archived and compressed files, since there is a significant difference between the two. Archiving is the process of combining multiple files or folders into a single file; the resulting file is not compressed. Compressing combines multiple files or folders into a single file and then compresses the result. An archive is not a compressed file, but a compressed file can be an archive. Clear? Well, let us get to the topic.
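The distinction is easy to see on disk. A minimal sketch (the file names are illustrative; any input files will do):

```shell
#!/bin/sh
# A .tar only concatenates files plus headers; a .tar.gz also compresses.
mkdir -p data && head -c 100000 /dev/zero > data/zeros.bin

tar -cf  data.tar    data   # archive only: slightly larger than the input
tar -czf data.tar.gz data   # archive + gzip: far smaller for this input

ls -l data.tar data.tar.gz  # compare the sizes
```

For highly compressible input like the zero-filled file above, the `.tar.gz` ends up a tiny fraction of the `.tar`'s size.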
### View The Contents Of An Archive Or Compressed File Without Extracting It

Thanks to the Linux community, there are many command line applications available to do this. Let us see some of them with examples.
**1\. Using Vim Editor**

Vim is not just an editor. Using Vim, we can do numerous things. The following command displays the contents of a compressed archive file without decompressing it:

```
$ vim ostechnix.tar.gz
```

![][2]

You can even browse through the archive and open any text files it contains. To open a text file, just move the cursor in front of the file using the arrow keys and hit ENTER.
**2\. Using Tar command**

To list the contents of a tar archive file, run:

```
$ tar -tf ostechnix.tar
ostechnix/
ostechnix/image.jpg
ostechnix/file.pdf
ostechnix/song.mp3
```

Or, use the **-v** flag to view the detailed properties of the archive's contents, such as permissions, file owner, group, and modification date:

```
$ tar -tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-07-02 19:30 ostechnix/
-rw-r--r-- sk/users 53632 2018-06-29 15:57 ostechnix/image.jpg
-rw-r--r-- sk/users 156831 2018-06-04 12:37 ostechnix/file.pdf
-rw-r--r-- sk/users 9702219 2018-04-25 20:35 ostechnix/song.mp3
```
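Because `tar -tf` writes one entry per line, the listing composes with ordinary text tools. A small sketch (the sample files are hypothetical stand-ins for the archive above):

```shell
#!/bin/sh
# Build a small sample archive, then query it without extracting anything.
mkdir -p sample && touch sample/image.jpg sample/file.pdf sample/song.mp3
tar -cf sample.tar sample

tar -tf sample.tar | wc -l           # number of entries (4, incl. the directory)
tar -tf sample.tar | grep '\.pdf$'   # list only the PDF entries
```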
**3\. Using Rar command**

To view the contents of a rar file, simply do:

```
$ rar v ostechnix.rar

RAR 5.60 Copyright (c) 1993-2018 Alexander Roshal 24 Jun 2018
Trial version Type 'rar -?' for help

Archive: ostechnix.rar
Details: RAR 5

Attributes Size Packed Ratio Date Time Checksum Name
----------- --------- -------- ----- ---------- ----- -------- ----
-rw-r--r-- 53632 52166 97% 2018-06-29 15:57 70260AC4 ostechnix/image.jpg
-rw-r--r-- 156831 139094 88% 2018-06-04 12:37 C66C545E ostechnix/file.pdf
-rw-r--r-- 9702219 9658527 99% 2018-04-25 20:35 DD875AC4 ostechnix/song.mp3
----------- --------- -------- ----- ---------- ----- -------- ----
9912682 9849787 99% 3
```
**4\. Using Unrar command**

You can also do the same using the **unrar** command with the **l** flag, as shown below:

```
$ unrar l ostechnix.rar

UNRAR 5.60 freeware Copyright (c) 1993-2018 Alexander Roshal

Archive: ostechnix.rar
Details: RAR 5

Attributes Size Date Time Name
----------- --------- ---------- ----- ----
-rw-r--r-- 53632 2018-06-29 15:57 ostechnix/image.jpg
-rw-r--r-- 156831 2018-06-04 12:37 ostechnix/file.pdf
-rw-r--r-- 9702219 2018-04-25 20:35 ostechnix/song.mp3
----------- --------- ---------- ----- ----
9912682 3
```
**5\. Using Zip command**

To view the contents of a zip file without extracting it, use the following zip command:

```
$ zip -sf ostechnix.zip
Archive contains:
Life advices.jpg
Total 1 entries (597219 bytes)
```
**6\. Using Unzip command**

You can also use the **unzip** command with the **-l** flag to display the contents of a zip file, like below:

```
$ unzip -l ostechnix.zip
Archive: ostechnix.zip
Length Date Time Name
--------- ---------- ----- ----
597219 2018-04-09 12:48 Life advices.jpg
--------- -------
597219 1 file
```
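If the `zip`/`unzip` tools are not installed, Python's standard library can produce a similar listing from the shell. A sketch, assuming `python3` is available:

```shell
#!/bin/sh
# python3 -m zipfile can create (-c) and list (-l) zip archives without
# the zip/unzip utilities being installed.
echo "hello" > note.txt
python3 -m zipfile -c notes.zip note.txt   # create a zip containing note.txt
python3 -m zipfile -l notes.zip            # print its table of contents
```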
**7\. Using Zipinfo command**

Alternatively, use the **zipinfo** command:

```
$ zipinfo ostechnix.zip
Archive: ostechnix.zip
Zip file size: 584859 bytes, number of entries: 1
-rw-r--r-- 6.3 unx 597219 bx defN 18-Apr-09 12:48 Life advices.jpg
1 file, 597219 bytes uncompressed, 584693 bytes compressed: 2.1%
```

As you can see, the above command displays the contents of the zip file along with its permissions, modification date, and compression ratio.
**8\. Using Zcat command**

To view the contents of a compressed archive file without extracting it using the **zcat** command, we do:

```
$ zcat ostechnix.tar.gz
```

Note that for a `.tar.gz` archive this dumps the raw, decompressed tar stream to the terminal, so it is most useful for compressed text files. `zcat` is equivalent to the “gunzip -c” command, so you can also use:

```
$ gunzip -c ostechnix.tar.gz
```
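For a compressed tar archive, piping `zcat` into `tar -t` yields a proper member listing rather than a raw dump; it is equivalent to `tar -tzf`. A quick sketch with a throwaway archive:

```shell
#!/bin/sh
# zcat decompresses to stdout; tar -t reads the tar stream from stdin
# and lists its members. Same result as `tar -tzf demo.tar.gz`.
mkdir -p demo && echo "hi" > demo/a.txt
tar -czf demo.tar.gz demo

zcat demo.tar.gz | tar -t    # member listing via the pipe
tar -tzf demo.tar.gz         # identical listing in one command
```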
**9\. Using Zless command**

To view the contents of an archive/compressed file using the **zless** command, simply do:

```
$ zless ostechnix.tar.gz
```

This command is similar to the “less” command in that it displays the output page by page.
**10\. Using Less command**

As you might already know, the **less** command can be used to open a file for interactive reading, allowing scrolling and search.

Run the following command to view the contents of an archive/compressed file using the less command:

```
$ less ostechnix.tar.gz
```
And, that’s all for now. You now know how to view the contents of an archive or compressed file using various commands in Linux. Hope you find this useful. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-view-the-contents-of-an-archive-or-compressed-file-without-extracting-it/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/07/vim.png
10 killer tools for the admin in a hurry
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT)
Administering networks and systems can get very stressful when the workload piles up. Nobody really appreciates how long anything takes, and everyone wants their specific thing done yesterday.

So it's no wonder so many of us are drawn to the open source spirit of figuring out what works and sharing it with everyone. Because, when deadlines are looming, and there just aren't enough hours in the day, it really helps if you can just find free answers you can implement immediately.

So, without further ado, here's my Swiss Army Knife of stuff to get you out of the office before dinner time.

### Server configuration and scripting

Let's jump right in.

**[NixCraft][1]**
Use the site's internal search function. With more than a decade of regular updates, there's gold to be found here—useful scripts and handy hints that can solve your problem straight away. This is often the second place I look after Google.

**[Webmin][2]**
This gives you a nice web interface to remotely edit your configuration files. It cuts down on a lot of time spent having to juggle directory paths and `sudo nano`, which is handy when you're handling several customers.

**[Windows Subsystem for Linux][3]**
The reality of the modern workplace is that most employees are on Windows, while the grown-up gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks from (gasp) a Windows desktop.

What do you do? Install a virtual machine? It's actually much faster and far less work to configure if you install the Windows Subsystem for Linux compatibility layer, now available at no cost on Windows 10.

This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries on the local machine, have full access to both Windows and Linux filesystems, and mount network drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors.

**[mRemoteNG][4]**
This is an excellent SSH and remote desktop client for when you have 100+ servers to manage.
### Setting up a network so you don't have to do it again

A poorly planned network is the sworn enemy of the admin who hates working overtime.

**[IP Addressing Schemes that Scale][5]**
The diabolical thing about running out of IP addresses is that, when it happens, the network's grown large enough that a new addressing scheme is an expensive, time-consuming pain in the proverbial.

Ain't nobody got time for that!

At some point, IPv6 will finally arrive to save the day. Until then, these one-size-fits-most IP addressing schemes should keep you going, no matter how many network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and espresso machines the world throws at us.

**[Linux Chmod Permissions Cheat Sheet][6]**
A short but sweet cheat sheet of Bash commands to set permissions across the network. This is so when Bill from Customer Service falls for that ransomware scam, you're recovering just his files and not the entire company's.
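For example, a typical recursive cleanup from such a cheat sheet looks like this (the `shared/` tree is a hypothetical example path):

```shell
#!/bin/sh
# Give directories rwxr-x--- and files rw-r----- so nothing is
# world-readable or world-writable.
mkdir -p shared/docs && touch shared/docs/report.txt

find shared -type d -exec chmod 750 {} +   # directories: traversable by group
find shared -type f -exec chmod 640 {} +   # files: group read-only

ls -l shared/docs/report.txt
```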
**[VLSM Subnet Calculator][7]**
Just put in the number of networks you want to create from an address space and the number of hosts you want per network, and it calculates what the subnet mask should be for everything.
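The same arithmetic the calculator performs can be sketched in the shell, assuming a shell with C-style arithmetic expansion (bash, dash):

```shell
#!/bin/sh
# Convert a CIDR prefix length to a dotted-quad netmask and the number
# of usable hosts per subnet. prefix=26 is just an example value.
prefix=26
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))

printf '%d.%d.%d.%d\n' \
  $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
  $(( (mask >> 8)  & 255 )) $((  mask        & 255 ))   # 255.255.255.192

echo $(( (1 << (32 - prefix)) - 2 ))                    # 62 usable hosts
```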
### Single-purpose Linux distributions

Need a Linux box that does just one thing? It helps if someone else has already sweated the small stuff on an operating system you can install and have ready immediately.

Each of these has, at one point, made my work day so much easier.

**[Porteus Kiosk][8]**
This is for when you want a computer totally locked down to just a web browser. With a little tweaking, you can even lock the browser down to just one website. This is great for public access machines. It works with touchscreens or with a keyboard and mouse.

**[Parted Magic][9]**
This is an operating system you can boot from a USB drive to partition hard drives, recover data, and run benchmarking tools.

**[IPFire][10]**
Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire." That's my second favorite thing about this Linux distribution. My favorite is that it's a seriously solid software suite. It's so easy to set up and configure, and there is a heap of plugins available to extend it.

So, how about you? What tools, resources, and cheat sheets have you found to make the workday easier? I'd love to know. Please share in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/7/tools-admin

作者:[Grant Hamono][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/grantdxm
[1]:https://www.cyberciti.biz/
[2]:http://www.webmin.com/
[3]:http://wsl-guide.org/en/latest/
[4]:https://mremoteng.org/
[5]:https://blog.dxmtechsupport.com.au/ip-addressing-for-a-small-business-that-might-grow/
[6]:https://isabelcastillo.com/linux-chmod-permissions-cheat-sheet
[7]:http://www.vlsm-calc.net/
[8]:http://porteus-kiosk.org/
[9]:https://partedmagic.com/
[10]:https://www.ipfire.org/
AGL Outlines Virtualization Scheme for the Software Defined Vehicle
============================================================

![AGL](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/agl.jpg?itok=Vtrn52vk "AGL")
AGL outlines the architecture of a “virtualized software defined vehicle architecture” for the UCB codebase in a new white paper. [The Linux Foundation][2]
Last August when The Linux Foundation’s Automotive Grade Linux (AGL) project released version 4.0 of its Linux-based Unified Code Base (UCB) reference distribution for automotive in-vehicle infotainment, it also launched a Virtualization Expert Group (EG-VIRT). The workgroup has now [released][5] a white paper outlining a “virtualized software defined vehicle architecture” for AGL’s UCB codebase.

The paper explains how virtualization is the key to expanding AGL from IVI into instrument clusters, HUDs, and telematics. Virtualization technology can protect these more safety-critical functions from less secure infotainment applications, as well as reduce costs by replacing electronic hardware components with virtual instances. Virtualization can also enable runtime configurability for sophisticated autonomous and semi-autonomous ADAS applications, as well as ease software updates and streamline compliance with safety critical standards.

The paper also follows several recent AGL announcements, including the [addition of seven new members][6]: Abalta Technologies, Airbiquity, Bose, EPAM Systems, HERE, Integrated Computer Solutions, and its first Chinese car manufacturer -- Sitech Electric Automotive. These new members bring the AGL membership to more than 120.

AGL also [revealed][7] that Mercedes-Benz Vans is using its open source platform as a foundation for a new onboard OS for commercial vehicles. AGL will play a key role in the Daimler business unit’s “adVANce” initiative for providing “holistic transport solutions.” These include technologies for integrating connectivity, IoT, innovative hardware, on-demand mobility and rental concepts, and fleet management solutions for both goods and passengers.

The Mercedes-Benz deal follows last year’s announcement that AGL would appear in 2018 Toyota Camry cars. AGL has since expanded to other Toyota cars, including the 2018 Prius PHV.
### An open-ended approach to virtualization

Originally, the AGL suggested that EG-VIRT would identify a single hypervisor for an upcoming AGL virtualization platform that would help consolidate infotainment, cluster, HUD, and rear-seat entertainment applications over a single multicore SoC. A single hypervisor (such as the new ACRN) may yet emerge as the preferred technology, but the paper instead outlines an architecture that can support multiple, concurrent virtualization schemes. These include hypervisors, system partitioners, and to a lesser extent, containers.

### Virtualization benefits for the software defined vehicle

Virtualization will enable what the AGL calls the “software defined vehicle” -- a flexible, scalable “autonomous connected automobile whose functions can be customized at run-time.” In addition to boosting security, the proposed virtualization platform offers benefits such as cost reductions, run-time flexibility for the software-defined car, and support for mixed criticality systems:

* **Software defined autonomous car** -- AGL will use virtualization to enable runtime configurability and software updates that can be automated and performed remotely. The system will orchestrate multiple applications, including sophisticated autonomous driving software, based on different licenses, security levels, and operating systems.

* **Cost reductions** -- The number of electronic control units (ECUs) -- and wiring complexity -- can be reduced by replacing many ECUs with virtualized instances in a single multi-core powered ECU. In addition, deployment and maintenance can be automated and performed remotely. EG-VIRT cautions, however, that there’s a limit to how many virtual instances can be deployed and how many resources can be shared between VMs without risking software integration complexity.

* **Security** -- By separating execution environments such as the CPU, memory, or interfaces, the framework will enable multilevel security, including protection of telematics components connected to the CAN bus. With isolation technology, a security flaw in one application will not affect others. In addition, security can be enhanced with remote patch updates.

* **Mixed criticality** -- One reason why real-time operating systems (RTOSes) such as QNX have held onto the lead in automotive telematics is that it’s easier to ensure high criticality levels and comply with Automotive Safety Integrity Level (ASIL) certification under ISO 26262. Yet, Linux can ably host virtualization technologies to coordinate components with different levels of criticality and heterogeneous levels of safety, including RTOS driven components. Because many virtualization techniques have a very limited footprint, they can enable easier ASIL certification, including compliance for concurrent execution of systems with different certification levels.

IVI typically requires the most basic ASIL A certification at most. Instrument cluster and telematics usually need ASIL B, and more advanced functions such as ADAS and digital mirrors require ASIL C or D. At this stage, it would be difficult to develop open source software that is safety-certifiable at the higher levels, says EG-VIRT. Yet, AGL’s virtualization framework will enable proprietary virtualization solutions that can meet these requirements. In the long term, the [Open Source Automation Development Lab][8] is working on potential solutions for Safety Critical Linux that might help AGL meet the requirements using only open source Linux.

### Building an open source interconnect
The paper includes the first architecture diagrams for AGL’s emerging virtualization framework. The framework orchestrates different hypervisors, VMs, AGL Profiles, and automotive functions as interchangeable modules that can be plugged in at compilation time, and where possible, at runtime. The framework emphasizes open source technologies, but also supports interoperability with proprietary components.

The paper includes the first architecture diagrams for AGL’s emerging virtualization framework. The framework orchestrates different hypervisors, VMs, AGL Profiles, and automotive functions as interchangeable modules that can be plugged in at compilation time, and where possible, at runtime. The framework emphasizes open source technologies, but also supports interoperability with proprietary components.

### [agl-arch.jpg][3]

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/agl-arch.jpg?itok=r53h3iE1)

The AGL virtualization approach integrated in the AGL architecture. [Used with permission][1]

The AGL application framework already supports application isolation based on namespaces, cgroups, and SMACK. The framework “relies on files/processes security attributes that are checked by the Linux kernel each time an action processes and that work well combined with secure boot techniques,” says EG-VIRT. However, when multiple applications with different security and safety requirements need to be executed, “the management of these security attributes becomes complex and there is a need of an additional level of isolation to properly isolate these applications from each other…This is where the AGL virtualization platform comes into the picture.”

To meet EG-VIRT’s requirements, compliant hardware virtualization solutions must enable virtualization of the CPU, cache, memory, and interrupts to create execution environments (EEs), via technologies such as Arm Virtualization Extensions, Intel VT-x, AMD SVM, and the IOMMU. The hardware must also support a trusted computing module to isolate safety-security critical applications and assets, such as Arm TrustZone, Intel Trusted Execution Technology, and others. I/O virtualization support for GPU and connectivity sharing is optional.

The AGL virtualization platform does not need to invent new hypervisors and EEs, but it does need a way to interconnect them. EG-VIRT is now beginning to focus on the development of an open source communication bus architecture that comprises both critical and non-critical buses. The architecture will enable communications between different virtualization technologies such as hypervisors and different virtualized EEs while also enabling direct communication between different types of EEs.
### Potential AGL-compliant hypervisors and partitioners

The AGL white paper describes several open source and proprietary candidates for hypervisors and system partitioners. It does not list any containers, which create abstraction starting from the layers above the Linux kernel.

Containers are not ideal for most connected car functions. They lack guaranteed hardware isolation or security enforcement, and although they can run applications, they cannot run a full OS. As a result, AGL will not consider containers for safety and real time workloads, but only within non-safety critical systems, such as for IVI application isolation.

Hypervisors, however, can meet all these requirements and are also optimized for particular multi-core SoCs. “Virtualization provides the best performance in terms of security, isolation and overhead when supported directly by the hardware platform,” says the white paper.

For hypervisors, the open source options listed by EG-VIRT include Xen, Kernel-based Virtual Machine (KVM), the L4Re Micro-Hypervisor, and ACRN. The latter was [announced][9] as a new Linux Foundation embedded reference hypervisor project in March. The Intel-backed, BSD-licensed ACRN hypervisor provides workload prioritization and supports real-time and safety-criticality functions. The lightweight ACRN supports other embedded applications in addition to automotive.

Commercial hypervisors that will likely receive support in the AGL virtualization stack include the COQOS Hypervisor SDK, SYSGO PikeOS, and the Xen-based Crucible and Nautilus. The latter was first presented by the Xen Project as a potential solution for AGL virtualization [back in 2014][10]. There’s also the Green Hills Software Integrity Multivisor; Green Hills [announced AGL support][11] for Integrity earlier this month.

Unlike hypervisors, system partitioners do not tap specific virtualization functions within multi-core SoCs, and instead run as bare-metal solutions. Only two open source options were listed: Jailhouse and the Arm TrustZone based Arm Trusted Firmware (ATF). The only commercial solution included is the TrustZone based VOSYSmonitor.

In conclusion, EG-VIRT notes that this initial list of potential virtualization solutions is “non-exhaustive,” and that “the role of EG-VIRT has been defined as virtualization technology integrator, identifying as key next contribution the development of a communication bus reference implementation…” In addition: “Future EG-VIRT activities will focus on this communication, on extending the AGL support for virtualization (both as a guest and as a host), as well as on IO devices virtualization (e.g., GPU).”
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/7/agl-outlines-virtualization-scheme-software-defined-vehicle

作者:[ERIC BROWN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/ericstephenbrown
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/linux-foundation
[3]:https://www.linux.com/files/images/agl-archjpg
[4]:https://www.linux.com/files/images/agljpg
[5]:https://www.automotivelinux.org/blog/2018/06/20/agl-publishes-virtualization-white-paper
[6]:https://www.automotivelinux.org/announcements/2018/06/05/automotive-grade-linux-welcomes-seven-new-members
[7]:http://linuxgizmos.com/automotive-grade-linux-joins-the-van-life-with-mercedes-benz-vans-deal/
[8]:https://www.osadl.org/Safety-Critical-Linux.safety-critical-linux.0.html
[9]:http://linuxgizmos.com/open-source-project-aims-to-build-embedded-linux-hypervisor/
[10]:http://linuxgizmos.com/xen-hypervisor-targets-automotive-virtualization/
[11]:https://www.ghs.com/news/2018061918_automotive_grade_linux.html
Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server
======

![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png)

This step by step tutorial walks you through how to install **Oracle VirtualBox** on an Ubuntu 18.04 LTS headless server. This guide also describes how to manage VirtualBox headless instances using **phpVirtualBox**, a web-based front-end tool for VirtualBox. The steps described below might also work on Debian and other Ubuntu derivatives such as Linux Mint. Let us get started.
### Prerequisites

Before installing Oracle VirtualBox, we need to complete the following prerequisites on our Ubuntu 18.04 LTS server.

First of all, update the Ubuntu server by running the following commands one by one:

```
$ sudo apt update
$ sudo apt upgrade
$ sudo apt dist-upgrade
```

Next, install the following necessary packages:

```
$ sudo apt install build-essential dkms unzip wget
```

After installing all updates and necessary prerequisites, restart the Ubuntu server:

```
$ sudo reboot
```
### Install Oracle VirtualBox on Ubuntu 18.04 LTS server

Add the Oracle VirtualBox official repository. To do so, edit the **/etc/apt/sources.list** file:

```
$ sudo nano /etc/apt/sources.list
```

Add the following line.

Here, I am using Ubuntu 18.04 LTS, so I have added the following repository:

```
deb http://download.virtualbox.org/virtualbox/debian bionic contrib
```

![][2]

Replace the word **‘bionic’** with your distribution’s code name, such as ‘xenial’, ‘vivid’, ‘utopic’, ‘trusty’, ‘raring’, ‘quantal’, ‘precise’, ‘lucid’, ‘jessie’, ‘wheezy’, or ‘squeeze’.

Then, run the following command to add the Oracle public key:

```
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
```

For older VirtualBox versions, add the following key:

```
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
```

Next, update the software sources using the command:

```
$ sudo apt update
```

Finally, install the latest Oracle VirtualBox version using the command:

```
$ sudo apt install virtualbox-5.2
```
### Adding users to VirtualBox group
|
||||
|
||||
We need to create and add our system user to the **vboxusers** group. You can either create a separate user and assign it to vboxusers group or use the existing user. I don’t want to create a new user, so I added my existing user to this group. Please note that if you use a separate user for virtualbox, you must log out and log in to that particular user and do the rest of the steps.
|
||||
|
||||
I am going to use my username named **sk** , so, I ran the following command to add it to the vboxusers group.
|
||||
```
|
||||
$ sudo usermod -aG vboxusers sk
|
||||
|
||||
```
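
To confirm the group change after logging back in, you can list a user’s groups and look for `vboxusers`. This is a small sketch; the username `sk` is this article’s example user, so the snippet below simply defaults to the current user:

```shell
# List group membership for a user (defaults to the current user).
# After re-login, "vboxusers" should appear in this list.
user=${1:-$(id -un)}
id -nG "$user"
```

`getent group vboxusers` similarly lists the group’s members, once the group exists.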
Now, run the following command to check if the VirtualBox kernel modules are loaded or not.

```
$ sudo systemctl status vboxdrv
```

![][3]

As you can see in the above screenshot, the vboxdrv module is loaded and running!

For older Ubuntu versions, run:

```
$ sudo /etc/init.d/vboxdrv status
```

If the virtualbox module doesn’t start, run the following command to start it.

```
$ sudo /etc/init.d/vboxdrv setup
```

Great! We have successfully installed VirtualBox and started the virtualbox module. Now, let us go ahead and install the Oracle VirtualBox extension pack.

### Install VirtualBox Extension pack

The VirtualBox Extension pack provides the following functionalities to the VirtualBox guests.

* The virtual USB 2.0 (EHCI) device
* VirtualBox Remote Desktop Protocol (VRDP) support
* Host webcam passthrough
* Intel PXE boot ROM
* Experimental support for PCI passthrough on Linux hosts

Download the latest Extension pack for VirtualBox 5.2.x from [**here**][4].

```
$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```

Install the Extension pack using command:

```
$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```

Congratulations! We have successfully installed Oracle VirtualBox with the extension pack on our Ubuntu 18.04 LTS server. It is time to deploy virtual machines. Refer to the [**virtualbox official guide**][5] to start creating and managing virtual machines from the command line.
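
As a quick taste of that command-line workflow, here is a minimal sketch of creating and starting a headless VM with VBoxManage. The VM name, OS type and sizes are placeholders rather than values from this guide, and the guard at the top lets the script run harmlessly on machines where VirtualBox is not installed:

```shell
#!/bin/sh
# Minimal headless-VM sketch; the name, OS type and sizes are illustrative.
if command -v VBoxManage >/dev/null 2>&1; then
    VM=ubuntu-guest
    VBoxManage createvm --name "$VM" --ostype Ubuntu_64 --register
    VBoxManage modifyvm "$VM" --memory 1024 --cpus 1 --nic1 nat
    VBoxManage startvm "$VM" --type headless
    VBoxManage list runningvms
else
    echo "VBoxManage not found - install VirtualBox first"
fi
```

Run `VBoxManage --help` for the full sub-command reference; attaching storage and an install ISO is covered in the official guide linked above.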
Not everyone is a command line expert. Some of you might want to create and use virtual machines graphically. No worries! Here is where **phpVirtualBox** comes in handy!!

### About phpVirtualBox

**phpVirtualBox** is a free, web-based front-end to Oracle VirtualBox, written in PHP. Using phpVirtualBox, we can easily create, delete, manage and administer virtual machines via a web browser from any remote system on the network.

### Install phpVirtualBox in Ubuntu 18.04 LTS

Since it is a web-based tool, we need to install the Apache web server, PHP and some PHP modules.

To do so, run:

```
$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml
```

Then, download the phpVirtualBox 5.2.x version from the [**releases page**][6]. Please note that we have installed VirtualBox 5.2, so we must install phpVirtualBox version 5.2 as well.

To download it, run:

```
$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip
```

Extract the downloaded archive with command:

```
$ unzip 5.2-0.zip
```

This command will extract the contents of the 5.2-0.zip file into a folder named “phpvirtualbox-5.2-0”. Now, copy or move the contents of this folder to your Apache web server root folder.

```
$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox
```

Assign the proper permissions to the phpvirtualbox folder.

```
$ sudo chmod 777 /var/www/html/phpvirtualbox/
```
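
A note on that `chmod 777`: it makes the folder writable by every user on the system, which is convenient but very permissive. A tighter alternative (my suggestion, not part of the original steps) is to hand the tree to Ubuntu’s web-server user, www-data, with mode 755. The sketch below demonstrates the mode change on a throwaway directory so it can be run safely anywhere:

```shell
# On the real host the target would be /var/www/html/phpvirtualbox:
#   sudo chown -R www-data:www-data /var/www/html/phpvirtualbox
#   sudo chmod -R 755 /var/www/html/phpvirtualbox
# Demonstrated here on a temporary directory:
d=$(mktemp -d)
chmod 755 "$d"
stat -c '%a' "$d"   # prints the octal mode, 755
rmdir "$d"
```

If phpVirtualBox later fails to write its files, loosen the permissions again rather than debugging blindly.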
Next, let us configure phpVirtualBox.

Copy the sample config file as shown below.

```
$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
```

Edit the phpVirtualBox **config.php** file:

```
$ sudo nano /var/www/html/phpvirtualbox/config.php
```

Find the following lines and replace the username and password with those of your system user (the same user that we used in the “Adding users to VirtualBox group” section).

In my case, my Ubuntu system username is **sk**, and its password is **ubuntu**.

```
var $username = 'sk';
var $password = 'ubuntu';
```

![][7]

Save and close the file.

Next, create a new file called **/etc/default/virtualbox**:

```
$ sudo nano /etc/default/virtualbox
```

Add the following line. Replace ‘sk’ with your own username.

```
VBOXWEB_USER=sk
```

Finally, reboot your system or simply restart the following services to complete the configuration.

```
$ sudo systemctl restart vboxweb-service
$ sudo systemctl restart vboxdrv
$ sudo systemctl restart apache2
```
### Adjust firewall to allow Apache web server

By default, the Apache web server can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow HTTP and HTTPS traffic via UFW by following the steps below.

First, let us view which applications have installed a profile using command:

```
$ sudo ufw app list
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH
```

As you can see, the Apache and OpenSSH applications have installed UFW profiles.

If you look into the **“Apache Full”** profile, you will see that it enables traffic to the ports **80** and **443**:

```
$ sudo ufw app info "Apache Full"
Profile: Apache Full
Title: Web Server (HTTP,HTTPS)
Description: Apache v2 is the next generation of the omnipresent Apache web
server.

Ports:
80,443/tcp
```

Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile:

```
$ sudo ufw allow in "Apache Full"
Rules updated
Rules updated (v6)
```

If you want to allow only HTTP (80) traffic and not HTTPS, allow the plain **“Apache”** profile instead:

```
$ sudo ufw allow in "Apache"
```
### Access phpVirtualBox Web console

Now, go to any remote system that has a graphical web browser.

In the address bar, type: **<http://IP-address-of-virtualbox-headless-server/phpvirtualbox>**.

In my case, I navigated to this link – **<http://192.168.225.22/phpvirtualbox>**

You should see the following screen. Enter the phpVirtualBox administrative user credentials.

The default username and password of phpVirtualBox are **admin** / **admin**.

![][8]

Congratulations! You will now be greeted with the phpVirtualBox dashboard.

![][9]

Now, start creating your VMs and manage them from the phpVirtualBox dashboard. As I mentioned earlier, you can access phpVirtualBox from any system in the same network. All you need is a web browser and the username and password of phpVirtualBox.

If you haven’t enabled virtualization support in the BIOS of the host system (not the guest), phpVirtualBox allows you to create 32-bit guests only. To install 64-bit guest systems, you must enable virtualization in your host system’s BIOS. Look for an option named something like “virtualization” or “hypervisor” in your BIOS and make sure it is enabled.

That’s it. Hope this helps. If you find this guide useful, please share it on your social networks and support us.

More good stuff to come. Stay tuned!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png
[4]:https://www.virtualbox.org/wiki/Downloads
[5]:http://www.virtualbox.org/manual/ch08.html
[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases
[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png
[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png
[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png
Why is Arch Linux So Challenging and What are Its Pros & Cons?
======

[Arch Linux][1] is among the most popular Linux distributions. It was first released in **2002**, spearheaded by **Aaron Griffin**. It aims to provide simplicity, minimalism, and elegance to the OS user, but its target audience is not the faint of heart. Arch encourages community involvement, and a user is expected to put in some effort to better comprehend how the system operates.

Many long-time Linux users know a good amount about **Arch Linux**, but you probably don’t if you are new to it and considering using it for your everyday computing tasks. I’m no authority on the distro myself, but from my experience with it, here are the pros and cons you will experience while using it.

### 1\. Pro: Build Your Own Linux OS

Other popular Linux operating systems like **Fedora** and **Ubuntu** ship as complete, ready-made systems, the same as **Windows** and **MacOS**. **Arch**, on the other hand, allows you to build your OS to your taste. If you are able to achieve this, you will end up with a system that does exactly what you wish.

#### Con: Installation is a Hectic Process

[Installing Arch Linux][2] is far from a walk in the park, and since you will be fine-tuning the OS, it will take a while. You will need an understanding of various terminal commands and of the components you will be working with, since you pick them yourself. By now, you probably already know that this requires quite a bit of reading.

### 2\. Pro: No Bloatware and Unnecessary Services

Since **Arch** allows you to choose your own components, you no longer have to deal with a bunch of software you don’t want. In contrast, OSes like **Ubuntu** come with a huge number of pre-installed desktop and background apps which you may not need, and might not even know exist in the first place before going on to remove them.

To put it simply, **Arch Linux** saves you post-installation time. **Pacman**, an awesome utility app, is the package manager Arch Linux uses by default. There is an alternative to **Pacman**, called [Pamac][3].

### 3\. Pro: No System Upgrades

**Arch Linux** uses the rolling release model, and that is awesome. It means that you no longer have to worry about upgrading every now and then. Once you install Arch, say goodbye to upgrading to a new version, as updates occur continuously. By default, you will always be using the latest version.

#### Con: Some Updates Can Break Your System

While updates flow in continuously, you have to consciously track what comes in. Nobody knows your software’s specific configuration, and it’s not tested by anyone but you. So, if you are not careful, things on your machine could break.

### 4\. Pro: Arch is Community Based

Linux users generally have one thing in common: the need for independence. Although most Linux distros have minimal corporate ties, there are still a few you cannot ignore. For instance, a distro based on **Ubuntu** is influenced by whatever decisions Canonical makes.

If you are trying to become even more independent in the use of your computer, then **Arch Linux** is the way to go. Unlike most systems, Arch has no commercial influence and focuses on the community.

### 5\. Pro: Arch Wiki is Awesome

The [Arch Wiki][4] is a super library of everything you need to know about the installation and maintenance of every component in a Linux system. The great thing about this site is that even if you are using a different Linux distro from Arch, you will still find its information relevant. That’s simply because Arch uses the same components as many other Linux distros, and its guides and fixes sometimes apply to all.

### 6\. Pro: Check Out the Arch User Repository

The [Arch User Repository (AUR)][5] is a huge collection of software packages from members of the community. If you are looking for a Linux program that is not yet available in Arch’s repositories, you can almost certainly find it in the **AUR**.

The **AUR** is maintained by users who compile and install packages from source. Users are also allowed to vote on packages, which gives them (the packages, that is) higher rankings that make them more visible to potential users.

#### Ultimately: Is Arch Linux for You?

**Arch Linux** has way more **pros** than **cons**, including ones that aren’t on this list. The installation process is long and probably too technical for a non-Linux-savvy user, but with enough time on your hands and the ability to maximize productivity using wiki guides and the like, you should be good to go.

**Arch Linux** is a great Linux distro – not in spite of its complexity, but because of it. And it appeals most to those who are ready to do what needs to be done – given that you will have to do your homework and exercise a good amount of patience.

By the time you build this operating system from scratch, you will have learned many details about GNU/Linux and will never be ignorant of what’s going on with your PC again.

What are the **pros** and **cons** of using **Arch Linux** in your experience? And on the whole, why is using it so challenging? Drop your comments in the discussion section below.

--------------------------------------------------------------------------------

via: https://www.fossmint.com/why-is-arch-linux-so-challenging-what-are-pros-cons/

作者:[Martins D. Okoi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.fossmint.com/author/dillivine/
[1]:https://www.archlinux.org/
[2]:https://www.tecmint.com/arch-linux-installation-and-configuration-guide/
[3]:https://www.fossmint.com/pamac-arch-linux-gui-package-manager/
[4]:https://wiki.archlinux.org/
[5]:https://wiki.archlinux.org/index.php/Arch_User_Repository
BASHing data: Truncated data items
======

### Truncated data items

**truncated** (adj.): abbreviated, abridged, curtailed, cut off, clipped, cropped, trimmed...

One way to truncate a data item is to enter it into a database field that has a character limit shorter than the data item. For example, the string

>Yarrow Ravine Rattlesnake Habitat Area, 2 mi ENE of Yermo CA

is 60 characters long. If you enter it into a "Locality" field with a 50-character limit, you get

>Yarrow Ravine Rattlesnake Habitat Area, 2 mi ENE #Ends with a whitespace

Truncations can also be data entry errors. You meant to enter

>Sally Ann Hunter (aka Sally Cleveland)

but you forgot the closing bracket

>Sally Ann Hunter (aka Sally Cleveland

leaving the data user to wonder whether Sally has other aliases that were trimmed off the data item.

Truncated data items are very difficult to detect. When auditing data I use three different methods to find possible truncations, but I probably miss some.

**Item length distribution.** The first method catches most of the truncations I find in individual fields. I pass the field to an AWK command that tallies up data items by field width, then I use **sort** to print the tallies in reverse order of width. For example, to check field 33 in the tab-separated file "midges":

```
awk -F"\t" 'NR>1 {a[length($33)]++} \
END {for (i in a) print i FS a[i]}' midges | sort -nr
```

![distro1][1]

The longest entries have exactly 50 characters, which is suspicious, and there's a "bulge" of data items at that width, which is even more suspicious. Inspection of those 50-character-wide items reveals truncations:

![distro2][2]

Other tables I've checked this way had bulges at 100, 200 and 255 characters. In each case the bulges contained apparent truncations.

**Unmatched brackets**. The second method looks for data items like "...(Sally Cleveland" above. A good starting point is a tally of all the punctuation in the data table. Here I'm checking the file "mag2":

```
grep -o "[[:punct:]]" mag2 | sort | uniq -c
```

![punct][3]

Note that the numbers of opening and closing round brackets in "mag2" aren't equal. To see what's going on, I use the function "unmatched", which takes three arguments and checks all fields in a data table. The first argument is the filename and the second and third are the opening and closing brackets, enclosed in quotes.

```
unmatched()
{
awk -F"\t" -v start="$2" -v end="$3" \
'{for (i=1;i<=NF;i++) \
if (split($i,a,start) != split($i,b,end)) \
print "line "NR", field "i":\n"$i}' "$1"
}
```

"unmatched" reports the line number and field number if it finds a mismatch between opening and closing brackets in the field. It relies on AWK's **split** function, which returns the number of elements (including blank ones) separated by the splitting character. This number will always be one more than the number of splitters:
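
To see that counting rule and the mismatch test in one place, here is a small runnable illustration using the example strings from earlier in this post (the bracket characters are wrapped in [ ] so AWK treats them literally rather than as regex metacharacters):

```shell
# A balanced string splits into equal counts on "(" and ")";
# a truncated one does not.
echo 'Sally Ann Hunter (aka Sally Cleveland)' \
| awk '{print split($0,a,"[(]") " vs " split($0,b,"[)]")}'
echo 'Sally Ann Hunter (aka Sally Cleveland' \
| awk '{print split($0,a,"[(]") " vs " split($0,b,"[)]")}'
```

The first line prints "2 vs 2" (the counts match), the second prints "2 vs 1" (a splitter is missing) — exactly the comparison "unmatched" makes field by field.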
![split][4]

Here "unmatched" checks the round brackets in "mag2" and finds some likely truncations:

![unmatched][5]

I use "unmatched" to locate unmatched round brackets (), square brackets [], curly brackets {} and arrows <>, but the function can be used for any paired punctuation characters.

**Unexpected endings**. The third method looks for data items that end in a trailing space or a non-terminal punctuation mark, like a comma or a hyphen. This can be done on a single field with **cut** piped to **grep**, or in one step with AWK. Here I'm checking field 47 of the tab-separated table "herp5", and pulling out suspect data items and their line numbers:

```
cut -f47 herp5 | grep -n "[ ,;:-]$"

awk -F"\t" '$47 ~ /[ ,;:-]$/ {print NR": "$47}' herp5
```

![herps5][6]

The all-fields version of the AWK command for a tab-separated file is:

```
awk -F"\t" '{for (i=1;i<=NF;i++) if ($i ~ /[ ,;:-]$/) \
print "line "NR", field "i":\n"$i}' file
```
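
To try the all-fields command without one of the data tables above, here is a self-contained run on a three-line demo file (the file contents are invented for the demonstration):

```shell
# Build a small tab-separated file, then flag fields with suspicious
# endings: trailing space, comma, semicolon, colon or hyphen.
demo=$(mktemp)
printf 'a\tSally (ok)\nb\tEnds with a comma,\nc\ttrailing space \n' > "$demo"
awk -F"\t" '{for (i=1;i<=NF;i++) if ($i ~ /[ ,;:-]$/) \
print "line "NR", field "i":\n"$i}' "$demo"
rm "$demo"
```

Only lines 2 and 3 are reported: field 2 of each ends with a comma and a trailing space, respectively.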
**Cautionary thoughts**. Truncations also appear during the validation tests I do on fields. For example, I might be checking for plausible 4-digit entries in a "Year" field, and there's a 198 that hints at 198n. Or is it 1898? Truncated data items with their lost characters are mysteries. As a data auditor I can only report (possible) character losses and suggest that the (possibly) missing characters be restored by the data compilers or managers.

--------------------------------------------------------------------------------

via: https://www.polydesmida.info/BASHing/2018-07-04.html

作者:[polydesmida][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.polydesmida.info/
[1]:https://www.polydesmida.info/BASHing/img1/2018-07-04_1.png
[2]:https://www.polydesmida.info/BASHing/img1/2018-07-04_2.png
[3]:https://www.polydesmida.info/BASHing/img1/2018-07-04_3.png
[4]:https://www.polydesmida.info/BASHing/img1/2018-07-04_4.png
[5]:https://www.polydesmida.info/BASHing/img1/2018-07-04_5.png
[6]:https://www.polydesmida.info/BASHing/img1/2018-07-04_6.png
Install an NVIDIA GPU on almost any machine
======

![](https://fedoramagazine.org/wp-content/uploads/2018/06/nvidia-816x345.jpg)

Whether for research or recreation, installing a new GPU can bolster your computer’s performance and enable new functionality across the board. This installation guide uses Fedora 28’s brand-new third-party repositories to install NVIDIA drivers. It walks you through the installation of both software and hardware, and covers everything you need to get your NVIDIA card up and running. This process works for any UEFI-enabled computer, and any modern NVIDIA GPU.

### Preparation

This guide relies on the following materials:

* A machine that is [UEFI][1] capable. If you’re uncertain whether your machine has this firmware, run sudo dmidecode -t 0. If “UEFI is supported” appears anywhere in the output, you are all set to continue. Otherwise, while it’s technically possible to update some computers to support UEFI, the process is often finicky and generally not recommended.
* A modern, UEFI-enabled NVIDIA card
* A power source that meets the wattage and wiring requirements for your NVIDIA card (see the Hardware & Modifications section for details)
* Internet connection
* Fedora 28
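
Besides dmidecode, a quick additional check (not part of the original materials list) is to look for the efi directory under /sys/firmware; this only reports how the currently running system booted, not whether the firmware could support UEFI:

```shell
# Report the boot mode of the running system.
if [ -d /sys/firmware/efi ]; then
    echo "Booted in UEFI mode"
else
    echo "Booted in legacy BIOS mode (or /sys is unavailable)"
fi
```

If this reports legacy BIOS but dmidecode says UEFI is supported, the firmware may simply be configured for legacy boot.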
### Example setup

This example installation uses:

* An Optiplex 9010 (a fairly old machine)
* NVIDIA [GeForce GTX 1050 Ti XLR8 Gaming Overclocked Edition 4GB GDDR5 PCI Express 3.0][2] graphics card
* In order to meet the power requirements of the new GPU, the power supply was upgraded to an [EVGA – 80 PLUS 600W ATX 12V/EPS 12V][3]. This new PSU was 300W above the minimum recommendation, but simply meeting the minimum recommendation is sufficient in most cases.
* And, of course, Fedora 28.

### Hardware and modifications

#### PSU

Open up your desktop case and check the maximum power output printed on your power supply. Next, check the documentation on your NVIDIA GPU and determine the minimum recommended power (in watts). Further, take a look at your GPU and see if it requires additional wiring, such as a 6-pin connector. Most entry-level GPUs only draw power directly from the motherboard, but some require extra juice. You’ll need to upgrade your PSU if:

1. Your power supply’s max power output is below the GPU’s suggested minimum power. **Note:** According to some NVIDIA card manufacturers, pre-built systems may require more or less power than recommended, depending on the system’s configuration. Use your discretion to determine your requirements if you’re using a particularly power-efficient or power-hungry setup.
2. Your power supply does not provide the necessary wiring to power your card.

PSUs are straightforward to replace, but make sure to take note of the wiring layout before detaching your current power supply. Additionally, make sure to select a PSU that fits your desktop case.

#### CPU

Although installing a high-quality NVIDIA GPU is possible in many old machines, a slow or damaged CPU can “bottleneck” the performance of the GPU. To calculate the impact of the bottlenecking effect for your machine, click [here][4]. It’s important to know your CPU’s performance to avoid pairing a high-powered GPU with a CPU that can’t keep up. Upgrading your CPU is a potential consideration.

#### Motherboard

Before proceeding, ensure your motherboard is compatible with your GPU of choice. Your graphics card should be inserted into the PCI-E x16 slot closest to the heat-sink. Ensure that your setup contains enough space for the GPU. In addition, note that most GPUs today employ PCI-E 3.0 technology. Though these GPUs will run best if mounted on a PCI-E 3.0 x16 slot, performance should not suffer significantly with an older version slot.

### Installation

1\. First, update your system’s packages:

```
sudo dnf update
```

2\. Next, reboot with the simple command:

```
reboot
```
3\. After reboot, install the Fedora 28 workstation repositories:

```
sudo dnf install fedora-workstation-repositories
```

4\. Next, enable the NVIDIA driver repository:

```
sudo dnf config-manager --set-enabled rpmfusion-nonfree-nvidia-driver
```

5\. Then, reboot again.

6\. After the reboot, verify the addition of the repository via the following command:

```
sudo dnf repository-packages rpmfusion-nonfree-nvidia-driver info
```

If several NVIDIA tools and their respective specs are loaded, then proceed to the next step. If not, you may have encountered an error when adding the new repository and you should give it another shot.

7\. Login, connect to the internet, and open the software app. Click Add-ons > Hardware Drivers > NVIDIA Linux Graphics Driver > Install.

Then, reboot once again.

8\. After reboot, go to ‘Show Applications’ on the side bar, and open up the newly added NVIDIA X Server Settings application. A GUI should open up, and a dialog box will appear with the following message:

![NVIDIA X Server Prompt][5]

Take the application’s advice, but before doing so, ensure you have your NVIDIA GPU on-hand and are ready to install. **Please note** that running nvidia-xconfig as root and powering off without installing your GPU immediately may cause drastic damage. Doing so may prevent your computer from booting, and force you to repair the system through the reboot screen. A fresh install of Fedora may fix these issues, but the effects can be much worse.

If you’re ready to proceed, enter the command:

```
sudo nvidia-xconfig
```

If the system prompts you to perform any downloads, accept them and proceed.

9\. Once this process is complete, close all applications and **shut down** the computer. Unplug the power supply to your machine. Then, press the power button once to drain any residual power to protect yourself from electric shock. If your PSU has a power switch, switch it off.

10\. Finally, install the graphics card. Remove the old GPU and insert your new NVIDIA graphics card into the proper PCI-E x16 slot, with the fans facing down. If there is no space for the fans to ventilate in this position, place the graphics card face up instead, if possible. When you have successfully installed the new GPU, close your case, plug in the PSU, and turn the computer on. It should successfully boot up.

**NOTE:** To disable the NVIDIA driver repository used in this installation, or to disable all fedora workstation repositories, consult [The Fedora Wiki Page][6].
### Verification

1\. If your newly installed NVIDIA graphics card is connected to your monitor and displaying correctly, then your NVIDIA driver has successfully established a connection to the GPU.

If you’d like to view your settings, or verify the driver is working (in the case that you have two GPUs installed on the motherboard), open up the NVIDIA X Server Settings app again. This time, you should not be prompted with an error message, and information on the X configuration file and your NVIDIA GPU should be available (see screenshot below).

![NVIDIA X Server Settings][7]

Through this app, you may alter your X configuration file should you please, and may monitor the GPU’s performance, clock speed, and thermal information.

2\. To ensure the new card is working at capacity, a GPU performance test is needed. GL Mark 2, a benchmarking tool that provides information on buffering, building, lighting, texturing, etc., offers an excellent solution. GL Mark 2 records frame rates for a variety of different graphical tests, and outputs an overall performance score (called the glmark2 score).

**Note:** glxgears will only test the performance of your screen or monitor, not the graphics card itself. Use GL Mark 2 instead.

To run GLMark2:

1. Open up a terminal and close all other applications
2. sudo dnf install glmark2
3. glmark2
4. Allow the test to run to completion for best results. Check to see if the frame rates match your expectation for your NVIDIA card. If you’d like additional verification, consult the web to determine if a glmark2 benchmark has been previously conducted on your NVIDIA card model and published to the web. Compare scores to assess your GPU’s performance.
5. If your framerates and/or glmark2 score are below expected, consider potential causes. CPU-induced bottlenecking? Other issues?

Assuming the diagnostics look good, enjoy using your new GPU.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/install-nvidia-gpu/

作者:[Justice del Castillo][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/justice/
[1]:https://whatis.techtarget.com/definition/Unified-Extensible-Firmware-Interface-UEFI
[2]:https://www.cnet.com/products/pny-geforce-gtx-xlr8-gaming-1050-ti-overclocked-edition-graphics-card-gf-gtx-1050-ti-4-gb/specs/
[3]:https://www.evga.com/products/product.aspx?pn=100-B1-0600-KR
[4]:http://thebottlenecker.com (Home: The Bottle Necker)
[5]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/nvidia_xserver_error.jpg?token=c6a7effe35f1c592a155a4a46a068a19fd060a91 (NVIDIA X Server Prompt)
[6]:https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories
[7]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/NVIDIA_XCONFIG.png?token=64e1a7be21e5e9ba157f029b65e24e4eef54d88f (NVIDIA X Server Settings)
@ -0,0 +1,332 @@
|
||||
Setup Headless Virtualization Server Using KVM In Ubuntu 18.04 LTS
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2016/11/kvm-720x340.jpg)
|
||||
|
||||
We have already covered [**setting up Oracle VirtualBox on Ubuntu 18.04**][1] headless server. In this tutorial, we will discuss how to set up a headless virtualization server using **KVM** and how to manage the guest machines from a remote client. As you may already know, KVM (**K**ernel-based **V**irtual **M**achine) is an open source, full virtualization solution for Linux. Using KVM, we can easily turn any Linux server into a complete virtualization environment in minutes and deploy different kinds of VMs, such as GNU/Linux, *BSD, Windows, etc.
|
||||
|
||||
### Setup Headless Virtualization Server Using KVM
|
||||
|
||||
I tested this guide on an Ubuntu 18.04 LTS server; however, it should also work on other Linux distributions such as Debian, CentOS, RHEL, and Scientific Linux. This method is perfectly suitable for those who want to set up a simple virtualization environment on a Linux server that doesn’t have any graphical environment.
|
||||
|
||||
For the purpose of this guide, I will be using two systems.
|
||||
|
||||
**KVM virtualization server:**
|
||||
|
||||
* **Host OS** – Ubuntu 18.04 LTS minimal server (No GUI)
|
||||
* **IP Address of Host OS** : 192.168.225.22/24
|
||||
* **Guest OS** (Which we are going to host on Ubuntu 18.04) : Ubuntu 16.04 LTS server
|
||||
|
||||
|
||||
|
||||
**Remote desktop client :**
|
||||
|
||||
* **OS** – Arch Linux
|
||||
|
||||
|
||||
|
||||
### Install KVM
|
||||
|
||||
First, let us check if our system supports hardware virtualization. To do so, run the following command from the Terminal:
|
||||
```
|
||||
$ egrep -c '(vmx|svm)' /proc/cpuinfo
|
||||
|
||||
```
|
||||
|
||||
If the result is **zero (0)**, the system doesn’t support hardware virtualization, or virtualization is disabled in the BIOS. Go into your BIOS settings, look for the virtualization option, and enable it.
|
||||
|
||||
If the result is **1** or **more**, the system supports hardware virtualization. However, you still need to make sure the virtualization option is enabled in the BIOS.
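The two cases above can be wrapped in a tiny check script, if you find that more convenient (this is just a sketch around the same CPU-flag check):

```
# Report whether the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    echo "hardware virtualization supported ($count logical CPUs)"
else
    echo "hardware virtualization not available or disabled in the BIOS"
fi
```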
|
||||
|
||||
Alternatively, you can use the following command to verify it. The `kvm-ok` tool is part of the `cpu-checker` package, which is installed as part of the setup described below.
|
||||
```
|
||||
$ kvm-ok
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
INFO: /dev/kvm exists
|
||||
KVM acceleration can be used
|
||||
|
||||
```
|
||||
|
||||
If you get the following output instead, you can still run guest machines in KVM, but the performance will be very poor.
|
||||
```
|
||||
INFO: Your CPU does not support KVM extensions
|
||||
INFO: For more detailed results, you should run this as root
|
||||
HINT: sudo /usr/sbin/kvm-ok
|
||||
|
||||
```
|
||||
|
||||
Also, there are other ways to find out whether your CPU supports virtualization or not. Refer to the following guide for more details.
|
||||
|
||||
Next, install KVM and the other packages required to set up a virtualization environment in Linux.
|
||||
|
||||
On Ubuntu and other DEB based systems, run:
|
||||
```
|
||||
$ sudo apt-get install qemu-kvm libvirt-bin virtinst bridge-utils cpu-checker
|
||||
|
||||
```
|
||||
|
||||
Once KVM is installed, start the libvirtd service (if it is not started already):
|
||||
```
|
||||
$ sudo systemctl enable libvirtd
|
||||
|
||||
$ sudo systemctl start libvirtd
|
||||
|
||||
```
|
||||
|
||||
### Create Virtual machines
|
||||
|
||||
All virtual machine files and other related files will be stored under **/var/lib/libvirt/**. The default path of ISO images is **/var/lib/libvirt/boot/**.
|
||||
|
||||
First, let us see if there are any virtual machines. To view the list of available virtual machines, run:
|
||||
```
|
||||
$ sudo virsh list --all
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Id Name State
|
||||
----------------------------------------------------
|
||||
|
||||
```
|
||||
|
||||
![][3]
|
||||
|
||||
As you see above, there is no virtual machine available right now.
|
||||
|
||||
Now, let us create one.
|
||||
|
||||
For example, let us create an Ubuntu 16.04 virtual machine with 512 MB RAM, 1 CPU core, and an 8 GB hard disk.
|
||||
```
|
||||
$ sudo virt-install --name Ubuntu-16.04 --ram=512 --vcpus=1 --cpu host --hvm --disk path=/var/lib/libvirt/images/ubuntu-16.04-vm1,size=8 --cdrom /var/lib/libvirt/boot/ubuntu-16.04-server-amd64.iso --graphics vnc
|
||||
|
||||
```
|
||||
|
||||
Please make sure you have Ubuntu 16.04 ISO image in path **/var/lib/libvirt/boot/** or any other path you have given in the above command.
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
WARNING Graphics requested but DISPLAY is not set. Not running virt-viewer.
|
||||
WARNING No console to launch for the guest, defaulting to --wait -1
|
||||
|
||||
Starting install...
|
||||
Creating domain... | 0 B 00:00:01
|
||||
Domain installation still in progress. Waiting for installation to complete.
|
||||
Domain has shutdown. Continuing.
|
||||
Domain creation completed.
|
||||
Restarting guest.
|
||||
|
||||
```
|
||||
|
||||
![][4]
|
||||
|
||||
Let us break down the above command and see what each option does.
|
||||
|
||||
* **–name** : This option defines the name of the virtual machine. In our case, the name of the VM is **Ubuntu-16.04**.
|
||||
* **–ram=512** : Allocates 512MB RAM to the VM.
|
||||
* **–vcpus=1** : Indicates the number of CPU cores in the VM.
|
||||
* **–cpu host** : Optimizes the CPU properties for the VM by exposing the host’s CPU’s configuration to the guest.
|
||||
* **–hvm** : Requests full hardware virtualization.
|
||||
* **–disk path** : The location to save the VM’s hard disk image, and its size. In our example, I have allocated 8 GB.
|
||||
* **–cdrom** : The location of installer ISO image. Please note that you must have the actual ISO image in this location.
|
||||
* **–graphics vnc** : Allows VNC access to the VM from a remote client.
|
||||
|
||||
|
||||
|
||||
### Access Virtual machines using VNC client
|
||||
|
||||
Now, go to the remote desktop system and SSH into the Ubuntu server (the virtualization server).
|
||||
|
||||
Here, **sk** is my Ubuntu server’s user name and **192.168.225.22** is its IP address.
|
||||
|
||||
Run the following command to find out the VNC port number. We need this to access the VM from a remote system.
|
||||
```
|
||||
$ sudo virsh dumpxml Ubuntu-16.04 | grep vnc
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
|
||||
|
||||
```
|
||||
|
||||
![][5]
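If you are scripting this step, the port number can be pulled out of that `<graphics>` line automatically. Here is a sketch run against the sample output above; with a live VM, you would pipe `sudo virsh dumpxml Ubuntu-16.04` into `sed` instead of using `echo`:

```
# Extract the VNC port from the <graphics> element (sample XML from above)
xml="<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>"
port=$(echo "$xml" | sed -n "s/.* port='\([0-9]*\)'.*/\1/p")
echo "$port"
```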
|
||||
|
||||
Note down the port number **5900**. Then, install any VNC client application. For this guide, I will be using TigerVNC. TigerVNC is available in the default Arch Linux repositories. To install it on Arch-based systems, run:
|
||||
```
|
||||
$ sudo pacman -S tigervnc
|
||||
|
||||
```
|
||||
|
||||
Type the following SSH port forwarding command from your remote client system (the one with the VNC client application installed).
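Based on the user name, server address, and VNC port used in this guide, the port forwarding command looks like this (the `ConnectTimeout` option is optional; it just makes a wrong address fail fast instead of hanging):

```
# Forward local port 5900 to the VNC display on the KVM host
# (sk / 192.168.225.22 are the user and address used throughout this guide)
ssh -o ConnectTimeout=5 -L 5900:127.0.0.1:5900 sk@192.168.225.22
```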
|
||||
|
||||
Again, **192.168.225.22** is my Ubuntu server’s (virtualization server) IP address.
|
||||
|
||||
Then, open the VNC client from your Arch Linux (client).
|
||||
|
||||
Type **localhost:5900** in the VNC server field and click **Connect** button.
|
||||
|
||||
![][6]
|
||||
|
||||
Then start installing the Ubuntu VM, just as you would on a physical system.
|
||||
|
||||
![][7]
|
||||
|
||||
![][8]
|
||||
|
||||
Similarly, you can set up as many virtual machines as your server’s hardware specifications allow.
|
||||
|
||||
Alternatively, you can use the **virt-viewer** utility to install the operating system in the guest machines. virt-viewer is available in most Linux distributions’ default repositories. After installing virt-viewer, run the following command to establish VNC access to the VM.
|
||||
```
|
||||
$ sudo virt-viewer --connect=qemu+ssh://192.168.225.22/system --name Ubuntu-16.04
|
||||
|
||||
```
|
||||
|
||||
### Manage virtual machines
|
||||
|
||||
Managing VMs from the command line using the virsh management user interface is very interesting and fun. The commands are very easy to remember. Let us see some examples.
|
||||
|
||||
To view the list of running VMs, run:
|
||||
```
|
||||
$ sudo virsh list
|
||||
|
||||
```
|
||||
|
||||
Or,
|
||||
```
|
||||
$ sudo virsh list --all
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Id Name State
|
||||
----------------------------------------------------
|
||||
2 Ubuntu-16.04 running
|
||||
|
||||
```
|
||||
|
||||
![][9]
|
||||
|
||||
To start a VM, run:
|
||||
```
|
||||
$ sudo virsh start Ubuntu-16.04
|
||||
|
||||
```
|
||||
|
||||
Alternatively, you can use the VM id to start it.
|
||||
|
||||
![][10]
|
||||
|
||||
As you see in the above output, Ubuntu 16.04 virtual machine’s Id is 2. So, in order to start it, just specify its Id like below.
|
||||
```
|
||||
$ sudo virsh start 2
|
||||
|
||||
```
|
||||
|
||||
To restart a VM, run:
|
||||
```
|
||||
$ sudo virsh reboot Ubuntu-16.04
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Domain Ubuntu-16.04 is being rebooted
|
||||
|
||||
```
|
||||
|
||||
![][11]
|
||||
|
||||
To pause a running VM, run:
|
||||
```
|
||||
$ sudo virsh suspend Ubuntu-16.04
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Domain Ubuntu-16.04 suspended
|
||||
|
||||
```
|
||||
|
||||
To resume the suspended VM, run:
|
||||
```
|
||||
$ sudo virsh resume Ubuntu-16.04
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Domain Ubuntu-16.04 resumed
|
||||
|
||||
```
|
||||
|
||||
To shutdown a VM, run:
|
||||
```
|
||||
$ sudo virsh shutdown Ubuntu-16.04
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Domain Ubuntu-16.04 is being shutdown
|
||||
|
||||
```
|
||||
|
||||
To completely remove a VM, run:
|
||||
```
|
||||
$ sudo virsh undefine Ubuntu-16.04
|
||||
|
||||
$ sudo virsh destroy Ubuntu-16.04
|
||||
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
Domain Ubuntu-16.04 destroyed
|
||||
|
||||
```
|
||||
|
||||
![][12]
|
||||
|
||||
For more options, I recommend looking into the man pages.
|
||||
```
|
||||
$ man virsh
|
||||
|
||||
```
|
||||
|
||||
That’s all for now, folks. Start playing with your new virtualization environment. KVM virtualization is well suited for research & development and testing purposes, but is not limited to them. If you have sufficient hardware, you can use it for large production environments. Have fun, and don’t forget to leave your valuable comments in the comment section below.
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/setup-headless-virtualization-server-using-kvm-ubuntu/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
|
||||
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[3]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_001.png
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_008-1.png
|
||||
[5]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_002.png
|
||||
[6]:http://www.ostechnix.com/wp-content/uploads/2016/11/VNC-Viewer-Connection-Details_005.png
|
||||
[7]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_006.png
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2016/11/QEMU-Ubuntu-16.04-TigerVNC_007.png
|
||||
[9]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-1.png
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_010-2.png
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_011-1.png
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2016/11/sk@ubuntuserver-_012.png
|
@ -0,0 +1,96 @@
|
||||
How to use dd in Linux without destroying your disk
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_happy_sad_developer_programming.png?itok=72nkfSQ_)
|
||||
|
||||
This article is excerpted from chapter 4 of [Linux in Action][1], published by Manning.
|
||||
|
||||
Whether you're trying to rescue data from a dying storage drive, backing up archives to remote storage, or making a perfect copy of an active partition somewhere else, you'll need to know how to safely and reliably copy drives and filesystems. Fortunately, `dd` is a simple and powerful image-copying tool that's been around, well, pretty much forever. And in all that time, nothing's come along that does the job better.
|
||||
|
||||
### Making perfect copies of drives and partitions
|
||||
|
||||
There's all kinds of stuff you can do with `dd` if you research hard enough, but where it shines is in the ways it lets you play with partitions. You can, of course, use `tar` or even `scp` to replicate entire filesystems by copying the files from one computer and then pasting them as-is on top of a fresh Linux install on another computer. But, because those filesystem archives aren't complete images, they'll require a running host OS at both ends to serve as a base.
|
||||
|
||||
|
||||
|
||||
Using `dd`, on the other hand, can make perfect byte-for-byte images of, well, just about anything digital. But before you start flinging partitions from one end of the earth to the other, I should mention that there's some truth to that old Unix admin joke: "dd stands for disk destroyer." If you type even one wrong character in a `dd` command, you can instantly and permanently wipe out an entire drive of valuable data. And yes, spelling counts.
|
||||
|
||||
**Remember:** Before pressing that Enter key to invoke `dd`, pause and think very carefully!
|
||||
|
||||
### Basic dd operations
|
||||
|
||||
Now that you've been suitably warned, we'll start with something straightforward. Suppose you want to create an exact image of an entire disk of data that's been designated as `/dev/sda`. You've plugged in an empty drive (ideally having the same capacity as your `/dev/sda` system). The syntax is simple: `if=` defines the source drive and `of=` defines the file or location where you want your data saved:
|
||||
```
|
||||
# dd if=/dev/sda of=/dev/sdb
|
||||
|
||||
```
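Before pressing Enter on a command like that, it's worth double-checking which device name belongs to which drive. One quick, read-only way (`lsblk` column options are a suggestion here; `/proc/partitions` is a fallback on minimal systems):

```
# List block devices with sizes and mount points before touching any of them
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null || cat /proc/partitions
```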
|
||||
|
||||
The next example will create an .img archive of the `/dev/sda` drive and save it to the home directory of your user account:
|
||||
```
|
||||
# dd if=/dev/sda of=/home/username/sdadisk.img
|
||||
|
||||
```
|
||||
|
||||
Those commands created images of entire drives. You could also focus on a single partition from a drive. The next example does that and also uses `bs` to set the number of bytes to copy at a single time (4,096, in this case). Playing with the `bs` value can have an impact on the overall speed of a `dd` operation, although the ideal setting will depend on your hardware profile and other considerations.
|
||||
```
|
||||
# dd if=/dev/sda2 of=/home/username/partition2.img bs=4096
|
||||
|
||||
```
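If you want to get a feel for `bs` without touching real hardware, you can point `dd` at ordinary files instead of block devices. A harmless sketch (the file path is arbitrary):

```
# Create a 1 MiB test image: 256 blocks of 4,096 bytes each
dd if=/dev/zero of=/tmp/bs-demo.img bs=4096 count=256 2>/dev/null
# 4096 * 256 = 1,048,576 bytes
wc -c < /tmp/bs-demo.img
```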
|
||||
|
||||
Restoring is simple: Effectively, you reverse the values of `if` and `of`. In this case, `if=` takes the image you want to restore, and `of=` takes the target drive to which you want to write the image:
|
||||
```
|
||||
# dd if=sdadisk.img of=/dev/sdb
|
||||
|
||||
```
|
||||
|
||||
You can also perform both the create and copy operations in one command. This example, for instance, will create a compressed image of a remote drive using SSH and save the resulting archive to your local machine:
|
||||
```
|
||||
# ssh username@54.98.132.10 "dd if=/dev/sda | gzip -1 -" | dd of=backup.gz
|
||||
|
||||
```
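Before aiming that pipeline at a remote drive, you can rehearse it locally with a throwaway file standing in for `/dev/sda` (the paths here are arbitrary):

```
# Make a small "drive" to image
dd if=/dev/urandom of=/tmp/src.img bs=1M count=2 2>/dev/null
# Image it through gzip, just like the remote pipeline above
dd if=/tmp/src.img 2>/dev/null | gzip -1 > /tmp/backup.gz
# Confirm the archive decompresses back to identical bytes
gunzip -c /tmp/backup.gz | cmp - /tmp/src.img && echo "backup verified"
```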
|
||||
|
||||
You should always test your archives to confirm they're working. If it's a boot drive you've created, stick it into a computer and see if it launches as expected. If it's a normal data partition, mount it to make sure the files both exist and are appropriately accessible.
|
||||
|
||||
### Wiping disks with dd
|
||||
|
||||
Years ago, I had a friend who was responsible for security at his government's overseas embassies. He once told me that each embassy under his watch was provided with an official government-issue hammer. Why? In case the facility was ever at risk of being overrun by unfriendlies, the hammer was to be used to destroy all their hard drives.
|
||||
|
||||
What's that? Why not just delete the data? You're kidding, right? Everyone knows that deleting files containing sensitive data from storage devices doesn't actually remove the data. Given enough time and motivation, nearly anything can be retrieved from virtually any digital media, with the possible exception of the ones that have been well and properly hammered.
|
||||
|
||||
You can, however, use `dd` to make it a whole lot more difficult for the bad guys to get at your old data. This command will spend some time writing millions and millions of zeros over every nook and cranny of the `/dev/sda1` partition:
|
||||
```
|
||||
# dd if=/dev/zero of=/dev/sda1
|
||||
|
||||
```
|
||||
|
||||
But it gets better. Using the `/dev/urandom` file as your source, you can write over a disk with random characters:
|
||||
```
|
||||
# dd if=/dev/urandom of=/dev/sda1
|
||||
|
||||
```
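You can see the effect safely by letting a throwaway file stand in for the partition (the file name is arbitrary):

```
# A 4 MiB file of zeros, standing in for a partition
dd if=/dev/zero of=/tmp/wipe-demo.img bs=1M count=4 2>/dev/null
# Overwrite it in place with random bytes (conv=notrunc keeps the size)
dd if=/dev/urandom of=/tmp/wipe-demo.img bs=1M count=4 conv=notrunc 2>/dev/null
# The first bytes are now random, not zero:
od -An -tx1 -N16 /tmp/wipe-demo.img
```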
|
||||
|
||||
### Monitoring dd operations
|
||||
|
||||
Since disk or partition archiving can take a very long time, you might want to add a progress monitor to your command. Install Pipe Viewer (`sudo apt install pv` on Ubuntu) and insert it between the two `dd` processes. With `pv`, that last command might look something like this:
|
||||
```
|
||||
# dd if=/dev/urandom | pv | dd of=/dev/sda1
|
||||
|
||||
4,14MB 0:00:05 [ 98kB/s] [ <=> ]
|
||||
|
||||
```
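As an aside, recent GNU `dd` can report progress by itself via `status=progress`, with no `pv` in the pipeline (demonstrated here against a plain file rather than a device):

```
# Same idea without pv: dd prints transfer statistics to stderr as it runs
dd if=/dev/zero of=/tmp/progress-demo.img bs=1M count=64 status=progress
```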
|
||||
|
||||
Putting off backups and disk management? With dd, you aren't left with too many excuses. It's really not difficult, but be careful. Good luck!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/7/how-use-dd-linux
|
||||
|
||||
作者:[David Clinton][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/remyd
|
||||
[1]:https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
|
@ -1,3 +1,5 @@
|
||||
Translating by shipsw
|
||||
|
||||
Python ChatOps libraries: Opsdroid and Errbot
|
||||
======
|
||||
|
||||
|
@ -1,90 +0,0 @@
|
||||
如何记录你在终端中执行的所有操作
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2017/03/Record-Everything-You-Do-In-Terminal-720x340.png)
|
||||
|
||||
几天前,我们发布了一个解释如何[**保存终端中的命令并按需使用**][1]的指南。对于那些不想记忆冗长的 Linux 命令的人来说,这非常有用。今天,在本指南中,我们将看到如何使用 **script** 命令记录你在终端中执行的所有操作。你可能已经在终端中运行了一个命令,或创建了一个目录,或者安装了一个程序。script 命令会保存你在终端中执行的任何操作。如果你想知道你几小时或几天前做了什么,那么你可以查看它们。我知道我知道,我们可以使用上/下箭头或 history 命令查看以前运行的命令。但是,你无法查看这些命令的输出。而 script 命令记录并显示完整的终端会话活动。
|
||||
|
||||
script 命令会在终端中创建你所做的所有事件的记录。无论你是安装程序,创建目录/文件还是删除文件夹,一切都会被记录下来,包括命令和相应的输出。这个命令对那些想要一份交互式会话拷贝作为作业证明的人有用。无论是学生还是导师,你都可以将所有在终端中执行的操作和所有输出复制一份。
|
||||
|
||||
### 在 Linux 中使用 script 命令记录终端中的所有内容
|
||||
|
||||
script 命令预先安装在大多数现代 Linux 操作系统上。所以,我们不用担心安装。
|
||||
|
||||
让我们继续看看如何实时使用它。
|
||||
|
||||
运行以下命令启动终端会话记录。
|
||||
```
|
||||
$ script -a my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
其中,**-a** 标志用于将输出追加到文件或 typescript,并保留以前的内容。上述命令会记录你在终端中执行的所有操作,并将输出追加到名为 **‘my_terminal_activities’** 的文件中,并将其保存在当前工作目录中。
|
||||
|
||||
示例输出:
|
||||
```
|
||||
Script started, file is my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
现在,在终端中运行一些随机的 Linux 命令。
|
||||
```
|
||||
$ mkdir ostechnix
|
||||
|
||||
$ cd ostechnix/
|
||||
|
||||
$ touch hello_world.txt
|
||||
|
||||
$ cd ..
|
||||
|
||||
$ uname -r
|
||||
|
||||
```
|
||||
|
||||
运行所有命令后,使用以下命令结束 ‘script’ 命令的会话:
|
||||
```
|
||||
$ exit
|
||||
|
||||
```
|
||||
|
||||
**示例输出:**
|
||||
```
|
||||
exit
|
||||
Script done, file is my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
如你所见,终端活动已存储在名为 **‘my_terminal_activities’** 的文件中,并将其保存在当前工作目录中。
|
||||
|
||||
要查看你的终端活动,只需在任何编辑器中打开此文件,或者使用 ‘cat’ 命令直接显示它。
|
||||
```
|
||||
$ cat my_terminal_activities
|
||||
|
||||
```
|
||||
|
||||
**示例输出:**
|
||||
|
||||
正如你在上面的输出中看到的,script 命令记录了我所有的终端活动,包括 script 命令的开始和结束时间。真棒,不是吗?使用 script 命令的原因不仅仅是记录命令,还有命令的输出。简单地说,脚本命令将记录你在终端上执行的所有操作。
|
||||
|
||||
### 结论
|
||||
|
||||
就像我说的那样,脚本命令对于想要保留其终端活动记录的学生,教师和 Linux 用户非常有用。尽管有很多 CLI 和 GUI 可用来执行此操作,但 script 命令是记录终端会话活动的最简单快捷的方式。
|
||||
|
||||
就是这些。希望这有帮助。如果你发现本指南有用,请在你的社交,专业网络上分享,并**支持 OSTechNix**。
|
||||
|
||||
干杯!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/record-everything-terminal/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/save-commands-terminal-use-demand/
|
216
translated/tech/20171010 Operating a Kubernetes network.md
Normal file
@ -0,0 +1,216 @@
|
||||
运营一个 Kubernetes 网络
|
||||
============================================================
|
||||
|
||||
最近我一直在研究 Kubernetes 网络。我注意到一件事情就是,虽然关于如何设置 Kubernetes 网络的文章很多,也写得很不错,但是却没有看到关于如何去运营 Kubernetes 网络的文章、以及如何完全确保它不会给你造成生产事故。
|
||||
|
||||
在本文中,我将尽力让你相信三件事情(我觉得这些都很合理 :)):
|
||||
|
||||
* 避免生产系统网络中断非常重要
|
||||
|
||||
* 运营联网软件是很难的
|
||||
|
||||
* 有关你的网络基础设施的重要变化值得深思熟虑,以及这种变化对可靠性的影响。虽然非常“牛x”的谷歌人常说“这是我们在谷歌正在用的”(谷歌工程师在 Kubernetes 上正做着很重大的工作!但是我认为重要的仍然是研究架构,并确保它对你的组织有意义)。
|
||||
|
||||
我肯定不是 Kubernetes 网络方面的专家,但是我在配置 Kubernetes 网络时遇到了一些问题,并且比以前更加了解 Kubernetes 网络了。
|
||||
|
||||
### 运营联网软件是很难的
|
||||
|
||||
在这里,我并不讨论有关运营物理网络的话题(对于它我不懂),而是讨论关于如何让像 DNS 服务、负载均衡以及代理这样的软件正常工作方面的内容。
|
||||
|
||||
我在一个负责很多网络基础设施的团队工作过一年时间,并且因此学到了一些运营网络基础设施的知识!(显然我还有很多的知识需要继续学习)在我们开始之前有三个整体看法:
|
||||
|
||||
* 联网软件经常重度依赖 Linux 内核。因此除了正确配置软件之外,你还需要确保许多不同的系统控制(sysctl)配置正确,而一个错误配置的系统控制就很容易让你处于“一切都很好”和“到处都出问题”的差别中。
|
||||
|
||||
* 联网需求会随时间而发生变化(比如,你的 DNS 查询或许比上一年多了五倍!或者你的 DNS 服务器突然开始返回 TCP 协议的 DNS 响应而不是 UDP 的,它们是完全不同的内核负载!)。这意味着之前正常工作的软件突然开始出现问题。
|
||||
|
||||
* 修复一个生产网络的问题,你必须有足够的经验。(例如,看这篇 [由 Sophie Haskins 写的关于 kube-dns 问题调试的文章][1])我在网络调试方面比以前进步多了,但那也是我花费了大量时间研究 Linux 网络知识之后的事了。
|
||||
|
||||
我距离成为一名网络运营专家还差得很远,但是我认为以下几点很重要:
|
||||
|
||||
1. 对生产网络的基础设施做重要的更改是很难得的(因为它会产生巨大的混乱)
|
||||
|
||||
2. 当你对网络基础设施做重大更改时,真的应该仔细考虑如果新网络基础设施失败该如何处理
|
||||
|
||||
3. 是否有很多人都能理解你的网络配置
|
||||
|
||||
切换到 Kubernetes 显然是个非常大的更改!因此,我们来讨论一下可能会导致错误的地方!
|
||||
|
||||
### Kubernetes 网络组件
|
||||
|
||||
在本文中我们将要讨论的 Kubernetes 网络组件有:
|
||||
|
||||
* 覆盖网络后端(像 flannel/calico/weave net/romana)
|
||||
|
||||
* `kube-dns`
|
||||
|
||||
* `kube-proxy`
|
||||
|
||||
* 入站控制器 / 负载均衡器
|
||||
|
||||
* `kubelet`
|
||||
|
||||
如果你打算配置 HTTP 服务,或许这些你都会用到。这些组件中的大部分我都不会用到,但是我尽可能去理解它们,因此,本文将涉及它们有关的内容。
|
||||
|
||||
### 最简化的方式:为所有容器使用宿主机网络
|
||||
|
||||
我们从你能做到的最简单的东西开始。这并不能让你在 Kubernetes 中运行 HTTP 服务。我认为它是非常安全的,因为在这里面可以让你动的东西很少。
|
||||
|
||||
如果你为所有容器使用宿主机网络,我认为需要你去做的全部事情仅有:
|
||||
|
||||
1. 配置 kubelet,以便于容器内部正确配置 DNS
|
||||
|
||||
2. 没了,就这些!
|
||||
|
||||
如果你为每个 Pod 直接使用宿主机网络,那就不需要 kube-dns 或者 kube-proxy 了。你都不需要一个作为基础的覆盖网络。
|
||||
|
||||
这种配置方式中,你的 pod 们都可以连接到外部网络(同样的方式,你的宿主机上的任何进程都可以与外部网络对话),但外部网络不能连接到你的 pod 们。
|
||||
|
||||
这并不是最重要的(我认为大多数人想在 Kubernetes 中运行 HTTP 服务并与这些服务进行真实的通讯),但我认为有趣的是,从某种程度上来说,网络的复杂性并不是绝对需要的,并且有时候你不用这么复杂的网络就可以实现你的需要。如果可以的话,尽可能地避免让网络过于复杂。
|
||||
|
||||
### 运营一个覆盖网络
|
||||
|
||||
我们将要讨论的第一个网络组件是有关覆盖网络的。Kubernetes 假设每个 pod 都有一个 IP 地址,这样你就可以与那个 pod 中的服务进行通讯了。我在说到“覆盖网络”这个词时,指的就是这个意思(“让你通过它的 IP 地址指向到 pod”的系统)。
|
||||
|
||||
所有其它的 Kubernetes 网络的东西都依赖正确工作的覆盖网络。更多关于它的内容,你可以读 [这里的 kubernetes 网络模型][10]。
|
||||
|
||||
Kelsey Hightower 在 [kubernetes the hard way][11] 中描述的方式看起来似乎很好,但是,事实上它的作法在超过 50 个节点的 AWS 上是行不通的,因此,我不打算讨论它了。
|
||||
|
||||
有许多覆盖网络后端(calico、flannel、weaveworks、romana)并且规划非常混乱。就我的观点来看,我认为一个覆盖网络有 2 个职责:
|
||||
|
||||
1. 确保你的 pod 能够发送网络请求到外部的集群
|
||||
|
||||
2. 保持一个到子网络的稳定的节点映射,并且保持集群中每个节点都可以使用那个映射得以更新。当添加和删除节点时,能够做出正确的反应。
|
||||
|
||||
Okay! 因此!你的覆盖网络可能会出现的问题是什么呢?
|
||||
|
||||
* 覆盖网络负责设置 iptables 规则(最基本的是 `iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE`),以确保那个容器能够向 Kubernetes 之外发出网络请求。如果在这个规则上有错误,你的容器就不能连接到外部网络。这并不很难(它只是几条 iptables 规则而已),但是它非常重要。我发起了一个 [pull request][2],因为我想确保它有很好的弹性。
|
||||
|
||||
* 添加或者删除节点时可能会有错误。我们使用 `flannel hostgw` 后端,我们开始使用它的时候,节点删除 [尚未开始工作][3]。
|
||||
|
||||
* 你的覆盖网络或许依赖一个分布式数据库(etcd)。如果那个数据库发生什么问题,这将导致覆盖网络发生问题。例如,[https://github.com/coreos/flannel/issues/610][4] 上说,如果在你的 `flannel etcd` 集群上丢失了数据,最后的结果将是在容器中网络连接会丢失。(现在这个问题已经被修复了)
|
||||
|
||||
* 你升级 Docker 以及其它东西导致的崩溃
|
||||
|
||||
* 还有更多的其它的可能性!
|
||||
|
||||
我在这里主要讨论的是过去发生在 Flannel 中的问题,但是我并不是要承诺不去使用 Flannel —— 事实上我很喜欢 Flannel,因为我觉得它很简单(比如,类似 [vxlan 在后端这一块的部分][12] 只有 500 行代码),并且我觉得对我来说,通过代码来找出问题的根源成为了可能。并且很显然,它在不断地改进。他们在审查 `pull requests` 方面做的很好。
|
||||
|
||||
到目前为止,我运营覆盖网络的方法是:
|
||||
|
||||
* 学习它的工作原理的详细内容以及如何去调试它(比如,Flannel 用于创建路由的 hostgw 网络后端,因此,你只需要使用 `sudo ip route list` 命令去查看它是否正确即可)
|
||||
|
||||
* 如果需要的话,维护一个内部构建版本,这样打补丁比较容易
|
||||
|
||||
* 有问题时,向上游贡献补丁
|
||||
|
||||
我认为去遍历所有已合并的 PR 以及过去已修复的 bug 清单真的是非常有帮助的 —— 这需要花费一些时间,但这是得到一个其它人遇到的各种问题的清单的好方法。
|
||||
|
||||
对其他人来说,他们的覆盖网络可能工作的很好,但是我并不能从中得到任何经验,并且我也曾听说过其他人报告类似的问题。如果你有一个类似配置的覆盖网络:a) 在 AWS 上并且 b) 在多于 50-100 节点上运行,我想知道你运营这样的一个网络有多大的把握。
|
||||
|
||||
### 运营 kube-proxy 和 kube-dns?
|
||||
|
||||
现在,我有一些关于运营覆盖网络的想法,我们来讨论一下。
|
||||
|
||||
这个标题的最后面有一个问号,那是因为我并没有真的去运营过。在这里我还有更多的问题要问答。
|
||||
|
||||
这里的 Kubernetes 服务是如何工作的!一个服务是一群 pod 们,它们中的每个都有自己的 IP 地址(像 10.1.0.3、10.2.3.5、10.3.5.6 这样)
|
||||
|
||||
1. 每个 Kubernetes 服务有一个 IP 地址(像 10.23.1.2 这样)
|
||||
|
||||
2. `kube-dns` 去解析 Kubernetes 服务 DNS 名字为 IP 地址(因此,my-svc.my-namespace.svc.cluster.local 可能映射到 10.23.1.2 上)
|
||||
|
||||
3. `kube-proxy` 配置 `iptables` 规则是为了在它们之间随机进行均衡负载。Kube-proxy 也有一个用户空间的轮询负载均衡器,但是在我的印象中,他们并不推荐使用它。
|
||||
|
||||
因此,当你发出一个请求到 `my-svc.my-namespace.svc.cluster.local` 时,它将解析为 10.23.1.2,然后,在你本地主机上的 `iptables` 规则(由 kube-proxy 生成)将随机重定向到 10.1.0.3 或者 10.2.3.5 或者 10.3.5.6 中的一个上。
|
||||
|
||||
在这个过程中我能想像出的可能出问题的地方:
|
||||
|
||||
* `kube-dns` 配置错误
|
||||
|
||||
* `kube-proxy` 挂了,以致于你的 `iptables` 规则没有得以更新
|
||||
|
||||
* 维护大量的 `iptables` 规则相关的一些问题
|
||||
|
||||
我们来讨论一下 `iptables` 规则,因为创建大量的 `iptables` 规则是我以前从没有听过的事情!
|
||||
|
||||
kube-proxy 像如下这样为每个目标主机创建一个 `iptables` 规则(这些规则来自 [这里][13]):
|
||||
|
||||
```
|
||||
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD[b][c]
|
||||
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H
|
||||
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI
|
||||
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
|
||||
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y
|
||||
|
||||
```
|
||||
|
||||
因此,kube-proxy 创建了许多 `iptables` 规则。它们都是什么意思?它对我的网络有什么样的影响?这里有一个来自华为的非常好的演讲,它叫做 [支持 50,000 个服务的可伸缩 Kubernetes][14],它说如果在你的 Kubernetes 集群中有 5,000 服务,增加一个新规则,将需要 **11 分钟**。如果这种事情发生在真实的集群中,我认为这将是一件非常糟糕的事情。
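顺便说一下,上面那些看起来很奇怪的概率值(0.20、0.25、0.33、0.50)合起来正好实现了对 5 个端点的均匀随机选择:第 k 条规则只在前面的规则都未命中时才会被尝试,命中概率是 1/(5-k)。下面是一个验证这一点的小模拟(只是示意,与 kube-proxy 本身无关):

```
# 模拟 5 条规则的依次匹配:第 k 条(k 从 0 开始)以 1/(5-k) 的概率命中,
# 最后一条无条件命中;10 万次之后,每个端点应各占约 20%
awk 'BEGIN {
    srand(1)
    for (i = 0; i < 100000; i++)
        for (k = 0; k < 5; k++)
            if (k == 4 || rand() < 1 / (5 - k)) { hits[k]++; break }
    for (k = 0; k < 5; k++)
        printf "endpoint %d: %d\n", k, hits[k]
}'
```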
|
||||
|
||||
在我的集群中肯定不会有 5,000 个服务,但是 5,000 并不是那么大的一个数字。为解决这个问题,他们给出的解决方案是 kube-proxy 用 IPVS 来替换这个 `iptables` 后端,IPVS 是存在于 Linux 内核中的一个负载均衡器。
|
||||
|
||||
看起来,像 kube-proxy 正趋向于使用各种基于 Linux 内核的负载均衡器。我认为这只是一定程度上是这样,因为他们支持 UDP 负载均衡,而其它类型的负载均衡器(像 HAProxy)并不支持 UDP 负载均衡。
|
||||
|
||||
但是,我觉得使用 HAProxy 更舒服!它能够用于去替换 kube-proxy!我用谷歌搜索了一下,然后发现了这个 [thread on kubernetes-sig-network][15],它说:
|
||||
|
||||
> kube-proxy 是很难用的,我们在生产系统中使用它近一年了,它在大部分时间都表现得很好,但是,随着我们集群中的服务越来越多,我们发现它的排错和维护工作越来越难。我们的团队中没有 iptables 方面的专家,只有 HAProxy & LVS 方面的专家,由于我们已经使用它们好几年了,因此我们决定使用一个中心化的 HAProxy 去替换分布式的代理。我觉得这可能会对在 Kubernetes 中使用 HAProxy 的其他人有用,因此,我们更新了这个项目,并将它开源:[https://github.com/AdoHe/kube2haproxy][5]。如果你觉得它有用,欢迎去看一看、试一试。

因此,那是一个有趣的选择!我在这里确实没有答案,但是,有一些想法:

* 负载均衡器是很复杂的。
* DNS 也很复杂。
* 如果你有运营某种类型的负载均衡器(比如 HAProxy)的经验,与其使用一个全新的负载均衡器(比如 kube-proxy),不如做一些额外的工作、用你熟悉的那个去替换它,或许更有意义。
* 我一直在考虑,我们是否想要完全不用 kube-proxy 或者 kube-dns —— 我认为最好是全部投入到 Envoy 上,在负载均衡和服务发现上完全依赖 Envoy 来做。这样,你只需要把 Envoy 运营好就可以了。

正如你所看到的,我在如何运营 Kubernetes 中的内部代理方面的思路还是很混乱的,并且我也没有太多使用它们的经验。总体上来说,kube-proxy 和 kube-dns 还是很好的,也能够很好地工作,但是我仍然认为应该去考虑使用它们可能产生的一些问题(例如,“你不能有超过 5000 个 Kubernetes 服务”)。

### 入口

如果你正在运行着一个 Kubernetes 集群,那么到目前为止,你很有可能事实上需要让 HTTP 请求进入到你的集群中。这篇博客已经太长了,并且关于入口我知道的也不多,因此,我们将不讨论这部分内容。

### 有用的链接

几个有用的链接,总结如下:

* [Kubernetes 网络模型][6]
* GKE 的网络是如何工作的:[https://www.youtube.com/watch?v=y2bhV81MfKQ][7]
* 上述的有关 `kube-proxy` 性能的讨论:[https://www.youtube.com/watch?v=4-pawkiazEg][8]

### 我认为网络运营很重要

我对 Kubernetes 的所有这些联网软件的感觉是,它们都仍然是非常新的,并且我并不能确定我们(作为一个社区)真的知道如何去把它们运营好。这让我作为一名运维人员感到很焦虑,因为我真的想让我的网络运行得很好!:) 而且我觉得作为一个组织,运行你自己的 Kubernetes 集群需要相当大的投入,以确保你理解所有的代码片段,这样当它们出现问题时你可以去修复它们。这不是一件坏事,它只是一件事而已。

我现在的计划是,继续不断地学习它们都是如何工作的,以尽可能多地减少对我动过的那些部分的担忧。

一如既往,我希望这篇文章对你有帮助,并且如果我在这篇文章中有任何的错误,我非常希望你能告诉我。

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/

作者:[Julia Evans][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://jvns.ca/about
[1]:http://blog.sophaskins.net/blog/misadventures-with-kube-dns/
[2]:https://github.com/coreos/flannel/pull/808
[3]:https://github.com/coreos/flannel/pull/803
[4]:https://github.com/coreos/flannel/issues/610
[5]:https://github.com/AdoHe/kube2haproxy
[6]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[7]:https://www.youtube.com/watch?v=y2bhV81MfKQ
[8]:https://www.youtube.com/watch?v=4-pawkiazEg
[9]:https://jvns.ca/categories/kubernetes
[10]:https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model
[11]:https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md
[12]:https://github.com/coreos/flannel/tree/master/backend/vxlan
[13]:https://github.com/kubernetes/kubernetes/issues/37932
[14]:https://www.youtube.com/watch?v=4-pawkiazEg
[15]:https://groups.google.com/forum/#!topic/kubernetes-sig-network/3NlBVbTUUU0


translated/tech/20180606 6 Open Source AI Tools to Know.md

应该知道的 6 个开源 AI 工具
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1920.jpg?itok=HarDnwVX)

在开源领域,不管你的想法是多么新颖独到,先去看一下别人是否已经实现了这个概念,总是一个很明智的做法。对于有兴趣借助不断成长的人工智能(AI)的力量的组织和个人来说,许多非常好的工具不仅是免费和开源的,而且在很多情况下已经过测试、久经考验。

在领先的公司和非盈利组织中,AI 的优先级都非常高,并且这些公司和组织都开源了很有价值的工具。下面列出的就是一些任何人都可以使用的免费、开源的 AI 工具。

**Acumos**:[Acumos AI][1] 是一个平台和开源框架,使用它可以很容易地构建、共享和分发 AI 应用。它规范了所需的基础设施栈和组件,使其可以在一个“开箱即用”的通用 AI 环境中运行。这使得数据科学家和模型训练者可以专注于他们的核心竞争力,而不用在无休止的定制、建模以及训练 AI 实现上浪费时间。

Acumos 是 [LF 深度学习基金会][2] 的一部分,它是 Linux 基金会中的一个组织,支持人工智能、机器学习以及深度学习方面的开源创新。它的目标是让这些重大的新技术可用于开发者和数据科学家,包括那些在深度学习和 AI 上经验有限的人。LF 深度学习基金会 [最近批准了一个项目生命周期和贡献流程][3],并且现在正接受项目贡献的提案。

**Facebook 的框架**:Facebook [开源了][4] 它自己的中央机器学习系统,该系统是为大规模的人工智能任务设计的,此外还开源了一系列其它的 AI 技术。这些工具是经过他们公司实践验证的平台的一部分。Facebook 也开源了一个叫 [Caffe2][5] 的深度学习和人工智能框架。

**说到 Caffe**:Yahoo 也在开源许可证下发布了它自己的关键 AI 软件。[CaffeOnSpark 工具][6] 基于深度学习(人工智能的一个分支),在帮助机器识别人类语言,或者照片、视频的内容方面非常有用。同样地,IBM 的机器学习程序 [SystemML][7] 可以通过 Apache 软件基金会免费共享和修改。

**Google 的工具**:Google 花费了几年的时间开发了自己的 [TensorFlow][8] 软件框架,用于支持它的 AI 软件和其它预测、分析程序。TensorFlow 是许多你可能已经在用的 Google 工具背后的引擎,包括 Google Photos 和 Google app 中使用的语音识别。

Google 还开源了两个 [AIY 套件][9],它们可以让个人轻松地用上人工智能,分别专注于计算机视觉和语音助理。这两个套件将用到的所有组件都装在一个盒子中,目前在美国的 Target 商店有售。并且,它们是基于开源的树莓派平台的 —— 有越来越多的证据表明,在开源与 AI 的交叉领域将会有很多的事情发生。

**H2O.ai**:我 [以前介绍过][10] H2O.ai,它在机器学习和人工智能领域中占有一席之地,因为它的主要工具是免费和开源的。你只需要去 [下载][11],就可以获取主要的 H2O 平台和 Sparkling Water(它与 Apache Spark 协同工作)。这些工具遵循 Apache 2.0 许可证,这是最灵活的开源许可证之一,你甚至可以在 Amazon Web 服务(AWS)和其它的集群上运行它们,而这仅需要花费几百美元而已。

**Microsoft 也加入了**:“我们的目标是让 AI 大众化,让每个人和每个组织获得更大的成就,”Microsoft CEO Satya Nadella [说][12]。因此,微软持续迭代它的 [Microsoft Cognitive Toolkit][13],这是一个能够与 TensorFlow 和 Caffe 竞争的开源软件框架。Cognitive Toolkit 可以工作在 64 位的 Windows 和 Linux 平台上。

Cognitive Toolkit 团队的报告称:“Cognitive Toolkit 通过允许用户创建、训练以及评估他们自己的神经网络,使企业级的、生产系统级的 AI 成为可能,这些神经网络能够跨多个 GPU 和多台机器,在海量的数据集上高效伸缩。”

你可以从 Linux 基金会新发布的电子书中学习更多有关 AI 的知识。Ibrahim Haddad 的 [开源 AI:项目、洞察和趋势][14] 调查了 16 个流行的开源 AI 项目 —— 深入研究了它们的历史、代码库以及 GitHub 贡献情况。[现在可以免费下载这本电子书][14]。

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/6-open-source-ai-tools-know

作者:[Sam Dean][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/sam-dean
[1]:https://www.acumos.org/
[2]:https://www.linuxfoundation.org/projects/deep-learning/
[3]:https://www.linuxfoundation.org/blog/lf-deep-learning-foundation-announces-project-contribution-process/
[4]:https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/
[5]:https://venturebeat.com/2017/04/18/facebook-open-sources-caffe2-a-new-deep-learning-framework/
[6]:http://yahoohadoop.tumblr.com/post/139916563586/caffeonspark-open-sourced-for-distributed-deep
[7]:https://systemml.apache.org/
[8]:https://www.tensorflow.org/
[9]:https://www.techradar.com/news/google-assistant-sweetens-raspberry-pi-with-ai-voice-control
[10]:https://www.linux.com/news/sparkling-water-bridging-open-source-machine-learning-and-apache-spark
[11]:http://www.h2o.ai/download
[12]:https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/02/10/microsoft-cognitive-toolkit-cntk/
[13]:https://www.microsoft.com/en-us/cognitive-toolkit/
[14]:https://www.linuxfoundation.org/publications/open-source-ai-projects-insights-and-trends/

3 款 Linux 桌面的日记程序
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)

坚持记日记,即使是不定期地记,也可以带来很多好处。这不仅是治疗和宣泄,而且还可以很好地记录你现在所处的位置以及你曾经去过的地方。它可以帮助你回顾你在生活中的进步,并提醒你自己做对了什么,做错了什么。

无论你记日记的原因是什么,都有多种方法可以做到这一点。你可以用老式的方法,使用笔和纸。你可以使用基于 Web 的程序。或者你可以使用[简单的文本文件][1]。

另一种选择是使用专门的日记程序。Linux 桌面上有几种非常灵活且非常有用的日记工具。我们来看看其中的三个。

### RedNotebook

![](https://opensource.com/sites/default/files/uploads/red-notebook.png)

在这里介绍的三个日记程序中,[RedNotebook][2] 是最灵活的。大部分灵活性来自它的模板。这些模板可让你记录个人想法、会议纪要、计划行程或记录电话。你还可以编辑现有模板或创建自己的模板。

你可以使用一种与 Markdown 非常相似的标记语言来记日记。你还可以在日记中添加标签,以便于查找。只需在程序的左窗格中单击或输入标签,右窗格中就会显示相应日记的列表。

最重要的是,你可以将全部、部分或者单篇日记导出为纯文本、HTML、LaTeX 或 PDF。在执行此操作之前,你可以通过单击工具栏上的“预览”按钮了解日记导出为 PDF 或 HTML 后的样子。

总的来说,RedNotebook 是一款易于使用且灵活的程序。它需要一些时间来适应,但一旦你适应了,它就是一个有用的工具。

### Lifeograph

![](https://opensource.com/sites/default/files/uploads/lifeograph.png)

[Lifeograph][3] 与 RedNotebook 有相似的外观和感觉。它没有那么多功能,但 Lifeograph 也够用了。

该程序通过保持简单和整洁来让记日记变得轻松。你有一个很大的区域可以记录,并且可以为日记添加一些基本格式。这包括常见的粗体和斜体,以及箭头和高亮显示。你可以在日记中添加标签,以便更好地组织和查找它们。

Lifeograph 有两个我觉得特别有用的功能。首先,你可以创建多个日记本 —— 例如,工作日记和个人日记。其次是密码保护你的日记的能力。虽然该网站声称 Lifeograph 使用“真正的加密”,但没有关于它的详细信息。尽管如此,设置密码仍然可以阻止大多数窥探者。

### Almanah Diary

![](https://opensource.com/sites/default/files/uploads/almanah.png)

[Almanah Diary][4] 是另一款非常简单的日记工具。但不要因为它功能少就否定它。它虽简单,但能把事情做好。

有多简单?它差不多就是一个由日记输入区域和日历组成的窗口。你还可以做更多的事情 —— 比如添加一些基本格式(粗体、斜体和下划线)并将文本转换为超链接。Almanah 还允许你加密日记。

虽然有一个可以将纯文本文件导入程序的功能,但我无法使其正常工作。尽管如此,如果你喜欢简单、能够快速记日记的软件,那么 Almanah Diary 值得一看。

### 命令行怎么样?

如果你不想用 GUI,也完全可以不用。命令行是记日记的绝佳选择。

我尝试过并且喜欢的是 [jrnl][5]。或者你可以使用[这个方案][6],它使用命令行别名格式化日记并将其保存到文本文件中。
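上面提到的“命令行别名”方案,其核心思路可以用下面这个极简的 shell 示意来说明(其中 `note` 函数名与日记文件路径均为假设,并非该方案的原始实现):

```shell
# 把一条带时间戳的记录追加到一个纯文本日记文件中
JOURNAL="$HOME/journal.txt"

note() {
    # 时间戳 + 记录内容,各占一行开头,便于之后用 grep 检索
    printf '%s  %s\n' "$(date '+%Y-%m-%d %H:%M')" "$*" >> "$JOURNAL"
}

note "试用了三款 Linux 桌面日记程序"
tail -n 1 "$JOURNAL"
```

把类似 `note` 这样的函数放进 `~/.bashrc` 后,在任何终端里输入一句话即可快速记上一笔。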
你有喜欢的日记程序吗?请留下评论,随意分享。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/linux-journaling-applications

作者:[Scott Nesbitt][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/scottnesbitt
[1]:https://plaintextproject.online/2017/07/19/journal.html
[2]:http://rednotebook.sourceforge.net
[3]:http://lifeograph.sourceforge.net/wiki/Main_Page
[4]:https://wiki.gnome.org/Apps/Almanah_Diary
[5]:http://maebert.github.com/jrnl/
[6]:http://tamilinux.wordpress.com/2007/07/27/writing-short-notes-and-diaries-from-the-cli/

Mesos 和 Kubernetes:不是竞争者
======

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)

Mesos 的起源可以追溯到 2009 年,当时,Ben Hindman 还是加州大学伯克利分校研究并行编程的博士生。他们在 128 核的芯片上做大规模的并行计算,并尝试去解决多个问题,比如怎么让软件和库在这些芯片上运行得更高效。他与同学们讨论能否借鉴并行处理和多线程的思想,并将它们应用到集群管理上。

“最初,我们专注于大数据,”Hindman 说。那时,大数据非常热门,而 Hadoop 正是其中一个热门技术。“我们发现,人们在集群上运行像 Hadoop 这样的程序,与运行多线程应用和并行应用很相似。”Hindman 说。

但是,它们的效率并不高,因此,他们开始思考,如何通过集群管理和资源管理让它们运行得更好。“我们查看了当时的很多不同技术,”Hindman 回忆道。

然而,Hindman 和他的同事们决定采用一种全新的方法。“我们决定为资源管理创建一个低级的抽象,然后在此之上运行调度服务和做其它的事情,”Hindman 说,“基本上,这就是 Mesos 的本质 —— 将资源管理部分从调度部分中分离出来。”

他成功了,并且 Mesos 从那时起开始壮大。

### 将项目捐献给 Apache

这个项目发起于 2009 年。在 2010 年,团队决定将这个项目捐献给 Apache 软件基金会(ASF)。它在 Apache 孵化,并于 2013 年成为顶级项目(TLP)。

Mesos 社区选择 Apache 软件基金会有很多的原因,比如,Apache 许可证,以及基金会已经拥有了许多充满活力的此类项目的社区。

这也与影响力有关。许多在 Mesos 上工作的人也参与了 Apache,并且许多人也致力于像 Hadoop 这样的项目。同时,来自 Mesos 社区的许多人也在为其它大数据项目(比如 Spark)做贡献。这种交叉工作使得这三个项目 —— Hadoop、Mesos 以及 Spark —— 都成为了 ASF 的项目。

这也与商业有关。许多公司对 Mesos 很感兴趣,并且开发者希望它能由一个中立的机构来维护,而不是让它成为一个私有项目。

### 谁在用 Mesos?

更好的问题应该是,谁不在用 Mesos?从 Apple 到 Netflix,每家公司都在用 Mesos。但是,Mesos 也面临过任何技术在早期所面临的挑战。“最初,我得去说服人们,这是一种很有趣的新技术。它叫做‘容器’,因为它不需要使用虚拟机。”Hindman 说。

从那以后,这个行业发生了许多变化,现在,只要与别人聊到基础设施,必然会从“容器”聊起 —— 这要感谢 Docker 所做的工作。如今不再需要做说服工作了,而在 Mesos 出现的早期,前面提到的 Apple、Netflix 以及 PayPal 这样的公司就已经知道了用容器化替代虚拟机能给他们带来的技术优势。“这些公司在容器化成为一种现象之前,就已经明白了容器化的价值所在。”Hindman 说。

可以看到,这些公司有大量的容器而不是虚拟机。他们所要做的全部工作就是管理和运行这些容器,因此他们欣然接受了 Mesos。在 Mesos 早期就使用它的公司有 Apple、Netflix、PayPal、Yelp、OpenTable 和 Groupon。

“大多数组织使用 Mesos 来运行各种需要的服务,”Hindman 说,“但也有些公司用它做一些非常有趣的事情,比如,数据处理、数据流、分析负载和应用程序。”

这些公司采用 Mesos 的其中一个原因是,资源管理层之间有一个明晰的界线。当公司运营容器的时候,Mesos 为他们提供了很好的灵活性。

“我们使用 Mesos 尝试去做的一件事情是创建一个层,让使用者既能享受到我们这个层带来的好处,也可以在它之上构建任何他们想要的东西,”Hindman 说,“我认为这对像 Netflix 和 Apple 这样的大公司非常有用。”

但是,并不是每个公司都是技术型的公司;不是每个公司都有或者应该有这种专长。为了帮助这样的组织,Hindman 联合创建了 Mesosphere,围绕 Mesos 提供服务和解决方案。“我们最终决定,为那些不具备这种技术专长、或者不想把时间花在构建这类系统上的组织去构建 DC/OS。”

### Mesos vs. Kubernetes?

人们经常用 x 对决 y 这样的方式来考虑问题,但它并不是一个技术与另一个技术对决的问题。大多数的技术都会在一些领域重叠,并且它们可以是互补的。“我不喜欢将所有的这些东西都看做是竞争者。我认为它们中的一些与另一些在工作中是互补的,”Hindman 说。

“事实上,Mesos 这个名字就表示它处于‘中间’;它是一种中间的 OS,”Hindman 说,“我们有一个容器调度器的概念,它能够运行在像 Mesos 这样的东西之上。当 Kubernetes 刚出现的时候,我们实际上是在 Mesos 的生态系统中欢迎它的,并将它看做是在 Mesos 之上、DC/OS 之中运行容器的另一种方式。”

Mesos 生态中还有一个名为 [Marathon][1] 的项目(一个用于 Mesos 和 DC/OS 的容器编排器),它是 Mesos 生态系统中做得最好的容器编排器。但是,Marathon 确实无法与 Kubernetes 相提并论。“Kubernetes 比 Marathon 做得更多,因此,你不能将它们简单地相互替换,”Hindman 说,“与此同时,我们在 Mesos 中做了许多 Kubernetes 中没有的东西。因此,这些技术之间是互补的。”

不要将这些技术视为相互之间是敌对的关系,它们应该被看做是对行业有益的技术。它们不是技术上的重复;它们是多样化的。据 Hindman 说:“对于开源领域的终端用户来说,这可能会让他们很困惑,因为他们很难知道哪个技术适用于哪种负载,但这正是开源这个事物让人头疼却又无法避免的本质所在。”

这只是意味着有更多的选择,并且每个都是赢家。

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition

作者:[Swapnil Bhartiya][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.linux.com/users/arnieswap
[1]:https://mesosphere.github.io/marathon/

4 种用于构建嵌入式 Linux 系统的工具
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6)

Linux 被部署到的设备,比 Linus Torvalds 当年在他的宿舍里开发它时所预期的范围要广泛得多。它对各种芯片的支持之广令人震惊,使得 Linux 可以应用在大大小小的设备上:从 [IBM 的大型机][1] 到不比其连接端口大多少的[微型设备][2],以及其间的各种设备。它被用于大型企业数据中心、互联网基础设施设备和个人开发系统,还为消费电子产品、移动电话和许多物联网设备提供动力。

在为桌面和企业级设备构建 Linux 软件时,开发者通常会在构建机器上使用如 [Ubuntu][3] 这样的桌面发行版,以便尽可能与部署目标的机器相似。[VirtualBox][4] 和 [Docker][5] 这样的工具可以使开发、测试和生产环境更好地保持一致。

### 什么是嵌入式系统?

维基百科将[嵌入式系统][6]定义为:“在更大的机械或电气系统中具有专用功能的计算机系统,往往伴随着实时计算的限制。”

我觉得可以简单地说,嵌入式系统就是大多数人不认为是计算机的计算机。它的主要作用是作为某种设备,而不被视为通用计算平台。

嵌入式系统编程中的开发环境通常与测试和生产环境大不相同。它们可能会使用不同的芯片架构、软件栈甚至操作系统。嵌入式开发人员的开发工作流程与桌面和 Web 开发人员的非常不同。通常,构建输出将包含目标设备的整个软件镜像,包括内核、设备驱动程序、库和应用程序软件(有时也包括引导加载程序)。

在本文中,我将对构建嵌入式 Linux 系统的四种常用方式进行概览。我会介绍每种方式的工作原理,并提供足够的信息来帮助读者决定使用哪种工具进行设计。我不会教你如何使用它们中的任何一个;一旦缩小了选择范围,就有大量深入的在线学习资源。没有哪种选择适用于所有的用例,我希望提供的细节足以指导你的决定。

### Yocto

[Yocto][7] 项目的[定义][8]是:一个开源协作项目,提供模板、工具和方法,帮助你为嵌入式产品创建定制的基于 Linux 的系统,而不用关心硬件架构。它是配方(recipe)、配置值和依赖关系的集合,用于创建根据你的特定需求定制的 Linux 运行时镜像。

完全公开:我在嵌入式 Linux 方面的大部分工作都集中在 Yocto 项目上,我对这个系统的认识和偏见可能比较明显。

Yocto 使用 [OpenEmbedded][9] 作为其构建系统。从技术上讲,这两个是独立的项目;然而,在实践中,用户不需要了解其区别,这两个项目的名称经常被互换使用。

Yocto 项目的输出大致由三部分组成:

* **目标运行时二进制文件**:包括引导加载程序、内核、内核模块、根文件系统镜像,以及将 Linux 部署到目标平台所需的任何其他辅助文件。
* **软件包流(package feed)**:可以安装到目标上的软件包集合。你可以根据需要选择软件包格式(例如 deb、rpm、ipk)。其中一些可能已预装在目标运行时二进制文件中,但也可以构建出用于安装到已部署系统上的软件包。
* **目标 SDK**:安装在目标上的软件的库和头文件的集合。应用程序开发人员在构建代码时使用它们,以确保与适当的库链接。

#### 优点

Yocto 项目在行业中得到广泛应用,并得到许多有影响力的公司的支持。此外,它还拥有一个庞大且充满活力的开发者[社区][10]和[生态系统][11]。开源爱好者和企业赞助商的结合有助于推动 Yocto 项目。

获得 Yocto 的支持有很多途径。如果你想自己动手,有书籍和其他培训材料。如果你想找有 Yocto 经验的工程师,也很容易找到。许多商业组织还可以为你的设计提供基于 Yocto 的交钥匙产品,或者基于服务的实施和定制。

Yocto 项目很容易通过<ruby>层<rt>layer</rt></ruby>进行扩展,[层][12]可以独立发布,以添加额外的功能、支持项目发布中没有的目标平台,或存储系统特有的自定义项。层可以添加到你的配置中,以添加官方发布版本中未特别包含的独特功能;例如,[meta-browser][13] 层包含 Web 浏览器的配方,可以轻松地为你的系统构建浏览器。因为层是独立维护的,所以它们可以处于与标准 Yocto 版本不同的发布节奏上(取决于各层的开发速度)。

Yocto 可以说是本文讨论的所有选项中设备支持最广泛的。由于许多半导体和电路板制造商的支持,Yocto 很可能支持你选择的任何目标平台。Yocto 主版本[分支][14]仅支持少数几块开发板(以便进行正确的测试和发布周期),但是,标准的工作模式是使用外部的板级支持层。

最后,Yocto 非常灵活和可定制。针对你的特定应用程序的自定义项可以存储在一个层中进行封装和隔离。通常,一个要素层特有的自定义项会作为该层本身的一部分来存储,这可以将相同的设置同时应用于多个系统配置。Yocto 还提供了一个定义良好的层优先级和覆盖功能。这使你可以定义各层应用和搜索元数据的顺序,还使你可以用更高优先级的层覆盖设置;例如,对现有配方的许多自定义项都将得以保留。
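层的添加方式可以用一个 `conf/bblayers.conf` 的片段来示意(以下路径纯属示例,实际路径取决于你自己的构建目录和层检出位置):

```
BBLAYERS ?= " \
  /home/dev/poky/meta \
  /home/dev/poky/meta-poky \
  /home/dev/poky/meta-yocto-bsp \
  /home/dev/meta-browser \
  "
```

把一个层(例如上面假设的 meta-browser 检出路径)追加到 `BBLAYERS` 变量中之后,它提供的配方就可以被构建系统找到并参与构建。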
#### 缺点

Yocto 项目最大的缺点是学习曲线陡峭。学习并真正理解这个系统需要花费大量的时间和精力。根据你的需求,这可能意味着在对你的应用程序并不重要的技术和能力上投入过大。在这种情况下,与某家商业供应商合作可能是一个不错的选择。

Yocto 项目的构建时间和资源开销相当高。需要构建的包(包括工具链、内核和所有目标运行时组件)的数量非常多。Yocto 开发人员的开发工作站往往是大型系统,不建议使用小型笔记本电脑。这可以通过使用许多提供商提供的基于云的构建服务器来缓解。另外,Yocto 有一个内置的缓存机制,当它确定用于构建特定包的参数没有改变时,可以重用先前构建好的组件。

#### 建议

为你的下一个嵌入式 Linux 设计使用 Yocto 项目是一个强有力的选择。在这里介绍的选项中,无论你的目标用例是什么,它都是适用面最广的。广泛的行业支持、活跃的社区和广泛的平台支持,使其成为嵌入式设计师的不错选择。

### Buildroot

[Buildroot][15] 项目的定义是:通过交叉编译生成嵌入式 Linux 系统的简单、高效且易于使用的工具。它与 Yocto 项目具有许多相同的目标,但它更注重简单性和简约性。一般来说,Buildroot 会禁用所有软件包的所有可选编译时设置(有一些值得注意的例外),从而得到尽可能小的系统。需要由系统设计人员来启用适用于给定设备的设置。

Buildroot 从源代码构建所有组件,但不支持在目标上进行包管理。因此,它有时被称为固件生成器,因为镜像在构建时就基本上固定了。应用程序可以更新目标文件系统,但是没有机制将新软件包安装到正在运行的系统中。

Buildroot 的输出主要由三部分组成:

* 将 Linux 部署到目标平台所需的根文件系统镜像和任何其他辅助文件
* 适用于目标硬件的内核、引导加载程序和内核模块
* 用于构建所有目标二进制文件的工具链

#### 优点

Buildroot 对简单性的关注意味着,一般来说,它比 Yocto 更容易学习。核心构建系统用 Make 编写,并且足够短,开发人员可以了解整个系统,同时又足以扩展到满足嵌入式 Linux 开发人员的需求。Buildroot 核心通常只处理常见用例,但它可以通过脚本进行扩展。Buildroot 系统使用普通的 Makefile 和 Kconfig 语言来进行配置。Kconfig 由 Linux 内核社区开发,广泛用于开源项目,许多开发人员都熟悉它。

由于禁用所有可选构建时设置的设计目标,Buildroot 通常会使用开箱即用的配置生成尽可能小的镜像。一般来说,其构建时间和对构建主机资源的要求都比 Yocto 项目的小。

#### 缺点

对简单性和最少启用构建选项的关注意味着,你可能需要做大量的自定义工作来为你的应用程序配置 Buildroot 构建。此外,所有配置选项都存储在单个文件中,这意味着如果你有多个硬件平台,则需要为每个平台分别做每一项定制更改。对系统配置文件的任何更改都需要重新构建所有软件包。与 Yocto 相比,这个问题因最小的镜像大小和较短的构建时间而得到一定缓解,但在你反复调整配置时仍可能导致构建时间过长。

中间软件包状态缓存默认情况下未启用,并且不像 Yocto 的实现那么彻底。这意味着,虽然第一次构建可能比等效的 Yocto 构建短,但后续构建可能需要重新构建许多组件。

#### 建议

对于大多数应用程序,使用 Buildroot 进行下一个嵌入式 Linux 设计是一个不错的选择。如果你的设计需要多种硬件类型或存在其他差异,由于同步多个配置的复杂性,你可能需要重新考虑;但对于由单一设置组成的系统,Buildroot 可能很适合你。

### OpenWRT

[OpenWRT][16] 项目最初是为消费者路由器开发定制固件的。你当地零售商出售的许多低成本路由器都可以运行 Linux 系统,但开箱即用时未必好用。这些路由器的制造商可能无法提供频繁的更新来应对新的威胁,即使他们提供了,安装更新镜像的机制也很困难且容易出错。OpenWRT 项目为许多已被制造商放弃的设备生成更新的固件镜像,并让这些设备发挥更大的作用。

OpenWRT 项目的主要交付物是可用于大量商业设备的二进制镜像。它有可从网络访问的软件包存储库,允许设备最终用户将新软件添加到他们的系统中。OpenWRT 构建系统是一个通用构建系统,允许开发人员创建自定义版本以满足他们自己的需求并添加新软件包,但其主要重点是目标二进制文件。

#### 优点

如果你正在为商业设备寻找替代固件,OpenWRT 应该在你的候选列表中。它维护良好,可以保护你免受制造商固件无法解决的问题的影响。你也可以添加额外的功能,让你的设备更有用。

如果你的嵌入式设计专注于网络,OpenWRT 是一个不错的选择。网络应用程序是 OpenWRT 的主要用例,你可能会找到许多可用的相关软件包。

#### 缺点

OpenWRT(与 Yocto 和 Buildroot 相比)会对你的设计施加重大的决策约束。如果这些决策不符合你的设计目标,可能需要进行不小的修改。

在已部署的设备中允许基于软件包的更新是很难管理的。按照定义,这会导致与你的 QA 团队测试过的软件负载不同的负载。此外,大多数软件包管理器难以保证安装的原子性,一次糟糕的断电重启可能会使你的设备处于不可预知的状态。

#### 建议

OpenWRT 对于爱好者项目或商用硬件的再利用来说是不错的选择。它也是网络应用程序的不错选择。如果你需要对默认设置进行大量定制,你可能更喜欢 Buildroot 或 Yocto。

### 桌面发行版

设计嵌入式 Linux 系统的一种常见方法是从桌面发行版开始,例如 [Debian][17] 或 [Red Hat][18],然后删除不需要的组件,直到安装的镜像适合目标设备的空间。这也是 [Raspberry Pi][20] 平台上流行的 [Raspbian][19] 发行版所采用的方法。

#### 优点

这种方法的主要优点是熟悉。通常,嵌入式 Linux 开发人员也是桌面 Linux 用户,并且精通他们选择的发行版。在目标上使用类似的环境可以让开发人员更快地上手。根据所选的发行版,可以使用 apt 和 yum 等标准打包工具安装许多其他工具。

可以将显示器和键盘连接到目标设备,并直接在那里进行所有的开发。对于不熟悉嵌入式领域的开发人员来说,这可能是一个更为熟悉的环境,无需配置和使用棘手的交叉开发环境。

大多数桌面发行版可用的软件包数量通常大于前面讨论的嵌入式专用构建器可用的软件包数量。由于用户群更大、用例更广,你可能能够找到你的应用程序所需的所有运行时包,它们已经构建好并可供使用。

#### 缺点

将目标设备作为你的主要开发环境可能会很慢。运行编译器是一项资源密集型操作,取决于你要构建的代码量,这可能会严重拖累你的开发效率。

除了少数例外,桌面发行版的设计并不适合低资源系统,并且可能难以把目标镜像裁剪得足够小。同样,桌面环境中预设的工作流程对于大多数嵌入式设计来说都不理想。以这种方式获得可重复的环境很困难。手动添加和删除软件包很容易出错。这可以使用特定于发行版的工具进行脚本化,例如用于基于 Debian 的系统的 [debootstrap][21]。为了进一步提高可重复性,你可以使用配置管理工具,如 [CFEngine][22](完全披露:我的雇主是 [Mender.io][23])。但是,你仍然受制于发行版提供商,他们会按照他们的需求而不是你的需求来更新软件包。

#### 建议

对于你打算推向市场的产品,请谨慎使用此方法。这对于爱好者应用程序来说是一个很好的模型;但是,对于需要长期支持的产品,这种方法很可能会遇到麻烦。虽然你可能起步更快,但从长远来看,你可能会付出更多的时间和精力。

### 其他考虑

这个讨论集中在构建系统的功能上,但通常还有一些非功能性需求可能会影响你的决定。如果你已经选择了片上系统(SoC)或电路板,则你的选择很可能已由供应商决定。如果你的供应商为特定系统提供板级支持包(BSP),使用它通常会节省相当多的时间,但请先研究 BSP 的质量,以避免在开发周期后期出现问题。

如果你的预算允许,你可能需要考虑为目标操作系统选用商业供应商。有些公司会为这里讨论的许多选项提供经过验证和支持的配置,除非你拥有嵌入式 Linux 构建系统方面的专业知识,否则这是一个不错的选择,可以让你专注于自己的核心能力。

作为替代,你可以考虑为你的开发人员安排商业培训。这可能比商业 OS 供应商便宜,并且可以让你更加自给自足,快速跨过所选构建系统基础知识的学习曲线。

最后,你可能已经有一些开发人员拥有一个或多个系统的使用经验。如果你的工程师有偏好,在你做决定时,当然值得把它考虑进去。

### 总结

构建嵌入式 Linux 系统有多种选择,每种都有优点和缺点。优先考虑设计中的这一部分至关重要,因为在之后的过程中切换系统的成本非常高。除了这些选择之外,新的系统还在不断地被开发出来。希望这次讨论能够为审查新的系统(以及这里提到的系统)提供一些背景,并帮助你为下一个项目做出坚实的决定。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/embedded-linux-build-tools

作者:[Drew Moseley][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[LHRChina](https://github.com/LHRChina)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/drewmoseley
[1]:https://en.wikipedia.org/wiki/Linux_on_z_Systems
[2]:http://www.picotux.com/
[3]:https://www.ubuntu.com/
[4]:https://www.virtualbox.org/
[5]:https://www.docker.com/
[6]:https://en.wikipedia.org/wiki/Embedded_system
[7]:https://yoctoproject.org/
[8]:https://www.yoctoproject.org/about/
[9]:https://www.openembedded.org/
[10]:https://www.yoctoproject.org/community/
[11]:https://www.yoctoproject.org/ecosystem/participants/
[12]:https://layers.openembedded.org/layerindex/branch/master/layers/
[13]:https://layers.openembedded.org/layerindex/branch/master/layer/meta-browser/
[14]:https://yoctoproject.org/downloads
[15]:https://buildroot.org/
[16]:https://openwrt.org/
[17]:https://www.debian.org/
[18]:https://www.redhat.com/
[19]:https://www.raspbian.org/
[20]:https://www.raspberrypi.org/
[21]:https://wiki.debian.org/Debootstrap
[22]:https://cfengine.com/
[23]:http://Mender.io

使用 LSWC(Little Simple Wallpaper Changer)在 Linux 中自动更改壁纸
======

**简介:这是一个小脚本,可以在 Linux 桌面上定期自动更改壁纸。**

顾名思义,LittleSimpleWallpaperChanger 是一个可以定期随机更改壁纸的小脚本。

我知道在“外观”或“更改桌面背景”设置中有一个随机壁纸选项。但那是随机更换预置的壁纸,而不是你自己添加的壁纸。

因此,在本文中,我们将看到如何使用 LittleSimpleWallpaperChanger 把你自己的照片设置为随机桌面壁纸。

### Little Simple Wallpaper Changer (LSWC)

[LittleSimpleWallpaperChanger][1](简称 LSWC)是一个非常轻量级的脚本,它在后台运行,从用户指定的文件夹中更换壁纸。壁纸以 1 至 5 分钟的随机间隔变化。该软件设置起来相当简单,设置完后,用户就可以不用再管它了。

![Little Simple Wallpaper Changer to change wallpapers in Linux][2]

#### 安装 LSWC

[点此链接下载 LSWC][3]。压缩文件的大小约为 15KB。

* 进入下载位置。
* 右键单击下载的 .zip 文件,然后选择“在此处解压”。
* 打开解压后的文件夹,右键单击并选择“在终端中打开”。
* 在终端中复制粘贴以下命令并按回车键:`bash ./README_and_install.sh`
* 然后会弹出一个对话框,要求你选择包含壁纸的文件夹。单击它,然后选择你存放壁纸的文件夹。
* 就是这样。然后重启计算机。

![Little Simple Wallpaper Changer for Linux][4]

#### 使用 LSWC

安装时,LSWC 会要求你选择包含壁纸的文件夹。因此,我建议你在安装 LSWC 之前创建一个文件夹,并将你想要的壁纸全部移动到那里。或者你也可以使用图片文件夹中的“壁纸”文件夹。**所有壁纸都必须是 .jpg 格式。**

你可以向所选文件夹中添加更多壁纸,也可以从中删除壁纸。要更改壁纸文件夹的位置,你可以编辑以下文件中保存的壁纸位置:
```
.config/lswc/homepath.conf
```
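LSWC 的核心逻辑(从文件夹中随机挑一张 .jpg 设为壁纸)可以用下面这个极简的 shell 示意来说明(函数名、目录和 gsettings 命令均为假设,并非 LSWC 的原始实现):

```shell
# 从指定文件夹中随机挑选一张 .jpg 壁纸,没有壁纸时输出为空
pick_wallpaper() {
    ls "$1"/*.jpg 2>/dev/null | shuf -n 1
}

PIC=$(pick_wallpaper "$HOME/Pictures/wallpapers")
echo "$PIC"
# 在 GNOME 桌面上,可以用类似下面的命令把选中的图片设为壁纸:
# gsettings set org.gnome.desktop.background picture-uri "file://$PIC"
```

把这段逻辑放进循环并配合 `sleep` 一个随机的分钟数,就得到了“定期随机换壁纸”的完整思路。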
#### 删除 LSWC

打开终端并运行以下命令以停止 LSWC:
```
pkill lswc
```

在文件管理器中打开家目录,然后按 `Ctrl+H` 显示隐藏文件,接着删除以下文件:

* .local 中的 “scripts” 文件夹
* .config 中的 “lswc” 文件夹
* .config/autostart 中的 “lswc.desktop” 文件

这就完成了。创建你自己的桌面背景幻灯片吧。LSWC 非常轻巧,易于使用。安装它,然后就不用再管它了。

LSWC 的功能不是很丰富,但这是有意为之。它做了它打算做的事情,那就是更换壁纸。如果你想要一个能自动下载壁纸的工具,可以试试 [WallpaperDownloader][5]。

请在下面的评论栏分享你对这个小巧软件的想法。别忘了分享这篇文章。干杯。

--------------------------------------------------------------------------------

via: https://itsfoss.com/little-simple-wallpaper-changer/

作者:[Aquil Roshan][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/aquil/
[1]:https://github.com/LittleSimpleWallpaperChanger/lswc
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-2-800x450.jpg
[3]:https://github.com/LittleSimpleWallpaperChanger/lswc/raw/master/Lswc.zip
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-1-800x450.jpg
[5]:https://itsfoss.com/wallpaperdownloader-linux/

协同编辑器历史清单
======

不妨按时间顺序快速列出在主要协同编辑器上所做出的努力。

正如任何此类清单一样,它必定要从受人尊敬的[所有演示之母][25]说起:[道格拉斯·恩格尔巴特][26]在 1968 年的那场演示中所展示的,基本上就是之后几十年间所有可能被写出来的软件的详尽清单。这不仅包括协同编辑器,还包括图形、编程和数学编辑器。

那次演示之后的所有编辑器,都只是更缓慢的实现,在弥补硬件加速发展留下的差距。

> 软件变慢的速度比硬件变快的速度更快。——沃斯定律

因此,闲话少说,下面是我找到的可圈可点的协同编辑器的清单。我说“可圈可点”,是指它们具有可圈可点的特性或实现细节。

| 项目 | 日期 | 平台 | 说明 |
| --- | --- | --- | --- |
| [SubEthaEdit][1] | 2003-2015? | 仅 Mac | 我能找到的首个协同、实时、多光标编辑器。[这是在 Emacs 上对其进行逆向工程的尝试][2]。 |
| [DocSynch][3] | 2004-2007 | ? | 构建于 IRC(互联网中继聊天)之上! [(!)](https://anarc.at/smileys/idea.png) |
| [Gobby][4] | 2005 至今 | C,多平台 | 首个开源、实现稳固可靠的编辑器。仍然存在!其协议(“[libinfinoted][5]”)众所周知很难移植到其他编辑器中(例如 [Rudel][6] 未能在 Emacs 上实现此协议)。2017 年 1 月发布的 0.7 版本添加了或许可以改善这种状况的 Python 绑定。有趣的插件:自动保存到磁盘。 |
| [moonedit][7] | 2005-2008? | ? | 原网站已关闭。其他用户的光标可见,并且会模仿击键的声音。内置计算器和音乐定序器。 |
| [synchroedit][8] | 2006-2007 | ? | 首款 Web 应用。 |
| [Etherpad][9] | 2008 至今 | Web | 首款稳定的 Web 应用。最初在 2008 年被开发为一款大型 Java 应用,2009 年被谷歌收购并开源,2011 年用 Node.js 重写。使用广泛。 |
| [CRDT][10] | 2011 | 特定平台 | 一类用于在不同计算机之间可靠地复制文件的标准数据结构。 |
| [Operational transform][11] | 2013 | 特定平台 | 与 CRDT 类似,但确切地说,两者是不同的东西。 |
| [Floobits][12] | 2013 至今 | ? | 商业产品,但为多种编辑器提供开源插件。 |
| [HackMD][13] | 2015 至今 | ? | 商业产品但[开源][14]。受 hackpad(已被 Dropbox 收购)的启发。 |
| [Cryptpad][15] | 2016 至今 | Web? | XWiki 的副产品。加密的,服务器端“零知识”。 |
| [Prosemirror][16] | 2016 至今 | Web, Node.JS | “试图架起 Markdown 文本编辑和传统 WYSIWYG 编辑器之间的桥梁。”它不是一个完整的编辑器,而是一种可以用来构建编辑器的工具。 |
| [Quill][17] | 2013 至今 | Web, Node.JS | 用 JavaScript 编写的富文本编辑器。不确定是否支持协同编辑。 |
| [Nextcloud][18] | 2017 至今 | Web | 一种类似谷歌文档的协同文档编辑方案。 |
| [Teletype][19] | 2017 至今 | WebRTC, Node.JS | 为 GitHub 的 [Atom 编辑器][20] 引入了“门户”(portal)的概念,使访客可以跟随主人在多个文档间操作。在通过介绍服务器握手之后,使用基于 CRDT 的点对点(P2P)实时通信。 |
| [Tandem][21] | 2018 至今 | Node.JS? | Atom、Vim、Neovim、Sublime 等编辑器的插件。通过中继服务器建立基于 CRDT 的 P2P 连接。多亏 Debian 开发者的参与,[可疑的证书问题][22]已被解决,这使它成为未来很有希望被采用的方案。 |

### 其他清单

* [Emacs 维基][23]
* [维基百科][24]

--------------------------------------------------------------------------------

via: https://anarc.at/blog/2018-06-26-collaborative-editors-history/

作者:[Anarcat][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ZenMoore](https://github.com/ZenMoore)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://anarc.at
[1]:https://www.codingmonkeys.de/subethaedit/
[2]:https://www.emacswiki.org/emacs/SubEthaEmacs
[3]:http://docsynch.sourceforge.net/
[4]:https://gobby.github.io/
[5]:http://infinote.0x539.de/libinfinity/API/libinfinity/
[6]:https://www.emacswiki.org/emacs/Rudel
[7]:https://web.archive.org/web/20060423192346/http://www.moonedit.com:80/
[8]:http://www.synchroedit.com/
[9]:http://etherpad.org/
[10]:https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type
[11]:http://operational-transformation.github.io/
[12]:https://floobits.com/
[13]:https://hackmd.io/
[14]:https://github.com/hackmdio/hackmd
[15]:https://cryptpad.fr/
[16]:https://prosemirror.net/
[17]:https://quilljs.com/
[18]:https://nextcloud.com/collaboraonline/
[19]:https://teletype.atom.io/
[20]:https://atom.io
[21]:http://typeintandem.com/
[22]:https://github.com/typeintandem/tandem/issues/131
[23]:https://www.emacswiki.org/emacs/CollaborativeEditing
[24]:https://en.wikipedia.org/wiki/Collaborative_real-time_editor
[25]:https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
[26]:https://en.wikipedia.org/wiki/Douglas_Engelbart

Sosreport - 收集系统日志和诊断信息的工具
======

![](https://www.ostechnix.com/wp-content/uploads/2018/06/sos-720x340.png)

如果你是 RHEL 管理员,你肯定听说过 **Sosreport**:一个可扩展的、可移植的、受支持的数据收集工具。它是一个从类 Unix 操作系统中收集系统配置详细信息和诊断信息的工具。当用户提交支持工单时,他/她必须运行此工具,并将 Sosreport 工具生成的结果报告发送给 Red Hat 支持人员。然后,支持人员将根据报告进行初步分析,并尝试找出系统中的问题。不仅是在 RHEL 系统上,你可以在任何类 Unix 操作系统上使用它来收集系统日志和其他调试信息。

### 安装 Sosreport

Sosreport 在 Red Hat 官方系统仓库中,因此你可以使用 Yum 或 DNF 包管理器安装它,如下所示:
```
$ sudo yum install sos
```

或者:
```
$ sudo dnf install sos
```

在 Debian、Ubuntu 和 Linux Mint 上运行:
```
$ sudo apt install sosreport
```

### 用法

安装后,运行以下命令以收集系统配置详细信息和其他诊断信息:
```
$ sudo sosreport
```

系统将要求你输入系统的一些详细信息,例如系统名称、案例 ID 等。相应地输入详细信息,然后按回车键生成报告。如果你不想更改任何内容并使用默认值,只需按回车键即可。

我的 CentOS 7 服务器的示例输出:
```
sosreport (version 3.5)

This command will collect diagnostic and configuration information from
this CentOS Linux system and installed applications.

An archive containing the collected information will be generated in
/var/tmp/sos.DiJXi7 and may be provided to a CentOS support
representative.

Any information provided to CentOS will be treated in accordance with
the published support policies at:

https://wiki.centos.org/

The generated archive may contain data considered sensitive and its
content should be reviewed by the originating organization before being
passed to any third party.

No changes will be made to system configuration.

Press ENTER to continue, or CTRL-C to quit.

Please enter your first initial and last name [server.ostechnix.local]:
Please enter the case id that you are generating this report for []:

Setting up archive ...
Setting up plugins ...
Running plugins. Please wait ...

Running 73/73: yum...
Creating compressed archive...

Your sosreport has been generated and saved in:
/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz

The checksum is: 8f08f99a1702184ec13a497eff5ce334

Please send this file to your support representative.
```

如果你不希望系统提示你输入这些详细信息,请像下面这样使用批处理模式:
```
$ sudo sosreport --batch
```

正如你在上面的输出中所看到的,生成的报告归档保存为 **/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz**(在 RHEL 6/CentOS 6 中,报告会生成在 **/tmp** 中)。你现在可以将此报告发送给你的支持人员,以便他进行初步分析并找出问题所在。

你可能会担心或想知道报告中都有些什么内容。如果是这样,你可以通过运行以下命令来查看它:
```
$ sudo tar -tf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```

或者:
```
$ sudo vim /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```

请注意,上述命令不会解压归档,而只会显示归档中的文件和文件夹列表。如果要查看归档中文件的实际内容,请先使用以下命令解压归档:
```
$ sudo tar -xf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```

归档的所有内容都将解压到当前工作目录下的 “sosreport-server.ostechnix.local-20180628171844/” 目录中。进入该目录,使用 cat 命令或任何其他文本查看器查看文件内容:
```
$ cd sosreport-server.ostechnix.local-20180628171844/

$ cat uptime
 17:19:02 up  1:03,  2 users,  load average: 0.50, 0.17, 0.10
```
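上面 `tar -tf`(只列出内容)和 `tar -xf`(解压)的区别,可以用一个自包含的小例子来演示(目录与文件名均为示例,并非真实的 sosreport 归档):

```shell
# 构造一个迷你的“报告”归档,然后分别列出和解压它
mkdir -p demo-report
echo " 17:19:02 up  1:03,  2 users" > demo-report/uptime
tar -cJf demo-report.tar.xz demo-report

tar -tf demo-report.tar.xz     # 只列出归档内容,不做解压
tar -xf demo-report.tar.xz     # 解压到当前目录
cat demo-report/uptime
```

`-J` 表示使用 xz 压缩,与 sosreport 生成的 `.tar.xz` 归档格式一致。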
有关 Sosreport 的更多详细信息,请参阅手册页:
```
$ man sosreport
```

就是这些了。希望对你有用。后面还有更多好东西,敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/sosreport-a-tool-to-collect-system-logs-and-diagnostic-information/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
