Merge pull request #1 from LCTT/master

update
This commit is contained in:
wenwensnow 2018-07-06 09:37:15 +08:00 committed by GitHub
commit 7e0c170f27
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
36 changed files with 2351 additions and 512 deletions


云计算的成本
============================================================
> 两个开发团队的一天
![](https://cdn-images-1.medium.com/max/2000/1*nBZJgNXl54jzFKa91s1KfQ.png)
这两个团队被要求为一家全球化企业开发一个新的服务,该企业目前为全球数百万消费者提供服务。要开发的这项新服务需要满足以下基本需求:
1. 能够随时**扩展**以满足弹性需求
2. 具备应对数据中心故障的**弹性**
3. 确保数据**安全**以及数据受到保护
4. 为排错提供深入的**调试**功能
5. 项目必须能**迅速分发**
6. 服务构建和维护的**性价比**要高
就新服务来说,这看起来是非常标准的需求 — 从本质上看传统专用基础设备上没有什么东西可以超越公共云了。
![](https://cdn-images-1.medium.com/max/1600/1*DgnAPA6P5R0yQiV8n6siJw.png)
* * *
#### 1 — 扩展以满足客户需求
当说到可扩展性时,这个新服务需要去满足客户变化无常的需求。我们构建的服务不可以拒绝任何请求,以防让公司遭受损失或者声誉受到影响。
**传统团队**
使用的是专用基础设施,架构体系的计算能力需要与峰值数据需求相匹配。对于负载变化无常的服务来说,大量昂贵的计算能力在低利用率时被浪费掉。
这是一种很浪费的方法  —  并且大量的资本支出会侵蚀掉你的利润。另外,这些未充分利用的庞大的服务器资源的维护也是一项很大的运营成本。这是一项你无法忽略的成本  —  我不得不再强调一下,为支持一个单一服务去维护一机柜的服务器是多么的浪费时间和金钱。
**云团队**
使用的是基于云的自动伸缩解决方案,应用会按需要进行自动扩展和收缩。也就是说你只需要支付你所消费的计算资源的费用。
一个架构良好的基于云的应用可以实现无缝地伸缩 —  并且还是自动进行的。开发团队只需要定义好自动伸缩的资源组即可,即当你的应用 CPU 利用率达到某个高位、或者每秒请求数达到某个阈值时启动多少实例,并且你可以根据你的意愿去定制这些规则。
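上面这条“按阈值扩容、缩容”的规则,可以用一小段示意代码来体会。下面是一个极简的示意(其中的阈值与实例数上下限均为假设值,并非 AWS 自动伸缩服务的真实 API):

```python
# 一个极简的伸缩判定示意(阈值与上下限均为假设值,并非 AWS 自动伸缩的真实 API):
# CPU 利用率高于上限时加倍扩容,低于下限时减半缩容,并始终保持在实例数上下限之间。
def desired_instances(current, cpu_utilization,
                      scale_out_at=70, scale_in_at=30,
                      min_instances=2, max_instances=20):
    """根据当前实例数与 CPU 利用率,返回期望的实例数。"""
    if cpu_utilization > scale_out_at:
        return min(current * 2, max_instances)   # 负载高:扩容
    if cpu_utilization < scale_in_at:
        return max(current // 2, min_instances)  # 负载低:缩容
    return current                               # 正常区间:保持不变
```

实际使用中,AWS 的自动伸缩组会在服务端根据 CloudWatch 指标执行类似的判定,开发者只需声明规则,而无需自己编写这类代码。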
* * *
#### 2 — 应对故障的弹性
当说到弹性时,将托管服务的基础设施放在同一个房间里并不是一个好的选择。如果你的应用托管在一个单一的数据中心  —  (不是如果)发生某些失败时(LCTT 译注:指坍塌、地震、洪灾等),你的所有的东西都被埋了。
**传统团队**
满足这种基本需求的标准解决方案是,为实现局部弹性建立至少两个服务器  —  在地理上冗余的数据中心之间实施秒级复制
开发团队需要一个负载均衡解决方案,以便于在发生饱和或者故障等事件时将流量转向到另一个节点  —  并且还要确保镜像节点之间,整个栈是持续完全同步的。
**云团队**
在 AWS 全球 50 个地区中他们都提供多个_可用区_。每个区域由多个容错数据中心组成  — 通过自动故障切换功能AWS 可以将服务无缝地转移到该地区的其它区中。
在一个 `CloudFormation` 模板中定义你的_基础设施即代码_确保你的基础设施在自动伸缩事件中跨区保持一致而对于流量的流向管理AWS 负载均衡服务仅需要做很少的配置即可。
* * *
#### 3安全和数据保护
安全是一个组织中任何一个系统的基本要求。我想你肯定不想成为那些不幸遭遇安全问题的公司之一。
**传统团队**
为保证运行他们服务的基础服务器安全,他们不得不持续投入成本。这意味着将需要投资一个团队,以监视和识别安全威胁,并用来自不同数据源的跨多个供应商解决方案打上补丁
**云团队**
使用公共云并不能免除来自安全方面的责任。云团队仍然需要提高警惕,但是并不需要去担心为底层基础设施打补丁的问题。AWS 将积极地对付各种零日漏洞 — 最近的一次是 Spectre 和 Meltdown。

利用来自 AWS 的身份管理和加密安全服务,可以让云团队专注于他们的应用 —  而不是无差别的安全管理。使用 CloudTrail 对 AWS 服务的 API 调用做全面审计,可以实现透明的监视。

* * *
#### 4监视和日志
任何基础设施和部署为服务的应用都需要严密监视实时数据。团队应该有一个可以访问的仪表板,当超过指标阈值时仪表板会显示警报,并能够在排错时提供与事件相关的日志。
**传统团队**
对于传统基础设施,将不得不在跨不同供应商和“雪花状”的解决方案上配置监视和报告解决方案。配置这些“见鬼的”解决方案将花费你大量的时间和精力 —  并且能够正确地实现你的目的是相当困难的。
对于大多数部署在专用基础设施上的应用来说,为了搞清楚你的应用为什么崩溃,你可以通过搜索保存在你的服务器文件系统上的日志文件来找到答案。为此你的团队需要通过 SSH 进入服务器,导航到日志文件所在的目录,然后浪费大量的时间,通过 `grep` 在成百上千的日志文件中寻找。如果你在一个横跨 60 台服务器上部署的应用中这么做  —  我能负责任地告诉你,这是一个极差的解决方案。
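文中描述的“逐台服务器 grep 日志”的排错过程,大致可以用下面这段示意代码来体会(目录结构、文件名与关键字均为假设,仅用于说明这种方式在服务器数量变多时有多繁琐):

```python
# 一个极简的日志排查示意:在目录树中的所有 .log 文件里查找关键字。
# 仅为说明正文中“在成百上千的日志文件中 grep”的过程,并非某个真实工具。
import os

def grep_logs(root, keyword):
    """返回所有包含 keyword 的 (文件路径, 行号, 行内容) 三元组。"""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.log'):
                continue  # 只检查日志文件
            path = os.path.join(dirpath, name)
            with open(path, encoding='utf-8', errors='replace') as f:
                for lineno, line in enumerate(f, 1):
                    if keyword in line:
                        hits.append((path, lineno, line.rstrip('\n')))
    return hits
```

对一台服务器来说这还算可行;但当应用横跨 60 台服务器时,就得在每台机器上重复这个过程,这正是正文所说的“极差的解决方案”。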
**云团队**
利用原生的 AWS 服务,如 CloudWatch 和 CloudTrail来做云应用程序的监视是非常容易。不需要很多的配置开发团队就可以监视部署的服务上的各种指标  —  问题的排除过程也不再是个恶梦了。
对于传统的基础设施,团队需要构建自己的解决方案,配置他们的 REST API 或者服务去推送日志到一个聚合器。而得到这个“开箱即用”的解决方案将对生产力有极大的提升。
* * *
#### 5加速开发进程
现在的商业环境中,快速上市的能力越来越重要。由于实施延误所失去的机会成本,可能成为影响最终利润的一个主要因素。
**传统团队**
对于大多数组织,他们需要在新项目所需要的硬件采购、配置和部署上花费很长的时间 — 并且由于预测能力差,提前获得的额外的性能将造成大量的浪费。
而且还有可能的是,传统的开发团队需要在无数的“筒仓”中穿梭,并在移交创建的服务上花费数月的时间。项目的每一步都需要数据库、系统、安全以及网络管理方面各自独立的工作。
**云团队**
而云团队开发新特性时,拥有大量的随时可投入生产系统的服务套件供你使用。这是开发者的天堂。每个 AWS 服务一般都有非常好的文档并且可以通过你选择的语言以编程的方式去访问。
使用新的云架构,例如无服务器,开发团队可以在最小化冲突的前提下构建和部署一个可扩展的解决方案。比如,只需要几天时间就可以建立一个 [Imgur 的无服务器克隆][4],它具有图像识别的特性,内置一个产品级的监视/日志解决方案,并且它的弹性极好。
![](https://cdn-images-1.medium.com/max/1600/1*jHmtrp1OKM4mZVn-gSNoQg.png)
*如何建立一个 Imgur 的无服务器克隆*
如果必须要我亲自去设计弹性和可伸缩性,我可以向你保证,我会陷在这个项目的开发里 — 而且最终的产品将远不如目前的这个好。
从我实践的情况来看,使用无服务器架构的交付时间远小于在大多数公司中备齐硬件所花费的时间。我只是简单地将一系列 AWS 服务与 Lambda 函数粘合到一起 — 然后,就大功告成了!我只专注于开发解决方案,而无差别的可伸缩性和弹性则由 AWS 为我处理。
* * *
#### 关于云计算成本的结论
就弹性而言,云计算团队的按需扩展是当之无愧的赢家 — 因为他们仅为需要的计算能力埋单。而不需要为维护和底层的物理基础设施打补丁付出相应的资源。
云计算也为开发团队提供一个可使用多个可用区的弹性架构、为每个服务构建的安全特性、持续的日志和监视工具、随用随付的服务、以及低成本的加速分发实践。
大多数情况下,云计算的成本要远低于为你的应用运行所需要的购买、支持、维护和设计的按需基础架构的成本 —  并且云计算的麻烦事更少。
也有一些云计算比传统基础设施更昂贵的例子,比如有人在周末忘记关掉一些运行着的极其昂贵的测试机器。
[Dropbox 在决定推出自己的基础设施并减少对 AWS 服务的依赖之后,在两年的时间内节省近 7500 万美元的费用Dropbox…——www.geekwire.com][5][][6]
即便如此,这样的案例仍然是非常少见的。更不用说当初 Dropbox 也是从 AWS 上开始它的业务的  —  并且当它的业务达到一个临界点时,才决定离开这个平台。即便到现在,他们也已经进入到云计算的领域了,并且还在 AWS 和 GCP 上保留了 40% 的基础设施。
将云服务与基于单一“成本”指标(LCTT 译注:此处的“成本”仅指物理基础设施的购置成本)的传统基础设施比较的想法是极其幼稚的  —  公然无视云为开发团队和你的业务带来的一些主要的优势。
即便在极少数的情况下,云服务比传统基础设施产生更多的绝对成本,它在开发团队的生产力、速度和创新方面仍然贡献着更好的价值。
![](https://cdn-images-1.medium.com/max/1600/1*IlrOdfYiujggbsYynTzzEQ.png)
*客户才不在乎你的数据中心呢*
_我非常乐意倾听与你在云中开发的真实成本相关的经验和反馈,请在下面的评论区、Twitter [@Elliot_F][7] 上、或者直接在 [LinkedIn][8] 上联系我。_
via: https://read.acloud.guru/the-true-cost-of-cloud-a-comparison-of-two-develop
作者:[Elliot Forbes][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


3 个 Python 命令行工具
======
> 用 Click、Docopt 和 Fire 库写你自己的命令行应用。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead-tool-box.png?itok=NrJYb417)
这篇文章是与 [Lacey Williams Hensche][1] 共同撰写的
有时对于某项工作来说一个命令行工具就足以胜任。命令行工具是一个交互程序,类似你的 shell 或者终端。[Git][2] 和 [Curl][3] 就是两个你也许已经很熟悉的命令行工具。
当你有一小段代码需要在一行中执行多次或者经常性地被执行命令行工具就会很有用。Django 开发者执行 `./manage.py runserver` 命令来启动他们的网络服务器Docker 开发者执行 `docker-compose up` 来启动他们的容器。你想要写一个命令行工具的原因可能和你一开始想写代码的原因有很大不同。
对于这个月的 Python 专栏,我们有 3 个库想介绍给希望为自己编写命令行工具的 Python 使用者。
* 包含说明如何将命令行工具打包成一个更加易于执行的 Python 应用程序
* 自动生成实用的帮助文本
* 使你能够叠加使用可选和必要参数,甚至是 [多个命令][5]
* 有一个 Django 版本( [`django-click`][6] 来编写管理命令
Click 使用 `@click.command()` 去声明一个函数作为命令,同时可以指定必要和可选参数。
```
# hello.py
import click

@click.command()
@click.option('--name', default='', help='Your name')
def say_hello(name):
    click.echo("Hello {}!".format(name))

if __name__ == '__main__':
    say_hello()
```
`@click.option()` 修饰器声明了一个 [可选参数][7] ,而 `@click.argument()` 修饰器声明了一个 [必要参数][8]。你可以通过叠加修饰器来组合可选和必要参数。`echo()` 方法将结果打印到控制台。
```
$ python hello.py --name='Lacey'
Hello Lacey!
```
### Docopt
[Docopt][9] 是一个命令行工具解析器,类似于命令行工具的 Markdown。如果你喜欢流畅地编写应用文档在本文推荐的库中 Docopt 有着最好的格式化帮助文本。它不是我们最爱的命令行工具开发包的原因是它的文档犹如把人扔进深渊,使你开始使用时会有一些小困难。然而,它仍是一个轻量级的、广受欢迎的库,特别是当一个漂亮的说明文档对你来说很重要的时候。
Docopt 对于如何格式化文章开头的 docstring 是很特别的。在工具名称后面的 docsring 中,顶部元素必须是 `Usage:` 并且需要列出你希望命令被调用的方式(比如:自身调用,使用参数等等)。`Usage:` 需要包含 `help``version` 参数
docstring 中的第二个元素是 `Options:`,对于在 `Usages:` 中提及的可选项和参数,它应当提供更多的信息。你的 docstring 的内容变成了你帮助文本的内容。
```
"""HELLO CLI

Usage:
    hello.py
    hello.py <name>
    hello.py -h|--help
    hello.py -v|--version

Options:
    <name>  Optional name argument.
    -h --help  Show this screen.
    -v --version  Show version.
"""

from docopt import docopt

def say_hello(name):
    return("Hello {}!".format(name))

if __name__ == '__main__':
    arguments = docopt(__doc__, version='DEMO 1.0')
    if arguments['<name>']:
        print(say_hello(arguments['<name>']))
    else:
        print(arguments)
```
在最基本的层面Docopt 被设计用来返回你的参数键值对。如果我不指定名字调用上面的命令,我会得到一个字典的返回:
在最基本的层面Docopt 被设计用来返回你的参数键值对。如果我不指定上述的 `name` 调用上面的命令,我会得到一个字典的返回
```
$ python hello.py

{'--help': False,
 '--version': False,
 '<name>': None}
```
这里可看到我没有输入 `help``version` 标记并且 `name` 参数是 `None`
但是如果我带着一个 `name` 参数调用,`say_hello` 函数就会执行了。
```
$ python hello.py Jeff
Hello Jeff!
```
Docopt 允许同时定义必要和可选参数,且各自有着不同的语法约定。必要参数需要以 `ALLCAPS` 或 `<carets>` 形式展示,而可选参数需要带单横杠或双横杠,就像 `--like`。更多内容可以阅读 Docopt 有关 [patterns][10] 的文档。
### Fire
[Fire][11] 是谷歌的一个命令行工具开发库。尤其令人喜欢的是当你的命令需要更多复杂参数或者处理 Python 对象时,它会聪明地尝试解析你的参数类型。
Fire 的 [文档][12] 包括了海量的样例但是我希望这些文档能被更好地组织。Fire 能够处理 [同一个文件中的多条命令][13]、使用 [对象][14] 的方法作为命令和 [组][15] 命令。
它的弱点在于输出到控制台的文档。命令行中的 docstring 不会出现在帮助文本中,并且帮助文本也不一定标识出参数。
```
import fire

def say_hello(name=''):
    return 'Hello {}!'.format(name)

if __name__ == '__main__':
    fire.Fire()
```
参数是必要还是可选取决于你是否在函数或者方法定义中为其指定了一个默认值。要调用命令,你必须指定文件名和函数名,比较类似 Click 的语法:
```
$ python hello.py say_hello Rikki
Hello Rikki!
```
你还可以像标记一样传参,比如 `--name=Rikki`
### 额外赠送:打包!
Click 包含了使用 `setuptools` [打包][16] 命令行工具的使用说明(强烈推荐按照说明操作)。
```
from setuptools import setup

setup(
    name='hello',
    version='0.1',
    py_modules=['hello'],
    install_requires=[
        'Click',
    ],
    entry_points='''
        [console_scripts]
        hello=hello:say_hello
    ''',
)
```
任何你看见 `hello` 的地方,使用你自己的模块名称替换掉,但是要记得忽略 `.py` 后缀名。将 `say_hello` 替换成你的函数名称。
然后,执行 `pip install --editable` 来使你的命令在命令行中可用。
```
$ hello --name='Jeff'
Hello Jeff!
```
via: https://opensource.com/article/18/5/3-python-command-line-tools
作者:[Jeff Triplett][a]、[Lacey Williams Hensche][1]
选题:[lujun9972](https://github.com/lujun9972)
译者:[hoppipolla-](https://github.com/hoppipolla-)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


命令行中的世界杯
======
![](https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc2018.jpg?resize=700%2C450&ssl=1)
足球始终在我们身边。即使我们国家的队伍已经出局(LCTT 译注:显然这不是指我们国家,因为我们根本没有入局……),我还是想知道球赛比分。目前,国际足联世界杯是世界上最大的足球锦标赛,2018 届是由俄罗斯主办的。每届世界杯都有一些足球强国未能取得参赛资格(LCTT 译注:我要吐槽么?)。意大利和荷兰就无缘本次世界杯。但是即使在未参加比赛的国家,追踪关注最新比分也成为了一种仪式。我希望能及时了解这个世界级的重大赛事最新比分的变化,而不用去搜索不同的网站。
如果你很喜欢命令行,那么有更好的方法用一个小型命令行程序追踪最新的世界杯比分和排名。让我们看一看最热门的可用的球赛趋势分析程序之一,它叫作 football-cli。
football-cli 不是一个开创性的应用程序。这几年,有许多命令行工具可以让你了解到最新的球赛比分和赛事排名。例如,我是 soccer-cli(Python 写的)和 App-Football(Perl 写的)的重度用户。但我总是在寻找新的趋势分析应用,而 football-cli 在某些方面脱颖而出。
football-cli 是用 JavaScript 开发的,由 Manraj Singh 编写,它是开源的软件,基于 MIT 许可证发布,用 npm(JavaScript 包管理器)安装十分简单。那么,让我们直接行动吧!
该应用程序提供了相应的命令,可以获取过去及当前的赛事比分,查看联赛和球队以往与将要进行的赛事,它还会显示某一特定联赛的排名。还有一条指令可以列出该程序所支持的各项赛事。我们不妨就从最后这条指令开始。
在 shell 提示符下:
```
luke@ganges:~$ football lists
```
![球赛列表][3]
世界杯被列在最下方,我错过了昨天的比赛,所以为了了解比分,我在 shell 提示下输入:
```
luke@ganges:~$ football scores
```
![football-wc-22][4]
现在,我想看看目前的世界杯小组排名。很简单:
```
luke@ganges:~$ football standings -l WC
```
下面是输出的一个片段:
![football-wc-biaoge][5]
你们当中眼尖的可能会注意到这里有一个错误。比如比利时看上去领先于 G 组,但这是不正确的,比利时和英格兰(截稿前)在得分上打平。在这种情况下,纪律好的队伍排名更高。英格兰收到两张黄牌,而比利时收到三张,因此,英格兰应当名列榜首。
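正文描述的这条排名规则(积分相同时,黄牌更少、纪律更好的队伍排名靠前)可以用一小段示意代码表达。这只是对规则本身的演示,并非 football-cli 的真实实现:

```python
# 按“先比积分,积分相同时黄牌少者靠前”的规则排序(规则示意,非 football-cli 实现)。
def rank_group(teams):
    """teams 为 (队名, 积分, 黄牌数) 的列表,返回按排名排序后的队名列表。"""
    return [name for name, _points, _yellows in
            sorted(teams, key=lambda t: (-t[1], t[2]))]
```

按这条规则,积分相同、只收到两张黄牌的英格兰会排在收到三张黄牌的比利时之前。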
假设我想知道利物浦 90 天前英超联赛的结果,那么:
```
luke@ganges:~$ football fixtures -l PL -d 90 -t "Liverpool"
```
![足球-利物浦][6]
我发现这个程序非常方便。它用一种清晰、整洁而有吸引力的方式显示分数和排名。当欧洲联赛再次开始时,它就更有用了。(事实上 2018-19 冠军联赛已经在进行中)!
这几个示例让大家对 football-cli 的实用性有了更深的体会。想要了解更多,请转至开发者的 [GitHub 页面][7]。足球 + 命令行 = football-cli。
如同许多类似的工具一样,该软件从 football-data.org 获取相关数据。这项服务以机器可读的方式为所有欧洲主要联赛提供数据,包括比赛、球队、球员、结果等等。所有这些信息都是以 JSON 形式通过一个易于使用的 RESTful API 提供的。
--------------------------------------------------------------------------------
via: https://www.linuxlinks.com/football-cli-world-cup-football-on-the-command-line/
作者:[Luke Baker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[ZenMoore](https://github.com/ZenMoore)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxlinks.com/author/luke-baker/
[1]:https://www.linuxlinks.com/wp-content/plugins/jetpack/modules/lazy-images/images/1x1.trans.gif
[2]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2017/12/CLI.png?resize=195%2C171&ssl=1
[3]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-lists.png?resize=595%2C696&ssl=1
[4]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-22.png?resize=634%2C75&ssl=1
[5]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-table.png?resize=750%2C581&ssl=1
[6]:https://i1.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-Liverpool.png?resize=749%2C131&ssl=1
[7]:https://github.com/ManrajGrover/football-cli
[8]:https://www.linuxlinks.com/links/Software/
[9]:https://discord.gg/uN8Rqex


World Cup football on the command line
======
![](https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc2018.jpg?resize=700%2C450&ssl=1)
Football is around us constantly. Even when domestic leagues have finished, theres always a football score I want to know. Currently, its the biggest football tournament in the world, the Fifa World Cup 2018, hosted in Russia. Every World Cup there are some great football nations that dont manage to qualify for the tournament. This time around the Italians and the Dutch missed out. But even in non-participating countries, its a rite of passage to keep track of the latest scores. I also like to keep abreast of the latest scores from the major leagues around the world without having to search different websites.
![Command-Line Interface][2]

If you're a big fan of the command-line, what better way to keep track of the latest World Cup scores and standings than with a small command-line utility? Let's take a look at one of the hottest trending football utilities available. It goes by the name football-cli.
football-cli is not a groundbreaking app. Over the years, there's been a raft of command line tools that let you keep up-to-date with the latest football scores and league standings. For example, I am a heavy user of soccer-cli, a Python based tool, and App-Football, written in Perl. But I'm always on the lookout for trending apps. And football-cli stands out from the crowd in a few ways.
football-cli is developed in JavaScript and written by Manraj Singh. Its open source software, published under the MIT license. Installation is trivial with npm (the package manager for JavaScript), so lets get straight into the action.
The utility offers commands that give scores of past and live fixtures, see upcoming and past fixtures of a league and team. It also displays standings of a particular league. Theres a command that lists the various supported competitions. Lets start with the last command.
At a shell prompt:
`luke@ganges:~$ football lists`
![football-lists][3]
The World Cup is listed at the bottom. I missed yesterdays games, so to catch up on the scores, I type at a shell prompt:
`luke@ganges:~$ football scores`
![football-wc-22][4]
Now I want to see the current World Cup group standings. Thats easy.
`luke@ganges:~$ football standings -l WC`
Heres an excerpt of the output:
![football-wc-table][5]
The eagle-eyed among you may notice a bug here. Belgium is showing as the leader of Group G. But this is not correct. Belgium and England are (at the time of writing) both tied on points, goal difference, and goals scored. In this situation, the team with the better disciplinary record is ranked higher. England and Belgium have received 2 and 3 yellow cards respectively, so England top the group.
Suppose I want to find out Liverpools results in the Premiership going back 90 days from today.
`luke@ganges:~$ football fixtures -l PL -d 90 -t "Liverpool"`
![football-Liverpool][6]
Im finding the utility really handy, displaying the scores and standings in a clear, uncluttered, and attractive way. When the European domestic games start up again, itll get heavy usage. (Actually, the 2018-19 Champions League is already underway)!
These few examples give a taster of the functionality available with football-cli. Read more about the utility from the developer's **[GitHub page][7]**. Football + command-line = football-cli.
Like similar tools, the software retrieves its football data from football-data.org. This service provides football data for all major European leagues in a machine-readable way. This includes fixtures, teams, players, results and more. All this information is provided via an easy-to-use RESTful API in JSON representation.
--------------------------------------------------------------------------------
via: https://www.linuxlinks.com/football-cli-world-cup-football-on-the-command-line/
作者:[Luke Baker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linuxlinks.com/author/luke-baker/
[1]:https://www.linuxlinks.com/wp-content/plugins/jetpack/modules/lazy-images/images/1x1.trans.gif
[2]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2017/12/CLI.png?resize=195%2C171&ssl=1
[3]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-lists.png?resize=595%2C696&ssl=1
[4]:https://i2.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-22.png?resize=634%2C75&ssl=1
[5]:https://i0.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-wc-table.png?resize=750%2C581&ssl=1
[6]:https://i1.wp.com/www.linuxlinks.com/wp-content/uploads/2018/06/football-Liverpool.png?resize=749%2C131&ssl=1
[7]:https://github.com/ManrajGrover/football-cli
[8]:https://www.linuxlinks.com/links/Software/
[9]:https://discord.gg/uN8Rqex


My first sysadmin mistake
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_mistakes.png?itok=dN0OoIl5)
If you work in IT, you know that things never go completely as you think they will. At some point, you'll hit an error or something will go wrong, and you'll end up having to fix things. That's the job of a systems administrator.
As humans, we all make mistakes. Sometimes, we are the error in the process, or we are what went wrong. As a result, we end up having to fix our own mistakes. That happens. We all make mistakes, typos, or errors.
As a young systems administrator, I learned this lesson the hard way. I made a huge blunder. But thanks to some coaching from my supervisor, I learned not to dwell on my errors, but to create a "mistake strategy" to set things right. Learn from your mistakes. Get over it, and move on.
My first job was a Unix systems administrator for a small company. Really, I was a junior sysadmin, but I worked alone most of the time. We were a small IT team, just the three of us. I was the only sysadmin for 20 or 30 Unix workstations and servers. The other two supported the Windows servers and desktops.
Any systems administrators reading this probably won't be surprised to know that, as an unseasoned, junior sysadmin, I eventually ran the `rm` command in the wrong directory. As root. I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all files in the `/etc` directory by mistake. Ouch.
My clue that I'd done something wrong was an error message that `rm` couldn't delete certain subdirectories. But the cache directory should contain only files! I immediately stopped the `rm` command and looked at what I'd done. And then I panicked. All at once, a million thoughts ran through my head. Did I just destroy an important server? What was going to happen to the system? Would I get fired?
Fortunately, I'd run `rm *` and not `rm -rf *` so I'd deleted only files. The subdirectories were still there. But that didn't make me feel any better.
Immediately, I went to my supervisor and told her what I'd done. She saw that I felt really dumb about my mistake, but I owned it. Despite the urgency, she took a few minutes to do some coaching with me. "You're not the first person to do this," she said. "What would someone else do in your situation?" That helped me calm down and focus. I started to think less about the stupid thing I had just done, and more about what I was going to do next.
I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the `/etc` directory.
Once I had my plan of action, the rest was easy. It was just a matter of running the right commands to copy the `/etc` files from another server and edit the configuration so it matched the system. Thanks to my practice of documenting everything, I used my existing documentation to make any final adjustments. I avoided having to completely restore the server, which would have meant a huge disruption.
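The copy-from-a-template idea above can be sketched in a few lines. This is only an illustration of the approach under assumed paths and a made-up helper name, not the commands the author actually ran:

```python
# A rough sketch of the recovery idea: fill in files that are missing from a
# damaged config directory by copying them from an identical reference system.
# Illustrative only; `restore_missing` is a hypothetical helper, not a real tool.
import os
import shutil

def restore_missing(reference, damaged):
    """Copy any file present under `reference` but absent under `damaged`."""
    restored = []
    for dirpath, _dirnames, filenames in os.walk(reference):
        rel = os.path.relpath(dirpath, reference)
        target_dir = os.path.join(damaged, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            target = os.path.join(target_dir, name)
            if not os.path.exists(target):
                # Only restore files that are gone; leave surviving files alone.
                shutil.copy2(os.path.join(dirpath, name), target)
                restored.append(os.path.normpath(os.path.join(rel, name)))
    return restored
```

In practice you would still diff the copied files against your documentation and adjust any host-specific settings before trusting the result.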
To be sure, I learned from that mistake. For the rest of my years as a systems administrator, I always confirmed what directory I was in before running any command.
I also learned the value of building a "mistake strategy." When things go wrong, it's natural to panic and think about all the bad things that might happen next. That's human nature. But creating a "mistake strategy" helps me stop worrying about what just went wrong and focus on making things better. I may still think about it, but knowing my next steps allows me to "get over it."
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/my-first-sysadmin-mistake
作者:[Jim Hall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jim-hall


Translating by vk
How to make a career move from proprietary to open source technology
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration_0.png?itok=YEl_GXbv)
I started my journey as a software engineer at Northern Telecom, where I developed proprietary software for carrier-grade telephone switches. Although I learned Pascal while in college, at Northern Telecom I was trained in a proprietary programming language based on C. I also used a proprietary operating system and a proprietary version-control software.
I enjoyed working in the proprietary environment and had opportunities to do some interesting work. Then I had a turning point in my career that made me think about things. It happened at a career fair. I was invited to speak at a STEM career panel at a local middle school. I shared with the students my day-to-day responsibilities as a software engineer, and one of the students asked me a question: "Is this really what you always wanted to do in life? Do you enjoy and love what you are doing?"
Whenever my manager asked me this question, I would safely answer, "Yes, of course, I do!" But I had never been asked this by an innocent 6th grader who is interested in STEM. My response to the student was the same: "Of course I do!"
The truth was I did enjoy my career, but that student had me thinking… I had to reassess where I was in my career. I thought about the proprietary environment. I was an expert in my specialty, but that was one of the downsides: I was only modifying my area of code. Was I learning about different types of technology in a closed system? Was my skillset still marketable? Was I going through the motions? Is this what I really want to continue to do?
I thought all of those things, and I wondered: Was the challenge and creativity still there?
Life went on, and I had major life changes. I left Nortel Networks and took a career break to focus on my family.
When I was ready to re-enter the workforce, that 6th-grader's questions lingered in my mind. Is this what I've always wanted to do? I applied for several jobs that appeared to be a good match, but the feedback I received from recruiters was that they were looking for people with five or more years of Java and Python skills. It seemed that the skills and knowledge I had acquired over the course of my 15-year career at Nortel were no longer in demand or in use.
### Challenges
My first challenge was figuring out how to leverage the skills I gained while working at a proprietary company. I noticed there had been a huge shift in IT from proprietary to open source. I decided to learn and teach myself Python because it was the most in-demand language. Once I started to learn Python, I realized I needed a project to gain experience and make myself more marketable.
The next challenge was figuring out how to gain project experience with my new knowledge of Python. Former colleagues and my husband directed me toward open source software. When I googled "open source project," I discovered there were hundreds of open source projects, ranging from very small (one contributor) ones, to communities of less than 50 people, to huge projects with hundreds of contributors all over the world.
I did a keyword search in GitHub of technical terms that fit my skillset and found several projects that matched. I decided to leverage my interests and networking background to make my first contribution to OpenStack. I also discovered the [Outreachy][1] program, which offers three-month paid internships to people who are under-represented in tech.
### Lessons learned
One of the first things I learned is that I could contribute in many different ways. I could contribute to documentation and user design. I could also contribute by writing test cases. These are skillsets I developed over my career, and I didn't need five years of experience to contribute. All I needed was the commitment and drive to make a contribution.
After my first contribution to OpenStack was merged into the release, I was accepted into the Outreachy program. One of the best things about Outreachy is the mentor I was assigned to help me navigate the open source world.
Here are three other valuable lessons I learned that might help others who are interested in breaking into the open source world:
**Be persistent.** Be persistent in finding the right open source projects. Look for projects that match your core skillset. Also, look for ones that have a code of conduct and that are welcoming to newcomers—especially those with a getting started guide for newcomers. Be persistent in engaging in the community.
**Be patient.** Adjusting to open source takes time. Engaging in the community takes time. Giving thoughtful and meaningful feedback takes time, and reading and considering feedback you receive takes time.
**Participate in the community.** You don't have to have permission to work on a certain technology or a certain area. You can decide what you would like to work on and dive in.
Petra Sargent will present [You Can Teach an Old Dog New Tricks: Moving From Proprietary to Open Source][2] at the 20th annual [OSCON][3] event, July 16-19 in Portland, Oregon.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/career-move
作者:[Petra Sargent][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/psargent
[1]:https://www.outreachy.org/
[2]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/307631
[3]:https://conferences.oreilly.com/oscon/oscon-or


What Game of Thrones can teach us about open innovation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_thinklaterally_520x292.jpg?itok=JkbRl5KU)
You might think the only synergy one can find in Game of Thrones is that between Jaime Lannister and his sister, Cersei. Characters in the show's rotating cast don't see many long term relationships, as they're killed off, betrayed, and otherwise trading loyalty in an effort to stay alive. Even the Stark children, siblings suffering from the deaths of their parents, don't really get along most of the time.
But there's something about the chaotic free-for-all of constantly shifting loyalties in Game of Thrones that lends itself to a thought exercise: How can we always be aligning disparate positions in order to innovate?
Here are three ways Game of Thrones illustrates behaviors that lead to innovation.
### Join forces
Arya Stark has no loyalties. Through the death of her parents and separation from her siblings, young Arya demonstrates courage in pursuing an education with the faceless man. And she's rewarded for her courage with the development of seemingly supernatural abilities.
Arya's hate for the people on her list has her innovating left and right in an attempt to get closer to them. As the audience, we're on Arya's side; despite her violent and deadly methods, we identify with her attempts to overcome hardship. Her determination makes us loyal fans, and in an open organization, courage and determination like hers would be rewarded with some well-deserved influence.
Being loyal and helpful to driven people like Arya will help you and (by extension) your organization innovate. Passion is infectious.
### Be nimble
The Lannisters represent a traditional management structure that forcibly resists innovation. Their resistance is usually the result of their fear of change.
Without a doubt, change is scary—especially to people who wield power in an organization. Losing status causes us fear, because in our evolutionary and social history, losing status could mean that we would be unable to survive. But look to Tyrion as an example of how to thrive once status is lost.
Tyrion is cast out (demoted) by his family (the senior executive team). Instead of lamenting his loss of power, he seeks out a community (by the side of Daenerys) that values (and can utilize) his unique skills, connections, and influences. His resilience in the face of being cast out of Casterly Rock is the perfect metaphor for how innovation occurs: It's iterative and never straightforward. It requires resilience. A more open source way to say this would be: "fail forward," or "release early, release often."
### Score resources
Daenerys Targaryen embodies all the necessary traits for successful innovation. She can be seen as a model for the kind of employee that thrives in an open organization. What the Mother of Dragons needs, the Mother of Dragons gets, and she doesn't compromise her ideals to do it.
Whether freeing slaves (and then asking for their help) or forming alliances to acquire transport vehicles she's never seen before, Daenerys is resourceful. In an open organization, a staff member needs to have the wherewithal to get things done. Colleagues (even the entire organization) may not always share your priorities, but innovation happens when people take risks. By becoming a savvy negotiator like Khaleesi and developing a willingness to trade a lot for a little (she's been known to do favors for the mere promise of loyalty), you can get things done, fail forward, and innovate.
Courage, resilience, and resourcefulness are necessary traits for innovating in an open organization. What else can Game of Thrones teach us about working—and succeeding—openly?
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/7/open-innovation-lessons-game-of-thrones
作者:[Laura Hilliger][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/laurahilliger

translating---geekpi
Keeping (financial) score with Ledger
======
I've used [Ledger CLI][1] to keep track of my finances since 2005, when I moved to Canada. I like the plain-text approach, and its support for virtual envelopes means that I can reconcile both my bank account balances and my virtual allocations to different categories. Here's how we use those virtual envelopes to manage our finances separately.
Every month, I have an entry that moves things from my buffer of living expenses to various categories, including an allocation for household expenses. W- doesn't ask for a lot, so I take care to be frugal with the difference between that and the cost of, say, living on my own. The way we handle it is that I cover a fixed amount, and this is credited by whatever I pay for groceries. Since our grocery total is usually less than the amount I budget for household expenses, any difference just stays on the tab. I used to write him cheques to even it out, but lately I just pay for the occasional additional large expense.
Here's a sample envelope allocation:
```
2014.10.01 * Budget
[Envelopes:Living]
[Envelopes:Household] $500
;; More lines go here
```
Here's one of the envelope rules I've set up. This one encourages me to classify expenses properly. All expenses are taken out of my “Play” envelope.
```
= /^Expenses/
(Envelopes:Play) -1.0
```
This one reimburses the “Play” envelope for household expenses, moving the amount from the “Household” envelope into the “Play” one.
```
= /^Expenses:House$/
(Envelopes:Play) 1.0
(Envelopes:Household) -1.0
```
I have a regular set of expenses that simulate the household expenses coming out of my budget. For example, here's the one for October.
```
2014.10.1 * House
Expenses:House
Assets:Household $-500
```
And this is what a grocery transaction looks like:
```
2014.09.28 * No Frills
Assets:Household:Groceries $70.45
Liabilities:MBNA:September $-70.45
```
Then `ledger bal Assets:Household` will tell me if I owe him money (negative balance) or not. If I pay for something large (ex: plane tickets, plumbing), the regular household expense budget gradually reduces that balance.
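The netting described above can be sketched in plain code (a hypothetical helper, not part of Ledger; the figures are illustrative):

```javascript
// Hypothetical sketch of the household-envelope netting described above.
// Each month the budget simulates a fixed household expense against
// Assets:Household; grocery payments credit it back. A negative balance
// means money is owed.
function householdBalance(monthlyBudget, months, groceryPayments) {
  const paid = groceryPayments.reduce((sum, amount) => sum + amount, 0)
  return paid - monthlyBudget * months
}

console.log(householdBalance(500, 1, [70.45, 120.0])) // ≈ -309.55, i.e. money owed
```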
I picked up the trick of adding a month label to my credit card transactions from W-, who also uses Ledger to track his transactions. It lets me double-check the balance of a statement and see if the previous statement has been properly cleared.
It's a bit of a weird use of the assets category, but it works out for me mentally.
Using Ledger to track it in this way lets me keep track of our grocery expenses and the difference between what I've actually paid and what I've budgeted for. If I end up spending more than I expected, I can move virtual money from more discretionary envelopes, so my budget always stays balanced.
Ledger's a powerful tool. Pretty geeky, but maybe more descriptions of workflow might help people who are figuring things out!
--------------------------------------------------------------------------------
via: http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/
作者:[Sacha Chua][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://sachachua.com
[1]:http://www.ledger-cli.org/
[2]:http://sachachua.com/blog/category/finance/
[3]:http://sachachua.com/blog/tag/ledger/
[4]:http://pages.sachachua.com/sharing/blog.html?url=http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/
[5]:http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/#comments

Translating by qhwdw
# [Google launches TensorFlow-based vision recognition kit for RPi Zero W][26]

Translating by qhwdw
Running a Python application on Kubernetes
============================================================

Translating by qhwdw
JavaScript Router
======
There are a lot of frameworks/libraries to build single page applications, but I wanted something more minimal. I've come up with a solution and I just wanted to share it 🙂
```
class Router {
  constructor() {
    this.routes = []
  }

  handle(pattern, handler) {
    this.routes.push({ pattern, handler })
  }

  exec(pathname) {
    for (const route of this.routes) {
      if (typeof route.pattern === 'string') {
        if (route.pattern === pathname) {
          return route.handler()
        }
      } else if (route.pattern instanceof RegExp) {
        const result = pathname.match(route.pattern)
        if (result !== null) {
          const params = result.slice(1).map(decodeURIComponent)
          return route.handler(...params)
        }
      }
    }
  }
}
const router = new Router()
router.handle('/', homePage)
router.handle(/^\/users\/([^\/]+)$/, userPage)
router.handle(/^\//, notFoundPage)
function homePage() {
  return 'home page'
}

function userPage(username) {
  return `${username}'s page`
}

function notFoundPage() {
  return 'not found page'
}
console.log(router.exec('/')) // home page
console.log(router.exec('/users/john')) // john's page
console.log(router.exec('/foo')) // not found page
```
To use it you add handlers for a URL pattern. This pattern can be a simple string or a regular expression. Using a string will match exactly that, but a regular expression allows you to do fancy things like capture parts from the URL as seen with the user page or match any URL as seen with the not found page.
I'll explain what that `exec` method does. As I said, the URL pattern can be a string or a regular expression, so it first checks for a string. If the pattern is equal to the given pathname, it returns the result of calling the handler. If it is a regular expression, we match it against the given pathname. If it matches, it returns the result of calling the handler, passing it the captured parameters.
### Working Example
That example just logs to the console. Let's try to integrate it into a page and see something.
```
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Router Demo</title>
  <link rel="shortcut icon" href="data:,">
  <script src="/main.js" type="module"></script>
</head>
<body>
  <header>
    <a href="/">Home</a>
    <a href="/users/john_doe">Profile</a>
  </header>
  <main></main>
</body>
</html>
```
This is the `index.html`. For single page applications, you must do special work on the server side because all unknown paths should return this `index.html`. For development, I'm using an npm tool called [serve][1]. This tool serves static content. With the flag `-s`/`--single` you can serve single page applications.
With [Node.js][2] and npm (comes with Node) installed, run:
```
npm i -g serve
serve -s
```
That HTML file loads the script `main.js` as a module. It has a simple `<header>` and a `<main>` element in which we'll render the corresponding page.
Inside the `main.js` file:
```
const main = document.querySelector('main')
const result = router.exec(location.pathname)
main.innerHTML = result
```
We call `router.exec()` passing the current pathname and setting the result as HTML in the main element.
If you go to localhost and play with it you'll see that it works, but not as you'd expect from a SPA. Single page applications shouldn't refresh when you click on links.
We'll have to attach event listeners to each anchor link click, prevent the default behavior, and do the correct rendering. Because a single page application is something dynamic, you can expect anchor links to be created on the fly, so to add the event listeners I'll use a technique called [event delegation][3].
I'll attach a click event listener to the whole document and check if that click was on an anchor link (or inside one).
In the `Router` class I'll have a method that registers a callback to run every time we click on a link or a “popstate” event occurs. The popstate event is dispatched every time you use the browser back or forward buttons.
To the callback we'll pass that same `router.exec(location.pathname)` for convenience.
```
class Router {
  // ...

  install(callback) {
    const execCallback = () => {
      callback(this.exec(location.pathname))
    }

    document.addEventListener('click', ev => {
      if (ev.defaultPrevented
        || ev.button !== 0
        || ev.ctrlKey
        || ev.shiftKey
        || ev.altKey
        || ev.metaKey) {
        return
      }

      const a = ev.target.closest('a')
      if (a === null
        || (a.target !== '' && a.target !== '_self')
        || a.hostname !== location.hostname) {
        return
      }

      ev.preventDefault()
      if (a.href !== location.href) {
        history.pushState(history.state, document.title, a.href)
        execCallback()
      }
    })

    addEventListener('popstate', execCallback)
    execCallback()
  }
}
```
For link clicks, besides calling the callback, we update the URL with `history.pushState()`.
We'll move the render we previously did in the main element into the install callback.
```
router.install(result => {
main.innerHTML = result
})
```
#### DOM
The handlers you pass to the router don't need to return a `string`. If you need more power, you can return actual DOM. Ex:
```
const homeTmpl = document.createElement('template')
homeTmpl.innerHTML = `
  <div class="container">
    <h1>Home Page</h1>
  </div>
`

function homePage() {
  const page = homeTmpl.content.cloneNode(true)
  // You can do `page.querySelector()` here...
  return page
}
```
And now in the install callback you can check if the result is a `string` or a `Node`.
```
router.install(result => {
  if (typeof result === 'string') {
    main.innerHTML = result
  } else if (result instanceof Node) {
    main.innerHTML = ''
    main.appendChild(result)
  }
})
```
That will cover the basic features. I wanted to share this because I'll use this router in upcoming blog posts.
I've published it as an [npm package][4].
--------------------------------------------------------------------------------
via: https://nicolasparada.netlify.com/posts/js-router/
作者:[Nicolás Parada][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://nicolasparada.netlify.com/
[1]:https://npm.im/serve
[2]:https://nodejs.org/
[3]:https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Events#Event_delegation
[4]:https://www.npmjs.com/package/@nicolasparada/router

Everything old is new again: Microservices DXC Blogs
======
![](https://csccommunity.files.wordpress.com/2018/05/old-building-with-modern-addition.jpg?w=610)
If I told you about a software architecture in which components of an application provided services to other components via a communications protocol over a network you would say it was…
Well, it depends. If you got your start programming in the 90s, you'd say I just defined a [Service-Oriented Architecture (SOA)][1]. But if you're younger and cut your developer teeth on the cloud, you'd say: “Oh, you're talking about [microservices][2].”
You'd both be right. To really understand the differences, you need to dive deeper into these architectures.
In SOA, a service is a function which is well-defined, self-contained, and doesn't depend on the context or state of other services. There are two kinds of services: a service consumer, which requests a service from the other kind, a service provider. An SOA service can play both roles.
SOA services can trade data with each other. Two or more services can also coordinate with each other. These services carry out basic jobs such as creating a user account, providing login functionality, or validating a payment.
SOA isn't so much about modularizing an application as it is about composing an application by integrating distributed, separately maintained and deployed components. These components run on servers.
Early versions of SOA used object-oriented protocols to communicate with each other. For example, Microsoft's [Distributed Component Object Model (DCOM)][3] and [Object Request Brokers (ORBs)][4] use the [Common Object Request Broker Architecture (CORBA)][5] specification.
Later versions used messaging services such as [Java Message Service (JMS)][6] or [Advanced Message Queuing Protocol (AMQP)][7]. These service connections are called Enterprise Service Buses (ESB). Over these buses, data, almost always in eXtensible Markup Language (XML) format, is transmitted and received.
[Microservices][2] is an architectural style where applications are made up from loosely coupled services or modules. It lends itself to the Continuous Integration/Continuous Deployment (CI/CD) model of developing large, complex applications. An application is the sum of its modules.
Each microservice provides an application programming interface (API) endpoint. These are connected by lightweight protocols such as [REpresentational State Transfer (REST)][8], or [gRPC][9]. Data tends to be represented by [JavaScript Object Notation (JSON)][10] or [Protobuf][11].
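As a rough illustration (not from the original post; the “create user” route and payload shape are invented), a microservice endpoint of this style boils down to a small, self-contained handler that consumes and produces JSON:

```javascript
// Hypothetical sketch of a JSON-over-REST microservice handler.
// The "create user" operation and payload shape are illustrative only.
function handleCreateUser(requestBody) {
  let payload
  try {
    payload = JSON.parse(requestBody)
  } catch (err) {
    return { status: 400, body: JSON.stringify({ error: 'invalid JSON' }) }
  }
  if (!payload.name) {
    return { status: 400, body: JSON.stringify({ error: 'name is required' }) }
  }
  // A real service would persist the user; here we just echo an id back.
  return { status: 201, body: JSON.stringify({ id: 1, name: payload.name }) }
}

console.log(handleCreateUser('{"name":"Ada"}').status) // 201
```

Because the handler shares nothing with its callers beyond the JSON contract, it can be deployed, scaled, and replaced independently, which is the decoupling the architecture is after.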
Both architectures stand as an alternative to the older, monolithic style of architecture, where applications are built as single, autonomous units. For example, in a client-server model, a typical Linux, Apache, MySQL, PHP/Python/Perl (LAMP) server-side application would deal with HTTP requests, run sub-programs, and retrieve/update data from the underlying MySQL database. These are all tied closely together. When you change anything, you must build and deploy a new version.
With SOA, you may need to change several components, but never the entire application. With microservices, though, you can make changes one service at a time. With microservices, youre working with a true decoupled architecture.
Microservices are also lighter than SOA. While SOA services are deployed to servers and virtual machines (VMs), microservices are deployed in containers. The protocols are also lighter. This makes microservices more flexible than SOA. Hence, it works better with Agile shops.
So what does this mean? The long and short of it is that microservices are an SOA variation for container and cloud computing.
Old-style SOA isn't going away, but as we continue to move applications to containers, the microservice architecture will only grow more popular.
--------------------------------------------------------------------------------
via: https://blogs.dxc.technology/2018/05/08/everything-old-is-new-again-microservices/
作者:[Cloudy Weather][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://blogs.dxc.technology/author/steven-vaughan-nichols/
[1]:https://www.service-architecture.com/articles/web-services/service-oriented_architecture_soa_definition.html
[2]:http://microservices.io/
[3]:https://technet.microsoft.com/en-us/library/cc958799.aspx
[4]:https://searchmicroservices.techtarget.com/definition/Object-Request-Broker-ORB
[5]:http://www.corba.org/
[6]:https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html
[7]:https://www.amqp.org/
[8]:https://www.service-architecture.com/articles/web-services/representational_state_transfer_rest.html
[9]:https://grpc.io/
[10]:https://www.json.org/
[11]:https://github.com/google/protobuf/

How Graphics Cards Work
======
![AMD-Polaris][1]
Ever since 3dfx debuted the original Voodoo accelerator, no single piece of equipment in a PC has had as much of an impact on whether your machine could game as the humble graphics card. While other components absolutely matter, a top-end PC with 32GB of RAM, a $500 CPU, and PCIe-based storage will choke and die if asked to run modern AAA titles on a ten-year-old card at modern resolutions and detail levels. Graphics cards (also commonly referred to as GPUs, or graphics processing units) are critical to game performance, and we cover them extensively. But we don't often dive into what makes a GPU tick and how the cards function.
By necessity, this will be a high-level overview of GPU functionality and cover information common to AMD, Nvidia, and Intel's integrated GPUs, as well as any discrete cards Intel might build in the future. It should also be common to the mobile GPUs built by Apple, Imagination Technologies, Qualcomm, ARM, and other vendors.
### Why Don't We Run Rendering With CPUs?
The first point I want to address is why we dont use CPUs for rendering workloads in gaming in the first place. The honest answer to this question is that you can run rendering workloads directly on a CPU, at least in theory. Early 3D games that predate the widespread availability of graphics cards, like Ultima Underworld, ran entirely on the CPU. UU is a useful reference case for multiple reasons — it had a more advanced rendering engine than games like Doom, with full support for looking up and down, as well as then-advanced features like texture mapping. But this kind of support came at a heavy price — many people lacked a PC that could actually run the game.
![](https://www.extremetech.com/wp-content/uploads/2018/05/UU.jpg)
In the early days of 3D gaming, many titles like Half-Life and Quake II featured a software renderer to allow players without 3D accelerators to play the title. But the reason we dropped this option from modern titles is simple: CPUs are designed to be general-purpose microprocessors, which is another way of saying they lack the specialized hardware and capabilities that GPUs offer. A modern CPU could easily handle titles that tended to stutter when run in software 18 years ago, but no CPU on Earth could easily handle a modern AAA game from today if run in that mode. Not, at least, without some drastic changes to the scene, resolution, and various visual effects.
### What's a GPU?
A GPU is a device with a set of specific hardware capabilities that are intended to map well to the way that various 3D engines execute their code, including geometry setup and execution, texture mapping, memory access, and shaders. There's a relationship between the way 3D engines function and the way GPU designers build hardware. Some of you may remember that AMD's HD 5000 family used a VLIW5 architecture, while certain high-end GPUs in the HD 6000 family used a VLIW4 architecture. With GCN, AMD changed its approach to parallelism, in the name of extracting more useful performance per clock cycle.
![](https://www.extremetech.com/wp-content/uploads/2018/05/GPU-Evolution.jpg)
Nvidia first coined the term “GPU” with the launch of the original GeForce 256 and its support for performing hardware transform and lighting calculations on the GPU (this corresponded, roughly, to the launch of Microsoft's DirectX 7). Integrating specialized capabilities directly into hardware was a hallmark of early GPU technology. Many of those specialized technologies are still employed (in very different forms), because it's more power-efficient and faster to have dedicated resources on-chip for handling specific types of workloads than it is to attempt to handle all of the work in a single array of programmable cores.
There are a number of differences between GPU and CPU cores, but at a high level, you can think about them like this. CPUs are typically designed to execute single-threaded code as quickly and efficiently as possible. Features like SMT / Hyper-Threading improve on this, but we scale multi-threaded performance by stacking more high-efficiency single-threaded cores side-by-side. AMD's 32-core / 64-thread Epyc CPUs are the largest you can buy today. To put that in perspective, the lowest-end Pascal GPU from Nvidia has 384 cores. A “core” in GPU parlance refers to a much smaller unit of processing capability than in a typical CPU.
**Note:** You cannot compare or estimate relative gaming performance between AMD and Nvidia simply by comparing the number of GPU cores. Within the same GPU family (for example, Nvidia's GeForce GTX 10 series, or AMD's RX 4xx or 5xx family), a higher GPU core count means that GPU is more powerful than a lower-end card.
The reason you can't draw immediate conclusions on GPU performance between manufacturers or core families based solely on core counts is because different architectures are more or less efficient. Unlike CPUs, GPUs are designed to work in parallel. Both AMD and Nvidia structure their cards into blocks of computing resources. Nvidia calls these blocks an SM (Streaming Multiprocessor), while AMD refers to them as a Compute Unit.
![](https://www.extremetech.com/wp-content/uploads/2018/05/PascalSM.png)
Each block contains a group of cores, a scheduler, a register file, instruction cache, texture and L1 cache, and texture mapping units. The SM / CU can be thought of as the smallest functional block of the GPU. It doesn't contain literally everything — video decode engines, render outputs required for actually drawing an image on-screen, and the memory interfaces used to communicate with onboard VRAM are all outside its purview — but when AMD refers to an APU as having 8 or 11 Vega Compute Units, this is the (equivalent) block of silicon they're talking about. And if you look at a block diagram of a GPU, any GPU, you'll notice that it's the SM/CU that's duplicated a dozen or more times in the image.
![](https://www.extremetech.com/wp-content/uploads/2016/11/Pascal-Diagram.jpg)
The higher the number of SM/CU units in a GPU, the more work it can perform in parallel per clock cycle. Rendering is a type of problem that's sometimes referred to as “embarrassingly parallel,” meaning it has the potential to scale upwards extremely well as core counts increase.
When we discuss GPU designs, we often use a format that looks something like this: 4096:160:64. The GPU core count is the first number. The larger it is, the faster the GPU, provided we're comparing within the same family (GTX 970 versus GTX 980 versus GTX 980 Ti, RX 560 versus RX 580, and so on).
### Texture Mapping and Render Outputs
There are two other major components of a GPU: texture mapping units and render outputs. The number of texture mapping units in a design dictates its maximum texel output and how quickly it can address and map textures on to objects. Early 3D games used very little texturing, because the job of drawing 3D polygonal shapes was difficult enough. Textures aren't actually required for 3D gaming, though the list of games that don't use them in the modern age is extremely small.
The number of texture mapping units in a GPU is signified by the second figure in the 4096:160:64 metric. AMD, Nvidia, and Intel typically shift these numbers equivalently as they scale a GPU family up and down. In other words, you won't really find a scenario where one GPU has a 4096:160:64 configuration while a GPU above or below it in the stack is a 4096:320:64 configuration. Texture mapping can absolutely be a bottleneck in games, but the next-highest GPU in the product stack will typically offer at least more GPU cores and texture mapping units (whether higher-end cards have more ROPs depends on the GPU family and the card configuration).
Render outputs (also sometimes called raster operations pipelines) are where the GPU's output is assembled into an image for display on a monitor or television. The number of render outputs multiplied by the clock speed of the GPU controls the pixel fill rate. A higher number of ROPs means that more pixels can be output simultaneously. ROPs also handle antialiasing, and enabling AA — especially supersampled AA — can result in a game that's fill-rate limited.
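That fill-rate relationship is simple enough to sketch (the ROP count and clock below are illustrative, not a specific card's specs):

```javascript
// Pixel fill rate = ROP count × core clock, in pixels per second.
// Illustrative figures only.
function pixelFillRate(rops, clockHz) {
  return rops * clockHz
}

// 64 ROPs at 1 GHz → 64 Gpixels/s
console.log(pixelFillRate(64, 1.0e9) / 1e9)
```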
### Memory Bandwidth, Memory Capacity
The last components we'll discuss are memory bandwidth and memory capacity. Memory bandwidth refers to how much data can be copied to and from the GPU's dedicated VRAM buffer per second. Many advanced visual effects (and higher resolutions more generally) require more memory bandwidth to run at reasonable frame rates because they increase the total amount of data being copied into and out of the GPU core.
In some cases, a lack of memory bandwidth can be a substantial bottleneck for a GPU. AMD's APUs like the Ryzen 5 2400G are heavily bandwidth-limited, which means increasing your DDR4 clock rate can have a substantial impact on overall performance. The choice of game engine can also have a substantial impact on how much memory bandwidth a GPU needs to avoid this problem, as can a game's target resolution.
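The headline bandwidth number itself comes from the memory interface: the effective transfer rate multiplied by the bus width. A sketch with illustrative figures (not a specific card's specs):

```javascript
// Memory bandwidth (GB/s) = effective transfers/s × bus width in bits ÷ 8 bits per byte.
// The 8 GT/s and 256-bit figures below are illustrative.
function memoryBandwidthGBs(effectiveMTps, busWidthBits) {
  return (effectiveMTps * 1e6 * busWidthBits / 8) / 1e9
}

console.log(memoryBandwidthGBs(8000, 256)) // 256 GB/s
```

This is why raising the memory clock on a bandwidth-starved part (like the APUs mentioned above) translates directly into performance: every extra transfer per second scales the whole product.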
The total amount of on-board memory is another critical factor in GPUs. If the amount of VRAM needed to run at a given detail level or resolution exceeds available resources, the game will often still run, but it'll have to use the CPU's main memory for storing additional texture data — and it takes the GPU vastly longer to pull data out of DRAM as opposed to its onboard pool of dedicated VRAM. This leads to massive stuttering as the game staggers between pulling data from a quick pool of local memory and general system RAM.
One thing to be aware of is that GPU manufacturers will sometimes equip a low-end or midrange card with more VRAM than is otherwise standard as a way to charge a bit more for the product. We can't make an absolute prediction as to whether this makes the GPU more attractive because honestly, the results vary depending on the GPU in question. What we can tell you is that in many cases, it isn't worth paying more for a card if the only difference is a larger RAM buffer. As a rule of thumb, lower-end GPUs tend to run into other bottlenecks before they're choked by limited available memory. When in doubt, check reviews of the card and look for comparisons of whether a 2GB version is outperformed by the 4GB flavor or whatever the relevant amount of RAM would be. More often than not, assuming all else is equal between the two solutions, you'll find the higher RAM loadout not worth paying for.
Check out our [ExtremeTech Explains][2] series for more in-depth coverage of today's hottest tech topics.
--------------------------------------------------------------------------------
via: https://www.extremetech.com/gaming/269335-how-graphics-cards-work
作者:[Joel Hruska][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.extremetech.com/author/jhruska
[1]:https://www.extremetech.com/wp-content/uploads/2016/07/AMD-Polaris-640x353.jpg
[2]:http://www.extremetech.com/tag/extremetech-explains

6 Open Source AI Tools to Know
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1920.jpg?itok=HarDnwVX)
In open source, no matter how original your own idea seems, it is always wise to see if someone else has already executed the concept. For organizations and individuals interested in leveraging the growing power of artificial intelligence (AI), many of the best tools are not only free and open source, but, in many cases, have already been hardened and tested.
At leading companies and non-profit organizations, AI is a huge priority, and many of these companies and organizations are open sourcing valuable tools. Here is a sampling of free, open source AI tools available to anyone.
**Acumos.** [Acumos AI][1] is a platform and open source framework that makes it easy to build, share, and deploy AI apps. It standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment. This frees data scientists and model trainers to focus on their core competencies rather than endlessly customizing, modeling, and training an AI implementation.
Acumos is part of the[LF Deep Learning Foundation][2], an organization within The Linux Foundation that supports open source innovation in artificial intelligence, machine learning, and deep learning. The goal is to make these critical new technologies available to developers and data scientists, including those who may have limited experience with deep learning and AI. The LF Deep Learning Foundation just [recently approved a project lifecycle and contribution process][3] and is now accepting proposals for the contribution of projects.
**Facebooks Framework.** Facebook[has open sourced][4] its central machine learning system designed for artificial intelligence tasks at large scale, and a series of other AI technologies. The tools are part of a proven platform in use at the company. Facebook has also open sourced a framework for deep learning and AI [called Caffe2][5].
**Speaking of Caffe.** Yahoo also released its key AI software under an open source license. The[CaffeOnSpark tool][6] is based on deep learning, a branch of artificial intelligence particularly useful in helping machines recognize human speech or the contents of a photo or video. Similarly, IBMs machine learning program known as [SystemML][7] is freely available to share and modify through the Apache Software Foundation.
**Googles Tools.** Google spent years developing its [TensorFlow][8] software framework to support its AI software and other predictive and analytics programs. TensorFlow is the engine behind several Google tools you may already use, including Google Photos and the speech recognition found in the Google app.
Two [AIY kits][9] open sourced by Google let individuals easily get hands-on with artificial intelligence. Focused on computer vision and voice assistants, the two kits come as small self-assembly cardboard boxes with all the components needed for use. The kits are currently available at Target in the United States, and are based on the open source Raspberry Pi platform — more evidence of how much is happening at the intersection of open source and AI.
**H2O.ai.** I [previously covered][10] H2O.ai, which has carved out a niche in the machine learning and artificial intelligence arena because its primary tools are free and open source. You can get the main H2O platform and Sparkling Water, which works with Apache Spark, simply by [downloading][11] them. These tools operate under the Apache 2.0 license, one of the most flexible open source licenses available, and you can even run them on clusters powered by Amazon Web Services (AWS) and others for just a few hundred dollars.
**Microsoft Onboard.** “Our goal is to democratize AI to empower every person and every organization to achieve more,” Microsoft CEO Satya Nadella [has said][12]. With that in mind, Microsoft is continuing to iterate its [Microsoft Cognitive Toolkit][13]. It's an open source software framework that competes with tools such as TensorFlow and Caffe. Cognitive Toolkit works with both Windows and Linux on 64-bit platforms.
“Cognitive Toolkit enables enterprise-ready, production-grade AI by allowing users to create, train, and evaluate their own neural networks that can then scale efficiently across multiple GPUs and multiple machines on massive data sets,” reports the Cognitive Toolkit Team.
Learn more about AI in this new ebook from The Linux Foundation. [Open Source AI: Projects, Insights, and Trends by Ibrahim Haddad][14] surveys 16 popular open source AI projects looking in depth at their histories, codebases, and GitHub contributions. [Download the free ebook now.][14]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/6/6-open-source-ai-tools-know
作者:[Sam Dean][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.acumos.org/
[2]:https://www.linuxfoundation.org/projects/deep-learning/
[3]:https://www.linuxfoundation.org/blog/lf-deep-learning-foundation-announces-project-contribution-process/
[4]:https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/
[5]:https://venturebeat.com/2017/04/18/facebook-open-sources-caffe2-a-new-deep-learning-framework/
[6]:http://yahoohadoop.tumblr.com/post/139916563586/caffeonspark-open-sourced-for-distributed-deep
[7]:https://systemml.apache.org/
[8]:https://www.tensorflow.org/
[9]:https://www.techradar.com/news/google-assistant-sweetens-raspberry-pi-with-ai-voice-control
[10]:https://www.linux.com/news/sparkling-water-bridging-open-source-machine-learning-and-apache-spark
[11]:http://www.h2o.ai/download
[12]:https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/02/10/microsoft-cognitive-toolkit-cntk/
[13]:https://www.microsoft.com/en-us/cognitive-toolkit/
[14]:https://www.linuxfoundation.org/publications/open-source-ai-projects-insights-and-trends/

View File

@ -1,66 +0,0 @@
Mesos and Kubernetes: It's Not a Competition
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)
The roots of Mesos can be traced back to 2009 when Ben Hindman was a PhD student at the University of California, Berkeley working on parallel programming. They were doing massive parallel computations on 128-core chips, trying to solve multiple problems such as making software and libraries run more efficiently on those chips. He started talking with fellow students to see if they could borrow ideas from parallel processing and multiple threads and apply them to cluster management.
“Initially, our focus was on Big Data,” said Hindman. Back then, Big Data was really hot and Hadoop was one of the hottest technologies. “We recognized that the way people were running things like Hadoop on clusters was similar to the way that people were running multiple threaded applications and parallel applications,” said Hindman.
However, it was not very efficient, so they started thinking how it could be done better through cluster management and resource management. “We looked at many different technologies at that time,” Hindman recalled.
Hindman and his colleagues, however, decided to adopt a novel approach. “We decided to create a lower level of abstraction for resource management, and run other services on top of that to do scheduling and other things,” said Hindman, “That's essentially the essence of Mesos -- to separate out the resource management part from the scheduling part.”
It worked, and Mesos has been going strong ever since.
### The project goes to Apache
The project was founded in 2009. In 2010 the team decided to donate the project to the Apache Software Foundation (ASF). It was incubated at Apache and in 2013, it became a Top-Level Project (TLP).
There were many reasons why the Mesos community chose Apache Software Foundation, such as the permissiveness of Apache licensing, and the fact that they already had a vibrant community of other such projects.
It was also about influence. A lot of people working on Mesos were also involved with Apache, and many people were working on projects like Hadoop. At the same time, many folks from the Mesos community were working on other Big Data projects like Spark. This cross-pollination led all three projects -- Hadoop, Mesos, and Spark -- to become ASF projects.
It was also about commerce. Many companies were interested in Mesos, and the developers wanted it to be maintained by a neutral body instead of being a privately owned project.
### Who is using Mesos?
A better question would be, who isn't? Everyone from Apple to Netflix is using Mesos. However, Mesos had its share of challenges that any technology faces in its early days. “Initially, I had to convince people that there was this new technology called containers that could be interesting as there is no need to use virtual machines,” said Hindman.
The industry has changed a great deal since then, and now every conversation around infrastructure starts with containers -- thanks to the work done by Docker. Today, no convincing is needed, but even in the early days of Mesos, companies like Apple, Netflix, and PayPal saw the potential. They knew they could take advantage of containerization technologies in lieu of virtual machines. “These companies understood the value of containers before it became a phenomenon,” said Hindman.
These companies saw that they could have a bunch of containers, instead of virtual machines. All they needed was something to manage and run these containers, and they embraced Mesos. Some of the early users of Mesos included Apple, Netflix, PayPal, Yelp, OpenTable, and Groupon.
“Most of these organizations are using Mesos for just running arbitrary services,” said Hindman, “But there are many that are using it for doing interesting things with data processing, streaming data, analytics workloads and applications.”
One of the reasons these companies adopted Mesos was the clear separation between the resource management layers. Mesos offers the flexibility that companies need when dealing with containerization.
“One of the things we tried to do with Mesos was to create a layering so that people could take advantage of our layer, but also build whatever they wanted to on top,” said Hindman. “I think that's worked really well for the big organizations like Netflix and Apple.”
However, not every company is a tech company; not every company has or should have this expertise. To help those organizations, Hindman co-founded Mesosphere to offer services and solutions around Mesos. “We ultimately decided to build DC/OS for those organizations which didn't have the technical expertise or didn't want to spend their time building something like that on top.”
### Mesos vs. Kubernetes?
People often think in terms of x versus y, but it's not always a question of one technology versus another. Most technologies overlap in some areas, and they can also be complementary. “I don't tend to see all these things as competition. I think some of them actually can work in complementary ways with one another,” said Hindman.
“In fact the name Mesos stands for middle; it's kind of a middle OS,” said Hindman, “We have the notion of a container scheduler that can be run on top of something like Mesos. When Kubernetes first came out, we actually embraced it in the Mesos ecosystem and saw it as another way of running containers in DC/OS on top of Mesos.”
Mesos also resurrected a project called [Marathon][1] (a container orchestrator for Mesos and DC/OS), which they have made a first-class citizen in the Mesos ecosystem. However, Marathon does not really compare with Kubernetes. “Kubernetes does a lot more than what Marathon does, so you can't swap them with each other,” said Hindman, “At the same time, we have done many things in Mesos that are not in Kubernetes. So, these technologies are complementary to each other.”
Instead of viewing such technologies as adversarial, they should be seen as beneficial to the industry. It's not duplication of technologies; it's diversity. According to Hindman, “it could be confusing for the end user in the open source space because it's hard to know which technologies are suitable for what kind of workload, but that's the nature of the beast called Open Source.”
That just means there are more choices, and everybody wins.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition
作者:[Swapnil Bhartiya][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://mesosphere.github.io/marathon/

View File

@ -0,0 +1,143 @@
Using Ledger for YNAB-like envelope budgeting
======
### Bye bye Elbank
I have to start this post with this: I will not be actively maintaining [Elbank][1] anymore, simply because I switched back to [Ledger][2]. If someone wants to take over, please contact me!
The main reason for switching is budgeting. While Elbank was a cool experiment, it is not an accounting software, and inherently lacks support for powerful budgeting.
When I started working on Elbank as a replacement for Ledger, I was looking for a reporting tool within Emacs that would fetch bank transactions automatically, so I wouldn't have to enter transactions by hand (this is a seriously tedious task, and I grew tired of doing it after roughly two years, and finally gave up).
Since then, I learned about ledger-autosync and boobank, which I use to sync my bank statements with Ledger (more about that in another post).
### YNABs way of budgeting
I only came across [YNAB][3] recently. While I won't use their software (being a non-free web application, and, you know… there's no `M-x ynab`), I think that the principles behind it are really appealing for personal budgeting. I encourage you to [read more about it][4] (or grab a [copy of the book][5], it's great), but here's the idea.
  1. **Budget every euro**: Quite simple once you get it. Every single Euro you have should be in a budget envelope. You should assign a job to every Euro you earn (that's called [zero-based][6], [envelope system][7]).
  2. **Embrace your true expenses**: Plan for larger and less frequent expenses, so when a yearly bill arrives, or your car breaks down, you'll be covered.
  3. **Roll with the punches**: Address overspending as it happens by taking money overspent from another envelope. As long as you keep budgeting, you're succeeding.
  4. **Age your money**: Spend less than you earn, so your money stays in the bank account longer. As you do that, the age of your money will grow, and once you reach the goal of spending money that is at least one month old, you won't worry about that next bill.
### Implementation in Ledger
I assume that you are familiar with Ledger, but if not I recommend reading its great [introduction][8] and [tutorial][9].
The implementation in Ledger uses plain double-entry accounting. I took most of it from [Sacha][10], with some minor differences.
#### Budgeting new money
After each income transaction, I budget the new money:
```
2018-06-12 Employer
Assets:Bank:Checking 1600.00 EUR
Income:Salary -1600.00 EUR
2018-06-12 Budget
[Assets:Budget:Food] 400.00 EUR
[Assets:Budget:Rent] 600.00 EUR
[Assets:Budget:Utilities] 600.00 EUR
[Equity:Budget] -1600.00 EUR
```
Did you notice the square brackets around the accounts of the budget transaction? It's a feature Ledger calls [virtual postings][11]. These postings are not considered real, and won't be present in any report that uses the `--real` flag. This is exactly what we want, since it's a budget allocation and not a “real” transaction. Therefore we'll use the `--real` flag for all reports except for our budget report.
#### Automatically crediting budget accounts when spending money
Next, we need to credit the budget accounts each time we spend money. Ledger has another neat feature called [automated transactions][12] for this:
```
= /Expenses/
[Assets:Budget:Unbudgeted] -1.0
[Equity:Budget] 1.0
= /Expenses:Food/
[Assets:Budget:Food] -1.0
[Assets:Budget:Unbudgeted] 1.0
= /Expenses:Rent/
[Assets:Budget:Rent] -1.0
[Assets:Budget:Unbudgeted] 1.0
= /Expenses:Utilities/
[Assets:Budget:Utilities] -1.0
[Assets:Budget:Unbudgeted] 1.0
```
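Adding a new envelope later, say one for transport costs, follows the same pattern (a sketch; `Assets:Budget:Transport` is an illustrative account name, not part of the original setup): budget money into it after each income transaction, and add a matching automated transaction:

```
= /Expenses:Transport/
    [Assets:Budget:Transport] -1.0
    [Assets:Budget:Unbudgeted] 1.0
```

Any `Expenses:Transport` posting is then taken out of its own envelope rather than left sitting in `Assets:Budget:Unbudgeted`.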
Every expense is taken out of the `Assets:Budget:Unbudgeted` account by default.
This forces me to budget properly, as `Assets:Budget:Unbudgeted` should always be 0 (if it is not, I immediately know that something is wrong).
All other automated transactions take money out of the `Assets:Budget:Unbudgeted` account instead of the `Equity:Budget` account.
#### A Budget report
This is the final piece of the puzzle. Here's the budget report command:
```
ledger --empty -S -T -f ledger.dat bal ^assets:budget
```
If we have the following transactions:
```
2018/06/12 Groceries store
Expenses:Food 123.00 EUR
Assets:Bank:Checking
2018/06/12 Landlord
Expenses:Rent 600.00 EUR
Assets:Bank:Checking
2018/06/12 Internet provider
Expenses:Utilities:Internet 40.00 EUR
Assets:Bank:Checking
```
Here's what the report looks like:
```
837.00 EUR Assets:Budget
560.00 EUR Utilities
277.00 EUR Food
0 Rent
0 Unbudgeted
--------------------
837.00 EUR
```
### Conclusion
Ledger is amazingly powerful, and provides a great framework for YNAB-like budgeting. In a future post I'll explain how I automatically import my bank transactions using a mix of `ledger-autosync` and `weboob`.
--------------------------------------------------------------------------------
via: https://emacs.cafe/ledger/emacs/ynab/budgeting/2018/06/12/elbank-ynab.html
作者:[Nicolas Petton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://emacs.cafe/l
[1]:https://github.com/NicolasPetton/elbank
[2]:https://www.ledger-cli.org/
[3]:https://ynab.com
[4]:https://www.youneedabudget.com/method/
[5]:https://www.youneedabudget.com/book-order-now/
[6]:https://en.wikipedia.org/wiki/Zero-based_budgeting
[7]:https://en.wikipedia.org/wiki/Envelope_system
[8]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Introduction-to-Ledger
[9]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Ledger-Tutorial
[10]:http://sachachua.com/blog/2014/11/keeping-financial-score-ledger/
[11]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Virtual-postings
[12]:https://www.ledger-cli.org/3.0/doc/ledger3.html#Automated-Transactions

View File

@ -1,3 +1,4 @@
Translating by qhwdw
Getting started with Open edX to host your course
======

View File

@ -0,0 +1,102 @@
Bitcoin is a Cult — Adam Caudill
======
The Bitcoin community has changed greatly over the years; from technophiles that could explain a [Merkle tree][1] in their sleep, to speculators driven by the desire for a quick profit and blockchain startups seeking billion dollar valuations led by people who don't even know what a Merkle tree is. As the years have gone on, a zealotry has been building around Bitcoin and other cryptocurrencies, driven by people who see them as something far grander than they actually are; people who believe that normal (or fiat) currencies are becoming a thing of the past, and that cryptocurrencies will fundamentally change the world's economy.
Every year, their ranks grow, and their perception of cryptocurrencies becomes more grandiose, even as [novel uses][2] of the technology bring it to its knees. While I'm a firm believer that a well designed cryptocurrency could ease the flow of money across borders, and provide a stable option in areas of mass inflation, the reality is that we aren't there yet. In fact, it's the substantial instability in value that allows speculators to make money. Those that preach that the US Dollar and Euro are on their deathbed have utterly abandoned an objective view of reality.
### A little background…
I read the Bitcoin white-paper the day it was released: an interesting use of [Merkle trees][1] to create a public ledger and a fairly reasonable consensus protocol, it got the attention of many in the cryptography sphere for its novel properties. In the years since that paper was released, Bitcoin has become rather valuable, attracted many that see it as an investment, and gained a loyal (and vocal) following of people who think it'll change everything. This discussion is about the latter.
Yesterday, someone on Twitter posted the hash of a recent Bitcoin block, the thousands of Tweets and other conversations that followed have convinced me that Bitcoin has crossed the line into true cult territory.
It all started with this Tweet by Mark Wilcox:
> #00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a
> — Mark Wilcox (@mwilcox) June 19, 2018
The value posted is the hash of [Bitcoin block #528249][3]. The leading zeros are a result of the mining process; to mine a block you combine the contents of the block with a nonce (and other data), hash it, and it has to have at least a certain number of leading zeros to be considered valid. If it doesn't have the correct number, you change the nonce and try again. Repeat this until the hash has the right number of leading zeros, and you now have a valid block. The part that people got excited about is what follows, 21e800.
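The hash-and-retry loop described above can be sketched with ordinary command-line tools. This is a toy illustration only: real Bitcoin mining double-SHA256-hashes an 80-byte binary block header, and the real difficulty target is enormously stricter than the three leading hex zeros used here.

```shell
#!/bin/sh
# Toy proof-of-work: find a nonce so that sha256(data + nonce)
# starts with three hex zeros (a stand-in for the difficulty target).
data="block contents"
nonce=0
while :; do
    hash=$(printf '%s%s' "$data" "$nonce" | sha256sum | cut -d' ' -f1)
    case $hash in
        000*) break ;;           # enough leading zeros: a "valid" block
    esac
    nonce=$((nonce + 1))         # wrong prefix: change the nonce, try again
done
echo "nonce=$nonce hash=$hash"
```

Each additional required hex zero multiplies the expected work by 16, which is why brute-forcing well past the target, as speculated below, would be so remarkable.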
Some are claiming this is an intentional reference, that whoever mined this block actually went well beyond the current difficulty to not just brute-force the leading zeros, but also the next 24 bits, which would require some serious computing power. If someone had the ability to brute-force this, it could indicate something rather serious, such as a substantial breakthrough in computing or cryptography.
You must be asking yourself, what's so important about 21e800 -- a question you would surely regret asking. Some are claiming it's a reference to [E8 Theory][4] (a widely criticized paper that presents a standard field theory), or to the 21,000,000 total Bitcoins that will eventually exist (despite the fact that `21 x 10^8` would be 2,100,000,000). There are others; they are just too crazy to write about. Another important fact is that, on average, a block whose hash has 21e8 following the leading zeros is mined about once a year -- and those were never seen as anything important.
This leads to where things get fun: the [theories][5] that are circulating about how this happened.
  * A quantum computer, that is somehow able to hash at unbelievable speed. This is despite the fact that there's no indication in theories around quantum computers that they'll be able to do this; hashing is one thing that's considered safe from quantum computers.
  * Time travel. Yes, people are actually saying that someone came back from the future to mine this block. I think this is crazy enough that I don't need to get into why this is wrong.
  * Satoshi Nakamoto is back. Despite the fact that there has been no activity with his private keys, some theorize that he has returned, and is somehow able to do things that nobody can. These theories don't explain how he could do it.
> So basically (as i understand) Satoshi, in order to have known and computed the things that he did, according to modern science he was either:
>
> A) Using a quantum computer
> B) Fom the future
> C) Both
>
> — Crypto Randy Marsh [REKT] (@nondualrandy) [June 21, 2018][6]
If all this sounds like [numerology][7] to you, you aren't alone.
All this discussion around special meaning in block hashes also reignited the discussion around something that is, at least somewhat, interesting. The Bitcoin genesis block, the first bitcoin block, does have an unusual property: the early Bitcoin blocks required that the first 32 bits of the hash be zero; however the genesis block had 43 leading zero bits. As the code that produced the genesis block was never released, it's not known how it was produced, nor is it known what type of hardware was used to produce it. Satoshi had an academic background, so may have had access to more substantial computing power than was common at the time via a university. At this point, the oddities of the genesis block are a historical curiosity, nothing more.
### A brief digression on hashing
This hullabaloo started with the hash of a Bitcoin block; so it's important to understand just what a hash is, and understand one very important property they have. A hash is a one-way cryptographic function that creates a pseudo-random output based on the data that it's given.
What this means, for the purposes of this discussion, is that for each input you get a random output. Random numbers have a way of sometimes looking interesting, simply as a result of being random and the human brain's affinity to find order in everything. When you start looking for order in random data, you find things that look interesting yet are meaningless, as it's simply random. When people ascribe significant meaning to random data, it tells you far more about the mindset of those involved than about the data itself.
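You can observe this "pseudo-random output per input" property directly with the common `sha256sum` utility: two inputs differing in a single character produce completely unrelated digests.

```shell
#!/bin/sh
# A one-character change in the input yields an entirely different,
# unpredictable digest -- there is no "almost the same" hash.
h1=$(printf 'hello' | sha256sum | cut -d' ' -f1)
h2=$(printf 'hellp' | sha256sum | cut -d' ' -f1)
echo "$h1"   # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
echo "$h2"
```

Any pattern someone spots in such a digest, 21e800 included, is the observer's pattern-matching at work, not a property of the data.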
### Cult of the Coin
First, let us define a couple of terms:
* Cult: a system of religious veneration and devotion directed toward a particular figure or object.
* Religion: a pursuit or interest to which someone ascribes supreme importance.
The Cult of the Coin has many saints, perhaps none greater than Satoshi Nakamoto, the pseudonym used by the person(s) that created Bitcoin. Vigorously defended, ascribed with ability and understanding far above that of a normal researcher, seen as a visionary beyond compare that is leading the world to a new economic order. When combined with Satoshi's secretive nature and unknown true identity, adherents to the Cult view Satoshi as a truly venerated figure.
That is, of course, with the exception of adherents that follow a different saint, who is unquestionably correct, and any criticism is seen as not only an attack on their saint, but on themselves as well. Those that follow EOS, for example, may see Satoshi as a hack that developed a failed project, yet will react fiercely to the slightest criticism of EOS, a reaction so strong that it's reserved only for an attack on one's deity. Those that follow IOTA react with equal fierceness; and there are many others.
These adherents have abandoned objectivity and reasonable discourse, and allowed their zealotry to cloud their vision. Any discussion of these projects and the people behind them that doesn't include glowing praise inevitably ends with a level of vitriolic speech that is beyond reason for a discussion of technology.
This is dangerous, for many reasons:
  * Developers & researchers are blinded to flaws. Due to the vast quantities of praise by adherents, those involved develop a grandiose view of their own abilities, and begin to view criticism as unjustified attacks as they couldn't possibly have been wrong.
* Real problems are attacked. Instead of technical issues being seen as problems to be solved and opportunities to improve, they are seen as attacks from people who must be motivated to destroy the project.
* One coin to rule them all. Adherents are often aligned to one, and only one, saint. Acknowledging the qualities of another project means acceptance of flaws or deficiencies in their own, which they will not do.
  * Preventing real progress. Evolution is brutal, it requires death, it requires projects to fail and that the reasons for those failures to be acknowledged. If lessons from failure are ignored, if things that should die aren't allowed to, progress stalls.
Discussions around many of the cryptocurrencies and related blockchain projects are becoming more and more toxic, making it impossible for well-intentioned people to have real technical discussions without being attacked. With discussions of real flaws, flaws that would doom a design in any other environment, now routinely treated as heretical without any analysis of their factual claims, the cost for the well-intentioned to get involved has become extremely high. At least some people who are aware of significant security flaws have opted to remain silent due to the highly toxic environment.
What was once driven by curiosity, a desire to learn and improve, to determine the viability of ideas, is now driven by blind greed, religious zealotry, self-righteousness, and self-aggrandizement.
I have precious little hope for the future of projects that inspire this type of zealotry, and its continuous spread will likely harm real research in this area for many years to come. These are technical projects; some succeed, some fail -- this is how technology evolves. Those designing these systems are human, just as flawed as the rest of us, and so too are the projects flawed. Some are well suited to certain use cases and not others, some aren't suited to any use case, and none yet are suited to all. The discussions about these projects should be focused on the technical aspects, and done so to evolve this field of research; adding a religious element to these projects harms all.
[Note: There are many examples of this behavior that could be cited, however in the interest of protecting those that have been targeted for criticizing projects, I have opted to minimize such examples. I have seen too many people who I respect, too many that I consider friends, being viciously attacked; I have no desire to draw attention to those attacks, and risk restarting them.]
--------------------------------------------------------------------------------
via: https://adamcaudill.com/2018/06/21/bitcoin-is-a-cult/
作者:[Adam Caudill][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://adamcaudill.com/author/adam/
[1]:https://en.wikipedia.org/wiki/Merkle_tree
[2]:https://hackernoon.com/how-crypto-kitties-disrupted-the-ethereum-network-845c22aa1e6e
[3]:https://blockchain.info/block-height/528249
[4]:https://en.wikipedia.org/wiki/An_Exceptionally_Simple_Theory_of_Everything
[5]:https://medium.com/@coop__soup/00000000000000000021e800c1e8df51b22c1588e5a624bea17e9faa34b2dc4a-cd4b67d446be
[6]:https://twitter.com/nondualrandy/status/1009609117768605696?ref_src=twsrc%5Etfw
[7]:https://en.wikipedia.org/wiki/Numerology

View File

@ -1,85 +0,0 @@
translating----geekpi
Automatically Change Wallpapers in Linux with Little Simple Wallpaper Changer
======
**Brief: Here is a tiny script that automatically changes the wallpaper at regular intervals on your Linux desktop.**
As the name suggests, LittleSimpleWallpaperChanger is a small script that changes the wallpapers randomly at intervals.
Now I know that there is a random wallpaper option in the Appearance or the Change desktop background settings. But that randomly changes the pre-installed wallpapers and not the wallpapers that you add.
So in this article, we'll see how to set up a random desktop wallpaper setup consisting of your photos using LittleSimpleWallpaperChanger.
### Little Simple Wallpaper Changer (LSWC)
[LittleSimpleWallpaperChanger][1] or LSWC is a very lightweight script that runs in the background, changing the wallpapers from the user-specified folder. The wallpapers change at a random interval between 1 and 5 minutes. The software is rather simple to set up, and once set up, the user can just forget about it.
![Little Simple Wallpaper Changer to change wallpapers in Linux][2]
#### Installing LSWC
Download LSWC by [clicking on this link.][3] The zipped file is around 15 KB in size.
* Browse to the download location.
* Right click on the downloaded .zip file and select extract here.
* Open the extracted folder, right click and select Open in terminal.
* Copy paste the command in the terminal and hit enter.
`bash ./README_and_install.sh`
  * Now a dialogue box will pop up asking you to select the folder containing the wallpapers. Click on it and then select the folder that you've stored your wallpapers in.
  * That's it. Reboot your computer.
![Little Simple Wallpaper Changer for Linux][4]
#### Using LSWC
On installation, LSWC asks you to select the folder containing your wallpapers. So I suggest you create a folder and move all the wallpapers you want to use there before installing LSWC. Or you can just use the Wallpapers folder in the Pictures folder. **All the wallpapers need to be in .jpg format.**
You can add more wallpapers or delete the current wallpapers from your selected folder. To change the wallpapers folder location, you can edit the location of the wallpapers in the following file:
```
.config/lswc/homepath.conf
```
#### To remove LSWC
Open a terminal and run the below command to stop LSWC
```
pkill lswc
```
Open home in your file manager and press ctrl+H to show hidden files, then delete the following files:
* scripts folder from .local
* lswc folder from .config
* lswc.desktop file from .config/autostart
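The manual steps above can be collected into a small script. This is a sketch that assumes the default install locations named in this article; double-check the paths on your own system before deleting anything.

```shell
#!/bin/sh
# Stop LSWC, then remove its installed files (default locations).
pkill lswc 2>/dev/null || true                 # ignore "no such process"
rm -rf "$HOME/.local/scripts"                  # scripts folder from .local
rm -rf "$HOME/.config/lswc"                    # lswc folder from .config
rm -f "$HOME/.config/autostart/lswc.desktop"   # autostart entry
echo "LSWC removed"
```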
There you have it: your own desktop background slideshow. LSWC is really lightweight and simple to use. Install it and then forget about it.
LSWC is not very feature-rich, but that's intentional. It does what it sets out to do, and that is to change wallpapers. If you want a tool that automatically downloads wallpapers, try [WallpaperDownloader][5].
Do share your thoughts on this nifty little software in the comments section below. Don't forget to share this article. Cheers.
--------------------------------------------------------------------------------
via: https://itsfoss.com/little-simple-wallpaper-changer/
作者:[Aquil Roshan][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/aquil/
[1]:https://github.com/LittleSimpleWallpaperChanger/lswc
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-2-800x450.jpg
[3]:https://github.com/LittleSimpleWallpaperChanger/lswc/raw/master/Lswc.zip
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-1-800x450.jpg
[5]:https://itsfoss.com/wallpaperdownloader-linux/

View File

@ -1,3 +1,4 @@
Translating by qhwdw
Intercepting and Emulating Linux System Calls with Ptrace « null program
======

View File

@ -1,3 +1,5 @@
translating---geekpi
TrueOS Doesnt Want to Be BSD for Desktop Anymore
============================================================

View File

@ -1,3 +1,4 @@
Translating by qhwdw
Blockchain evolution: A quick guide and why open source is at the heart of it
======

View File

@ -1,142 +0,0 @@
translating---geekpi
Sosreport A Tool To Collect System Logs And Diagnostic Information
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/sos-720x340.png)
If youre working as a RHEL administrator, you have probably heard about **Sosreport**, an extensible, portable support data collection tool. It collects system configuration details and diagnostic information from a Unix-like operating system. When a user raises a support ticket, he/she has to run this tool and send the resulting report to the Red Hat support executive, who will then perform an initial analysis based on it and try to find what the problem in the system is. And not just on RHEL systems: you can use it on any Unix-like operating system for collecting system logs and other debug information.
### Installing Sosreport
Sosreport is available in the official repositories of Red Hat systems, so you can install it using the YUM or DNF package managers as shown below.
```
$ sudo yum install sos
```
Or,
```
$ sudo dnf install sos
```
On Debian, Ubuntu and Linux Mint, run:
```
$ sudo apt install sosreport
```
### Usage
Once installed, run the following command to collect your system configuration details and other diagnostic information.
```
$ sudo sosreport
```
You will be asked to enter some details of your system, such as the system name, case id etc. Type the details accordingly, and press the ENTER key to generate the report. If you dont want to change anything and want to use the default values, simply press ENTER.
Sample output from my CentOS 7 server:
```
sosreport (version 3.5)
This command will collect diagnostic and configuration information from
this CentOS Linux system and installed applications.
An archive containing the collected information will be generated in
/var/tmp/sos.DiJXi7 and may be provided to a CentOS support
representative.
Any information provided to CentOS will be treated in accordance with
the published support policies at:
https://wiki.centos.org/
The generated archive may contain data considered sensitive and its
content should be reviewed by the originating organization before being
passed to any third party.
No changes will be made to system configuration.
Press ENTER to continue, or CTRL-C to quit.
Please enter your first initial and last name [server.ostechnix.local]:
Please enter the case id that you are generating this report for []:
Setting up archive ...
Setting up plugins ...
Running plugins. Please wait ...
Running 73/73: yum...
Creating compressed archive...
Your sosreport has been generated and saved in:
/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
The checksum is: 8f08f99a1702184ec13a497eff5ce334
Please send this file to your support representative.
```
If you dont want to be prompted for such details, simply use the batch mode as shown below.
```
$ sudo sosreport --batch
```
As you can see in the above output, the report is generated as a compressed archive and saved as **/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz**. In RHEL 6/CentOS 6, the report will be generated in the **/tmp** location. You can now send this report to your support executive, so that he can do an initial analysis and find what the problem is.
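Since sosreport prints an MD5 checksum of the archive (see the sample output above), it can be worth verifying the file before sending it. Here is a small sketch; the checksum value is copied from the sample run above, and the `computed` variable stands in for a live `md5sum` call:

```shell
# Compare a computed MD5 against the one sosreport reported.
# "reported" is copied from the sample output above; "computed" stands in for:
#   md5sum /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz | awk '{print $1}'
reported="8f08f99a1702184ec13a497eff5ce334"
computed="8f08f99a1702184ec13a497eff5ce334"
if [ "$computed" = "$reported" ]; then
    echo "checksum OK - safe to send"
else
    echo "checksum MISMATCH - regenerate the report" >&2
fi
```

This guards against a truncated or corrupted upload reaching your support representative.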
You might be concerned about, or simply want to know, whats in the report. If so, you can view it by running the following command:
```
$ sudo tar -tf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
Or,
```
$ sudo vim /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
Please note that the above commands will not extract the archive, but only display the list of files and folders in it. If you want to view the actual contents of the files in the archive, first extract it using the command:
```
$ sudo tar -xf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
All the contents of the archive will be extracted in a directory named “sosreport-server.ostechnix.local-20180628171844/” in the current working directory. Go to the directory and view the contents of any file using cat command or any other text viewer:
```
$ cd sosreport-server.ostechnix.local-20180628171844/
$ cat uptime
17:19:02 up 1:03, 2 users, load average: 0.50, 0.17, 0.10
```
For more details about Sosreport, refer to the man pages.
```
$ man sosreport
```
And, thats all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/sosreport-a-tool-to-collect-system-logs-and-diagnostic-information/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/


@ -0,0 +1,113 @@
How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers
======
**Some applications and games built with OpenGL support and packaged as Flatpak fail to start with proprietary Nvidia drivers. This article explains how to get such Flatpak applications or games to start, without installing the open source drivers (Nouveau).**
Here's an example. I'm using the proprietary Nvidia drivers on my Ubuntu 18.04 desktop (`nvidia-driver-390`), and when I try to launch the latest Krita as a Flatpak, it fails to start:
```
$ /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=krita --file-forwarding org.kde.krita
Gtk-Message: Failed to load module "canberra-gtk-module"
Gtk-Message: Failed to load module "canberra-gtk-module"
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Could not initialize GLX
```
To fix Flatpak games and applications not starting when using OpenGL with proprietary Nvidia graphics drivers, you'll need to install a runtime for your currently installed proprietary Nvidia drivers. Here's how to do this.
**1\. Add the FlatHub repository if you haven't already. You can find exact instructions for your Linux distribution [here][1].**
**2\. Now you'll need to figure out the exact version of the proprietary Nvidia drivers installed on your system.**
_This step depends on the Linux distribution you're using, and I can't cover all cases. The instructions below are Ubuntu-oriented (including Ubuntu flavors), but hopefully you can figure out for yourself the Nvidia drivers version installed on your system._
To do this in Ubuntu, open `Software & Updates` , switch to the `Additional Drivers` tab and note the name of the Nvidia driver package.
As an example, this is `nvidia-driver-390` in my case, as you can see here:
![](https://1.bp.blogspot.com/-FAfjtGNeUJc/WzYXMYTFBcI/AAAAAAAAAx0/xUhIO83IAjMuK4Hn0jFUYKJhSKw8y559QCLcBGAs/s1600/additional-drivers-nvidia-ubuntu.png)
That's not all. We've only found out the Nvidia driver's major version, but we'll also need to know the minor version. To get the exact Nvidia driver version, which we'll need for the next step, run this command (it should work in any Debian-based Linux distribution, like Ubuntu, Linux Mint and so on):
```
apt-cache policy NVIDIA-PACKAGE-NAME
```
Where NVIDIA-PACKAGE-NAME is the Nvidia drivers package name listed in `Software & Updates` . For example, to see the exact installed version of the `nvidia-driver-390` package, run this command:
```
$ apt-cache policy nvidia-driver-390
nvidia-driver-390:
Installed: 390.48-0ubuntu3
Candidate: 390.48-0ubuntu3
Version table:
* 390.48-0ubuntu3 500
500 http://ro.archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages
100 /var/lib/dpkg/status
```
In this command's output, look for the `Installed` section and note the version numbers (excluding `-0ubuntu3` and anything similar). Now we know the exact version of the installed Nvidia drivers (`390.48` in my example). Remember this because we'll need it for the next step.
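To make the naming scheme used in the next step concrete, here's a small sketch that turns a driver version string (like the one `apt-cache policy` printed above) into the corresponding Flathub runtime name; the version value is hard-coded from my example rather than queried live:

```shell
# Split "390.48" into major/minor parts and build the runtime name.
ver="390.48"                  # installed driver version found in step 2
major=${ver%%.*}              # -> 390
minor=${ver#*.}               # -> 48
runtime="org.freedesktop.Platform.GL.nvidia-${major}-${minor}"
echo "$runtime"               # -> org.freedesktop.Platform.GL.nvidia-390-48
```

The same pattern applies to the 32bit runtime, with `GL32` in place of `GL`.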
**3\. And finally, you can install the Nvidia runtime for your installed proprietary Nvidia graphics drivers, from FlatHub**
To list all the available Nvidia runtime packages available on FlatHub, you can use this command:
```
flatpak remote-ls flathub | grep nvidia
```
Hopefully the runtime for your installed Nvidia drivers is available on FlatHub. You can now proceed to install the runtime by using this command:
* For 64bit systems:
```
flatpak install flathub org.freedesktop.Platform.GL.nvidia-MAJORVERSION-MINORVERSION
```
Replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and MINORVERSION with the minor version (48 in my example from step 2).
For example, to install the runtime for Nvidia graphics driver version 390.48, you'd have to use this command:
```
flatpak install flathub org.freedesktop.Platform.GL.nvidia-390-48
```
* For 32bit systems (or to be able to run 32bit applications or games on 64bit), install the 32bit runtime using:
```
flatpak install flathub org.freedesktop.Platform.GL32.nvidia-MAJORVERSION-MINORVERSION
```
Once again, replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and MINORVERSION with the minor version (48 in my example from step 2).
For example, to install the 32bit runtime for Nvidia graphics driver version 390.48, you'd have to use this command:
```
flatpak install flathub org.freedesktop.Platform.GL32.nvidia-390-48
```
That is all you need to do to get applications or games packaged as Flatpak that are built with OpenGL to run.
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/06/how-to-get-flatpak-apps-and-games-built.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://flatpak.org/setup/
[2]:https://www.linuxuprising.com/2018/06/free-painting-software-krita-410.html
[3]:https://www.linuxuprising.com/2018/06/winepak-is-flatpak-repository-for.html
[4]:https://github.com/winepak/applications/issues/23
[5]:https://github.com/flatpak/flatpak/issues/138


@ -0,0 +1,154 @@
How to migrate to the world of Linux from Windows
======
Installing Linux on a computer, once you know what youre doing, really isnt a difficult process. After getting accustomed to the ins and outs of downloading ISO images, creating bootable media, and installing your distribution (henceforth referred to as distro) of choice, you can convert a computer to Linux in no time at all. In fact, the time it takes to install Linux and get it updated with all the latest patches is so short that enthusiasts do the process over and over again to try out different distros; this process is called distro hopping.
With this guide, I want to target people who have never used Linux before. Ill give an overview of some distros that are great for beginners, how to write or burn them to media, and how to install them. Ill show you the installation process of Linux Mint, but the process is similar if you choose Ubuntu. For a distro such as Fedora, however, your experience will deviate quite a bit from whats shown in this post. Ill also touch on the sort of software available, and how to install additional software.
The command line will not be covered; despite what some people say, using the command line really is optional in distributions such as Linux Mint, which is aimed at beginners. Most distros come with update managers, software managers, and file managers with graphical interfaces, which largely do away with the need for a command line. Dont get me wrong, the command line can be great I do use it myself from time to time but largely for convenience purposes.
This guide will also not touch on troubleshooting or dual booting. While Linux does generally support new hardware, theres a slight chance that any cutting edge hardware you have might not yet be supported by Linux. Setting up a dual boot system is easy enough, though wiping the disk and doing a clean install is usually my preferred method. For this reason, if you intend to follow the guide, either use a virtual machine to install Linux or use a spare computer that youve got lying around.
The chief appeal for most Linux users is the customisability and the diverse array of Linux distributions or distros that are available. For the majority of people getting into Linux, the usual entry point is Ubuntu, which is backed by Canonical. Ubuntu was my gateway Linux distribution in 2008; although not my favourite, its certainly easy to begin using and is very polished.
Another beginner-friendly distribution is Linux Mint. Its the distribution I use day-to-day on every one of my machines. Its very easy to start using, is generally very stable, and the user interface (UI) doesnt drastically change; anyone familiar with Windows XP or Windows Vista will be familiar with the UI of Linux Mint. While everyone went chasing the convergence dream of merging mobile and desktop together, Linux Mint staunchly held the position that a desktop operating system should be designed for the desktop, and it therefore totally avoids a mobile-friendly UI; desktops and laptops are front and centre.
For your first dive into Linux, I highly recommend the two mentioned above, simply because theyve got huge communities and developers tending to them around the clock. With that said, several other operating systems such as elementary OS (based on Ubuntu) and Fedora (run by Red Hat) are also good ways to get started. Other users are fond of options such as Manjaro and Antergos which make the difficult-to-configure Arch Linux easy to use.
Now, were starting to get our hands dirty. For this guide, I will include screenshots of Linux Mint 18.3 Cinnamon edition. If you decide to go with Ubuntu or another version of Linux Mint, note that things may look slightly different. For example, when it comes to a distro that isnt based on Ubuntu like Fedora or Manjaro things will look significantly different during installation, but not so much that you wont be able to work the process out.
In order to download Linux Mint, head on over to the Linux Mint downloads page and select either the 32-bit version or 64-bit version of the Cinnamon edition. If you arent sure which version is needed for your computer, pick the 64-bit version; this tends to work on computers even from 2007, so its a safe bet. The only time Id advise the 32-bit version is if youre planning to install Linux on a netbook.
Once youve selected your version, you can either download the ISO image via one of the many mirrors, or as a torrent. Its best to download it as a torrent because if your internet cuts out, you wont have to restart the 1.9 GB download. Additionally, the downloaded ISO you receive via torrent will be signed with the correct keys, ensuring authenticity. If you download another distribution, youll be able to continue to the next step once you have an ISO file saved to your computer.
Note: If youre using a virtual machine, you dont need to write or burn the ISO to USB or DVD, just use the ISO to launch the distro on your chosen virtual machine.
Ten years ago when I started using Linux, you could fit an entire distribution onto a CD. Nowadays, youll need a DVD or a USB to boot the distro from.
To write the ISO to a USB device, I recommend downloading a tool called Rufus. Once its downloaded and installed, you should insert a USB stick thats 4GB or more. Be sure to backup the data as the device will be erased.
Next, launch Rufus and select the device you want to write to; if you arent sure which is your USB device, unplug it, check the list, then plug it back in to work out which device you need to write to. Once youve worked out which USB drive you want to write to, select MBR Partition Scheme for BIOS or UEFI under Partition scheme and target system type. Once youve done that, press the optical drive icon alongside the enabled Create a bootable disk using field. You can then navigate to the ISO file that you just downloaded. Once it finishes writing to the USB, youve got everything you need to boot into Linux.
If youre on Windows 7 or above and want to burn the ISO to a DVD, simply insert a blank DVD into the computer, then right-click the ISO file and select Burn disc image, from the dialogue window which appears, select the drive where the DVD is located, and tick Verify disc after burning, then hit Burn.
If youre on Windows Vista, XP, or lower, download and install Infra Recorder and insert your blank DVD into your computer, selecting Do nothing or Cancel if any autorun windows pop up. Next, open Infra Recorder and select Write Image on the main screen or go to Actions > Burn Image. From there, find the Linux ISO you want to burn and press OK when prompted.
Once youve got your DVD or USB media ready youre ready to boot into Linux; doing so wont harm your Windows install in any way.
Once youve got your installation media on hand, youre ready to boot into the live environment. The operating system will load entirely from your DVD or USB device without making changes to your hard drive, meaning Windows will be left intact. The live environment is used to see whether your graphics card, wireless devices, and so on are compatible with Linux before you install it.
To boot into the live environment youre going to have to switch off the computer and boot it back up with your installation media already inserted into the computer. Its also a must to ensure that your boot up sequence is set to launch from USB or DVD before your current operating system boots up from the hard drive. Configuring the boot sequence is beyond the scope of this guide, but if you cant boot from the USB or DVD, I recommend doing a web search for how to access the BIOS to change the boot sequence order on your specific motherboard. Common keys to enter the BIOS or select the drive to boot from are F2, F10, and F11.
If your boot up sequence is configured correctly, you should see a ten-second countdown that, when completed, will automatically boot Linux Mint.
![][1]
![][2]
Those who opted to try Linux Mint can let the countdown run to zero and the boot up will commence normally. On Ubuntu youll probably be prompted to choose a language, then press Try Ubuntu without installing, or the equivalent option on Linux Mint if you interrupted the automatic countdown by pressing the keyboard. If at any time you have the choice between trying or installing your Linux distribution of choice, always opt to try it, as the install option can cause irreversible damage to your Windows installation.
Hopefully, everything went according to plan, and youve made it through to the live environment. The first thing to do now is to check whether your Wi-Fi is available. To connect to Wi-Fi, press the icon to the left of the clock, where you should see the usual list of available networks; if this is the case, great! If not, dont despair just yet. If the wireless card doesnt seem to be working, either establish a wired connection via Ethernet or connect your phone to the computer, provided your handset supports tethering (via Wi-Fi, not data).
Once youve got some sort of internet connection via one of those methods, press Menu and use the search box to look for Driver Manager. This usually requires an internet connection and may let you enable your wireless card driver. If that doesnt work, youre probably out of luck, but the vast majority of cards should work with Linux Mint.
For those who have a fancy graphics card, chances are that Linux is using an open source driver alternative instead of the proprietary driver you use on Windows. If you notice any issues pertaining to graphics, you can check the Driver Manager and see whether any proprietary drivers are available.
Once those two critical components are confirmed to be up and running, you may want to check printer and webcam compatibility. To test your printer, go to Menu > Office > LibreOffice Writer and try printing a document. If it works, thats great, if not, some printers may be made to work with some effort, but thats outside the scope of this particular guide. Id recommend searching something like Linux [your printer model] and there may be solutions available. As for your webcam, go to Menu again and use the search box to look for Software Manager; this is the Microsoft Store equivalent on Linux Mint. Search for a program named Cheese and install it. Once installed, open it up using the Launch button in Software Manager, or have a look in Menu and find it manually. If it detects a webcam it means its compatible!
![][3]
By now, youve probably had a good look at Linux Mint or your distribution of choice and, hopefully, everything is working for you. If youve had enough and want to return to Windows, simply press Menu and then the power off button which is located right above Menu, then press Shut Down if a dialogue box pops up.
Given that youre sticking with me and want to install Linux Mint on your computer, thus erasing Windows, ensure that youve backed up everything on your computer. Dual boot installations are available from the installer, but in this guide Ill explain how to install Linux as the sole operating system. Assuming you do decide to deviate and set up a dual boot system, then ensure you still back up your files from Windows first, because things could potentially go wrong for you.
In order to do a clean install, close down any programs that youve got running in the live environment. On the desktop, you should see a disc icon labelled Install Linux Mint click that to continue.
![][4]
On the first screen of the installer, choose your language and press continue.
![][5]
On the second screen, most users will want to install third-party software to ensure hardware and codecs work.
![][6]
In the Installation type section you can choose to erase your hard drive or dual boot. You can encrypt the entire drive if you check Encrypt the new Linux Mint installation for security and Use LVM with the new Linux Mint installation. You can press Something else for a specific custom set up. In order to set up a dual boot system, the hard drive which youre installing to must already have Windows installed first.
![][7]
Now pick your location so that the operating systems time can be set correctly, and press continue.
![][8]
Now set your keyboards language, and press continue.
![][9]
On the Who are you screen, youll create a new user. Pop in your name, leave the computers name as default or enter a custom name, pick a username, and enter a password. You can choose to have the system log you in automatically or require a password. If you choose to require a password then you can also encrypt your home folder, which is different from encrypting your entire system. However, if you encrypt your entire system, theres not a lot of point to encrypting your home folder too.
![][10]
Once youve completed the Who are you screen, Linux Mint will begin installing. Youll see a slideshow detailing what the operating system offers.
![][11]
Once the installation finishes, youll be prompted to restart. Go ahead and do so.
Now that youve restarted the computer and removed the Linux media, your computer should boot up straight to your new install. If everything has gone smoothly, you should arrive at the login screen where you just need to enter the password you created during the set up.
![][12]
Once you reach the desktop, the first thing youll want to do is apply all the system updates that are available. On Linux Mint you should see a shield icon with a blue logo in the bottom right-hand corner of the desktop near the clock, click on it to open the Update Manager.
![][13]
You should be prompted to pick an update policy, give them all a read over and apply whichever you think is most appropriate for you then press OK.
![][14]
![][15]
Youll probably be asked to pick a more local mirror too. This is optional, but could allow your updates to download quicker. Now, apply any updates offered, until the shield icon has a green tick indicating that all updates have been applied. In future, the Update Manager will continually check for new updates and alert you to them.
Youve got all the necessary tasks out the way for setting up Linux Mint and now youre free to start using the system for whatever you like. By default, Mozilla Firefox is installed, so if youve got a Sync account its probably a good idea to go pull in all your passwords and bookmarks. If youre a Chrome user, you can either run Chromium which is in the Software Manager, or download Google Chrome from the internet. If you opt to get Chrome, youll be offered a .deb file which you should save to your system and then double-click to install. Installing .deb files is straightforward enough, just press Install when prompted and the system will handle the rest, youll find the new software in Menu.
![][16]
Other pre-installed software includes LibreOffice which has decent compatibility with Microsoft Office; Mozillas Thunderbird for managing your emails; GIMP for editing images; Transmission is readily available for you to begin torrenting files, it supports adding IP block lists too; Pidgin and Hexchat will allow you to send instant messages and connect to IRC respectively. As for media playback, you will find VLC and Rhythmbox under Sound and Video to satisfy all your music and video needs. If you need any other software, check out the Software Manager, there are lots of popular packages including Skype, Minecraft, Google Earth, Steam, and Private Internet Access Manager.
Throughout this guide, Ive said that it wont touch on troubleshooting problems. However, the Linux Mint community can help you overcome any complications. The first port of call is definitely a quick web search, as most problems have been solved by others in the past and you might be able to find your solution online. If youre still stuck, you can try the Linux Mint forums as well as the Linux Mint subreddit, both of which are oriented towards troubleshooting.
Linux definitely isnt for everyone. It still falls short on the gaming front, despite the existence of Steam on Linux and the growing number of games. In addition, some commonly used software isnt available on Linux, though there are usually alternatives. If, however, you have a computer lying around that isnt powerful enough to support Windows any more, then Linux could be a good option for you. Linux is also free to use, so its great for those who dont want to spend money on a new copy of Windows.
--------------------------------------------------------------------------------
via: http://infosurhoy.com/cocoon/saii/xhtml/en_GB/technology/how-to-migrate-to-the-world-of-linux-from-windows/
作者:[Marta Subat][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://infosurhoy.com/cocoon/saii/xhtml/en_GB/author/marta-subat/
[1]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139198_autoboot_linux_mint.jpg
[2]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139206_bootmenu_linux_mint.jpg
[3]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139213_cheese_linux_mint.jpg
[4]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139254_install_1_linux_mint.jpg
[5]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139261_install_2_linux_mint.jpg
[6]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139270_install_3_linux_mint.jpg
[7]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139278_install_4_linux_mint.jpg
[8]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139285_install_5_linux_mint.jpg
[9]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139293_install_6_linux_mint.jpg
[10]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139302_install_7_linux_mint.jpg
[11]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139317_install_8_linux_mint.jpg
[12]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139224_first_boot_1_linux_mint.jpg
[13]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139232_first_boot_2_linux_mint.jpg
[14]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139240_first_boot_3_linux_mint.jpg
[15]:https://cdn.neow.in/news/images/uploaded/2018/02/1519139248_first_boot_4_linux_mint.jpg
[16]:https://cdn.neow.in/news/images/uploaded/2018/02/1519219725_software_1_linux_mint.jpg


@ -0,0 +1,87 @@
10 killer tools for the admin in a hurry
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT)
Administering networks and systems can get very stressful when the workload piles up. Nobody really appreciates how long anything takes, and everyone wants their specific thing done yesterday.
So it's no wonder so many of us are drawn to the open source spirit of figuring out what works and sharing it with everyone. Because, when deadlines are looming, and there just aren't enough hours in the day, it really helps if you can just find free answers you can implement immediately.
So, without further ado, here's my Swiss Army Knife of stuff to get you out of the office before dinner time.
### Server configuration and scripting
Let's jump right in.
**[NixCraft][1]**
Use the site's internal search function. With more than a decade of regular updates, there's gold to be found here—useful scripts and handy hints that can solve your problem straight away. This is often the second place I look after Google.
**[Webmin][2]**
This gives you a nice web interface to remotely edit your configuration files. It cuts down on a lot of time spent having to juggle directory paths and `sudo nano`, which is handy when you're handling several customers.
**[Windows Subsystem for Linux][3]**
The reality of the modern workplace is that most employees are on Windows, while the grown-up gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks from (gasp) a Windows desktop.
What do you do? Install a virtual machine? It's actually much faster and far less work to configure if you install the Windows Subsystem for Linux compatibility layer, now available at no cost on Windows 10.
This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries on the local machine, have full access to both Windows and Linux filesystems, and mount network drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors.
**[mRemoteNG][4]**
This is an excellent SSH and remote desktop client for when you have 100+ servers to manage.
### Setting up a network so you don't have to do it again
A poorly planned network is the sworn enemy of the admin who hates working overtime.
**[IP Addressing Schemes that Scale][5]**
The diabolical thing about running out of IP addresses is that, when it happens, the network's grown large enough that a new addressing scheme is an expensive, time-consuming pain in the proverbial.
Ain't nobody got time for that!
At some point, IPv6 will finally arrive to save the day. Until then, these one-size-fits-most IP addressing schemes should keep you going, no matter how many network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and espresso machines the world throws at us.
**[Linux Chmod Permissions Cheat Sheet][6]**
A short but sweet cheat sheet of Bash commands to set permissions across the network. This is so when Bill from Customer Service falls for that ransomware scam, you're recovering just his files and not the entire company's.
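As a tiny, hypothetical illustration of what such a cheat sheet boils down to (the path below is made up for the demo), restoring the common 755-for-directories, 644-for-files defaults on a recovered share might look like:

```shell
# Create a throwaway tree, then reset permissions recursively:
# directories get 755 (rwxr-xr-x), files get 644 (rw-r--r--).
dir="/tmp/demo-share"
mkdir -p "$dir/docs"
touch "$dir/docs/report.txt"
find "$dir" -type d -exec chmod 755 {} +
find "$dir" -type f -exec chmod 644 {} +
stat -c '%a %n' "$dir/docs/report.txt"
```

Using `find ... -exec chmod` instead of `chmod -R` is the usual trick for applying different modes to files and directories.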
**[VLSM Subnet Calculator][7]**
Just put in the number of networks you want to create from an address space and the number of hosts you want per network, and it calculates what the subnet mask should be for everything.
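The arithmetic behind such a calculator is straightforward; here's a quick shell sketch (the prefix length is an arbitrary example, not from the site):

```shell
# Usable hosts for a given prefix length: 2^(32 - prefix) - 2,
# subtracting the network and broadcast addresses.
prefix=22
host_bits=$((32 - prefix))
hosts=$(( (1 << host_bits) - 2 ))
echo "/$prefix gives $hosts usable hosts"   # -> /22 gives 1022 usable hosts
```

The calculator does the same sum per subnet, which is handy once you're carving one address space into many differently sized networks.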
### Single-purpose Linux distributions
Need a Linux box that does just one thing? It helps if someone else has already sweated the small stuff on an operating system you can install and have ready immediately.
Each of these has, at one point, made my work day so much easier.
**[Porteus Kiosk][8]**
This is for when you want a computer totally locked down to just a web browser. With a little tweaking, you can even lock the browser down to just one website. This is great for public access machines. It works with touchscreens or with a keyboard and mouse.
**[Parted Magic][9]**
This is an operating system you can boot from a USB drive to partition hard drives, recover data, and run benchmarking tools.
**[IPFire][10]**
Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire." That's my second favorite thing about this Linux distribution. My favorite is that it's a seriously solid software suite. It's so easy to set up and configure, and there is a heap of plugins available to extend it.
So, how about you? What tools, resources, and cheat sheets have you found to make the workday easier? I'd love to know. Please share in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/7/tools-admin
作者:[Grant Hamono][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/grantdxm
[1]:https://www.cyberciti.biz/
[2]:http://www.webmin.com/
[3]:http://wsl-guide.org/en/latest/
[4]:https://mremoteng.org/
[5]:https://blog.dxmtechsupport.com.au/ip-addressing-for-a-small-business-that-might-grow/
[6]:https://isabelcastillo.com/linux-chmod-permissions-cheat-sheet
[7]:http://www.vlsm-calc.net/
[8]:http://porteus-kiosk.org/
[9]:https://partedmagic.com/
[10]:https://www.ipfire.org/


Install Oracle VirtualBox On Ubuntu 18.04 LTS Headless Server
======
![](https://www.ostechnix.com/wp-content/uploads/2016/07/Install-Oracle-VirtualBox-On-Ubuntu-18.04-720x340.png)
This step-by-step tutorial walks you through how to install **Oracle VirtualBox** on an Ubuntu 18.04 LTS headless server. This guide also describes how to manage the VirtualBox headless instances using **phpVirtualBox**, a web-based front-end tool for VirtualBox. The steps described below might also work on Debian and other Ubuntu derivatives such as Linux Mint. Let us get started.
### Prerequisites
Before installing Oracle VirtualBox, we need to do the following prerequisites in our Ubuntu 18.04 LTS server.
First of all, update the Ubuntu server by running the following commands one by one.
```
$ sudo apt update
$ sudo apt upgrade
$ sudo apt dist-upgrade
```
Next, install the following necessary packages:
```
$ sudo apt install build-essential dkms unzip wget
```
After installing all updates and necessary prerequisites, restart the Ubuntu server.
```
$ sudo reboot
```
### Install Oracle VirtualBox on Ubuntu 18.04 LTS server
Add Oracle VirtualBox official repository. To do so, edit **/etc/apt/sources.list** file:
```
$ sudo nano /etc/apt/sources.list
```
Add the following lines.
Here, I will be using Ubuntu 18.04 LTS, so I have added the following repository.
```
deb http://download.virtualbox.org/virtualbox/debian bionic contrib
```
![][2]
Replace the word **bionic** with your Ubuntu distribution's code name, such as xenial, vivid, utopic, trusty, raring, quantal, precise, lucid, jessie, wheezy, or squeeze.
Then, run the following command to add the Oracle public key:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
```
For VirtualBox older versions, add the following key:
```
$ wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
```
Next, update the software sources using command:
```
$ sudo apt update
```
Finally, install the latest version of Oracle VirtualBox using the command:
```
$ sudo apt install virtualbox-5.2
```
### Adding users to VirtualBox group
We need to create and add our system user to the **vboxusers** group. You can either create a separate user and assign it to the vboxusers group or use an existing user. I don't want to create a new user, so I added my existing user to this group. Please note that if you use a separate user for VirtualBox, you must log out and log back in as that particular user before doing the rest of the steps.
I am going to use my username **sk**, so I ran the following command to add it to the vboxusers group.
```
$ sudo usermod -aG vboxusers sk
```
Now, run the following command to check if virtualbox kernel modules are loaded or not.
```
$ sudo systemctl status vboxdrv
```
![][3]
As you can see in the above screenshot, the vboxdrv module is loaded and running!
For older Ubuntu versions, run:
```
$ sudo /etc/init.d/vboxdrv status
```
If the VirtualBox module doesn't start, run the following command to start it.
```
$ sudo /etc/init.d/vboxdrv setup
```
Great! We have successfully installed VirtualBox and started the VirtualBox kernel module. Now, let us go ahead and install the Oracle VirtualBox Extension pack.
### Install VirtualBox Extension pack
The VirtualBox Extension pack provides the following functionalities to the VirtualBox guests.
* The virtual USB 2.0 (EHCI) device
* VirtualBox Remote Desktop Protocol (VRDP) support
* Host webcam passthrough
* Intel PXE boot ROM
* Experimental support for PCI passthrough on Linux hosts
Download the latest Extension pack for VirtualBox 5.2.x from [**here**][4].
```
$ wget https://download.virtualbox.org/virtualbox/5.2.14/Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
Install Extension pack using command:
```
$ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.14.vbox-extpack
```
Congratulations! We have successfully installed Oracle VirtualBox with the extension pack on an Ubuntu 18.04 LTS server. It is time to deploy virtual machines. Refer to the [**virtualbox official guide**][5] to start creating and managing virtual machines on the command line.
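If you want a taste of the command-line workflow before reaching for a GUI, a minimal sketch looks like this. Note that the VM name `testvm`, OS type, and memory size are illustrative examples, not part of the guide above; see the official guide for the full option set:

```
# Sketch only: the VM name, OS type, and sizes below are examples.
if command -v VBoxManage >/dev/null 2>&1; then
    # Create, register, and configure a minimal VM, then boot it headless
    VBoxManage createvm --name testvm --ostype Ubuntu_64 --register
    VBoxManage modifyvm testvm --memory 1024 --nic1 nat
    VBoxManage startvm testvm --type headless
    status="attempted"
else
    status="VBoxManage not found"
fi
echo "$status"
```

The `--type headless` flag is what keeps the VM running without any display, which is the whole point on a server like this one.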
Not everyone is a command-line expert. Some of you might want to create and use virtual machines graphically. No worries! Here is where **phpVirtualBox** comes in handy!!
### About phpVirtualBox
**phpVirtualBox** is a free, web-based front-end to Oracle VirtualBox, written in PHP. Using phpVirtualBox, we can easily create, delete, manage and administer virtual machines via a web browser from any remote system on the network.
### Install phpVirtualBox in Ubuntu 18.04 LTS
Since it is a web-based tool, we need to install Apache web server, PHP and some php modules.
To do so, run:
```
$ sudo apt install apache2 php php-mysql libapache2-mod-php php-soap php-xml
```
Then, download the phpVirtualBox 5.2.x version from the [**releases page**][6]. Please note that we have installed VirtualBox 5.2, so we must install phpVirtualBox version 5.2 as well.
To download it, run:
```
$ wget https://github.com/phpvirtualbox/phpvirtualbox/archive/5.2-0.zip
```
Extract the downloaded archive with command:
```
$ unzip 5.2-0.zip
```
This command will extract the contents of the 5.2-0.zip file into a folder named “phpvirtualbox-5.2-0”. Now, copy or move the contents of this folder to your Apache web server root folder.
```
$ sudo mv phpvirtualbox-5.2-0/ /var/www/html/phpvirtualbox
```
Assign the proper permissions to the phpvirtualbox folder.
```
$ sudo chmod 777 /var/www/html/phpvirtualbox/
```
Next, let us configure phpVirtualBox.
Copy the sample config file as shown below.
```
$ sudo cp /var/www/html/phpvirtualbox/config.php-example /var/www/html/phpvirtualbox/config.php
```
Edit phpVirtualBox **config.php** file:
```
$ sudo nano /var/www/html/phpvirtualbox/config.php
```
Find the following lines and replace the username and password with your system user (The same username that we used in “Adding users to VirtualBox group” section).
In my case, my Ubuntu system username is **sk** , and its password is **ubuntu**.
```
var $username = 'sk';
var $password = 'ubuntu';
```
![][7]
Save and close the file.
Next, create a new file called **/etc/default/virtualbox** :
```
$ sudo nano /etc/default/virtualbox
```
Add the following line. Replace sk with your own username.
```
VBOXWEB_USER=sk
```
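For reference, the same file accepts a few other vboxweb-service settings. A hedged example (the host and port shown are the common defaults; adjust for your setup):

```
VBOXWEB_USER=sk
# Optional: bind the VirtualBox web service explicitly.
# 127.0.0.1 keeps it local-only, which matches this guide's setup
# where phpVirtualBox runs on the same machine.
VBOXWEB_HOST=127.0.0.1
VBOXWEB_PORT=18083
```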
Finally, reboot your system or simply restart the following services to complete the configuration.
```
$ sudo systemctl restart vboxweb-service
$ sudo systemctl restart vboxdrv
$ sudo systemctl restart apache2
```
### Adjust firewall to allow Apache web server
By default, the Apache web server can't be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow HTTP and HTTPS traffic via UFW by following the steps below.
First, let us view which applications have installed a profile using command:
```
$ sudo ufw app list
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH
```
As you can see, Apache and OpenSSH applications have installed UFW profiles.
If you look into the **“Apache Full”** profile, you will see that it enables traffic to the ports **80** and **443** :
```
$ sudo ufw app info "Apache Full"
Profile: Apache Full
Title: Web Server (HTTP,HTTPS)
Description: Apache v2 is the next generation of the omnipresent Apache web
server.
Ports:
80,443/tcp
```
Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile:
```
$ sudo ufw allow in "Apache Full"
Rules updated
Rules updated (v6)
```
If you want to allow only http (80) traffic, and not https, run:
```
$ sudo ufw allow in "Apache"
```
### Access phpVirtualBox Web console
Now, go to any remote system that has a graphical web browser.
In the address bar, type: **<http://IP-address-of-virtualbox-headless-server/phpvirtualbox>**.
In my case, I navigated to this link **<http://192.168.225.22/phpvirtualbox>**
You should see the following screen. Enter the phpVirtualBox administrative user credentials.
The default username and password of phpVirtualBox are **admin** / **admin**.
![][8]
Congratulations! You will now be greeted with the phpVirtualBox dashboard.
![][9]
Now, start creating your VMs and manage them from the phpVirtualBox dashboard. As I mentioned earlier, you can access phpVirtualBox from any system on the same network. All you need is a web browser and the phpVirtualBox username and password.
If you haven't enabled virtualization support in the BIOS of the host system (not the guest), phpVirtualBox allows you to create 32-bit guests only. To install 64-bit guest systems, you must enable virtualization in your host system's BIOS. Look for an option like “virtualization” or “hypervisor” in your BIOS and make sure it is enabled.
That's it. Hope this helps. If you find this guide useful, please share it on your social networks and support us.
More good stuff to come. Stay tuned!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/install-oracle-virtualbox-ubuntu-16-04-headless-server/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2016/07/Add-VirtualBox-repository.png
[3]:http://www.ostechnix.com/wp-content/uploads/2016/07/vboxdrv-service.png
[4]:https://www.virtualbox.org/wiki/Downloads
[5]:http://www.virtualbox.org/manual/ch08.html
[6]:https://github.com/phpvirtualbox/phpvirtualbox/releases
[7]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-config.png
[8]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-1.png
[9]:http://www.ostechnix.com/wp-content/uploads/2016/07/phpvirtualbox-2.png


Why is Arch Linux So Challenging and What are Its Pros & Cons?
======
[Arch Linux][1] is among the most popular Linux distributions. It was first released in **2002**, spearheaded by **Aaron Griffin**. Yes, it aims to provide simplicity, minimalism, and elegance to the OS user, but its target audience is not the faint of heart. Arch encourages community involvement, and a user is expected to put in some effort to better comprehend how the system operates.
Many old-time Linux users know a good amount about **Arch Linux**, but you probably don't if you are new to it and considering it for your everyday computing tasks. I'm no authority on the distro myself, but from my experience with it, here are the pros and cons you will encounter while using it.
### 1\. Pro: Build Your Own Linux OS
Other popular Linux operating systems, like **Fedora** and **Ubuntu**, ship with computers, same as **Windows** and **macOS**. **Arch**, on the other hand, allows you to build your OS to your taste. If you are able to achieve this, you will end up with a system that can do exactly as you wish.
#### Con: Installation is a Hectic Process
[Installing Arch Linux][2] is far from a walk in the park, and since you will be fine-tuning the OS, it will take a while. You will need to have an understanding of various terminal commands and the components you will be working with, since you are to pick them yourself. By now, you probably already know that this requires quite a bit of reading.
### 2\. Pro: No Bloatware and Unnecessary Services
Since **Arch** allows you to choose your own components, you no longer have to deal with a bunch of software you don't want. In contrast, OSes like **Ubuntu** come with a huge number of pre-installed desktop and background apps which you may not need and might not even know exist before going on to remove them.
To put it simply, **Arch Linux** saves you post-installation time. **Pacman**, an awesome utility, is the package manager Arch Linux uses by default. There is an alternative to **Pacman** called [Pamac][3].
### 3\. Pro: No System Upgrades
**Arch Linux** uses the rolling release model and that is awesome. It means that you no longer have to worry about upgrading every now and then. Once you install Arch, say goodbye to upgrading to a new version as updates occur continuously. By default, you will always be using the latest version.
#### Con: Some Updates Can Break Your System
While updates flow in continuously, you have to consciously track what comes in. Nobody knows your software's specific configuration, and it's not tested by anyone but you. So, if you are not careful, things on your machine could break.
### 4\. Pro: Arch is Community Based
Linux users generally have one thing in common: the need for independence. Although most Linux distros have fewer corporate ties, there are still a few you cannot ignore. For instance, a distro based on **Ubuntu** is influenced by whatever decisions Canonical makes.
If you are trying to become even more independent with the use of your computer, then **Arch Linux** is the way to go. Unlike most systems, Arch has no commercial influence and focuses on the community.
### 5\. Pro: Arch Wiki is Awesome
The [Arch Wiki][4] is a super library of everything you need to know about the installation and maintenance of every component in the Linux system. The great thing about this site is that even if you are using a different Linux distro from Arch, you would still find its information relevant. Thats simply because Arch uses the same components as many other Linux distros and its guides and fixes sometimes apply to all.
### 6\. Pro: Check Out the Arch User Repository
The [Arch User Repository (AUR)][5] is a huge collection of software packages from members of the community. If you are looking for a Linux program that is not yet available on Archs repositories, you can find it on the **AUR** for sure.
The **AUR** is maintained by users who compile and install packages from source. Users are also allowed to vote on packages, which gives them (the packages, that is) higher rankings that make them more visible to potential users.
#### Ultimately: Is Arch Linux for You?
**Arch Linux** has way more **pros** than **cons**, including ones that aren't on this list. The installation process is long and probably too technical for a non-Linux-savvy user, but with enough time on your hands and the ability to maximize productivity using wiki guides and the like, you should be good to go.
**Arch Linux** is a great Linux distro not in spite of its complexity, but because of it. And it appeals most to those who are ready to do what needs to be done given that you will have to do your homework and exercise a good amount of patience.
By the time you build this operating system from scratch, you will have learned many details about GNU/Linux and will never again be ignorant of what's going on with your PC.
What are the **pros** and **cons** of using **Arch Linux** in your experience? And on the whole, why is using it so challenging? Drop your comments in the discussion section below.
--------------------------------------------------------------------------------
via: https://www.fossmint.com/why-is-arch-linux-so-challenging-what-are-pros-cons/
作者:[Martins D. Okoi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.fossmint.com/author/dillivine/
[1]:https://www.archlinux.org/
[2]:https://www.tecmint.com/arch-linux-installation-and-configuration-guide/
[3]:https://www.fossmint.com/pamac-arch-linux-gui-package-manager/
[4]:https://wiki.archlinux.org/
[5]:https://wiki.archlinux.org/index.php/Arch_User_Repository


BASHing data: Truncated data items
======
### Truncated data items
**truncated** (adj.): abbreviated, abridged, curtailed, cut off, clipped, cropped, trimmed...
One way to truncate a data item is to enter it into a database field that has a character limit shorter than the data item. For example, the string
>Yarrow Ravine Rattlesnake Habitat Area, 2 mi ENE of Yermo CA
is 60 characters long. If you enter it into a "Locality" field with a 50-character limit, you get
>Yarrow Ravine Rattlesnake Habitat Area, 2 mi ENE #Ends with a whitespace
Truncations can also be data entry errors. You meant to enter
>Sally Ann Hunter (aka Sally Cleveland)
but you forgot the closing bracket
>Sally Ann Hunter (aka Sally Cleveland
leaving the data user to wonder whether Sally has other aliases that were trimmed off the data item.
Truncated data items are very difficult to detect. When auditing data I use three different methods to find possible truncations, but I probably miss some.
**Item length distribution.** The first method catches most of the truncations I find in individual fields. I pass the field to an AWK command that tallies up data items by field width, then I use **sort** to print the tallies in reverse order of width. For example, to check field 33 in the tab-separated file "midges":
```
awk -F"\t" 'NR>1 {a[length($33)]++} \
END {for (i in a) print i FS a[i]}' midges | sort -nr
```
![distro1][1]
The longest entries have exactly 50 characters, which is suspicious, and there's a "bulge" of data items at that width, which is even more suspicious. Inspection of those 50-character-wide items reveals truncations:
![distro2][2]
Other tables I've checked this way had bulges at 100, 200 and 255 characters. In each case the bulges contained apparent truncations.
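To try the width-tally method end to end, here is a self-contained toy run; the file demo.tsv and its contents are invented for the demonstration:

```
# Build a tiny 2-field demo table: a header line plus four records
printf 'id\tlocality\n1\tYarrow Ravine\n2\tMojave Desert area\n3\tYermo\n4\tBaker\n' > demo.tsv

# Tally records by the width of field 2, widest first
awk -F"\t" 'NR>1 {a[length($2)]++} \
END {for (i in a) print i FS a[i]}' demo.tsv | sort -nr
```

With real data, the widths to look hard at are the maximum ones, especially if many items pile up at exactly that width.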
**Unmatched brackets**. The second method looks for data items like "...(Sally Cleveland" above. A good starting point is a tally of all the punctuation in the data table. Here I'm checking the file "mag2":
```
grep -o "[[:punct:]]" mag2 | sort | uniq -c
```
![punct][3]
Note that the numbers of opening and closing round brackets in "mag2" aren't equal. To see what's going on, I use the function "unmatched", which takes three arguments and checks all fields in a data table. The first argument is the filename and the second and third are the opening and closing brackets, enclosed in quotes.
```
unmatched()
{
awk -F"\t" -v start="$2" -v end="$3" \
'{for (i=1;i<=NF;i++) \
if (split($i,a,start) != split($i,b,end)) \
print "line "NR", field "i":\n"$i}' "$1"
}
```
"unmatched" reports line number and field number if it finds a mismatch between opening and closing brackets in the field. It relies on AWK's **split** function, which returns the number of elements (including blank space) separated by the splitting character. This number will always be one more than the number of splitters:
![split][4]
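You can confirm the counting rule on the truncated example from earlier:

```
# One "(" but no ")": split on "(" returns 2 pieces,
# split on ")" returns 1 - the mismatch flags a possible truncation
echo 'Sally Ann Hunter (aka Sally Cleveland' |
awk '{print split($0,a,"("), split($0,b,")")}'
```

The unequal counts (2 vs 1) are exactly what "unmatched" keys on.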
Here "unmatched" checks the round brackets in "mag2" and finds some likely truncations:
![unmatched][5]
I use "unmatched" to locate unmatched round brackets (), square brackets [], curly brackets {} and angle brackets <>, but the function can be used for any paired punctuation characters.
**Unexpected endings**. The third method looks for data items that end in a trailing space or a non-terminal punctuation mark, like a comma or a hyphen. This can be done on a single field with **cut** piped to **grep** , or in one step with AWK. Here I'm checking field 47 of the tab-separated table "herp5", and pulling out suspect data items and their line numbers:
```
cut -f47 herp5 | grep -n "[ ,;:-]$"
awk -F"\t" '$47 ~ /[ ,;:-]$/ {print NR": "$47}' herp5
```
![herps5][6]
The all-fields version of the AWK command for a tab-separated file is:
```
awk -F"\t" '{for (i=1;i<=NF;i++) if ($i ~ /[ ,;:-]$/) \
print "line "NR", field "i":\n"$i}' file
```
**Cautionary thoughts**. Truncations also appear during the validation tests I do on fields. For example, I might be checking for plausible 4-digit entries in a "Year" field, and there's a 198 that hints at 198n. Or is it 1898? Truncated data items with their lost characters are mysteries. As a data auditor I can only report (possible) character losses and suggest that the (possibly) missing characters be restored by the data compilers or managers.
--------------------------------------------------------------------------------
via: https://www.polydesmida.info/BASHing/2018-07-04.html
作者:[polydesmida][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.polydesmida.info/
[1]:https://www.polydesmida.info/BASHing/img1/2018-07-04_1.png
[2]:https://www.polydesmida.info/BASHing/img1/2018-07-04_2.png
[3]:https://www.polydesmida.info/BASHing/img1/2018-07-04_3.png
[4]:https://www.polydesmida.info/BASHing/img1/2018-07-04_4.png
[5]:https://www.polydesmida.info/BASHing/img1/2018-07-04_5.png
[6]:https://www.polydesmida.info/BASHing/img1/2018-07-04_6.png


Install an NVIDIA GPU on almost any machine
======
![](https://fedoramagazine.org/wp-content/uploads/2018/06/nvidia-816x345.jpg)
Whether for research or recreation, installing a new GPU can bolster your computer's performance and enable new functionality across the board. This installation guide uses Fedora 28's brand-new third-party repositories to install NVIDIA drivers. It walks you through the installation of both software and hardware, and covers everything you need to get your NVIDIA card up and running. This process works for any UEFI-enabled computer and any modern NVIDIA GPU.
### Preparation
This guide relies on the following materials:
* A machine that is [UEFI][1] capable. If you're uncertain whether your machine has this firmware, run `sudo dmidecode -t 0`. If “UEFI is supported” appears anywhere in the output, you are all set to continue. Otherwise, while it's technically possible to update some computers to support UEFI, the process is often finicky and generally not recommended.
* A modern, UEFI-enabled NVIDIA card
* A power source that meets the wattage and wiring requirements for your NVIDIA card (see the Hardware & Modifications section for details)
* Internet connection
* Fedora 28
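The UEFI check mentioned in the first item can be wrapped in a small guard script; this is just a convenience sketch, and the verdict strings are mine, not Fedora's (the real check needs `dmidecode` installed and root privileges):

```
# Guarded version of the dmidecode UEFI check
if command -v dmidecode >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    if dmidecode -t 0 | grep -q "UEFI is supported"; then
        verdict="UEFI is supported"
    else
        verdict="UEFI not reported by firmware"
    fi
else
    verdict="cannot check: need dmidecode and root"
fi
echo "$verdict"
```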
### Example setup
This example installation uses:
* An Optiplex 9010 (a fairly old machine)
* NVIDIA [GeForce GTX 1050 Ti XLR8 Gaming Overclocked Edition 4GB GDDR5 PCI Express 3.0][2] graphics card
* In order to meet the power requirements of the new GPU, the power supply was upgraded to an [EVGA 80 PLUS 600W ATX 12V/EPS 12V][3]. This new PSU was 300W above the minimum recommendation, but simply meeting the minimum recommendation is sufficient in most cases.
* And, of course, Fedora 28.
### Hardware and modifications
#### PSU
Open up your desktop case and check the maximum power output printed on your power supply. Next, check the documentation on your NVIDIA GPU and determine the minimum recommended power (in watts). Further, take a look at your GPU and see if it requires additional wiring, such as a 6-pin connector. Most entry-level GPUs only draw power directly from the motherboard, but some require extra juice. You'll need to upgrade your PSU if:
1. Your power supply's max power output is below the GPU's suggested minimum power. **Note:** According to some NVIDIA card manufacturers, pre-built systems may require more or less power than recommended, depending on the system's configuration. Use your discretion to determine your requirements if you're using a particularly power-efficient or power-hungry setup.
2. Your power supply does not provide the necessary wiring to power your card.
PSUs are straightforward to replace, but make sure to take note of the wiring layout before detaching your current power supply. Additionally, make sure to select a PSU that fits your desktop case.
#### CPU
Although installing a high-quality NVIDIA GPU is possible in many old machines, a slow or damaged CPU can “bottleneck” the performance of the GPU. To calculate the impact of the bottlenecking effect for your machine, click [here][4]. It's important to know your CPU's performance to avoid pairing a high-powered GPU with a CPU that can't keep up. Upgrading your CPU is a potential consideration.
#### Motherboard
Before proceeding, ensure your motherboard is compatible with your GPU of choice. Your graphics card should be inserted into the PCI-E x16 slot closest to the heat-sink. Ensure that your setup contains enough space for the GPU. In addition, note that most GPUs today employ PCI-E 3.0 technology. Though these GPUs will run best if mounted on a PCI-E 3.0 x16 slot, performance should not suffer significantly with an older version slot.
### Installation
1\. First of all, make sure your existing system is up to date:
```
sudo dnf update
```
2\. Next, reboot with the simple command:
```
reboot
```
3\. After reboot, install the Fedora 28 workstation repositories:
```
sudo dnf install fedora-workstation-repositories
```
4\. Next, enable the NVIDIA driver repository:
```
sudo dnf config-manager --set-enabled rpmfusion-nonfree-nvidia-driver
```
5\. Then, reboot again.
6\. After the reboot, verify the addition of the repository via the following command:
```
sudo dnf repository-packages rpmfusion-nonfree-nvidia-driver info
```
If several NVIDIA tools and their respective specs are loaded, then proceed to the next step. If not, you may have encountered an error when adding the new repository and you should give it another shot.
7\. Login, connect to the internet, and open the software app. Click Add-ons > Hardware Drivers > NVIDIA Linux Graphics Driver > Install.
Then, reboot once again.
8\. After reboot, go to Show Applications on the side bar, and open up the newly added NVIDIA X Server Settings application. A GUI should open up, and a dialog box will appear with the following message:
![NVIDIA X Server Prompt][5]
Take the application's advice, but before doing so, ensure you have your NVIDIA GPU on hand and are ready to install. **Please note** that running nvidia-xconfig as root and powering off without installing your GPU immediately may cause drastic damage. Doing so may prevent your computer from booting, and force you to repair the system through the reboot screen. A fresh install of Fedora may fix these issues, but the effects can be much worse.
If you're ready to proceed, enter the command:
```
sudo nvidia-xconfig
```
If the system prompts you to perform any downloads, accept them and proceed.
9\. Once this process is complete, close all applications and **shut down** the computer. Unplug the power supply to your machine. Then, press the power button once to drain any residual power to protect yourself from electric shock. If your PSU has a power switch, switch it off.
10\. Finally, install the graphics card. Remove the old GPU and insert your new NVIDIA graphics card into the proper PCI-E x16 slot, with the fans facing down. If there is no space for the fans to ventilate in this position, place the graphics card face up instead, if possible. When you have successfully installed the new GPU, close your case, plug in the PSU, and turn the computer on. It should successfully boot up.
**NOTE:** To disable the NVIDIA driver repository used in this installation, or to disable all fedora workstation repositories, consult [The Fedora Wiki Page][6].
### Verification
1\. If your newly installed NVIDIA graphics card is connected to your monitor and displaying correctly, then your NVIDIA driver has successfully established a connection to the GPU.
If you'd like to view your settings, or verify the driver is working (in the case that you have two GPUs installed on the motherboard), open up the NVIDIA X Server Settings app again. This time, you should not be prompted with an error message, and information on the X configuration file and your NVIDIA GPU should be available (see screenshot below).
![NVIDIA X Server Settings][7]
Through this app, you may alter your X configuration file should you please, and may monitor the GPU's performance, clock speed, and thermal information.
2\. To ensure the new card is working at capacity, a GPU performance test is needed. GL Mark 2, a benchmarking tool that provides information on buffering, building, lighting, texturing, etc., offers an excellent solution. GL Mark 2 records frame rates for a variety of different graphical tests, and outputs an overall performance score (called the glmark2 score).
**Note:** glxgears will only test the performance of your screen or monitor, not the graphics card itself. Use GL Mark 2 instead.
To run GLMark2:
1. Open up a terminal and close all other applications
2. sudo dnf install glmark2
3. glmark2
4. Allow the test to run to completion for best results. Check to see if the frame rates match your expectation for your NVIDIA card. If you'd like additional verification, consult the web to determine if a glmark2 benchmark has been previously conducted on your NVIDIA card model and published to the web. Compare scores to assess your GPU's performance.
5. If your framerates and/or glmark2 score are below expected, consider potential causes. CPU-induced bottlenecking? Other issues?
Assuming the diagnostics look good, enjoy using your new GPU.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/install-nvidia-gpu/
作者:[Justice del Castillo][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/justice/
[1]:https://whatis.techtarget.com/definition/Unified-Extensible-Firmware-Interface-UEFI
[2]:https://www.cnet.com/products/pny-geforce-gtx-xlr8-gaming-1050-ti-overclocked-edition-graphics-card-gf-gtx-1050-ti-4-gb/specs/
[3]:https://www.evga.com/products/product.aspx?pn=100-B1-0600-KR
[4]:http://thebottlenecker.com (Home: The Bottle Necker)
[5]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/nvidia_xserver_error.jpg?token=c6a7effe35f1c592a155a4a46a068a19fd060a91 (NVIDIA X Sever Prompt)
[6]:https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories
[7]:https://bytebucket.org/kenneym/fedora-28-nvidia-gpu-installation/raw/7bee7dc6effe191f1f54b0589fa818960a8fa18b/NVIDIA_XCONFIG.png?token=64e1a7be21e5e9ba157f029b65e24e4eef54d88f (NVIDIA X Server Settings)


@ -1,3 +1,5 @@
Translating by shipsw
Python ChatOps libraries: Opsdroid and Errbot
======


@ -0,0 +1,55 @@
应该知道的 6 个开源 AI 工具
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1920.jpg?itok=HarDnwVX)
在开源领域不管你的想法是多少的新颖独到先去看一下别人是否已经做成了这个概念总是一个很明智的做法。对于有兴趣借助不断成长的人工智能AI的力量的组织和个人来说许多非常好的工具不仅是免费和开源的而且在很多的情况下它们都已经过测试和久经考验的。
在领先的公司和非盈利组织中AI 的优先级都非常高,并且这些公司和组织都开源了很有价值的工具。下面的样本是任何人都可以使用的免费的、开源的 AI 工具。
**Acumos.** [Acumos AI][1] 是一个平台和开源框架,使用它可以很容易地构建、共享和分发 AI 应用。它规范了所需的基础设施栈和组件,使其可以在一个“开箱即用”的通用 AI 环境中运行。这使得数据科学家和模型训练者可以专注于他们的核心竞争力,而不用在无休止的定制、建模以及训练 AI 实现上浪费时间。
Acumos 是 [LF 深度学习基金会][2] 的一部分,它是 Linux 基金会中的一个组织,它支持在人工智能、机器学习、以及深度学习方面的开源创新。它的目标是让这些重大的新技术可用于开发者和数据科学家,包括那些在深度学习和 AI 上经验有限的人。LF 深度学习基金会 [最近批准了一个项目生命周期和贡献流程][3],并且它现在正接受项目贡献的建议。
**Facebook 的框架.** Facebook [开源了][4] 它自己的中央机器学习系统,该系统是为大规模的人工智能任务设计的,同时开源的还有一系列其它的 AI 技术。这个工具是经过他们公司实践验证的平台的一部分。Facebook 也开源了一个叫 [Caffe2][5] 的深度学习和人工智能框架。
**说到 Caffe.** Yahoo 也在开源许可证下发布了它自己的关键 AI 软件。[CaffeOnSpark 工具][6] 基于深度学习(它是人工智能的一个分支),在帮助机器识别人类语音,以及照片、视频的内容方面非常有用。同样地IBM 的机器学习程序 [SystemML][7] 也可以通过 Apache 软件基金会免费共享和修改。
**Google 的工具.** Google 花费了几年的时间开发了它自己的 [TensorFlow][8] 软件框架,用于去支持它的 AI 软件和其它预测和分析程序。TensorFlow 是你可能都已经在使用的一些 Google 工具背后的引擎,包括 Google Photos 和在 Google app 中使用的语言识别。
Google 开源了两个 [AIY kits][9],它可以让个人很容易地使用人工智能,它们专注于计算机视觉和语音助理。这两个工具包将用到的所有组件封装到一个盒子中。这个工具包目前在美国的 Target 中有售,并且它是基于开源的树莓派平台的 —— 有越来越多的证据表明,在开源和 AI 交集中将发生非常多的事情。
**H2O.ai.** 我 [以前介绍过][10] H2O.ai它在机器学习和人工智能领域中占有一席之地因为它的主要工具是免费和开源的。你可以获取主要的 H2O 平台和 Sparkling Water它与 Apache Spark 一起工作),只需要去 [下载][11] 它们即可。这些工具遵循 Apache 2.0 许可证,它是一个非常灵活的开源许可证,你甚至可以在 Amazon Web 服务AWS和其它的集群上运行它们而这仅需要几百美元而已。
**Microsoft Onboard.** “我们的目标是让 AI 大众化,让每个人和组织获得更大的成就。”Microsoft CEO Satya Nadella [这样说][12]。因此,微软持续迭代它的 [Microsoft Cognitive Toolkit][13],这是一个能够与 TensorFlow 和 Caffe 竞争的开源软件框架。Cognitive 工具套件可以工作在 64 位的 Windows 和 Linux 平台上。
Cognitive 工具套件团队的报告称:“Cognitive 工具套件通过允许用户创建、训练以及评估他们自己的神经网络,使企业级的、生产系统级的 AI 成为可能,这些神经网络可以跨多个 GPU 和多台机器在大规模数据集上高效伸缩。”
你可以从 Linux 基金会的新电子书中学习更多的 AI 知识。Ibrahim Haddad 的《[开源 AI项目、洞察和趋势][14]》调查了 16 个流行的开源 AI 项目,深入研究了它们的历史、代码库以及 GitHub 上的贡献情况。[现在可以免费下载这本电子书][14]。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/6/6-open-source-ai-tools-know
作者:[Sam Dean][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/sam-dean
[1]:https://www.acumos.org/
[2]:https://www.linuxfoundation.org/projects/deep-learning/
[3]:https://www.linuxfoundation.org/blog/lf-deep-learning-foundation-announces-project-contribution-process/
[4]:https://code.facebook.com/posts/1687861518126048/facebook-to-open-source-ai-hardware-design/
[5]:https://venturebeat.com/2017/04/18/facebook-open-sources-caffe2-a-new-deep-learning-framework/
[6]:http://yahoohadoop.tumblr.com/post/139916563586/caffeonspark-open-sourced-for-distributed-deep
[7]:https://systemml.apache.org/
[8]:https://www.tensorflow.org/
[9]:https://www.techradar.com/news/google-assistant-sweetens-raspberry-pi-with-ai-voice-control
[10]:https://www.linux.com/news/sparkling-water-bridging-open-source-machine-learning-and-apache-spark
[11]:http://www.h2o.ai/download
[12]:https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/02/10/microsoft-cognitive-toolkit-cntk/
[13]:https://www.microsoft.com/en-us/cognitive-toolkit/
[14]:https://www.linuxfoundation.org/publications/open-source-ai-projects-insights-and-trends/


@ -0,0 +1,66 @@
Mesos 和 Kubernetes不是竞争者
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-barge-bay-161764_0.jpg?itok=vNChG5fb)
Mesos 的起源可以追溯到 2009 年当时Ben Hindman 还是加州大学伯克利分校研究并行编程的博士生。他们在 128 核的芯片上做大规模的并行计算,并尝试去解决多个问题,比如怎么让软件和库在这些芯片上运行更高效。他与同学们讨论能否借鉴并行处理和多线程的思想,并将它们应用到集群管理上。
Hindman 说:“最初,我们专注于大数据。”那时,大数据非常热门,而 Hadoop 是其中一个热门技术。“我们发现,人们在集群上运行像 Hadoop 这样的程序,与运行多线程应用和并行应用很相似。”Hindman 说。
但是,它们的效率并不高,因此,他们开始思考,如何通过集群管理和资源管理让它们运行得更好。“我们查看了当时许多不同的技术。”Hindman 回忆道。
然而Hindman 和他的同事们决定采用一种全新的方法。“我们决定为资源管理创建一个底层抽象,然后在其之上运行调度服务和其它的东西。”Hindman 说,“基本上,这就是 Mesos 的本质 —— 将资源管理部分与调度部分分离开来。”
他成功了,并且 Mesos 从那时开始强大了起来。
### 将项目呈献给 Apache
这个项目发起于 2009 年。在 2010 年时,团队决定将这个项目捐献给 Apache 软件基金会ASF。它在 Apache 孵化,并于 2013 年成为顶级项目TLP
为什么 Mesos 社区选择 Apache 软件基金会?原因有很多,比如 Apache 许可证,以及基金会已经拥有许多充满活力的此类项目社区。
与影响力也有关系。许多在 Mesos 上工作的人,也参与了 Apache并且许多人也致力于像 Hadoop 这样的项目。同时,来自 Mesos 社区的许多人也致力于其它大数据项目,比如 Spark。这种交叉工作使得这三个项目 —— Hadoop、Mesos、以及 Spark —— 成为 ASF 的项目。
与商业也有关系。许多公司对 Mesos 很感兴趣,并且开发者希望它能由一个中立的机构来维护它,而不是让它成为一个私有项目。
### 谁在用 Mesos
更好的问题应该是,谁不在用 Mesos从 Apple 到 Netflix每家公司都在用它。但是Mesos 也面临着任何新技术在早期都会遇到的挑战。“最初,我要说服人们,这是一个很有趣的新技术。它叫做‘容器’,不需要使用虚拟机。”Hindman 说。
从那以后,这个行业发生了许多变化,现在只要与别人聊起基础设施,话题必然从“容器”开始 —— 这要感谢 Docker 所做的工作。今天已经不再需要做说服工作了,而在 Mesos 出现的早期,前面提到的 Apple、Netflix 以及 PayPal 这样的公司,已经知道了用容器替代虚拟机给他们带来的技术优势。“这些公司在容器化成为一种现象之前,就已经明白了容器化的价值所在。”Hindman 说。
可以看到,这些公司运行着大量的容器而不是虚拟机。他们所要做的全部工作就是管理和运行这些容器,于是他们欣然接受了 Mesos。在早期就使用 Mesos 的公司有 Apple、Netflix、PayPal、Yelp、OpenTable 和 Groupon。
“大多数组织使用 Mesos 来运行任意需要的服务。”Hindman 说,“但也有些公司用它做一些非常有趣的事情,比如数据处理、数据流、分析负载和应用程序。”
这些公司采用 Mesos 的其中一个原因是资源管理层之间有一个明晰的界线。当公司运营容器的时候Mesos 为他们提供了很好的灵活性。
“我们尝试用 Mesos 做的一件事情是创建一个层,使用者既可以享受这个层带来的好处,也可以在它之上构建任何他们想要的东西。”Hindman 说,“我认为这对 Netflix 和 Apple 这样的大公司非常有用。”
但是,并不是每个公司都是技术型的公司,不是每个公司都有或者应该有这种专长。为了帮助这样的组织Hindman 联合创建了 Mesosphere围绕 Mesos 提供服务和解决方案。“我们最终决定为这样的组织构建 DC/OS它们没有技术专长或者不想把时间花在构建这类东西上。”
### Mesos vs. Kubernetes?
人们经常用 x 对 y 这样的方式来考虑问题,但这并不是一个技术与另一个技术的对决。大多数技术都会在某些领域有重叠,它们也可以是互补的。“我不喜欢将所有这些东西都看做是竞争者。我认为它们中的一些在工作中是相互补充的。”Hindman 说。
“事实上Mesos 这个名字的含义就是它处于‘中间’,它是一种中间层的 OS。”Hindman 说“我们有容器调度器的概念它能够运行在像 Mesos 这样的东西之上。当 Kubernetes 刚出现的时候,我们实际上在 Mesos 生态系统中接纳了它,将它看做是在 Mesos 之上、DC/OS 之中运行容器的另一种方式。”
Mesos 还带来了一个名为 [Marathon][1] 的项目(一个用于 Mesos 和 DC/OS 的容器编排器),它是 Mesos 生态系统中做得最好的容器编排器。但是Marathon 确实无法与 Kubernetes 相提并论。“Kubernetes 比 Marathon 做得更多,因此你不能把它们简单地互换。”Hindman 说,“与此同时,我们在 Mesos 中做了许多 Kubernetes 中没有的东西。所以,这些技术之间是互补的。”
不要将这些技术视为相互敌对的关系,它们都应该被看做是对行业有益的技术。它们不是重复建设,而是多样化。据 Hindman 说:“对于开源领域的终端用户来说,这可能会让他们很困惑,因为他们很难知道哪种技术适用于哪种负载,但这就是被称为‘开源’的这头猛兽的本质。”
这只是意味着有更多的选择,并且每个都是赢家。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/6/mesos-and-kubernetes-its-not-competition
作者:[Swapnil Bhartiya][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://mesosphere.github.io/marathon/


@ -0,0 +1,82 @@
使用 LSWCLittle Simple Wallpaper Changer在 Linux 中自动更改壁纸
======
**简介:这是一个小脚本,可以在 Linux 桌面上定期自动更改壁纸。**
顾名思义LittleSimpleWallpaperChanger 是一个小脚本,可以定期地随机更改壁纸。
我知道在“外观”或“更改桌面背景”设置中有一个随机壁纸选项,但那只会随机更换预置的壁纸,而不是你添加的壁纸。
因此,在本文中,我们将看到如何使用 LittleSimpleWallpaperChanger 设置包含照片的随机桌面壁纸。
### Little Simple Wallpaper Changer (LSWC)
[LittleSimpleWallpaperChanger][1] 或 LSWC 是一个非常轻量级的脚本,它在后台运行,从用户指定的文件夹中更改壁纸。壁纸以 1 至 5 分钟的随机间隔变化。该软件设置起来相当简单,设置完后,用户就可以忘掉它。
![Little Simple Wallpaper Changer to change wallpapers in Linux][2]
#### 安装 LSWC
[点此链接下载 LSWC][3]。压缩文件的大小约为 15KB。
* 进入下载位置。
* 右键单击下载的 .zip 文件,然后选择“在此处解压”。
* 打开解压后的文件夹,右键单击并选择“在终端中打开”。
* 在终端中复制粘贴命令并按 Enter 键。
`bash ./README_and_install.sh`
* 然后会弹出一个对话框,要求你选择包含壁纸的文件夹。单击它,然后选择你存放壁纸的文件夹。
* 就是这样。然后重启计算机。
![Little Simple Wallpaper Changer for Linux][4]
#### 使用 LSWC
安装时LSWC 会要求你选择包含壁纸的文件夹。因此,我建议你在安装 LSWC 之前创建一个文件夹,并将你想要的壁纸全部移动到那里。或者你也可以使用图片文件夹中的“壁纸”文件夹。**所有壁纸都必须是 .jpg 格式。**
你可以添加更多壁纸或从所选文件夹中删除当前壁纸。要更改壁纸文件夹位置,你可以从以下文件中编辑壁纸的位置。
```
.config/lswc/homepath.conf
```
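如果你想用脚本来切换壁纸文件夹,可以像下面这样改写该配置文件。注意:这只是一个草案,它假设 homepath.conf 中只保存一行文件夹路径(实际格式请以你机器上的文件为准),并在修改前先做备份。

```shell
# 草案:修改 LSWC 的壁纸文件夹(假设 homepath.conf 只包含文件夹路径)。
set_lswc_folder() {
  local new_dir="$1"
  local conf="${2:-$HOME/.config/lswc/homepath.conf}"   # 第二个参数便于在测试目录中演练
  mkdir -p "$(dirname "$conf")"
  if [ -f "$conf" ]; then
    cp "$conf" "$conf.bak"                              # 先备份旧配置
  fi
  printf '%s\n' "$new_dir" > "$conf"
}
```

例如运行 `set_lswc_folder "$HOME/图片/壁纸"`,之后重启 LSWC 生效。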
#### 删除 LSWC
打开终端并运行以下命令以停止 LSWC
```
pkill lswc
```
在文件管理器中打开家目录,然后按 ctrl+H 显示隐藏文件,接着删除以下文件:
* .local 中的 “scripts” 文件夹
* .config 中的 “lswc” 文件夹
* .config/autostart 中的 “lswc.desktop” 文件
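上面的删除步骤也可以写成几条命令。下面是一个小草案,路径取自本文所列的位置;为了方便在测试目录中演练,把家目录做成了参数(实际卸载时传入 `$HOME` 或不传参数)。

```shell
# 草案:按照上文列出的位置清理 LSWC 的文件(家目录作为参数传入)。
remove_lswc() {
  local home_dir="${1:-$HOME}"
  pkill lswc 2>/dev/null || true                      # 先停止正在运行的脚本(没有运行也没关系)
  rm -rf "$home_dir/.local/scripts"                   # .local 中的 scripts 文件夹
  rm -rf "$home_dir/.config/lswc"                     # .config 中的 lswc 文件夹
  rm -f  "$home_dir/.config/autostart/lswc.desktop"   # 自启动项
}
```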
这就完成了。创建自己的桌面背景幻灯片。LSWC 非常轻巧,易于使用。安装它然后忘记它。
LSWC 的功能不是很丰富,但这是有意为之。它做好了它要做的事情,那就是更换壁纸。如果你想要一个自动下载壁纸的工具,试试 [WallpaperDownloader][5]。
请在下面的评论栏分享你对这个漂亮的小软件的想法。别忘了分享这篇文章。干杯。
--------------------------------------------------------------------------------
via: https://itsfoss.com/little-simple-wallpaper-changer/
作者:[Aquil Roshan][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/aquil/
[1]:https://github.com/LittleSimpleWallpaperChanger/lswc
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-2-800x450.jpg
[3]:https://github.com/LittleSimpleWallpaperChanger/lswc/raw/master/Lswc.zip
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/Little-simple-wallpaper-changer-1-800x450.jpg
[5]:https://itsfoss.com/wallpaperdownloader-linux/


@ -0,0 +1,139 @@
Sosreport - 收集系统日志和诊断信息的工具
======
![](https://www.ostechnix.com/wp-content/uploads/2018/06/sos-720x340.png)
如果你是 RHEL 管理员,你肯定听说过 **Sosreport**:一个可扩展、可移植、受支持的数据收集工具。它用于从类 Unix 操作系统中收集系统配置详细信息和诊断信息。当用户提交支持工单时,需要运行此工具,并将 Sosreport 生成的报告发送给 Red Hat 支持人员。然后,支持人员将根据该报告进行初步分析,并尝试找出系统中的问题。不仅是在 RHEL 系统上,你可以在任何类 Unix 操作系统上使用它来收集系统日志和其他调试信息。
### 安装 Sosreport
Sosreport 在 Red Hat 官方系统仓库中,因此你可以使用 Yum 或 DNF 包管理器安装它,如下所示。
```
$ sudo yum install sos
```
或者:
```
$ sudo dnf install sos
```
在 Debian、Ubuntu 和 Linux Mint 上运行:
```
$ sudo apt install sosreport
```
### 用法
安装后,运行以下命令以收集系统配置详细信息和其他诊断信息。
```
$ sudo sosreport
```
系统将要求你输入一些详细信息,例如你的姓名首字母和姓氏、案例 ID 等。相应地输入详细信息,然后按回车键生成报告。如果你不想更改任何内容并使用默认值,只需直接按回车键即可。
我的 CentOS 7 服务器的示例输出:
```
sosreport (version 3.5)
This command will collect diagnostic and configuration information from
this CentOS Linux system and installed applications.
An archive containing the collected information will be generated in
/var/tmp/sos.DiJXi7 and may be provided to a CentOS support
representative.
Any information provided to CentOS will be treated in accordance with
the published support policies at:
https://wiki.centos.org/
The generated archive may contain data considered sensitive and its
content should be reviewed by the originating organization before being
passed to any third party.
No changes will be made to system configuration.
Press ENTER to continue, or CTRL-C to quit.
Please enter your first initial and last name [server.ostechnix.local]:
Please enter the case id that you are generating this report for []:
Setting up archive ...
Setting up plugins ...
Running plugins. Please wait ...
Running 73/73: yum...
Creating compressed archive...
Your sosreport has been generated and saved in:
/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
The checksum is: 8f08f99a1702184ec13a497eff5ce334
Please send this file to your support representative.
```
如果你不希望系统提示你输入这些详细信息,请像下面这样使用批处理模式。
```
$ sudo sosreport --batch
```
正如你在上面的输出中所看到的,生成的归档报告保存在 **/var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz**。在 RHEL 6/CentOS 6 中,报告会生成在 **/tmp** 中。你现在可以将此报告发送给你的支持人员,以便他进行初步分析并找出问题所在。
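如果你在脚本中批量收集报告,可能需要从 sosreport 的输出里提取归档路径和校验和。下面是一个基于上文示例输出的小草案;不同版本的提示措辞可能不同,请按你的实际输出调整匹配模式。

```shell
# 草案:从 sosreport 的结束输出中提取归档路径与校验和(示例文本取自上文的输出)。
sample='Your sosreport has been generated and saved in:
  /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
The checksum is: 8f08f99a1702184ec13a497eff5ce334'
# “saved in:” 的下一行是归档路径,去掉行首空白后取出。
archive=$(printf '%s\n' "$sample" | awk '/saved in:/ {getline; gsub(/^[ \t]+/, ""); print}')
# 校验和在 “The checksum is:” 行的最后一个字段。
checksum=$(printf '%s\n' "$sample" | awk '/checksum is:/ {print $NF}')
echo "归档:$archive"
echo "校验和:$checksum"
```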
你可能会担心或想知道报告中的内容。如果是这样,你可以通过运行以下命令来查看它:
```
$ sudo tar -tf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
或者:
```
$ sudo vim /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
请注意,上述命令不会解压存档,而只显示存档中的文件和文件夹列表。如果要查看存档中文件的实际内容,请首先使用以下命令解压存档:
```
$ sudo tar -xf /var/tmp/sosreport-server.ostechnix.local-20180628171844.tar.xz
```
存档的所有内容都将被解压到当前工作目录的 “sosreport-server.ostechnix.local-20180628171844/” 目录中。进入该目录,并使用 cat 命令或任何其他文本查看器查看文件内容:
```
$ cd sosreport-server.ostechnix.local-20180628171844/
$ cat uptime
17:19:02 up 1:03, 2 users, load average: 0.50, 0.17, 0.10
```
有关 Sosreport 的更多详细信息,请参阅手册页。
```
$ man sosreport
```
就是这些了。希望这些有用。还有更多好东西。敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/sosreport-a-tool-to-collect-system-logs-and-diagnostic-information/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/