Merge pull request #14 from LCTT/master

update
This commit is contained in:
MjSeven 2018-05-26 14:53:04 +08:00 committed by GitHub
commit a564309276
26 changed files with 1732 additions and 1139 deletions

View File

@ -3,7 +3,7 @@
这是我们的 LAMP 系列教程的开始:如何在 Ubuntu 上安装 Apache web 服务器。
这些说明适用于任何基于 Ubuntu 的发行版,包括 Ubuntu 14.04, Ubuntu 16.04, [Ubuntu 18.04][1],甚至非 LTS 的 Ubuntu 发行版,例如 Ubuntu 17.10。这些说明经过测试并为 Ubuntu 16.04 编写。
这些说明适用于任何基于 Ubuntu 的发行版,包括 Ubuntu 14.04、 Ubuntu 16.04、 [Ubuntu 18.04][1],甚至非 LTS 的 Ubuntu 发行版,例如 Ubuntu 17.10。这些说明经过测试并为 Ubuntu 16.04 编写。
Apache (又名 httpd) 是最受欢迎和使用最广泛的 web 服务器,所以这应该对每个人都有用。
@ -11,9 +11,9 @@ Apache (又名 httpd) 是最受欢迎和使用最广泛的 web 服务器,所
在我们开始之前,这里有一些要求和说明:
* Apache 可能已经在你的服务器上安装了,所以开始之前首先检查一下。你可以使用 "apachectl -V" 命令来显示你正在使用的 Apache 的版本和一些其他信息。
* 你需要一个 Ubuntu 服务器。你可以从 [Vultr][2] 购买一个,它们是[最便宜的云托管服务商][3]之一。它们的服务器价格每月 2.5 美元起。
* 你需要有 root 用户或具有 sudo 访问权限的用户。下面的所有命令都由 root 用户自行,所以我们不必为每个命令都添加 'sudo'
* Apache 可能已经在你的服务器上安装了,所以开始之前首先检查一下。你可以使用 `apachectl -V` 命令来显示你正在使用的 Apache 的版本和一些其他信息。
* 你需要一个 Ubuntu 服务器。你可以从 [Vultr][2] 购买一个,它们是[最便宜的云托管服务商][3]之一。它们的服务器价格每月 2.5 美元起。LCTT 译注:广告 ≤_≤
* 你需要有 root 用户或具有 sudo 访问权限的用户。下面的所有命令都由 root 用户执行,所以我们不必为每个命令都添加 `sudo`
* 如果你使用 Ubuntu则需要[启用 SSH][4],如果你使用 Windows则应该使用类似 [MobaXterm][5] 的 SSH 客户端。
这就是全部要求和注释了,让我们进入安装过程。
@ -21,16 +21,19 @@ Apache (又名 httpd) 是最受欢迎和使用最广泛的 web 服务器,所
### 在 Ubuntu 上安装 Apache
你需要做的第一件事就是更新 Ubuntu这是在你做任何事情之前都应该做的。你可以运行
```
apt-get update && apt-get upgrade
```
接下来,安装 Apache运行以下命令
```
apt-get install apache2
```
如果你愿意,你也可以安装 Apache 文档和一些 Apache 实用程序。对于我们稍后将要安装的一些模块,你将需要一些 Apache 实用程序。
```
apt-get install apache2-doc apache2-utils
```
@ -46,6 +49,7 @@ apt-get install apache2-doc apache2-utils
#### 检查 Apache 是否正在运行
默认情况下Apache 设置为在机器启动时自动启动,因此你不必手动启用它。你可以使用以下命令检查它是否正在运行以及其他相关信息:
```
systemctl status apache2
```
@ -53,6 +57,7 @@ systemctl status apache2
[![check if apache is running][6]][6]
并且你可以检查你正在使用的版本:
```
apachectl -V
```
@ -64,6 +69,7 @@ apachectl -V
如果你使用防火墙你应该使用它则可能需要更新防火墙规则并允许访问默认端口。Ubuntu 上最常用的防火墙是 UFW因此以下说明适用于 UFW。
要允许通过 80http和 443https端口的流量运行以下命令
```
ufw allow 'Apache Full'
```
@ -76,18 +82,21 @@ ufw allow 'Apache Full'
PageSpeed 模块将自动优化并加速你的 Apache 服务器。
首先,进入 [PageSpeed 下载页][7]并选择你需要的的文件。我们使用的是 64 位 Ubuntu 服务器,所以我们安装最新的稳定版本。使用 wget 下载它:
首先,进入 [PageSpeed 下载页][7]并选择你需要的文件。我们使用的是 64 位 Ubuntu 服务器,所以我们安装最新的稳定版本。使用 `wget` 下载它:
```
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb
```
然后,使用以下命令安装它:
```
dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install
```
重启 Apache 以使更改生效:
```
systemctl restart apache2
```
@ -95,6 +104,7 @@ systemctl restart apache2
##### 使用 mod_rewrite 模块启用重写/重定向
顾名思义,该模块用于重写(重定向)。如果你使用 WordPress 或任何其他 CMS你就会需要它。要安装它只需运行
```
a2enmod rewrite
```
@ -104,11 +114,13 @@ a2enmod rewrite
##### 使用 ModSecurity 模块保护你的 Apache
顾名思义ModSecurity 是一个用于安全性的模块,它基本上起着防火墙的作用,它可以监控你的流量。要安装它,运行以下命令:
```
apt-get install libapache2-modsecurity
```
再次重启 Apache
```
systemctl restart apache2
```
@ -118,40 +130,46 @@ ModSecurity 自带了一个默认的设置,但如果你想扩展它,你可
##### 使用 mod_evasive 模块抵御 DDoS 攻击
尽管 mod_evasive 在防止攻击方面有多大用处值得商榷,但是你可以使用它来阻止和防止服务器上的 DDoS 攻击。要安装它,使用以下命令:
```
apt-get install libapache2-mod-evasive
```
默认情况下mod_evasive 是禁用的,要启用它,编辑以下文件:
```
nano /etc/apache2/mods-enabled/evasive.conf
```
取消注释所有行(即删除 #),根据你的要求进行配置。如果你不知道要编辑什么,你可以保持原样。
取消注释所有行(即删除 `#`),根据你的要求进行配置。如果你不知道要编辑什么,你可以保持原样。
[![mod_evasive][9]][9]
创建一个日志文件:
```
mkdir /var/log/mod_evasive
chown -R www-data:www-data /var/log/mod_evasive
```
就是这样。现在重启 Apache 以使更改生效。
```
systemctl restart apache2
```
你可以安装和配置[附加模块][10]但完全取决于你和你使用的软件。它们通常不是必需的。甚至我们上面包含的4个模块也不是必需的。如果特定应用需要模块那么它们可能会注意到这一点。
你可以安装和配置[附加模块][10],但这完全取决于你和你使用的软件。它们通常不是必需的。甚至我们上面包含的 4 个模块也不是必需的。如果特定应用需要某个模块,那么它们通常会注明这一点。
#### 用 Apache2Buddy 脚本优化 Apache
Apache2Buddy 是一个可以自动调整 Apache 配置的脚本。你唯一需要做的就是运行下面的命令,脚本会自动完成剩下的工作:
```
curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
```
如果你没有安装 curl那么你可能需要安装它。使用以下命令来安装 curl
如果你没有安装 `curl`,那么你可能需要安装它。使用以下命令来安装 `curl`
```
apt-get install curl
```
@ -165,15 +183,17 @@ apt-get install curl
现在我们已经完成了所有的调优工作,让我们开始创建一个实际的网站。按照我们的指示创建一个简单的 HTML 页面和一个在 Apache 上运行的虚拟主机。
你需要做的第一件事是为你的网站创建一个新的目录。运行以下命令来执行此操作:
```
mkdir -p /var/www/example.com/public_html
```
当然,将 example.com 替换为你所需的域名。你可以从 [Namecheap][11] 获得一个便宜的域名。
当然,将 `example.com` 替换为你所需的域名。你可以从 [Namecheap][11] 获得一个便宜的域名。
不要忘记在下面的所有命令中替换 example.com。
不要忘记在下面的所有命令中替换 `example.com`
接下来,创建一个简单的静态网页。创建 HTML 文件:
```
nano /var/www/example.com/public_html/index.html
```
@ -193,17 +213,20 @@ nano /var/www/example.com/public_html/index.html
保存并关闭文件。
配置目录的权限:
```
chown -R www-data:www-data /var/www/example.com
chmod -R og-r /var/www/example.com
```
为你的网站创建一个新的虚拟主机:
```
nano /etc/apache2/sites-available/example.com.conf
```
粘贴以下内容:
```
<VirtualHost *:80>
     ServerAdmin admin@example.com
@ -217,32 +240,31 @@ nano /etc/apache2/sites-available/example.com.conf
</VirtualHost>
```
这是一个基础的虚拟主机。根据你的设置,你可能需要更高级的 .conf 文件。
这是一个基础的虚拟主机。根据你的设置,你可能需要更高级的 `.conf` 文件。
在更新所有内容后保存并关闭文件。
现在,使用以下命令启用虚拟主机:
```
a2ensite example.com.conf
```
最后,重启 Apache 以使更改生效:
```
systemctl restart apache2
```
这就是全部了,你做完了。现在你可以访问 example.com 并查看你的页面。
--------------------------------------------------------------------------------
via: https://thishosting.rocks/how-to-install-optimize-apache-ubuntu/
作者:[ThisHosting][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,9 +1,11 @@
Loop better更深入的理解Python 中的迭代
更深入的理解 Python 中的迭代
====
> 深入探讨 Python 的 `for` 循环来看看它们在底层如何工作,以及为什么它们会按照它们的方式工作。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pez-candy.png?itok=tRoOn_iy)
Python 的 `for` 循环不会像其他语言中的 `for` 循环那样工作。在这篇文章中,我们将深入探讨 Python 的 `for` 循环来看看它们如何在底层工作,以及为什么它们会按照它们的方式工作。
Python 的 `for` 循环不会像其他语言中的 `for` 循环那样工作。在这篇文章中,我们将深入探讨 Python 的 `for` 循环来看看它们在底层如何工作,以及为什么它们会按照它们的方式工作。
### 循环的问题
@ -12,27 +14,24 @@ Python 的 `for` 循环不会像其他语言中的 `for` 循环那样工作。
#### 问题 1循环两次
假设我们有一个数字列表和一个生成器,生成器会返回这些数字的平方:
```
>>> numbers = [1, 2, 3, 5, 7]
>>> squares = (n**2 for n in numbers)
```
我们可以将生成器对象传递给 `tuple` 构造器,从而使其变为一个元组:
```
>>> tuple(squares)
(1, 4, 9, 25, 49)
```
如果我们使用相同的生成器对象并将其传给 `sum` 函数,我们可能会期望得到这些数的和,即 88。
如果我们使用相同的生成器对象并将其传给 `sum` 函数,我们可能会期望得到这些数的和,即 `88`
```
>>> sum(squares)
0
```
但是我们得到了 `0`
@ -40,6 +39,7 @@ Python 的 `for` 循环不会像其他语言中的 `for` 循环那样工作。
#### 问题 2包含的检查
让我们使用相同的数字列表和相同的生成器对象:
```
>>> numbers = [1, 2, 3, 5, 7]
@ -48,15 +48,12 @@ Python 的 `for` 循环不会像其他语言中的 `for` 循环那样工作。
```
如果我们询问 `9` 是否在 `squares` 生成器中Python 将会告诉我们 9 在 `squares` 中。但是如果我们再次询问相同的问题Python 会告诉我们 9 不在 `squares` 中。
```
>>> 9 in squares
True
>>> 9 in squares
False
```
我们询问相同的问题两次Python 给了两个不同的答案。
@ -64,57 +61,51 @@ False
#### 问题 3 :拆包
这个字典有两个键值对:
```
>>> counts = {'apples': 2, 'oranges': 1}
```
让我们使用多个变量来对这个字典进行拆包:
```
>>> x, y = counts
```
你可能会期望当我们对这个字典进行拆包时,我们会得到键值对或者得到一个错误。
但是拆包字典不会引发错误,也不会返回键值对。当你拆包一个字典时,你会得到键:
```
>>> x
'apples'
```
### 回顾Python 的 __for__ 循环
### 回顾Python 的 for 循环
在我们了解一些关于这些 Python 片段的逻辑之后,我们将回到这些问题。
Python 没有传统的 `for` 循环。为了解释我的意思,让我们看一看另一种编程语言的 `for` 循环。
这是一种传统 C 风格的 `for` 循环,用 JavaScript 编写:
```
let numbers = [1, 2, 3, 5, 7];
for (let i = 0; i < numbers.length; i += 1) {
    print(numbers[i])
}
```
JavaScript, C, C++, Java, PHP, 和一大堆其他编程语言都有这种风格的 `for` 循环,但是 Python **确实没**。
JavaScript、 C、 C++、 Java、 PHP 和一大堆其他编程语言都有这种风格的 `for` 循环,但是 Python **确实没有**。
Python **确实没** 传统 C 风格的 `for` 循环。在 Python 中确实有一些我们称之为 `for` 循环的东西,但是它的工作方式类似于 [foreach loop][1]。
Python **确实没有**传统 C 风格的 `for` 循环。在 Python 中确实有一些我们称之为 `for` 循环的东西,但是它的工作方式类似于 [foreach 循环][1]。
这是 Python 的 `for` 循环的风格:
```
numbers = [1, 2, 3, 5, 7]
for n in numbers:
    print(n)
```
与传统 C 风格的 `for` 循环不同Python 的 `for` 循环没有索引变量,也没有索引初始化、边界检查或者索引递增。Python 的 `for` 循环完成了对我们的 `numbers` 列表进行遍历的所有工作。
@ -126,98 +117,75 @@ for n in numbers:
既然我们已经说明了 Python 世界中无索引的 `for` 循环,那么让我们先来理清一些定义。
**可迭代对象**是任何你可以用 Python 中的 `for` 循环遍历的东西。可迭代意味着可以遍历,任何可以遍历的东西都是可迭代对象。
```
for item in some_iterable:
    print(item)
```
序列是一种非常常见的可迭代对象类型,列表、元组和字符串都是序列。
```
>>> numbers = [1, 2, 3, 5, 7]
>>> coordinates = (4, 5, 7)
>>> words = "hello there"
```
序列是可迭代的,它有一些特定的特征集。它们可以从 `0` 开始索引,以小于序列的长度结束,它们有一个长度并且它们可以被切分。列表、元组、字符串和其他所有序列都是这样工作的。
```
>>> numbers[0]
1
>>> coordinates[2]
7
>>> words[4]
'o'
```
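顺便可以验证一下序列的另外两个特征:它们有长度,并且可以被切分(下面只是一个简单的演示,使用的还是上面定义的那几个序列):

```
>>> len(numbers)
5
>>> coordinates[1:]
(5, 7)
>>> words[:5]
'hello'
```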
Python 中很多东西都是可迭代的,但不是所有可迭代的东西都是序列。集合,字典,文件和生成器都是可迭代的,但是它们都不是序列。
Python 中很多东西都是可迭代的,但不是所有可迭代的东西都是序列。集合、字典、文件和生成器都是可迭代的,但是它们都不是序列。
```
>>> my_set = {1, 2, 3}
>>> my_dict = {'k1': 'v1', 'k2': 'v2'}
>>> my_file = open('some_file.txt')
>>> squares = (n**2 for n in my_set)
```
因此,任何可以用 `for` 循环遍历的东西都是可迭代的,序列只是可迭代对象的一种,但是 Python 也有许多其他种类的可迭代对象。
### Python 的 __for__ 循环不使用索引
### Python 的 for 循环不使用索引
你可能认为Python 的 `for` 循环在底层使用了索引进行循环。在这里我们使用 `while` 循环和索引手动遍历:
你可能认为Python 的 `for` 循环在底层使用索引进行循环。在这里我们使用 `while` 循环和索引手动遍历:
```
numbers = [1, 2, 3, 5, 7]
i = 0
while i < len(numbers):
    print(numbers[i])
    i += 1
```
这适用于列表,但它不会对所有东西都起作用。这种循环方式**只适用于序列**。
如果我们尝试用索引去手动遍历一个集合,我们会得到一个错误:
```
>>> fruits = {'lemon', 'apple', 'orange', 'watermelon'}
>>> i = 0
>>> while i < len(fruits):
...     print(fruits[i])
...     i += 1
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
TypeError: 'set' object does not support indexing
```
集合不是序列,所以它们不支持索引。
我们不能使用索引手动对 Python 中的每一个可迭代对象进行遍历。对于那些不是序列的可迭代对象来说,这是行不通的。
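比如 `zip` 对象也是可迭代对象但不是序列,下面这个简单的演示说明它同样不支持索引:

```
>>> pairs = zip([1, 2, 3], ['a', 'b', 'c'])
>>> pairs[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'zip' object is not subscriptable
```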
### 迭代器驱动 __for__ 循环
### 迭代器驱动 for 循环
因此我们已经看到Python 的 `for` 循环在底层不使用索引。相反Python 的 `for` 循环使用**迭代器**。
@ -226,148 +194,116 @@ TypeError: 'set' object does not support indexing
让我们来看看它是如何工作的。
这里有三个可迭代对象:一个集合、一个元组和一个字符串。
```
>>> numbers = {1, 2, 3, 5, 7}
>>> coordinates = (4, 5, 7)
>>> words = "hello there"
```
我们可以使用 Python 内置的 `iter` 函数来获取这些可迭代对象的迭代器,将一个可迭代对象传递给 `iter` 函数总会给我们返回一个迭代器,无论我们正在使用哪种类型的可迭代对象。
```
>>> iter(numbers)
<set_iterator object at 0x7f2b9271c860>
>>> iter(coordinates)
<tuple_iterator object at 0x7f2b9271ce80>
>>> iter(words)
<str_iterator object at 0x7f2b9271c860>
```
一旦我们有了迭代器,我们可以做的事情就是通过将它传递给内置的 `next` 函数来获取它的下一项。
```
>>> numbers = [1, 2, 3]
>>> my_iterator = iter(numbers)
>>> next(my_iterator)
1
>>> next(my_iterator)
2
```
迭代器是有状态的,这意味着一旦你从它们中消耗了一项,它就消失了。
如果你从迭代器中请求 `next` 项,但是其中没有更多的项了,你将得到一个 `StopIteration` 异常:
```
>>> next(my_iterator)
3
>>> next(my_iterator)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
```
所以你可以从每个可迭代对象中获得一个迭代器,而迭代器唯一能做的事情就是用 `next` 函数请求它们的下一项。如果你将它们传递给 `next`,但它们没有下一项了,那么就会引发 `StopIteration` 异常。
你可以将迭代器想象成 Pez 分配器译注Pez 是一个结合玩具的独特复合式糖果),不能重新分配。你可以把 Pez 拿出去,但是一旦 Pez 被移走,它就不能被放回去,一旦分配器空了,它就没用了。
你可以将迭代器想象成 Pez 分配器(LCTT 译注Pez 是一个结合玩具的独特复合式糖果),不能重新分配。你可以把 Pez 拿出去,但是一旦 Pez 被移走,它就不能被放回去,一旦分配器空了,它就没用了。
### 没有 __for__ 的循环
### 没有 for 的循环
既然我们已经了解了迭代器和 `iter` 以及 `next` 函数,我们将尝试在不使用 `for` 循环的情况下手动遍历迭代器。
我们将尝试把下面这个 `for` 循环改写成 `while` 循环:
```
def funky_for_loop(iterable, action_to_do):
    for item in iterable:
        action_to_do(item)
```
为了做到这点,我们需要:
1. 从给定的可迭代对象中获得迭代器
2. 反复从迭代器中获得下一项
3. 如果我们成功获得下一项,就执行 `for` 循环的主体
4. 如果我们在获得下一项时得到了一个 `StopIteration` 异常,那么就停止循环
```
def funky_for_loop(iterable, action_to_do):
    iterator = iter(iterable)
    done_looping = False
    while not done_looping:
        try:
            item = next(iterator)
        except StopIteration:
            done_looping = True
        else:
            action_to_do(item)
```
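下面是一个简单的使用示意(用一个列表和内置的 `print` 函数作为参数,仅作演示),它的行为和普通的 `for` 循环一致:

```
>>> funky_for_loop([1, 2, 3], print)
1
2
3
```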
我们只是通过使用 `while` 循环和迭代器重新定义了 `for` 循环。
上面的代码基本上定义了 Python 在底层循环的工作方式。如果你理解内置的 `iter``next` 函数的遍历循环的工作方式,那么你就会理解 Python 的 `for` 循环是如何工作的。
事实上,你不仅仅会理解 `for` 循环在 Python 中如何工作的,所有形式的遍历一个可迭代对象都是这样工作的。
**迭代器协议** 是一种很好的方式,它表示 "在 Python 中遍历迭代器是如何工作的"。它本质上是对 `iter``next` 函数在 Python 中是如何工作的定义。Python 中所有形式的迭代都是由迭代器协议驱动的。
<ruby>迭代器协议<rt>iterator protocol</rt></ruby>是对“在 Python 中遍历可迭代对象是如何工作的”的一种很好的表述。它本质上是对 `iter` 和 `next` 函数在 Python 中是如何工作的定义。Python 中所有形式的迭代都是由迭代器协议驱动的。
迭代器协议被 `for` 循环使用(正如我们已经看到的那样):
```
for n in numbers:
    print(n)
```
多重赋值也使用迭代器协议:
```
x, y, z = coordinates
```
星型表达式也使用迭代器协议:
```
a, b, *rest = numbers
print(*numbers)
```
许多内置函数依赖于迭代器协议:
```
unique_numbers = set(numbers)
```
在 Python 中任何与可迭代对象一起工作的东西都可能以某种方式使用迭代器协议。每当你在 Python 中遍历一个可迭代对象时,你都在依赖迭代器协议。
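反过来看,只要一个对象实现了迭代器协议,上面这些迭代形式就都能处理它。下面是一个简单的示意(这个 `Point` 类是为演示虚构的),它只实现了 `__iter__`

```
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __iter__(self):
        return iter((self.x, self.y))

p = Point(3, 4)
for coord in p:    # for 循环使用迭代器协议
    print(coord)
x, y = p           # 多重赋值也使用迭代器协议
print(sum(p))      # 内置函数同样如此,输出 7
```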
@ -379,41 +315,31 @@ unique_numbers = set(numbers)
我有消息告诉你:在 Python 中直接使用迭代器是很常见的。
这里的 `squares` 对象是一个生成器:
```
>>> numbers = [1, 2, 3]
>>> squares = (n**2 for n in numbers)
```
生成器是迭代器,这意味着你可以在生成器上调用 `next` 来获得它的下一项:
```
>>> next(squares)
1
>>> next(squares)
4
```
但是如果你以前用过生成器,你可能也知道可以循环遍历生成器:
```
>>> squares = (n**2 for n in numbers)
>>> for n in squares:
...     print(n)
...
1
4
9
```
如果你可以在 Python 中循环遍历某些东西,那么它就是**可迭代的**。
@ -424,90 +350,84 @@ unique_numbers = set(numbers)
所以在我之前解释迭代器如何工作时,我跳过了它们的某些重要的细节。
**生成器是可迭代的。**
#### 迭代器是可迭代的
我再说一遍Python 中的每一个迭代器都是可迭代的,意味着你可以循环遍历迭代器。
因为迭代器也是可迭代的,所以你可以使用内置的 `iter` 函数从迭代器中获得一个迭代器:
```
>>> numbers = [1, 2, 3]
>>> iterator1 = iter(numbers)
>>> iterator2 = iter(iterator1)
```
请记住,当我们在可迭代对象上调用 `iter` 时,它会给我们返回一个迭代器。
当我们在迭代器上调用 `iter` 时,它会给我们返回它自己:
```
>>> iterator1 is iterator2
True
```
迭代器是可迭代的,所有的迭代器都是它们自己的迭代器。
```
def is_iterator(iterable):
    return iter(iterable) is iterable
```
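一个简单的用法演示:列表不是它自己的迭代器,而列表的迭代器是:

```
>>> is_iterator([1, 2, 3])
False
>>> is_iterator(iter([1, 2, 3]))
True
```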
迷惑了吗?
让我们回顾一下这些措辞。
* 一个**可迭代对象**是你可以迭代的东西
* 一个**可迭代对象**是一种可以迭代遍历迭代的代理
(这句话不是很理解)
* 一个**迭代器**是一种实际上遍历可迭代对象的代理
此外,在 Python 中迭代器也是可迭代的,它们充当它们自己的迭代器。
所以迭代器是可迭代的,但是它们没有一些可迭代对象拥有的各种特性。
迭代器没有长度,它们不能被索引:
```
>>> numbers = [1, 2, 3, 5, 7]
>>> iterator = iter(numbers)
>>> len(iterator)
TypeError: object of type 'list_iterator' has no len()
>>> iterator[0]
TypeError: 'list_iterator' object is not subscriptable
```
从我们作为 Python 程序员的角度来看,你可以使用迭代器来做的唯一有用的事情是将其传递给内置的 `next` 函数,或者对其进行循环遍历:
```
>>> next(iterator)
1
>>> list(iterator)
[2, 3, 5, 7]
```
如果我们第二次循环遍历迭代器,我们将一无所获:
```
>>> list(iterator)
[]
```
你可以把迭代器看作是**惰性的可迭代对象**,它们是**一次性使用**的,这意味着它们只能循环遍历一次。
正如你在下面的真值表中所看到的,可迭代对象并不总是迭代器,但是迭代器总是可迭代的:
可迭代对象?迭代器?可迭代的? ✔️ ❓ 迭代器 ✔️ ✔️ 生成器 ✔️ ✔️ 列表 ✔️ ❌
| 对象 | 可迭代? | 迭代器? |
| ---------- | ------- | ------- |
| 可迭代对象 | ✔️ | ❓ |
| 迭代器 | ✔️ | ✔️ |
| 生成器 | ✔️ | ✔️ |
| 列表 | ✔️ | ❌ |
### 完整的迭代器协议
@ -535,45 +455,33 @@ TypeError: 'list_iterator' object is not subscriptable
### 迭代器无处不在
你已经在 Python 中看到过许多迭代器我也提到过生成器是迭代器。Python 的许多内置类型也是迭代器。例如Python 的 `enumerate``reversed` 对象就是迭代器。
```
>>> letters = ['a', 'b', 'c']
>>> e = enumerate(letters)
>>> e
<enumerate object at 0x7f112b0e6510>
>>> next(e)
(0, 'a')
```
在 Python 3 中,`zip`、`map` 和 `filter` 也是迭代器。
```
>>> numbers = [1, 2, 3, 5, 7]
>>> letters = ['a', 'b', 'c']
>>> z = zip(numbers, letters)
>>> z
<zip object at 0x7f112cc6ce48>
>>> next(z)
(1, 'a')
```
Python 中的 文件对象也是迭代器。
Python 中的文件对象也是迭代器。
```
>>> next(open('hello.txt'))
'hello world\n'
```
在 Python 标准库和第三方库中内置了大量的迭代器。这些迭代器都像惰性可迭代对象那样,把工作推迟到你向它们请求下一项的时候。
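下面是一个简单的演示:`map` 在你向它请求下一项之前不会做任何计算,即使它组合的是一个无限的 `itertools.count` 也没有问题:

```
>>> from itertools import count
>>> squares = map(lambda n: n**2, count(1))
>>> next(squares)
1
>>> next(squares)
4
```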
@ -584,51 +492,37 @@ Python 中的 文件对象也是迭代器。
知道你已经在使用迭代器是很有用的,但是我希望你也知道,你可以创建自己的迭代器和自己的惰性可迭代对象。
下面这个类构造了一个迭代器,它接受一个数字组成的可迭代对象,并在被循环遍历时提供每个数字的平方。
```
class square_all:
    def __init__(self, numbers):
        self.numbers = iter(numbers)
    def __next__(self):
        return next(self.numbers) ** 2
    def __iter__(self):
        return self
```
但是在我们开始对该类的实例进行循环遍历之前,没有任何工作要做。
这里,我们有一个无限长的可迭代对象 `count`,你可以看到 `square_all` 接受 `count` 而不用完全循环遍历这个无限长的可迭代对象:
```
>>> from itertools import count
>>> numbers = count(5)
>>> squares = square_all(numbers)
>>> next(squares)
25
>>> next(squares)
36
```
这个迭代器类是有效的,但我们通常不会这样做。通常,当我们想要做一个定制的迭代器时,我们会写一个生成器函数:
```
def square_all(numbers):
    for n in numbers:
        yield n**2
```
这个生成器函数等价于我们上面写的类,它的工作原理是一样的。
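作为一个简单的对照,用它来处理前面那个无限长的 `count` 可迭代对象,结果和上面的类完全一样:

```
>>> from itertools import count
>>> squares = square_all(count(5))
>>> next(squares)
25
>>> next(squares)
36
```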
@ -636,14 +530,13 @@ def square_all(numbers):
这种 `yield` 语句似乎很神奇,但它非常强大:`yield` 允许我们在调用 `next` 函数之间暂停生成器函数。`yield` 语句是将生成器函数与常规函数分离的东西。
另一种实现相同迭代器的方法是使用生成器表达式。
```
def square_all(numbers):
    return (n**2 for n in numbers)
```
这和我们的生成器函数确实是一样的,但是它使用的语法看起来[像列表一样容易理解][2]。如果你需要在代码中使用惰性迭代,请考虑迭代器,并考虑使用生成器函数或生成器表达式。
这和我们的生成器函数确实是一样的,但是它使用的语法看起来[像是一个列表推导一样][2]。如果你需要在代码中使用惰性迭代,请考虑迭代器,并考虑使用生成器函数或生成器表达式。
### 迭代器如何改进你的代码
@ -652,33 +545,24 @@ def square_all(numbers):
#### 惰性求和
这是一个 `for` 循环,它对 Django queryset 中所有可计费的工作时间求和:
```
hours_worked = 0
for event in events:
    if event.is_billable():
        hours_worked += event.duration
```
下面是使用生成器表达式进行惰性求值的代码:
```
billable_times = (
    event.duration
    for event in events
    if event.is_billable()
)
hours_worked = sum(billable_times)
```
请注意,我们代码的形状发生了巨大变化。
@ -688,27 +572,21 @@ hours_worked = sum(billable_times)
#### 惰性和打破循环
这段代码打印出日志文件的前 10 行:
```
for i, line in enumerate(log_file):
    if i >= 10:
        break
    print(line)
```
这段代码做了同样的事情,但是我们使用的是 `itertools.islice` 函数来惰性地抓取文件中的前 10 行:
```
from itertools import islice
first_ten_lines = islice(log_file, 10)
for line in first_ten_lines:
    print(line)
```
我们定义的 `first_ten_lines` 变量是迭代器,同样,使用迭代器允许我们给以前未命名的东西命名(`first_ten_lines`)。命名事物可以使我们的代码更具描述性,更具可读性。
@ -722,15 +600,12 @@ for line in first_ten_lines:
你可以在标准库和第三方库中找到用于循环的辅助函数,但你也可以自己创建!
这段代码列出了序列中连续值之间的差值列表。
```
current = readings[0]
for next_item in readings[1:]:
    differences.append(next_item - current)
    current = next_item
```
请注意,这段代码中有一个额外的变量,我们每次循环时都要指定它。还要注意,这段代码只适用于我们可以切片的东西,比如序列。如果 `readings` 是一个生成器,一个 zip 对象或其他任何类型的迭代器,那么这段代码就会失败。
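下面是一个简单的演示(用一个生成器代替 `readings`,仅作说明),可以看到对迭代器做索引或切片会直接报错:

```
>>> readings = (n for n in [1, 2, 3])
>>> readings[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'generator' object is not subscriptable
```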
@ -738,47 +613,36 @@ for next_item in readings[1:]:
让我们编写一个辅助函数来修复代码。
这是一个生成器函数,它为给定的可迭代对象中的每个项目提供了当前项和下一项:
```
def with_next(iterable):
    """Yield (current, next_item) tuples for each item in iterable."""
    iterator = iter(iterable)
    current = next(iterator)
    for next_item in iterator:
        yield current, next_item
        current = next_item
```
我们从可迭代对象中手动获取一个迭代器,在它上面调用 `next` 来获取第一项,然后循环遍历迭代器获取后续所有的项目,并沿途跟踪后一个项目。这个函数不仅适用于序列,而且适用于任何类型的可迭代对象。
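下面是一个简单的使用示意,传给它一个生成器(它不是序列)也同样可以工作:

```
>>> readings = (n**2 for n in [1, 2, 3])
>>> list(with_next(readings))
[(1, 4), (4, 9)]
```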
这段代码和以前代码是一样的,但是我们使用的是辅助函数而不是手动跟踪 `next_item`:
这段代码和以前代码是一样的,但是我们使用的是辅助函数而不是手动跟踪 `next_item`
```
differences = []
for current, next_item in with_next(readings):
    differences.append(next_item - current)
```
请注意,这段代码中没有散落在循环周围的 `next_item` 赋值,`with_next` 生成器函数处理了跟踪 `next_item` 的工作。
还要注意,这段代码已足够紧凑,如果我们愿意,我们甚至可以[将方法复制到列表中来理解][2]。
还要注意,这段代码已足够紧凑,如果我们愿意,我们甚至可以[把它直接改写成一个列表推导][2]。
```
differences = [
    (next_item - current)
    for current, next_item in with_next(readings)
]
```
### 再次回顾循环问题
@ -787,40 +651,34 @@ differences = [
#### 问题 1耗尽的迭代器
这里我们有一个生成器对象 `squares`:
这里我们有一个生成器对象 `squares`
```
>>> numbers = [1, 2, 3, 5, 7]
>>> squares = (n**2 for n in numbers)
```
如果我们把这个生成器传递给 `tuple` 构造函数,我们将会得到它的一个元组:
```
>>> numbers = [1, 2, 3, 5, 7]
>>> squares = (n**2 for n in numbers)
>>> tuple(squares)
(1, 4, 9, 25, 49)
```
如果我们试着计算这个生成器中数字的和,使用 `sum`,我们就会得到 `0`:
如果我们试着计算这个生成器中数字的和,使用 `sum`,我们就会得到 `0`
```
>>> sum(squares)
0
```
这个生成器现在是空的:我们已经把它耗尽了。如果我们试着再次创建一个元组,我们会得到一个空元组:
```
>>> tuple(squares)
()
```
生成器是迭代器,迭代器是一次性的。它们就像 Hello Kitty Pez 分配器那样不能重新加载。
@ -828,43 +686,35 @@ differences = [
#### 问题 2部分消耗一个迭代器
再次使用那个生成器对象 `squares`
```
>>> numbers = [1, 2, 3, 5, 7]
>>> squares = (n**2 for n in numbers)
```
如果我们询问 `9` 是否在 `squares` 生成器中,我们会得到 `True`:
如果我们询问 `9` 是否在 `squares` 生成器中,我们会得到 `True`
```
>>> 9 in squares
True
```
但是我们再次询问相同的问题,我们会得到 `False`:
但是我们再次询问相同的问题,我们会得到 `False`
```
>>> 9 in squares
False
```
当我们询问 `9` 是否在迭代器中时Python 必须对这个生成器进行循环遍历来找到 `9`。如果我们在检查了 `9` 之后继续循环遍历,我们只会得到最后两个数字,因为我们已经在找到 9 之前消耗了这些数字:
```
>>> numbers = [1, 2, 3, 5, 7]
>>> squares = (n**2 for n in numbers)
>>> 9 in squares
True
>>> list(squares)
[25, 49]
```
询问迭代器中是否包含某些东西将会部分地消耗迭代器。如果没有循环遍历迭代器,那么是没有办法知道某个东西是否在迭代器中。
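如果你确实需要反复做这种成员检查,一个简单的做法(这里只是一种示意)是先把生成器一次性转换成列表或集合,然后在得到的容器上做检查:

```
>>> numbers = [1, 2, 3, 5, 7]
>>> squares = set(n**2 for n in numbers)
>>> 9 in squares
True
>>> 9 in squares
True
```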
@ -872,36 +722,29 @@ True
#### 问题 3拆包是迭代
当你在字典上循环时,你会得到键:
```
>>> counts = {'apples': 2, 'oranges': 1}
>>> for key in counts:
... print(key)
...
apples
oranges
```
当你对一个字典进行拆包时,你也会得到键:
```
>>> x, y = counts
>>> x, y
('apples', 'oranges')
```
循环依赖于迭代器协议,可迭代对象的拆包也依赖于迭代器协议。拆包一个字典与在字典上循环遍历是一样的,两者都使用迭代器协议,所以在这两种情况下都得到相同的结果。
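如果你想要的是键值对而不是键,可以对字典的 `items()` 进行循环或拆包,下面是一个简单的示意:

```
>>> counts = {'apples': 2, 'oranges': 1}
>>> for fruit, count in counts.items():
...     print(fruit, count)
...
apples 2
oranges 1
```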
### 回顾
序列是迭代器,但是不是所有的迭代器都是序列。当有人说“迭代器”这个词时,你只能假设他们的意思是“你可以迭代的东西”。不要假设迭代器可以被循环遍历两次,询问它们的长度,或者索引。
序列是可迭代对象,但是不是所有的可迭代对象都是序列。当有人说“可迭代对象”这个词时,你只能假设他们的意思是“你可以迭代的东西”。不要假设可迭代对象可以被循环遍历两次、询问它们的长度或者索引。
迭代器是 Python 中最基本的可迭代形式。如果你想在代码中做一个惰性迭代,请考虑迭代器,并考虑使用生成器函数或生成器表达式。
@ -909,8 +752,15 @@ oranges
这里有一些我推荐的相关文章和视频:
本文是基于作者去年在 [DjangoCon AU][6], [PyGotham][7] 和 [North Bay Python][8] 中发表的 Loop Better 演讲。有关更多内容,请参加将于 2018 年 5 月 9 日至 17 日在 Columbus, Ohio 举办的 [PYCON][9]。
- [Loop Like a Native](https://nedbatchelder.com/text/iter.html) Ned Batchelder 在 PyCon 2013 的讲演
- [Loop Better](https://www.youtube.com/watch?v=V2PkkMS2Ack) ,这篇文章是基于这个讲演的
- [The Iterator Protocol: How For Loops Work](http://treyhunner.com/2016/12/python-iterator-protocol-how-for-loops-work/),我写的关于迭代器协议的短文
- [Comprehensible Comprehensions](https://www.youtube.com/watch?v=5_cJIcgM7rw),关于推导和迭代器表达器的讲演
- [Python: Range is Not an Iterator](http://treyhunner.com/2018/02/python-range-is-not-an-iterator/),我关于范围和迭代器的文章
- [Looping Like a Pro in Python](https://www.youtube.com/watch?v=u8g9scXeAcI)DB 的 PyCon 2017 讲演
本文是基于作者去年在 [DjangoCon AU][6]、 [PyGotham][7] 和 [North Bay Python][8] 中发表的 Loop Better 演讲。有关更多内容,请参加将于 2018 年 5 月 9 日至 17 日在 Columbus Ohio 举办的 [PYCON][9]。
--------------------------------------------------------------------------------
@ -919,7 +769,7 @@ via: https://opensource.com/article/18/3/loop-better-deeper-look-iteration-pytho
作者:[Trey Hunner][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,7 +1,10 @@
使用树莓派构建一个婴儿监视器
======
> 比一般的视频监控还要好,这种 DIY 型号还有婴儿房间的自动室温控制功能。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/baby-chick-egg.png?itok=RcFfqdbA)
香港很湿热,即便是晚上,许多人为了更舒适,在家里也使用空调。当我的大儿子还是一个小婴儿的时候,他卧室的空调还是需要手动控制的,没有温度自动调节的功能。它的控制器只有开或者关,让空调整个晚上持续运行会导致房间过冷,并且也浪费能源和钱。
我决定使用一个基于 [树莓派][2] 的 [物联网][1] 解决方案去修复这个问题。后来我进一步为它添加了一个[婴儿监视器][3]插件。在这篇文章中,我将解释我是如何做的,它的代码在 [我的 GitHub][4] 页面上。
@ -10,17 +13,17 @@
解决我的问题的第一个部分是使用了一个 Orvibo S20 [可通过 WiFi 连接的智能插头][5]和智能手机应用程序。虽然这样可以让我通过远程来控制空调,但是它还是手动处理的,而我希望尝试让它自动化。我在 [Instructables][6] 上找到了一个满足我的需求的项目:他使用树莓派从一个 [AM2302 传感器][7] 上测量附近的温度和湿度,并将它们记录到一个 MySQL 数据库中。
使用接头将温度/湿度传感器连接到树莓派的相应 GPIO 针脚上。幸运的是AM2302 传感器有一个用于读取的 [开源软件][8],并且同时提供了 [Python][9] 示例。
使用接头将温度/湿度传感器连接到树莓派的相应 GPIO 针脚上。幸运的是AM2302 传感器有一个用于读取的 [开源软件][8],并且同时提供了 [Python][9] 示例。
与我的项目放在一起的用于 [AM2302 传感器][10] 接口的软件已经更新了,并且我使用的原始代码现在已经被认为是过时的而停止维护了。这个代码是由一个小的二进制组成,用于连接到传感器以及解释阅读和返回正确值的 Python 脚本。
与我的项目放在一起的用于 [AM2302 传感器][10] 接口的软件已经更新了,并且我使用的原始代码现在应该已经过时了,停止维护了。这个代码是由一个小的二进制组成,用于连接到传感器以及解释读取并返回正确值的 Python 脚本。
![Raspberry Pi, sensor, and Python code][12]
树莓派、传感器、以及用于构建温度/湿度监视器的 Python 代码。
*树莓派、传感器、以及用于构建温度/湿度监视器的 Python 代码。*
将传感器连接到树莓派,这些 Python 代码能够正确地返回温度和湿度读数。将 Python 连接到 MySQL 数据库很简单,并且也有大量的使用 `python-``mysql` 绑定的代码示例。因为我需要持续地监视温度和湿度,所以我写软件来实现这些。
将传感器连接到树莓派,这些 Python 代码能够正确地返回温度和湿度读数。将 Python 连接到 MySQL 数据库很简单,并且也有大量的使用 python-mysql 绑定的代码示例。因为我需要持续地监视温度和湿度,所以我写软件来实现这些。
事实上,最终我用了两个解决方案,一是作为一个持续运行的进程,周期性(一般是间隔一分钟)地获取传感器数据,另一种是让 Python 脚本运行一次然后退出。我决定使用第二种方法,并使用 cron 去每分钟调用一次这个脚本。之所以选择这种方法的主要原因是,(通过循环实现的)持续的脚本偶尔会不返回读数,这将导致尝试读取传感器的进程出现聚集,最终可能会导致系统挂起而缺乏可用资源。
事实上,最终我用了两个解决方案,一是作为一个持续运行的进程,周期性(一般是间隔一分钟)地获取传感器数据,另一种是让 Python 脚本运行一次然后退出。我决定使用第二种方法,并使用 cron 去每分钟调用一次这个脚本。之所以选择这种方法的主要原因是,(通过循环实现的)持续的脚本偶尔会不返回读数,这将导致尝试读取传感器的进程出现堆积,最终可能会导致系统挂起而缺乏可用资源。
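下面是一个示意性的草图(并非原始代码),展示这种“运行一次就退出”、由 cron 每分钟调用的脚本大致是什么样子。其中假设使用 Adafruit_DHT 和 PyMySQL 库GPIO 针脚号、数据库连接参数和表结构都只是示例:

```
# 示意:读取一次 AM2302 的温湿度,写入 MySQL然后退出由 cron 每分钟调用一次)
# 假设使用 Adafruit_DHT 和 PyMySQL 库;针脚号、库名、表名均为示例
import Adafruit_DHT
import pymysql

SENSOR_PIN = 4  # 假设传感器的数据脚接在 GPIO4

humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.AM2302, SENSOR_PIN)

if humidity is not None and temperature is not None:
    conn = pymysql.connect(host='localhost', user='pi', password='secret', db='climate')
    with conn.cursor() as cursor:
        cursor.execute(
            "INSERT INTO readings (temperature, humidity) VALUES (%s, %s)",
            (temperature, humidity),
        )
    conn.commit()
    conn.close()
```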
我也找到了可以用程序来控制我的智能插头的一个 [Perl 脚本][13]。它是解决这种问题所需的一部分,因此当某些温度/湿度达到触发条件,将触发这个 Perl 脚本。在做了一些测试之后,我决定去设计一个独立的 `checking` 脚本,从 MySQL 去拉取最新的数据,然后根据返回的值去设置智能开关为开或关。将插头控制逻辑与传感器读取脚本分开,意味着它们是各自独立运行的,就算是传感器读取脚本写的有问题也没事。
@ -30,7 +33,7 @@
![Temperature and humidity chart][16]
过去六小时内测量到的温度和湿度
*过去六小时内测量到的温度和湿度*
### 添加一个婴儿监视摄像头
@ -46,7 +49,7 @@
### 做最后的修饰
没有哪个树莓派项目都已经完成了还没有为它选择一个合适的外壳,并且它有各种零件。在大量搜索和比较之后,有了一个明确的 [赢家][20]SmartPi 的乐高积木式外壳。乐高的兼容性可以让我去安装温度/湿度传感器和摄像头。下面是最终的成果图:
如果不为树莓派及其各种零件选择一个合适的外壳,任何树莓派项目都算不上完成。在大量搜索和比较之后,有了一个明显的 [赢家][20]SmartPi 的乐高积木式外壳。乐高的兼容性可以让我去安装温度/湿度传感器和摄像头。下面是最终的成果图:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/pibabymonitor_case.png?itok=_ofyN73a)
@ -61,8 +64,6 @@
* 我因此为我的第二个孩子设计了另外一个监视器。
* 因为没有时间去折腾,我为我的第三个孩子购买了夜用摄像头。
想学习更多的东西吗?所有的代码都在 [我的 GitHub][4] 页面上。
想分享你的树莓派项目吗?[将你的故事和创意发送给我们][24]。
@ -73,7 +74,7 @@ via: https://opensource.com/article/18/3/build-baby-monitor-raspberry-pi
作者:[Jonathan Ervine][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,14 +1,13 @@
在 CentOS 6 系统上安装最新版 Python3 软件包的 3 种方法
======
CentOS 克隆自 RHEL无需付费即可使用。CentOS 是一个企业级标准的、前沿的操作系统,被超过 90% 的网络主机托管商采用,因为它提供了技术领先的服务器控制面板 cPanel/WHM。
该控制面板使得用户无需进入命令行即可通过其管理一切。
众所周知RHEL 提供长期支持,出于稳定性考虑,不提供最新版本的软件包。
如果你想安装的最新版本软件包不在默认源中,你需要手动编译源码安装。
但手动编译安装的方式有不小的风险,即如果出现新版本,无法升级手动安装的软件包;你不得不重新手动安装。
如果你想安装的最新版本软件包不在默认源中,你需要手动编译源码安装。但手动编译安装的方式有不小的风险,即如果出现新版本,无法升级手动安装的软件包;你不得不重新手动安装。
那么在这种情况下,安装最新版软件包的推荐方法和方案是什么呢?是的,可以通过为系统添加所需的第三方源来达到目的。
@ -18,19 +17,20 @@ CentOS 克隆自 RHEL无需付费即可使用。CentOS 是一个企业级标
在本教程中,我们将向你展示,如何在 CentOS 6 操作系统上安装最新版本的 Python 3 软件包。
### 方法 1使用 Software Collections 源 (SCL)
### 方法 1使用 Software Collections 源 SCL
SCL 源目前由 CentOS SIG 维护,除了重新编译构建 Red Hat 的 Software Collections 外,还额外提供一些它们自己的软件包。
该源中包含不少程序的更高版本,可以在不改变原有旧版本程序包的情况下安装,使用时需要通过 scl 命令调用。
该源中包含不少程序的更高版本,可以在不改变原有旧版本程序包的情况下安装,使用时需要通过 `scl` 命令调用。
运行如下命令可以在 CentOS 上安装 SCL 源:
```
# yum install centos-release-scl
```
检查可用的 Python 3 版本:
```
# yum info rh-python35
Loaded plugins: fastestmirror, security
@ -38,149 +38,148 @@ Loading mirror speeds from cached hostfile
* epel: ewr.edge.kernel.org
* remi-safe: mirror.team-cymru.com
Available Packages
Name : rh-python35
Arch : x86_64
Version : 2.0
Release : 2.el6
Size : 0.0
Repo : installed
From repo : centos-sclo-rh
Summary : Package that installs rh-python35
License : GPLv2+
Name : rh-python35
Arch : x86_64
Version : 2.0
Release : 2.el6
Size : 0.0
Repo : installed
From repo : centos-sclo-rh
Summary : Package that installs rh-python35
License : GPLv2+
Description : This is the main package for rh-python35 Software Collection.
```
运行如下命令从 scl 源安装可用的最新版 python 3
运行如下命令从 `scl` 源安装可用的最新版 python 3
```
# yum install rh-python35
```
运行如下特殊的 scl 命令,在当前 shell 中启用安装的软件包:
运行如下特殊的 `scl` 命令,在当前 shell 中启用安装的软件包:
```
# scl enable rh-python35 bash
```
运行如下命令检查安装的 python3 版本:
```
# python --version
Python 3.5.1
```
运行如下命令获取系统已安装的 SCL 软件包列表:
```
# scl -l
rh-python35
```
### 方法 2使用 EPEL 源 (Extra Packages for Enterprise Linux)
### 方法 2使用 EPEL 源Extra Packages for Enterprise Linux
EPEL 是 Extra Packages for Enterprise Linux 的缩写,该源由 Fedora SIG (Special Interest Group) 维护。
EPEL 是 Extra Packages for Enterprise Linux 的缩写,该源由 Fedora SIG Special Interest Group维护。
该 SIG 为企业级 Linux 创建、维护并管理一系列高品质补充软件包,受益的企业级 Linux 发行版包括但不限于红帽企业级 Linux (RHEL), CentOS, Scientific Linux (SL) 和 Oracle Linux (OL)等。
该 SIG 为企业级 Linux 创建、维护并管理一系列高品质补充软件包,受益的企业级 Linux 发行版包括但不限于红帽企业级 Linux RHEL、 CentOS、 Scientific Linux SL 和 Oracle Linux OL等。
EPEL 通常基于 Fedora 对应代码提供软件包,不会与企业级 Linux 发行版中的基础软件包冲突或替换其中的软件包。
**推荐阅读:** [在 RHEL, CentOS, Oracle Linux 或 Scientific Linux 上安装启用 EPEL 源][1]
EPEL 软件包位于 CentOS 的 Extra 源中,已经默认启用,故我们只需运行如下命令即可:
```
# yum install epel-release
```
检查可用的 python 3 版本:
```
# yum --disablerepo="*" --enablerepo="epel" info python34
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
* epel: ewr.edge.kernel.org
Available Packages
Name : python34
Arch : x86_64
Version : 3.4.5
Release : 4.el6
Size : 50 k
Repo : epel
Summary : Version 3 of the Python programming language aka Python 3000
URL : http://www.python.org/
License : Python
Name : python34
Arch : x86_64
Version : 3.4.5
Release : 4.el6
Size : 50 k
Repo : epel
Summary : Version 3 of the Python programming language aka Python 3000
URL : http://www.python.org/
License : Python
Description : Python 3 is a new version of the language that is incompatible with the 2.x
: line of releases. The language is mostly the same, but many details, especially
: how built-in objects like dictionaries and strings work, have changed
: considerably, and a lot of deprecated features have finally been removed.
: line of releases. The language is mostly the same, but many details, especially
: how built-in objects like dictionaries and strings work, have changed
: considerably, and a lot of deprecated features have finally been removed.
```
运行如下命令从 EPEL 源安装可用的最新版 python 3 软件包:
```
# yum --disablerepo="*" --enablerepo="epel" install python34
```
默认情况下并不会安装 pip 和 setuptools我们需要运行如下命令手动安装
默认情况下并不会安装 `pip``setuptools`,我们需要运行如下命令手动安装:
```
# curl -O https://bootstrap.pypa.io/get-pip.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1603k 100 1603k 0 0 2633k 0 --:--:-- --:--:-- --:--:-- 4816k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1603k 100 1603k 0 0 2633k 0 --:--:-- --:--:-- --:--:-- 4816k
# /usr/bin/python3.4 get-pip.py
Collecting pip
Using cached https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl
Using cached https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl
Collecting setuptools
Downloading https://files.pythonhosted.org/packages/8c/10/79282747f9169f21c053c562a0baa21815a8c7879be97abd930dbcf862e8/setuptools-39.1.0-py2.py3-none-any.whl (566kB)
100% |████████████████████████████████| 573kB 4.0MB/s
Downloading https://files.pythonhosted.org/packages/8c/10/79282747f9169f21c053c562a0baa21815a8c7879be97abd930dbcf862e8/setuptools-39.1.0-py2.py3-none-any.whl (566kB)
100% |████████████████████████████████| 573kB 4.0MB/s
Collecting wheel
Downloading https://files.pythonhosted.org/packages/1b/d2/22cde5ea9af055f81814f9f2545f5ed8a053eb749c08d186b369959189a8/wheel-0.31.0-py2.py3-none-any.whl (41kB)
100% |████████████████████████████████| 51kB 8.0MB/s
Downloading https://files.pythonhosted.org/packages/1b/d2/22cde5ea9af055f81814f9f2545f5ed8a053eb749c08d186b369959189a8/wheel-0.31.0-py2.py3-none-any.whl (41kB)
100% |████████████████████████████████| 51kB 8.0MB/s
Installing collected packages: pip, setuptools, wheel
Successfully installed pip-10.0.1 setuptools-39.1.0 wheel-0.31.0
```
运行如下命令检查已安装的 python3 版本:
```
# python3 --version
Python 3.4.5
```
### 方法 3使用 IUS 社区源
IUS 社区是 CentOS 社区批准的第三方 RPM 源,为企业级 Linux (RHEL 和 CentOS) 5, 6 和 7 版本提供最新上游版本的 PHP, Python, MySQL 等软件包。
IUS 社区是 CentOS 社区批准的第三方 RPM 源,为企业级 Linux RHEL 和 CentOS 5、 6 和 7 版本提供最新上游版本的 PHP、 Python、 MySQL 等软件包。
IUS 社区源依赖于 EPEL 源,故我们需要先安装 EPEL 源,然后再安装 IUS 社区源。按照下面的步骤安装启用 EPEL 源和 IUS 社区源,利用该 RPM 系统安装软件包。
**推荐阅读:** [在 RHEL 或 CentOS 上安装启用 IUS 社区源][2]
EPEL 软件包位于 CentOS 的 Extra 源中,已经默认启用,故我们只需运行如下命令即可:
```
# yum install epel-release
```
下载 IUS 社区源安装脚本:
```
# curl 'https://setup.ius.io/' -o setup-ius.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1914 100 1914 0 0 6563 0 --:--:-- --:--:-- --:--:-- 133k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1914 100 1914 0 0 6563 0 --:--:-- --:--:-- --:--:-- 133k
```
安装启用 IUS 社区源:
```
# sh setup-ius.sh
```
检查可用的 python 3 版本:
```
# yum --enablerepo=ius info python36u
Loaded plugins: fastestmirror, security
@ -189,46 +188,45 @@ Loading mirror speeds from cached hostfile
* ius: mirror.team-cymru.com
* remi-safe: mirror.team-cymru.com
Available Packages
Name : python36u
Arch : x86_64
Version : 3.6.5
Release : 1.ius.centos6
Size : 55 k
Repo : ius
Summary : Interpreter of the Python programming language
URL : https://www.python.org/
License : Python
Name : python36u
Arch : x86_64
Version : 3.6.5
Release : 1.ius.centos6
Size : 55 k
Repo : ius
Summary : Interpreter of the Python programming language
URL : https://www.python.org/
License : Python
Description : Python is an accessible, high-level, dynamically typed, interpreted programming
: language, designed with an emphasis on code readability.
: It includes an extensive standard library, and has a vast ecosystem of
: third-party libraries.
:
: The python36u package provides the "python3.6" executable: the reference
: interpreter for the Python language, version 3.
: The majority of its standard library is provided in the python36u-libs package,
: which should be installed automatically along with python36u.
: The remaining parts of the Python standard library are broken out into the
: python36u-tkinter and python36u-test packages, which may need to be installed
: separately.
:
: Documentation for Python is provided in the python36u-docs package.
:
: Packages containing additional libraries for Python are generally named with
: the "python36u-" prefix.
: language, designed with an emphasis on code readability.
: It includes an extensive standard library, and has a vast ecosystem of
: third-party libraries.
:
: The python36u package provides the "python3.6" executable: the reference
: interpreter for the Python language, version 3.
: The majority of its standard library is provided in the python36u-libs package,
: which should be installed automatically along with python36u.
: The remaining parts of the Python standard library are broken out into the
: python36u-tkinter and python36u-test packages, which may need to be installed
: separately.
:
: Documentation for Python is provided in the python36u-docs package.
:
: Packages containing additional libraries for Python are generally named with
: the "python36u-" prefix.
```
运行如下命令从 IUS 源安装最新可用版本的 python 3 软件包:
```
# yum --enablerepo=ius install python36u
```
运行如下命令检查已安装的 python3 版本:
```
# python3.6 --version
Python 3.6.5
```
--------------------------------------------------------------------------------
@ -238,7 +236,7 @@ via: https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-cen
作者:[PRAKASH SUBRAMANIAN][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,11 +1,11 @@
You-Get - 支持 80+ 网站的命令行多媒体下载器
You-Get:支持 80 多个网站的命令行多媒体下载器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/you-get-1-720x340.jpg)
你们大多数人可能用过或听说过 **Youtube-dl**,这个命令行程序可以从包括 Youtube 在内的 100+ 网站下载视频。我偶然发现了一个类似的工具,名字叫做 **"You-Get"**。这是一个 Python 编写的命令行下载器,可以让你从 YoutubeFacebookTwitter 等很多热门网站下载图片,音频和视频。目前该下载器支持 80+ 站点,点击[**这里**][1]查看所有支持的网站。
你们大多数人可能用过或听说过 **Youtube-dl**,这个命令行程序可以从包括 Youtube 在内的 100+ 网站下载视频。我偶然发现了一个类似的工具,名字叫做 **You-Get**。这是一个 Python 编写的命令行下载器,可以让你从 Youtube、Facebook、Twitter 等很多热门网站下载图片,音频和视频LCTT 译注:首先,它们得是存在的网站)。目前该下载器支持 80+ 站点,点击[这里][1]查看所有支持的网站。
You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的视频播放器。更进一步,它还允许你在 Google 上搜索视频只要给出搜索项You-Get 使用 Google 搜索并下载相关度最高的视频。另外值得一提的特性是,它允许你暂停和恢复下载过程。它是一个完全自由、开源及跨平台的应用,适用于 LinuxMacOS 及 Windows。
You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的视频播放器。更进一步,它还允许你在 Google 上搜索视频只要给出搜索项You-Get 使用 Google 搜索并下载相关度最高的视频。另外值得一提的特性是,它允许你暂停和恢复下载过程。它是一个完全自由、开源及跨平台的应用,适用于 Linux、MacOS 及 Windows。
### 安装 You-Get
@ -17,35 +17,36 @@ You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的
有多种方式安装 You-Get其中官方推荐采用 pip 包管理器安装。如果你还没有安装 pip可以参考如下链接
[如何使用 pip 管理 Python 软件包][2]
- [如何使用 pip 管理 Python 软件包][2]
需要注意的是,你需要安装 Python 3 版本的 pip。
需要注意的是,你需要安装 Python 3 版本的 `pip`
接下来,运行如下命令安装 You-Get
```
$ pip3 install you-get
```
可以使用命令升级 You-Get 至最新版本:
```
$ pip3 install --upgrade you-get
```
### 开始使用 You-Get
使用方式与 Youtube-dl 工具基本一致。
**下载视频**
#### 下载视频
下载视频,只需运行:
```
$ you-get https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
@ -58,22 +59,22 @@ stream:
Downloading The Last of The Mohicans by Alexandro Querevalú.mp4 ...
100% ( 56.9/ 56.9MB) ├███████████████████████████████████████████████████████┤[1/1] 752 kB/s
```
下载视频前你可能希望查看视频的细节信息。You-Get 提供了 **info”** 或 **“-i”** 参数,使用该参数可以获得给定视频所有可用的分辨率和格式。
下载视频前你可能希望查看视频的细节信息。You-Get 提供了 `info``-i` 参数,使用该参数可以获得给定视频所有可用的分辨率和格式。
```
$ you-get -i https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --info https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例如下:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
@ -121,18 +122,18 @@ streams: # Available quality and codecs
quality: hd720
size: 56.9 MiB (59654303 bytes)
# download-with: you-get --itag=22 [URL]
```
默认情况下You-Get 会下载标记为 **DEFAULT** 的格式。如果你对格式或分辨率不满意,可以选择你喜欢的格式,使用格式对应的 itag 值即可。
默认情况下You-Get 会下载标记为 “DEFAULT” 的格式。如果你对格式或分辨率不满意,可以选择你喜欢的格式,使用格式对应的 itag 值即可。
```
$ you-get --itag=244 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**下载音频**
#### 下载音频
执行下面的命令,可以从 soundcloud 网站下载音频:
```
$ you-get 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
Site: SoundCloud.com
@ -145,30 +146,30 @@ Downloading ALL GIRLS ARE THE SAME (PROD. NICK MIRA).mp3 ...
```
查看音频文件细节,使用 **-i** 参数:
查看音频文件细节,使用 `-i` 参数:
```
$ you-get -i 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
```
**下载图片**
#### 下载图片
运行如下命令下载图片:
```
$ you-get https://pixabay.com/en/mountain-crumpled-cyanus-montanus-3393209/
```
You-Get 也可以下载网页中的全部图片:
You-Get can also download all images from a web page.
```
$ you-get https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
```
**搜索视频**
#### 搜索视频
你只需向 You-Get 传递一个任意的搜索项,而无需给出有效的 URLYou-Get 会使用 Google 搜索并下载与你给出搜索项最相关的视频。(LCTT 译注Google 的机器人检测机制可能导致 503 报错导致该功能无法使用)。
你只需向 You-Get 传递一个任意的搜索项,而无需给出有效的 URLYou-Get 会使用 Google 搜索并下载与你给出搜索项最相关的视频。(译者注Google 的机器人检测机制可能导致 503 报错导致该功能无法使用)。
```
$ you-get 'Micheal Jackson'
Google Videos search:
@ -184,53 +185,53 @@ stream:
Downloading Michael Jackson - Beat It (Official Video).webm ...
100% ( 29.4/ 29.4MB) ├███████████████████████████████████████████████████████┤[1/1] 2 MB/s
```
**观看视频**
#### 观看视频
You-Get 可以将在线视频导流至你的视频播放器或浏览器跳过广告和评论部分。LCTT 译注:使用 `-p` 参数需要对应的 vlc/chrominum 命令可以调用,一般适用于具有图形化界面的操作系统)。
You-Get 可以将在线视频导流至你的视频播放器或浏览器,跳过广告和评论部分。(译者注:使用 -p 参数需要对应的 vlc/chrominum 命令可以调用,一般适用于具有图形化界面的操作系统)。
以 VLC 视频播放器为例,使用如下命令在其中观看视频:
```
$ you-get -p vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --player vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
类似地,将视频导流至以 chromium 为例的浏览器中,使用如下命令:
```
$ you-get -p chromium https://www.youtube.com/watch?v=HXaglTFJLMc
```
![][3]
在上述屏幕截图中,可以看到并没有广告和评论部分,只是一个包含视频的简单页面。
**设置下载视频的路径及文件名**
#### 设置下载视频的路径及文件名
默认情况下,使用视频标题作为默认文件名,下载至当前工作目录。当然,你可以按照你的喜好进行更改,使用 `--output-dir` 或 `-o` 参数可以指定路径,使用 `--output-filename` 或 `-O` 参数可以指定下载文件的文件名。
默认情况下,使用视频标题作为默认文件名,下载至当前工作目录。当然,你可以按照你的喜好进行更改,使用 **output-dir/-o** 参数可以指定路径,使用 **output-filename/-O** 参数可以指定下载文件的文件名。
```
$ you-get -o ~/Videos -O output.mp4 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**暂停和恢复下载**
#### 暂停和恢复下载
**CTRL+C** 可以取消下载。一个以 **.download** 为扩展名的临时文件会保存至输出路径。下次你使用相同的参数下载时,下载过程将延续上一次的过程。
`CTRL+C` 可以取消下载。一个以 `.download` 为扩展名的临时文件会保存至输出路径。下次你使用相同的参数下载时,下载过程将延续上一次的过程。
当文件下载完成后,以 .download 为扩展名的临时文件会自动消失。如果这时你使用同样参数下载You-Get 会跳过下载;如果你想强制重新下载,可以使用 **force/-f** 参数。
当文件下载完成后,以 `.download` 为扩展名的临时文件会自动消失。如果这时你使用同样参数下载You-Get 会跳过下载;如果你想强制重新下载,可以使用 `--force` 或 `-f` 参数。
查看命令的帮助部分可以获取更多细节,命令如下:
```
$ you-get --help
```
这次的分享到此结束,后续还会介绍更多的优秀工具,敬请期待!
@ -246,7 +247,7 @@ via: https://www.ostechnix.com/you-get-a-cli-downloader-to-download-media-from-8
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,132 +0,0 @@
translating by MZqk
What's next in IT automation: 6 trends to watch
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai_artificial_intelligence.png?itok=o0csm9l2)
We've recently covered the [factors fueling IT automation][1], the [current trends][2] to watch as adoption grows, and [helpful tips][3] for those organizations just beginning to automate certain processes.
Oh, and we also shared expert advice on [how to make the case for automation][4] in your company, as well as [keys for long-term success][5].
Now, there's just one question: What's next? We asked a range of experts to share a peek into the not-so-distant future of [automation][6]. Here are six trends they advise IT leaders to monitor closely.
### 1. Machine learning matures
For all of the buzz around [machine learning][7] (and the overlapping phrase “self-learning systems”), it's still very early days for most organizations in terms of actual implementations. Expect that to change, and for machine learning to play a significant role in the next waves of IT automation.
Mehul Amin, director of engineering for [Advanced Systems Concepts, Inc.][8], points to machine learning as one of the next key growth areas for IT automation.
“With the data that is developed, automation software can make decisions that otherwise might be the responsibility of the developer,” Amin says. “For example, the developer builds what needs to be executed, but identifying the best system to execute the processes might be [done] by software using analytics from within the system.”
That extends elsewhere in this same hypothetical system; Amin notes that machine learning can enable automated systems to provision additional resources when necessary to meet timelines or SLAs, as well as retire those resources when they're no longer needed, and other possibilities.
Amin is certainly not alone.
“IT automation is moving towards self-learning,” says Kiran Chitturi, CTO architect at [Sungard Availability Services][9]. “Systems will be able to test and monitor themselves, enhancing business processes and software delivery.”
Chitturi points to automated testing as an example; test scripts are already in widespread adoption, but soon those automated testing processes may be more likely to learn as they go, developing, for example, wider recognition of how new code or code changes will impact production environments.
### 2. Artificial intelligence spawns automation opportunities
The same principles above hold true for the related (but separate) field of [artificial intelligence][10]. Depending on your definition of AI, it seems likely that machine learning will have the more significant IT impact in the near term (and we're likely to see a lot of overlapping definitions and understandings of the two fields). Assume that emerging AI technologies will spawn new automation opportunities, too.
“The integration of artificial intelligence (AI) and machine learning capabilities is widely perceived as critical for business success in the coming years,” says Patrick Hubbard, head geek at [SolarWinds][11].
### 3. That doesn't mean people are obsolete
Let's try to calm those among us who are now hyperventilating into a paper bag: The first two trends don't necessarily mean we're all going to be out of a job.
It is likely to mean changes to various roles and the creation of [new roles][12] altogether.
But in the foreseeable future, at least, you don't need to practice bowing to your robot overlords.
“A machine can only consider the environment variables that it is given; it can't choose to include new variables, only a human can do this today,” Hubbard explains. “However, for IT professionals this will necessitate the cultivation of AI- and automation-era skills such as programming, coding, a basic understanding of the algorithms that govern AI and machine learning functionality, and a strong security posture in the face of more sophisticated cyberattacks.”
Hubbard shares the example of new tools or capabilities such as AI-enabled security software or machine-learning applications that remotely spot maintenance needs in an oil pipeline. Both might improve efficiency and effectiveness; neither automatically replaces the people necessary for information security or pipeline maintenance.
“Many new functionalities still require human oversight,” Hubbard says. “In order for a machine to determine if something predictive could become prescriptive, for example, human management is needed.”
The same principle holds true even if you set machine learning and AI aside for a moment and look at IT automation more generally, especially in the software development lifecycle.
Matthew Oswalt, lead architect for automation at [Juniper Networks][13], points out that the fundamental reason IT automation is growing is that it is creating immediate value by reducing the amount of manual effort required to operate infrastructure.
Rather than responding to an infrastructure issue at 3 a.m. themselves, operations engineers can use event-driven automation to define their workflows ahead of time, as code.
“It also sets the stage for treating their operations workflows as code rather than easily outdated documentation or tribal knowledge,” Oswalt explains. “Operations staff are still required to play an active role in how [automation] tooling responds to events. The next phase of adopting automation is to put in place a system that is able to recognize interesting events that take place across the IT spectrum and respond in an autonomous fashion. Rather than responding to an infrastructure issue at 3 a.m. themselves, operations engineers can use event-driven automation to define their workflows ahead of time, as code. They can rely on this system to respond in the same way they would, at any time.”
### 4. Automation anxiety will decrease
Hubbard of SolarWinds notes that the term “automation” itself tends to spawn a lot of uncertainty and concern, not just in IT but across professional disciplines, and he says that concern is legitimate. But some of the attendant fears may be overblown, and even perpetuated by the tech industry itself. Reality might actually be the calming force on this front: When the actual implementation and practice of automation helps people realize #3 on this list, then we'll see #4 occur.
“This year we'll likely see a decrease in automation anxiety and more organizations begin to embrace AI and machine learning as a way to augment their existing human resources,” Hubbard says. “Automation has historically created room for more jobs by lowering the cost and time required to accomplish smaller tasks and refocusing the workforce on things that cannot be automated and require human labor. The same will be true of AI and machine learning.”
Automation will also decrease some anxiety around the topic most likely to increase an IT leader's blood pressure: Security. As Matt Smith, chief architect, [Red Hat][14], recently [noted][15], automation will increasingly help IT groups reduce the security risks associated with maintenance tasks.
His advice: “Start by documenting and automating the interactions between IT assets during maintenance activities. By relying on automation, not only will you eliminate tasks that historically required much manual effort and surgical skill, you will also be reducing the risks of human error and demonstrating what's possible when your IT organization embraces change and new methods of work. Ultimately, this will reduce resistance to promptly applying security patches. And it could also help keep your business out of the headlines during the next major security event.”
**[ Read the full article: [12 bad enterprise security habits to break][16]. ] **
### 5. Continued evolution of scripting and automation tools
Many organizations see the first steps toward increasing automation usually in the form of scripting or automation tools (sometimes referred to as configuration management tools) as "early days" work.
But views of those tools are evolving as the use of various automation technologies grows.
“There are many processes in the data center environment that are repetitive and subject to human error, and technologies such as [Ansible][17] help to ameliorate those issues,” says Mark Abolafia, chief operating officer at [DataVision][18]. “With Ansible, one can write a specific playbook for a set of actions and input different variables such as addresses, etc., to automate long chains of process that were previously subject to human touch and longer lead times.”
**[ Want to learn more about this aspect of Ansible? Read the related article:[Tips for success when getting started with Ansible][19]. ]**
Another factor: The tools themselves will continue to become more advanced.
“With advanced IT automation tools, developers will be able to build and automate workflows in less time, reducing error-prone coding,” says Amin of ASCI. “These tools include pre-built, pre-tested drag-and-drop integrations, API jobs, the rich use of variables, reference functionality, and object revision history.”
### 6. Automation opens new metrics opportunities
As we've said previously in this space, automation isn't IT snake oil. It won't fix busted processes or otherwise serve as some catch-all elixir for what ails your organization. That's true on an ongoing basis, too: Automation doesn't eliminate the need to measure performance.
**[ See our related article[DevOps metrics: Are you measuring what matters?][20] ]**
In fact, automation should open up new opportunities here.
“As more and more development activities (source control, DevOps pipelines, work item tracking) move to the API-driven platforms, the opportunity and temptation to stitch these pieces of raw data together to paint the picture of your organization's efficiency increases,” says Josh Collins, VP of architecture at [Janeiro Digital][21].
Collins thinks of this as a possible new “development organization metrics-in-a-box.” But don't mistake that to mean machines and algorithms can suddenly measure everything IT does.
“Whether measuring individual resources or the team in aggregate, these metrics can be powerful but should be balanced with a heavy dose of context,” Collins says. “Use this data for high-level trends and to affirm qualitative observations, not to clinically grade your team.”
**Want more wisdom like this, IT leaders?[Sign up for our weekly email newsletter][22].**
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch
作者:[Kevin Casey][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now
[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands
[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips
[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success
[6]:https://enterprisersproject.com/tags/automation
[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity
[8]:https://www.advsyscon.com/en-us/
[9]:https://www.sungardas.com/en/
[10]:https://enterprisersproject.com/tags/artificial-intelligence
[11]:https://www.solarwinds.com/
[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros
[13]:https://www.juniper.net/
[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break
[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ
[17]:https://opensource.com/tags/ansible
[18]:https://datavision.com/
[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA
[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ
[21]:https://www.janeirodigital.com/
[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

View File

@ -0,0 +1,258 @@
Writing eBPF tracing tools in Rust
============================================================
tl;dr: I made an experimental Rust repository that lets you write BPF tracing tools from Rust! It's at [https://github.com/jvns/rust-bcc][4] or [https://crates.io/crates/bcc][5], and has a couple of hopefully easy to understand examples. It turns out that writing BPF-based tracing tools in Rust is really easy (in some ways easier than doing the same things in Python). In this post I'll explain why I think this is useful/important.
For a long time I've been interested in the [BPF compiler collection][6], a C -> BPF compiler, C library, and Python bindings to make it easy to write tools like:
* [opensnoop][1] (spies on which files are being opened)
* [tcplife][2] (track length of TCP connections)
* [cpudist][3] (count how much time every program spends on- and off-CPU)
and a lot more. The list of available tools in [the /tools directory][7] is really impressive and I could write a whole blog post about that. If you're familiar with dtrace, the idea is that BCC is a little bit like dtrace, and in fact there's a dtrace-like language [named ply][8] implemented with BPF.
This blog post isn't about `ply` or the great BCC tools though; it's about what tools we need to build more complicated/powerful BPF-based programs.
### What does the BPF compiler collection let you do?
Here's a quick overview of what BCC lets you do:
* compile BPF programs from C into eBPF bytecode.
* attach this eBPF bytecode to a userspace function or kernel function (as a “uprobe” / “kprobe”) or install it as XDP
* communicate with the eBPF bytecode to get information with it
A basic example of using BCC is this [strlen_count.py][9] program and I think it's useful to look at this program to understand how BCC works and how you might be able to implement more advanced tools.
First, there's an eBPF program. This program is going to be attached to the `strlen` function from libc (the C standard library): every time we call `strlen`, this code will be run.
This eBPF program
* gets the first argument to the `strlen` function (the address of a string)
* reads the first 80 characters of that string (using `bpf_probe_read`)
* increments a counter in a hashmap (basically `counts[str] += 1`)
The result is that you can count every call to `strlen`. Here's the eBPF program:
```
struct key_t {
char c[80];
};
BPF_HASH(counts, struct key_t);
int count(struct pt_regs *ctx) {
if (!PT_REGS_PARM1(ctx))
return 0;
struct key_t key = {};
u64 zero = 0, *val;
bpf_probe_read(&key.c, sizeof(key.c), (void *)PT_REGS_PARM1(ctx));
val = counts.lookup_or_init(&key, &zero);
(*val)++;
return 0;
};
```
After that program is compiled, there's a Python part which does `b.attach_uprobe(name="c", sym="strlen", fn_name="count")`: it tells the Linux kernel to actually attach the compiled BPF to the `strlen` function so that it runs every time `strlen` runs.
The really exciting thing about eBPF is what comes next: there's no use keeping a hashmap of string counts if you can't access it! BPF has a number of data structures that let you share information between BPF programs (that run in the kernel / in uprobes) and userspace.
So in this case the Python program accesses this `counts` data structure.
### BPF data structures: hashmaps, buffers, and more!
There's a great list of available BPF data structures in the [BCC reference guide][10].
There are basically 2 kinds of BPF data structures: data structures suitable for storing statistics (BPF_HASH, BPF_HISTOGRAM etc), and data structures suitable for storing events (like BPF_PERF_MAP) where you send a stream of events to a userspace program which then displays them somehow.
There are a lot of interesting BPF data structures (like a trie!) and I haven't fully worked out what all of the possibilities are with them yet :)
### What I'm interested in: BPF for profiling & tracing
Okay!! We're done with the background, let's talk about why I'm interested in BCC/BPF right now.
I'm interested in using BPF to implement profiling/tracing tools for dynamic programming languages, specifically tools to do things like “trace all memory allocations in this Ruby program”. I think it's exciting that you can say “hey, run this tiny bit of code every time a Ruby object is allocated” and get data back about ongoing allocations!
### Rust: a way to build more powerful BPF-based tools
The issue I see with the Python BPF libraries (which are GREAT, of course) is that while they're perfect for building tools like `tcplife` which track tcp connection lengths, once you want to start doing more complicated experiments like “stream every memory allocation from this Ruby program, calculate some metadata about it, query the original process to find out the class name for that address, and display a useful summary”, Python doesn't really cut it.
So I decided to spend 4 days trying to build a BCC library for Rust that lets you attach + interact with BPF programs from Rust!
Basically I worked on porting [https://github.com/iovisor/gobpf][11] (a go BCC library) to Rust.
The easiest and most exciting way to explain this is to show an example of what using the library looks like.
### Rust example 1: strlen
Let's start with the strlen example from above. Here's [strlen.rs][12] from the examples!
Compiling & attaching the `strlen` code is easy:
```
let mut module = BPF::new(code)?;
let uprobe_code = module.load_uprobe("count")?;
module.attach_uprobe("/lib/x86_64-linux-gnu/libc.so.6", "strlen", uprobe_code, -1 /* all PIDs */)?;
let table = module.table("counts");
```
This table contains a hashmap mapping strings to counts. So we need to iterate over that table and print out the keys and values. This is pretty simple: it looks like this.
```
let iter = table.into_iter();
for e in iter {
    // key and value are each a Vec<u8> so we need to transform them into a string and
    // a u64 respectively
    let key = get_string(&e.key);
    let value = Cursor::new(e.value).read_u64::<NativeEndian>().unwrap();
    println!("{:?} {:?}", key, value);
}
```
Basically all the data that comes out of a BPF program is an opaque `Vec<u8>` right now, so you need to figure out how to decode it yourself. Luckily, decoding binary data is something that Rust is quite good at: the `byteorder` crate lets you easily decode `u64`s, and translating a vector of bytes into a String is easy (I wrote a quick `get_string` helper function to do that).
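For reference, here is a minimal sketch of what such a `get_string` helper could look like (my own illustration, not necessarily the exact code in the repo): it takes the raw bytes, stops at the first NUL byte, and converts the rest into a `String`.
```
// Hypothetical helper: turn a NUL-terminated byte buffer from BPF into a String.
fn get_string(bytes: &[u8]) -> String {
    let end = bytes.iter().position(|&b| b == 0).unwrap_or(bytes.len());
    String::from_utf8_lossy(&bytes[..end]).into_owned()
}

fn main() {
    let raw = b"hello\0\0extra junk after the NUL";
    println!("{}", get_string(raw)); // prints "hello"
}
```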
I thought this was really nice because the code for this program in Rust is basically exactly the same as the corresponding Python version. So it's pretty approachable to start doing experiments and seeing what's possible.
### Reading perf events from Rust
The next thing I wanted to do after getting this `strlen` example to work in Rust was to handle events!!
Events are a little different / more complicated. The way you stream events in a BCC program is by using `perf_event_open` to create a ring buffer where the events get stored.
Dealing with events from a perf ring buffer is normally a huge pain because perf has a complicated data structure. The C BCC library makes this easier for you by letting you specify a C callback that gets called on every new event, and it handles dealing with perf. This is super helpful. To make this work with Rust, the `rust-bcc` library lets you pass in a Rust closure to run on every event.
### Rust example 2: opensnoop.rs (events!!)
To make sure reading BPF events actually worked, I implemented a basic version of `opensnoop.py` from the iovisor bcc tools: [opensnoop.rs][13].
I won't walk through the [C code][14] in this case because there's a lot of it, but basically the eBPF C part generates an event every time a file is opened on the system. I copied the C code verbatim from [opensnoop.py][15].
Here's the type of the event that's generated by the BPF code:
```
#[repr(C)]
struct data_t {
    id: u64,          // pid + thread id
    ts: u64,
    ret: libc::c_int,
    comm: [u8; 16],   // process name
    fname: [u8; 255], // filename
}
```
The Rust part starts out by compiling the BPF code & attaching kprobes (to the `open` system call in the kernel, `do_sys_open`). I won't paste that code here because it's basically the same as the `strlen` example. What happens next is the new part: we install a callback with a Rust closure on the `events` table, and then call `perf_map.poll(200)` in a loop. The design of the BCC library is a little confusing to me still, but you need to repeatedly poll the perf reader objects to make sure that the callbacks you installed actually get called.
```
let table = module.table("events");
let mut perf_map = init_perf_map(table, perf_data_callback)?;
loop {
perf_map.poll(200);
}
```
This is the callback code I wrote, which gets called on every event. Again, it takes an opaque `Vec<u8>` event and translates it into a `data_t` struct to print it out. Doing this is kind of annoying (I actually called `libc::memcpy`, which is Not Encouraged Rust Practice), and I need to figure out a less gross/unsafe way to do it. The really nice thing is that if you put `#[repr(C)]` on your Rust structs, they are represented in memory the exact same way C represents the equivalent struct, so it's quite easy to share data structures between Rust and C.
```
fn perf_data_callback() -> Box<Fn(Vec<u8>)> {
    Box::new(|x| {
        // This callback parses the raw event bytes into a data_t and prints
        // the pid, process name, and filename for each open() event
        let data = parse_struct(&x);
        println!("{:-7} {:-16} {}", data.id >> 32, get_string(&data.comm), get_string(&data.fname));
    })
}
```
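The `parse_struct` call above does the `Vec<u8>`-to-struct translation I mentioned. Here is a rough sketch of what a helper like that might look like (my own illustration of the memcpy approach, not necessarily the exact code in the repo; it assumes the `libc` crate and the `data_t` definition shown earlier):
```
// Hypothetical sketch: copy the raw event bytes into a data_t.
// Because data_t is #[repr(C)], its layout matches the C struct, so a plain
// byte-for-byte copy is enough to reconstruct the fields.
fn parse_struct(x: &[u8]) -> data_t {
    assert!(x.len() >= std::mem::size_of::<data_t>());
    unsafe {
        let mut data: data_t = std::mem::zeroed();
        libc::memcpy(
            &mut data as *mut data_t as *mut libc::c_void,
            x.as_ptr() as *const libc::c_void,
            std::mem::size_of::<data_t>(),
        );
        data
    }
}
```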
You might notice that `perf_data_callback` is actually a weird function that returns a callback. This is because I needed to install 4 callbacks (one per CPU), and in stable Rust you can't copy closures yet.
#### output
Here's what the output of that `opensnoop` program looks like!
This is kind of meta: these are the files that were being opened on my system when I saved this blog post :). You can see that git is looking at some files, vim is saving a file, and my static site generator Hugo is opening the changed file so that it can update the site. Neat!
```
PID COMMAND FILENAME
8519 git /home/bork/work/homepage/.gitmodules
8519 git /home/bork/.gitconfig
8519 git .git/config
22877 vim content/post/2018-02-05-writing-ebpf-programs-in-rust.markdown
22877 vim .
7312 hugo /home/bork/work/homepage/content/post/2018-02-05-writing-ebpf-programs-in-rust.markdown
7312 hugo /home/bork/work/homepage/content/post/2018-02-05-writing-ebpf-programs-in-rust.markdown
```
### using rust-bcc to implement Ruby experiments
Now that I have this basic library that I can use to get counts and stream events in Rust, I'm excited about doing some experiments with making BCC programs in Rust that talk to Ruby programs!
The first experiment (that I blogged about last week) is [count-ruby-allocs.rs][16], which prints out a live count of current allocation activity. Here's an example of what it prints out (the numbers are counts of the number of objects of that type allocated so far):
```
RuboCop::Token 53
RuboCop::Token 112
MatchData 246
Parser::Source::Rang 255
Proc 323
Enumerator 328
Hash 475
Range 1210
??? 1543
String 3410
Array 7879
Total allocations since we started counting: 16932
Allocations this second: 954
```
### Related work
Geoffrey Couprie is interested in building more advanced BPF tracing tools with Rust too and wrote a great blog post with a cool proof of concept: [Compiling to eBPF from Rust][17].
I think the idea of not requiring the user to compile the BPF program is exciting, because you could imagine distributing a statically linked Rust binary (which links in libbcc.so) with a pre-compiled BPF program that the binary just installs and then uses to do cool stuff.
Also, there's another Rust BCC library at [https://bitbucket.org/photoszzt/rust-bpf/][18], which has a slightly different set of capabilities than [jvns/rust-bcc][19] (I'm going to spend some time looking at that one later; I just found out about it like 30 minutes ago :)).
### that's it for now
This crate is still extremely sketchy and there are bugs & missing features, but I wanted to put it on the internet because I think the examples of what you can do with it are really exciting!!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/02/05/rust-bcc/
作者:[Julia Evans ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about/
[1]:https://github.com/iovisor/bcc/blob/master/tools/opensnoop.py
[2]:https://github.com/iovisor/bcc/blob/master/tools/tcplife.py
[3]:https://github.com/iovisor/bcc/blob/master/tools/cpudist.py
[4]:https://github.com/jvns/rust-bcc
[5]:https://crates.io/crates/bcc
[6]:https://github.com/iovisor/bcc
[7]:https://github.com/iovisor/bcc/tree/master/tools
[8]:https://github.com/iovisor/ply
[9]:https://github.com/iovisor/bcc/blob/master/examples/tracing/strlen_count.py
[10]:https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md#maps
[11]:https://github.com/iovisor/gobpf
[12]:https://github.com/jvns/rust-bcc/blob/f15d2983ddbe349aac3d2fcaeacf924a66db4be7/examples/strlen.rs
[13]:https://github.com/jvns/rust-bcc/blob/f15d2983ddbe349aac3d2fcaeacf924a66db4be7/examples/opensnoop.rs
[14]:https://github.com/jvns/rust-bcc/blob/f15d2983ddbe349aac3d2fcaeacf924a66db4be7/examples/opensnoop.c
[15]:https://github.com/iovisor/bcc/blob/master/tools/opensnoop.py
[16]:https://github.com/jvns/ruby-mem-watcher-demo/blob/dd189b178a2813e6445063f0f84063e6e978ee79/src/bin/count-ruby-allocs.rs
[17]:https://unhandledexpression.com/2018/02/02/poc-compiling-to-ebpf-from-rust/
[18]:https://bitbucket.org/photoszzt/rust-bpf/
[19]:https://github.com/jvns/rust-bcc

View File

@ -1,157 +0,0 @@
Translating by qhwdw
Top 9 open source ERP systems to consider | Opensource.com
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
Businesses with more than a handful of employees have a lot to balance including pricing, product planning, accounting and finance, managing payroll, dealing with inventory, and more. Stitching together a set of disparate tools to handle those jobs is a quick, cheap, and dirty way to get things done.
That approach isn't scalable. It's difficult to efficiently move data between the various pieces of such an ad-hoc system. As well, it can be difficult to maintain.
Instead, most growing businesses turn to an [enterprise resource planning][1] (ERP) system.
The big guns in that space are Oracle, SAP, and Microsoft Dynamics. Their offerings are comprehensive, but also expensive. What happens if your business can't afford one of those big implementations or if your needs are simple? You turn to the open source alternatives.
### What to look for in an ERP system
Obviously, you want a system that suits your needs. Depending on those needs, more features doesn't always mean better. However, your needs might change as your business grows, so you'll want to find an ERP system that can expand to meet your new needs. That could mean the system has additional modules or just supports plugins and add-ons.
Most open source ERP systems are web applications. You can download and install them on your server. But if you don't want (or don't have the skills or staff) to maintain a system yourself, then make sure there's a hosted version of the application available.
Finally, you'll want to make sure the application has good documentation and good support—either in the form of paid support or an active user community.
There are a number of flexible, feature-rich, and cost-effective open source ERP systems out there. Here are nine to check out if you're in the market for such a system.
### ADempiere
Like most other open source ERP solutions, [ADempiere][2] is targeted at small and midsized businesses. It's been around awhile—the project was formed in 2006 as a fork from the Compiere ERP software.
Its Italian name means to achieve or satisfy, and its "multidimensional" ERP features aim to help businesses satisfy a wide range of needs. It adds supply chain management (SCM) and customer relationship management (CRM) features to its ERP suite to help manage sales, purchasing, inventory, and accounting processes in one piece of software. Its latest release, v.3.9.0, updated its user interface, point-of-sale, HR, payroll, and other features.
As a multiplatform, Java-based cloud solution, ADempiere is accessible on Linux, Unix, Windows, MacOS, smartphones, and tablets. It is licensed under [GPLv2][3]. If you'd like to learn more, take its [demo][4] for a test run or access its [source code][5] on GitHub.
### Apache OFBiz
[Apache OFBiz][6]'s suite of related business tools is built on a common architecture that enables organizations to customize the ERP to their needs. As a result, it's best suited for midsize or large enterprises that have the internal development resources to adapt and integrate it within their existing IT and business processes.
OFBiz is a mature open source ERP system; its website says it's been a top-level Apache project for a decade. [Modules][7] are available for accounting, manufacturing, HR, inventory management, catalog management, CRM, and e-commerce. You can also try out its e-commerce web store and backend ERP applications on its [demo page][8].
Apache OFBiz's source code can be found in the [project's repository][9]. It is written in Java and licensed under an [Apache 2.0 license][10].
### Dolibarr
[Dolibarr][11] offers end-to-end management for small and midsize businesses—from keeping track of invoices, contracts, inventory, orders, and payments to managing documents and supporting electronic point-of-sale system. It's all wrapped in a fairly clean interface.
If you're wondering what Dolibarr can't do, [here's some documentation about that][12].
In addition to an [online demo][13], Dolibarr also has an [add-ons store][14] where you can buy software that extends its features. You can check out its [source code][15] on GitHub; it's licensed under [GPLv3][16] or any later version.
### ERPNext
[ERPNext][17] is one of those classic open source projects; in fact, it was [featured on Opensource.com][18] way back in 2014. It was designed to scratch a particular itch, in this case replacing a creaky and expensive proprietary ERP implementation.
ERPNext was built for small and midsized businesses. It includes modules for accounting, managing inventory, sales, purchase, and project management. The applications that make up ERPNext are form-driven—you fill information in a set of fields and let the application do the rest. The whole suite is easy to use.
If you're interested, you can request a [demo][19] before taking the plunge and [downloading it][20] or [buying a subscription][21] to the hosted service.
### Metasfresh
[Metasfresh][22]'s name reflects its commitment to keeping its code "fresh." It has released weekly updates since late 2015, when its founders forked the code from the ADempiere project. Like ADempiere, it's an open source ERP based on Java targeted at the small and midsize business market.
While it's a younger project than most of the other software described here, it's attracted some early, positive attention, such as being named a finalist for the Initiative Mittelstand "best of open source" IT innovation award.
Metasfresh is free when self-hosted or for one user via the cloud, or on a monthly subscription fee basis as a cloud-hosted solution for 1-100 users. Its [source code][23] is available under the [GPLv2][24] license at GitHub and its cloud version is licensed under GPLv3.
### Odoo
[Odoo][25] is an integrated suite of applications that includes modules for project management, billing, accounting, inventory management, manufacturing, and purchasing. Those modules can communicate with each other to efficiently and seamlessly exchange information.
While ERP can be complex, Odoo makes it friendlier with a simple, almost spartan interface. The interface is reminiscent of Google Drive, with just the functions you need visible. You can [give Odoo a try][26] before you decide to sign up.
Odoo is a web-based tool. Subscriptions to individual modules will set you back $20 (USD) a month for each one. You can also [download it][27] or grab the [source code][28] from GitHub. It's licensed under [LGPLv3][29].
### Opentaps
[Opentaps][30], one of the few open source ERP solutions designed for larger businesses, packs a lot of power and flexibility. This is no surprise because it's built on top of Apache OFBiz.
You get the expected set of modules that help you manage inventory, manufacturing, financials, and purchasing. You also get an analytics feature that helps you analyze all aspects of your business. You can use that information to better plan into the future. Opentaps also packs a powerful reporting function.
On top of that, you can [buy add-ons and additional modules][31] to enhance Opentaps' capabilities. They include integration with Amazon Marketplace Services and FedEx. Before you [download Opentaps][32], give the [online demo][33] a try. It's licensed under [GPLv3][34].
### WebERP
[WebERP][35] is exactly as it sounds: An ERP system that operates through a web browser. The only other software you need is a PDF reader to view reports.
Specifically, it's an accounting and business management solution geared toward wholesale, distribution, and manufacturing businesses. It also integrates with [third-party business software][36], including a point-of-sale system for multi-branch retail management, an e-commerce module, and wiki software for building a business knowledge base. It's written in PHP and aims to be a low-footprint, efficient, fast, and platform-independent system that's easy for general business users.
WebERP is actively being developed and has an active [forum][37], where you can ask questions or learn more about using the application. You can also try a [demo][38] or download the [source code][39] (licensed under [GPLv2][40]) on GitHub.
### xTuple PostBooks
If your manufacturing, distribution, or e-commerce business has outgrown its small business roots and is looking for an ERP to grow with you, you may want to check out [xTuple PostBooks][41]. It's a comprehensive solution built around its core ERP, accounting, and CRM features that adds inventory, distribution, purchasing, and vendor reporting capabilities.
xTuple is available under the Common Public Attribution License ([CPAL][42]), and the project welcomes developers to fork it to create other business software for inventory-based manufacturers. Its web app core is written in JavaScript, and its [source code][43] can be found on GitHub. To see if it's right for you, register for a free [demo][44] on xTuple's website.
There are many other open source ERP options you can choose from—others you might want to check out include [Tryton][45], which is written in Python and uses the PostgreSQL database engine, or the Java-based [Axelor][46], which touts users' ability to create or modify business apps with a drag-and-drop interface. And, if your favorite open source ERP solution isn't on the list, please share it with us in the comments. You might also check out our list of top [supply chain management tools][47].
This article is updated from a [previous version][48] authored by Opensource.com moderator [Scott Nesbitt][49].
--------------------------------------------------------------------------------
via: https://opensource.com/tools/enterprise-resource-planning
作者:[Opensource.com][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:http://en.wikipedia.org/wiki/Enterprise_resource_planning
[2]:http://www.adempiere.net/welcome
[3]:http://wiki.adempiere.net/License
[4]:http://www.adempiere.net/web/guest/demo
[5]:https://github.com/adempiere/adempiere
[6]:http://ofbiz.apache.org/
[7]:https://ofbiz.apache.org/business-users.html#UsrModules
[8]:http://ofbiz.apache.org/ofbiz-demos.html
[9]:http://ofbiz.apache.org/source-repositories.html
[10]:http://www.apache.org/licenses/LICENSE-2.0
[11]:http://www.dolibarr.org/
[12]:http://wiki.dolibarr.org/index.php/What_Dolibarr_can%27t_do
[13]:http://www.dolibarr.org/onlinedemo
[14]:http://www.dolistore.com/
[15]:https://github.com/Dolibarr/dolibarr
[16]:https://github.com/Dolibarr/dolibarr/blob/develop/COPYING
[17]:https://erpnext.com/
[18]:https://opensource.com/business/14/11/building-open-source-erp
[19]:https://frappe.erpnext.com/request-a-demo
[20]:https://erpnext.com/download
[21]:https://erpnext.com/pricing
[22]:http://metasfresh.com/en/
[23]:https://github.com/metasfresh/metasfresh
[24]:https://github.com/metasfresh/metasfresh/blob/master/LICENSE.md
[25]:https://www.odoo.com/
[26]:https://www.odoo.com/page/start
[27]:https://www.odoo.com/page/download
[28]:https://github.com/odoo
[29]:https://github.com/odoo/odoo/blob/11.0/LICENSE
[30]:http://www.opentaps.org/
[31]:http://shop.opentaps.org/
[32]:http://www.opentaps.org/products/download
[33]:http://www.opentaps.org/products/online-demo
[34]:https://www.gnu.org/licenses/agpl-3.0.html
[35]:http://www.weberp.org/
[36]:http://www.weberp.org/Links.html
[37]:http://www.weberp.org/forum/
[38]:http://www.weberp.org/weberp/
[39]:https://github.com/webERP-team/webERP
[40]:https://github.com/webERP-team/webERP#legal
[41]:https://xtuple.com/
[42]:https://xtuple.com/products/license-options#cpal
[43]:http://xtuple.github.io/
[44]:https://xtuple.com/free-demo
[45]:http://www.tryton.org/
[46]:https://www.axelor.com/
[47]:https://opensource.com/tools/supply-chain-management
[48]:https://opensource.com/article/16/3/top-4-open-source-erp-systems
[49]:https://opensource.com/users/scottnesbitt

View File

@ -1,155 +0,0 @@
translating by wyxplus
Things You Should Know About Ubuntu 18.04
======
[Ubuntu 18.04 release][1] is just around the corner. I can see lots of questions from Ubuntu users in various Facebook groups and forums. I also organized Q&A sessions on Facebook and Instagram to know what Ubuntu users are wondering about Ubuntu 18.04.
I have tried to answer those frequently asked questions about Ubuntu 18.04 here. I hope it helps clear your doubts if you had any. And if you still have questions, feel free to ask in the comment section below.
### What to expect in Ubuntu 18.04
![Ubuntu 18.04 Frequently Asked Questions][2]
Just for clarification, some of the answers here are influenced by my personal opinion. If you are an experienced/aware Ubuntu user, some of the questions may sound silly to you. If that's the case, just ignore those questions.
#### Can I install Unity on Ubuntu 18.04?
Yes, you can.
Canonical knows that there are people who simply loved Unity. This is why it has made Unity 7 available in the Universe repository. This is a community-maintained edition and Ubuntu doesn't develop it directly.
I advise using the default GNOME first, and if you really cannot tolerate it, then go on [installing Unity on Ubuntu 18.04][3].
#### What GNOME version does it have?
At the time of its release, Ubuntu 18.04 has GNOME 3.28.
#### Can I install vanilla GNOME on it?
Yes, you can.
Existing GNOME users might not like the Unity-resembling, customized GNOME desktop in Ubuntu 18.04. There are some packages available in Ubuntu's main and universe repositories that allow you to [install vanilla GNOME on Ubuntu 18.04][4].
#### Has the memory leak in GNOME been fixed?
Yes. The [infamous memory leak in GNOME 3.28][5] has been fixed and [Ubuntu is already testing the fix][6].
Just to clarify, the memory leak was not caused by Ubuntu. It was/is impacting all Linux distributions that use GNOME 3.28. A new patch was released under GNOME 3.28.1 to fix this memory leak.
#### How long will Ubuntu 18.04 be supported?
It is a long-term support (LTS) release and, like any LTS release, it will be supported for five years. That means Ubuntu 18.04 will get security and maintenance updates until April 2023. This is also true for all participating flavors except Ubuntu Studio.
#### When will Ubuntu 18.04 be released?
Ubuntu 18.04 LTS has been released on 26th April. All the participating flavors like Kubuntu, Lubuntu, Xubuntu, Budgie, MATE etc will have their 18.04 release available on the same day.
It seems [Ubuntu Studio will not have 18.04 as LTS release][7].
#### Is it possible to upgrade to Ubuntu 18.04 from 16.04/17.10? Can I upgrade from Ubuntu 16.04 with Unity to Ubuntu 18.04 with GNOME?
Yes, absolutely. Once Ubuntu 18.04 LTS is released, you can easily upgrade to the new version.
If you are using Ubuntu 17.10, make sure that in Software & Updates -> Updates, “Notify me of a new Ubuntu version” is set to “For any new version”.
![Get notified for a new version in Ubuntu][8]
If you are using Ubuntu 16.04, make sure that in Software & Updates -> Updates, “Notify me of a new Ubuntu version” is set to “For long-term support versions”.
![Ubuntu 18.04 upgrade from Ubuntu 16.04][9]
You should get system notification about the availability of the new versions. After that, upgrading to Ubuntu 18.04 is a matter of clicks.
Even if Ubuntu 16.04 was Unity, you can still [upgrade to Ubuntu 18.04][10] GNOME.
#### What does upgrading to Ubuntu 18.04 mean? Will I lose my data?
If you are using Ubuntu 17.10 or Ubuntu 16.04, sooner or later Ubuntu will notify you that Ubuntu 18.04 is available. If you have a good internet connection that can download 1.5 GB of data, you can upgrade to Ubuntu 18.04 in a few clicks and in under 30 minutes.
You dont need to create a new USB and do a fresh install. Once the upgrade procedure finishes, youll have the new Ubuntu version available.
Normally, your data, documents etc are safe in the upgrade procedure. However, keeping a backup of your important documents is always a good idea.
#### When will I get to upgrade to Ubuntu 18.04?
If you are using Ubuntu 17.10 and have correct update settings in place (as mentioned in the previous section), you should be notified for upgrading to Ubuntu 18.04 within a few days of Ubuntu 18.04 release. Since Ubuntu servers encounter heavy load on the release day, not everyone gets the upgrade the same day.
For Ubuntu 16.04 users, it may take some weeks before they are officially notified of the availability of Ubuntu 18.04. Usually, this will happen after the first point release Ubuntu 18.04.1. This point release fixes the newly discovered bugs in 18.04.
#### If I upgrade to Ubuntu 18.04 can I downgrade to 17.10 or 16.04?
No, you cannot. While upgrading to the newer version is easy, there is no option to downgrade. If you want to go back to Ubuntu 16.04, youll have to do a fresh install.
#### Can I use Ubuntu 18.04 on 32-bit systems?
Yes and no.
If you are already using the 32-bit version of Ubuntu 16.04 or 17.10, you may still get to upgrade to Ubuntu 18.04. However, you won't find a 32-bit Ubuntu 18.04 ISO anymore. In other words, you cannot do a fresh install of the 32-bit version of Ubuntu 18.04 GNOME.
The good news here is that other official flavors like Ubuntu MATE, Lubuntu etc still have the 32-bit ISO of their new versions.
In any case, if you have a 32-bit system, chances are that your system is weak on hardware. Youll be better off using lightweight [Ubuntu MATE][11] or [Lubuntu][12] on such system.
#### Where can I download Ubuntu 18.04?
Once 18.04 is released, you can get the ISO image of Ubuntu 18.04 from its website. You have both direct download and torrent options. Other official flavors will be available on their official websites.
#### Should I do a fresh install of Ubuntu 18.04 or upgrade to it from 16.04/17.10?
If you have a choice, make a backup of your data and do a fresh install of Ubuntu 18.04.
Upgrading to 18.04 from an existing version is a convenient option. However, in my opinion, it still keeps some traces/packages of the older version. A fresh install is always cleaner.
#### For a fresh install, should I install Ubuntu 16.04 or Ubuntu 18.04?
If you are going to install Ubuntu on a system, go for Ubuntu 18.04 instead of 16.04.
Both of them are long-term support release and will be supported for a long time. Ubuntu 16.04 will get maintenance and security updates until 2021 and 18.04 until 2023.
However, I would suggest that you use Ubuntu 18.04. Any LTS release gets [hardware updates for a limited time][13] (two and a half years I think). After that, it only gets maintenance updates. If you have newer hardware, youll get better support in 18.04.
Also, many application developers will start focusing on Ubuntu 18.04 soon. Newly created PPAs might only support 18.04 in a few months. Using 18.04 has its advantages over 16.04.
#### Will it be easier to install printer-scanner drivers instead of using the CLI?
I am not an expert when it comes to printers so my opinion is based on my limited knowledge in this field. Most of the new printers support [IPP protocol][14] and thus they should be well supported in Ubuntu 18.04. I cannot say the same about older printers.
#### Does Ubuntu 18.04 have better support for Realtek and other WiFi adapters?
No specific information on this part.
#### What are the system requirements for Ubuntu 18.04?
For the default GNOME version, you should have [4 GB of RAM for a comfortable use][15]. A processor released in last 8 years will work as well. Anything older than that should use a [lightweight Linux distribution][16] such as [Lubuntu][12].
#### Any other questions about Ubuntu 18.04?
If you have any other doubts regarding Ubuntu 18.04, please feel free to leave a comment below. If you think some other information should be added to the list, please let me know.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-18-04-faq/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/ubuntu-18-04-release-features/
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubuntu-18-04-faq-800x450.png
[3]:https://itsfoss.com/use-unity-ubuntu-17-10/
[4]:https://itsfoss.com/vanilla-gnome-ubuntu/
[5]:https://feaneron.com/2018/04/20/the-infamous-gnome-shell-memory-leak/
[6]:https://community.ubuntu.com/t/help-test-memory-leak-fixes-in-18-04-lts/5251
[7]:https://www.omgubuntu.co.uk/2018/04/ubuntu-studio-plans-to-reboot
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/03/upgrade-ubuntu-2.jpeg
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/ubuntu-18-04-upgrade-settings-800x379.png
[10]:https://itsfoss.com/upgrade-ubuntu-version/
[11]:https://ubuntu-mate.org/
[12]:https://lubuntu.net/
[13]:https://www.ubuntu.com/info/release-end-of-life
[14]:https://www.pwg.org/ipp/everywhere.html
[15]:https://help.ubuntu.com/community/Installation/SystemRequirements
[16]:https://itsfoss.com/lightweight-linux-beginners/

View File

@ -0,0 +1,194 @@
How to get a core dump for a segfault on Linux
============================================================
This week at work I spent all week trying to debug a segfault. I'd never done this before, and some of the basic things involved (get a core dump! find the line number that segfaulted!) took me a long time to figure out. So here's a blog post explaining how to do those things!
At the end of this blog post, you should know how to go from “oh no my program is segfaulting and I have no idea what is happening” to “well, at least I know what its stack / line number was when it segfaulted!”.
### what's a segfault?
A “segmentation fault” is when your program tries to access memory that it's not allowed to access (there's a tiny example after the list below). This can be caused by:
* trying to dereference a null pointer (you're not allowed to access the memory address `0`)
* trying to dereference some other pointer that isn't in your memory
* a C++ vtable pointer that got corrupted and is pointing to the wrong place, which causes the program to try to execute some memory that isn't executable
* some other things that I don't understand, like I think misaligned memory accesses can also segfault
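To make the first cause concrete, here is a minimal example of my own (written in Rust with an unsafe raw-pointer dereference; the equivalent C would just dereference `(int *)0`) that crashes with SIGSEGV when you run it:
```
fn main() {
    // Build a pointer to address 0 and read through it. Address 0 is never
    // mapped into the process, so the kernel kills the program with SIGSEGV.
    let address: usize = 0;
    let p = address as *const i32;
    unsafe {
        println!("{}", *p);
    }
}
```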
This “C++ vtable pointer” thing is what was happening to my segfaulting program. I might explain that in a future blog post because I didn't know any C++ at the beginning of this week, and this vtable lookup thing was a new way for a program to segfault that I didn't know about.
But! This blog post isn't about C++ bugs. Let's talk about the basics, like, how do we even get a core dump?
### step 1: run valgrind
I found that the easiest way to figure out why my program was segfaulting was to use valgrind: I ran
```
valgrind -v your-program
```
and this gave me a stack trace of what happened. Neat!
But I also wanted to do a more in-depth investigation and find out more than just what valgrind was telling me! So I wanted to get a core dump and explore it.
### How to get a core dump
A core dump is a copy of your program's memory, and it's useful when you're trying to debug what went wrong with your problematic program.
When your program segfaults, the Linux kernel will sometimes write a core dump to disk. When I originally tried to get a core dump, I was pretty frustrated for a long time because Linux wasn't writing a core dump!! Where was my core dump????
Here's what I ended up doing:
1. Run `ulimit -c unlimited` before starting my program
2. Run `sudo sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t`
### ulimit: set the max size of a core dump
`ulimit -c` sets the maximum size of a core dump. It's often set to 0, which means that the kernel won't write core dumps at all. It's in kilobytes. ulimits are per process: you can see a process's limits by running `cat /proc/PID/limits`.
For example these are the limits for a random Firefox process on my system:
```
$ cat /proc/6309/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 30571 30571 processes
Max open files 1024 1048576 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 30571 30571 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
```
The kernel uses the soft limit (in this case, “max core file size = 0”) when deciding how big of a core file to write. You can increase the soft limit up to the hard limit using the `ulimit` shell builtin (`ulimit -c unlimited`!)
### kernel.core_pattern: where core dumps are written
`kernel.core_pattern` is a kernel parameter or a “sysctl setting” that controls where the Linux kernel writes core dumps to disk.
Kernel parameters are a way to set global settings on your system. You can get a list of every kernel parameter by running `sysctl -a`, or use `sysctl kernel.core_pattern` to look at the `kernel.core_pattern` setting specifically.
So `sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t` will write core dumps to `/tmp/core-<a bunch of stuff identifying the process>`
If you want to know more about what these `%e`, `%p` parameters mean, see [man core][1].
It's important to know that `kernel.core_pattern` is a global setting. It's good to be a little careful about changing it, because it's possible that other systems depend on it being set a certain way.
### kernel.core_pattern & Ubuntu
By default on Ubuntu systems, this is what `kernel.core_pattern` is set to:
```
$ sysctl kernel.core_pattern
kernel.core_pattern = |/usr/share/apport/apport %p %s %c %d %P
```
This caused me a lot of confusion (what is this apport thing and what is it doing with my core dumps??) so here's what I learned about this:
* Ubuntu uses a system called “apport” to report crashes in apt packages
* Setting `kernel.core_pattern=|/usr/share/apport/apport %p %s %c %d %P` means that core dumps will be piped to `apport`
* apport has logs in /var/log/apport.log
* apport by default will ignore crashes from binaries that aren't part of an Ubuntu package
I ended up just overriding this Apport business and setting `kernel.core_pattern` to `/tmp/core-%e.%p.%h.%t` (via `sysctl -w`), because I was on a dev machine, I didn't care whether Apport was working or not, and I didn't feel like trying to convince Apport to give me my core dumps.
### So you have a core dump. Now what?
Okay, now we know about ulimits and `kernel.core_pattern`, and you actually have a core dump file on disk in `/tmp`. Amazing! Now what??? We still don't know why the program segfaulted!
The next step is to open the core file with `gdb` and get a backtrace.
### Getting a backtrace from gdb
You can open a core file with gdb like this:
```
$ gdb -c my_core_file
```
Next, we want to know what the stack was when the program crashed. Running `bt` at the gdb prompt will give you a backtrace. In my case gdb hadn't loaded symbols for the binary, so it was just like `??????`. Luckily, loading symbols fixed it.
Here's how to load debugging symbols:
```
symbol-file /path/to/my/binary
sharedlibrary
```
This loads symbols from the binary and from any shared libraries the binary uses. Once I did that, gdb gave me a beautiful stack trace with line numbers when I ran `bt`!!!
If you want this to work, the binary should be compiled with debugging symbols. Having line numbers in your stack traces is extremely helpful when trying to figure out why a program crashed :)
### look at the stack for every thread
Here's how to get the stack for every thread in gdb!
```
thread apply all bt full
```
### gdb + core dumps = amazing
If you have a core dump & debugging symbols and gdb, you are in an amazing situation!! You can go up and down the call stack, print out variables, and poke around in memory to see what happened. It's the best.
If you are still working on being a gdb wizard, you can also just print out the stack trace with `bt` and that's okay :)
### ASAN
Another path to figuring out your segfault is to compile the program with AddressSanitizer (“ASAN”) (`$CC -fsanitize=address`) and run it. I'm not going to discuss that in this post because this is already pretty long, and anyway in my case the segfault disappeared with ASAN turned on for some reason, possibly because the ASAN build used a different memory allocator (system malloc instead of tcmalloc).
I might write about ASAN more in the future if I ever get it to work :)
### getting a stack trace from a core dump is pretty approachable!
This blog post sounds like a lot, and I was pretty confused when I was doing it, but really there aren't all that many steps to getting a stack trace out of a segfaulting program:
1. try valgrind
if that doesn't work, or if you want to have a core dump to investigate:
1. make sure the binary is compiled with debugging symbols
2. set `ulimit` and `kernel.core_pattern` correctly
3. run the program
4. open your core dump with `gdb`, load the symbols, and run `bt`
5. try to figure out what happened!!
Using gdb, I was able to figure out that there was a C++ vtable entry pointing to some corrupt memory, which was somewhat helpful and helped me feel like I understood C++ a bit better. Maybe we'll talk more about how to use gdb to figure things out another day!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2018/04/28/debugging-a-segfault-on-linux/
作者:[Julia Evans ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://jvns.ca/about/
[1]:http://man7.org/linux/man-pages/man5/core.5.html

View File

@ -1,231 +0,0 @@
KevinSJ Translating
A Beginners Guide To Cron Jobs
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs1-720x340.jpg)
**Cron** is one of the most useful utilities that you can find in any Unix-like operating system. It is used to schedule commands at a specific time. These scheduled commands or tasks are known as “Cron Jobs”. Cron is generally used for running scheduled backups, monitoring disk space, periodically deleting files that are no longer required (for example, log files), running system maintenance tasks, and a lot more. In this brief guide, we will see the basic usage of Cron Jobs in Linux.
### The Beginners Guide To Cron Jobs
The typical format of a cron job is:
```
Minute(0-59) Hour(0-23) Day_of_month(1-31) Month(1-12) Day_of_week(0-6) Command_to_execute
```
Just memorize the cron job format or print the following illustration and keep it in your desk.
![][2]
In the above picture, the asterisks refer to the specific blocks of time.
To display the contents of the **crontab** file of the currently logged in user:
```
$ crontab -l
```
To edit the current users cron jobs, do:
```
$ crontab -e
```
If it is the first time, you will be asked to choose an editor to edit the jobs.
```
no crontab for sk - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/nano <---- easiest
2. /usr/bin/vim.basic
3. /usr/bin/vim.tiny
4. /bin/ed
Choose 1-4 [1]:
```
Choose any one that suits you. Here is how a sample crontab file looks.
![][3]
In this file, you need to add your cron jobs.
To edit the crontab of a different user, for example ostechnix, do:
```
$ crontab -u ostechnix -e
```
Let us see some examples.
To run a cron job **every minute**, the format should be like below.
```
* * * * * <command-to-execute>
```
To run a cron job every 5 minutes, add the following in your crontab file.
```
*/5 * * * * <command-to-execute>
```
To run a cron job at every quarter hour (every 15th minute), add this:
```
*/15 * * * * <command-to-execute>
```
To run a cron job at minute 30 of every hour, add:
```
30 * * * * <command-to-execute>
```
You can also define multiple time intervals separated by commas. For example, the following cron job will run three times every hour, at minutes 0, 5 and 10:
```
0,5,10 * * * * <command-to-execute>
```
Run a cron job every half hour:
```
*/30 * * * * <command-to-execute>
```
Run a job every hour:
```
0 * * * * <command-to-execute>
```
Run a job every 2 hours:
```
0 */2 * * * <command-to-execute>
```
Run a job every day (It will run at 00:00):
```
0 0 * * * <command-to-execute>
```
Run a job every day at 3am:
```
0 3 * * * <command-to-execute>
```
Run a job every Sunday:
```
0 0 * * SUN <command-to-execute>
```
Or,
```
0 0 * * 0 <command-to-execute>
```
It will run exactly at 00:00 on Sunday.
Run a job on every day-of-week from Monday through Friday, i.e., every weekday:
```
0 0 * * 1-5 <command-to-execute>
```
The job will start at 00:00.
Run a job every month:
```
0 0 1 * * <command-to-execute>
```
Run a job at 16:15 on day-of-month 1:
```
15 16 1 * * <command-to-execute>
```
Run a job every quarter, i.e., on day-of-month 1 in every 3rd month:
```
0 0 1 */3 * <command-to-execute>
```
Run a job on a specific month at a specific time:
```
5 0 * 4 * <command-to-execute>
```
The job will start at 00:05 in April.
Run a job every 6 months:
```
0 0 1 */6 * <command-to-execute>
```
This cron job will start at 00:00 on day-of-month 1 in every 6th month.
Run a job every year:
```
0 0 1 1 * <command-to-execute>
```
This cron job will start at 00:00 on day-of-month 1 in January.
We can also use the following strings to define job.
* `@reboot` Run once, at startup.
* `@yearly` Run once a year.
* `@annually` (same as `@yearly`).
* `@monthly` Run once a month.
* `@weekly` Run once a week.
* `@daily` Run once a day.
* `@midnight` (same as `@daily`).
* `@hourly` Run once an hour.
For example, to run a job every time the server is rebooted, add this line in your crontab file.
```
@reboot <command-to-execute>
```
To remove all cron jobs for the current user:
```
$ crontab -r
```
There is also a dedicated website named [**crontab.guru**][4] for learning cron job schedule expressions. The site provides a lot of cron job examples.
For more details, check man pages.
```
$ man crontab
```
And, that's all for now. At this point, you should have a basic understanding of cron jobs and how to use them in the real world. More good stuff to come. Stay tuned!!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-job-format-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs-1.png
[4]:https://crontab.guru/

View File

@ -1,3 +1,6 @@
Translating by MjSeven
4 Firefox extensions to install now
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna)

View File

@ -0,0 +1,92 @@
Give Your Linux Desktop a Stunning Makeover With Xenlism Themes
============================================================
_Brief: Xenlism theme pack provides an aesthetically pleasing GTK theme, colorful icons, and minimalist wallpapers to transform your Linux desktop into an eye-catching setup._
It's not every day that I dedicate an entire article to a theme unless I find something really awesome. I used to cover themes and icons regularly. But lately, I've preferred having lists of [best GTK themes][6] and icon themes. This is more convenient for me and for you as well, as you get to see many beautiful themes in one place.
After the [Pop OS theme][7] suite, Xenlism is another theme that has left me awestruck by its look.
![Xenlism GTK theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlishm-minimalism-gtk-theme-800x450.jpeg)
The Xenlism GTK theme is based on the Arc theme, an inspiration behind so many themes these days. The GTK theme provides window buttons similar to macOS, which I neither like nor dislike. The GTK theme has a flat, minimalist layout and I like that.
There are two icon themes in the Xenlism suite. Xenlism Wildfire is an older one and has already made it to our list of [best icon themes][8].
![Beautiful Xenlism Wildfire theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-wildfire-theme-800x450.jpeg)
Xenlism Wildfire Icons
Xenlism Storm is a relatively new icon theme but is equally beautiful.
![Beautiful Xenlism Storm theme for Ubuntu and Other Linux](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/xenlism-storm-theme-1-800x450.jpeg)
Xenlism Storm Icons
Xenlism themes are open source under GPL license.
### How to install Xenlism theme pack on Ubuntu 18.04
The Xenlism developer provides an easy way of installing the theme pack through a PPA. Though the PPA is available for Ubuntu 16.04, I found the GTK theme wasn't working with Unity. It works fine with the GNOME desktop in Ubuntu 18.04.
Open a terminal (Ctrl+Alt+T) and use the following commands one by one:
```
sudo add-apt-repository ppa:xenatt/xenlism
sudo apt update
```
This PPA offers four packages:
* xenlism-finewalls: for a set of wallpapers that will be available directly in the wallpaper section of Ubuntu. One of the wallpapers has been used in the screenshot.
* xenlism-minimalism-theme: GTK theme
* xenlism-storm: an icon theme (see previous screenshots)
* xenlism-wildfire-icon-theme: another icon theme with several color variants (folder colors get changed in the variants)
You can decide on your own what theme component you want to install. Personally, I dont see any harm in installing all the components.
```
sudo apt install xenlism-minimalism-theme xenlism-storm-icon-theme xenlism-wildfire-icon-theme xenlism-finewalls
```
You can use GNOME Tweaks for changing the theme and icons. If you are not familiar with the procedure already, I suggest reading this tutorial to learn [how to install themes in Ubuntu 18.04 GNOME][9].
### Getting Xenlism themes in other Linux distributions
You can install Xenlism themes on other Linux distributions as well. Installation instructions for various Linux distributions can be found on its website:
[Install Xenlism Themes][10]
### What do you think?
I know not everyone would agree with me but I loved this theme. I think you are going to see the glimpse of Xenlism theme in the screenshots in future tutorials on Its FOSS.
Did you like Xenlism theme? If not, what theme do you like the most? Share your opinion in the comment section below.
#### 关于作者
I am a professional software developer, and founder of It's FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I'm a huge fan of Agatha Christie's work.
--------------------------------------------------------------------------------
via: https://itsfoss.com/xenlism-theme/
作者:[Abhishek Prakash ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/author/abhishek/
[2]:https://itsfoss.com/xenlism-theme/#comments
[3]:https://itsfoss.com/category/desktop/
[4]:https://itsfoss.com/tag/themes/
[5]:https://itsfoss.com/tag/xenlism/
[6]:https://itsfoss.com/best-gtk-themes/
[7]:https://itsfoss.com/pop-icon-gtk-theme-ubuntu/
[8]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[9]:https://itsfoss.com/install-themes-ubuntu/
[10]:http://xenlism.github.io/minimalism/#install

View File

@ -1,57 +0,0 @@
translating---geekpi
How to find your IP address in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/satellite_radio_location.jpg?itok=KJUKSB6x)
Internet Protocol (IP) needs no introduction—we all use it daily. Even if you don't use it directly, when you type website-name.com on your web browser, it looks up the IP address of that URL and then loads the website.
Let's divide IP addresses into two categories: private and public. Private IP addresses are the ones your WiFi box (and company intranet) provide. They are in the range of 10.x.x.x, 172.16.x.x-172.31.x.x, and 192.168.x.x, where x=0 to 255. Public IP addresses, as the name suggests, are "public" and you can reach them from anywhere in the world. Every website has a unique IP address that can be reached by anyone and from anywhere; that is considered a public IP address.
Furthermore, there are two types of IP addresses: IPv4 and IPv6.
IPv4 addresses have the format x.x.x.x, where x=0 to 255. There are 2^32 (approximately 4 billion) possible IPv4 addresses.
IPv6 addresses have a more complex format using hex numbers. The total number of bits is 128, which means there are 2^128—340 undecillion!—possible IPv6 addresses. IPv6 was introduced to tackle the foreseeable exhaustion of IPv4 addresses in the near future.
As a network engineer, I recommend not sharing your machines public IP address with anyone. Your WiFi router has a public IP, which is the WAN (wide-area network) IP address, and it will be the same for any device connected to that WiFi. All the devices connected to the same WiFi have private IP addresses locally identified by the range provided above. For example, my laptop is connected with the IP address 192.168.0.5, and my phone is connected with 192.168.0.8. These are private IP addresses, but both would have the same public IP address.
The following commands will get you the IP address list to find public IP addresses for your machine:
1. `curl ifconfig.me`
2. `curl -4/-6 icanhazip.com`
3. `curl ipinfo.io/ip`
4. `curl api.ipify.org`
5. `curl checkip.dyndns.org`
6. `dig +short myip.opendns.com @resolver1.opendns.com`
7. `host myip.opendns.com resolver1.opendns.com`
8. `curl ident.me`
9. `curl bot.whatismyipaddress.com`
10. `curl ipecho.net/plain`
The following commands will get you the private IP address of your interfaces:
1. `ifconfig -a`
2. `ip addr (ip a)`
3. `hostname -I | awk '{print $1}'`
4. `ip route get 1.2.3.4 | awk '{print $7}'`
5. (Fedora) Wi-Fi Settings → click the settings icon next to the Wi-Fi name that you are connected to → both the IPv4 and IPv6 addresses are shown
6. `nmcli -p device show`
_Note: Some utilities need to be installed on your system based on the Linux distro you are using. Also, some of the noted commands use a third-party website to get the IP_
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/how-find-ip-address-linux
作者:[Archit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/architmodi

View File

@ -1,3 +1,4 @@
translating by wyxplus
How to Install and Configure KVM on Ubuntu 18.04 LTS Server
======
**KVM** (Kernel-based Virtual Machine) is an open source full virtualization solution for Linux-like systems. KVM provides virtualization functionality using virtualization extensions like **Intel VT** or **AMD-V**. Whenever we install KVM on any Linux box, it turns it into a hypervisor by loading kernel modules like **kvm-intel.ko** (for Intel-based machines) and **kvm-amd.ko** (for AMD-based machines).

View File

@ -0,0 +1,102 @@
How to Enable Click to Minimize On Ubuntu
============================================================
_Brief: This quick tutorial shows you how to enable click to minimize option on Ubuntu 18.04 and Ubuntu 16.04._
The launcher at the left hand side in [Ubuntu][7] is a handy tool for quickly accessing applications. When you click on an icon in the launcher, the application window appears in focus.
If you click again on the icon of an application already in focus, the default behavior is to do nothing. This may bother you if you expect the application window to be minimized on the second click.
Perhaps this GIF will better explain the click-to-minimize behavior on Ubuntu.
[video](https://giphy.com/gifs/linux-ubuntu-itsfoss-52FlrSIMxnZ1qq9koP?utm_source=iframe&utm_medium=embed&utm_campaign=Embeds&utm_term=https%3A%2F%2Fitsfoss.com%2Fclick-to-minimize-ubuntu%2F%3Futm_source%3Dnewsletter&%3Butm_medium=email&%3Butm_campaign=new_linux_laptop_ubuntu_1804_flavor_reviews_meltdown_20_and_other_linux_stuff&%3Butm_term=2018-05-23)
In my opinion, this should be the default behavior, but apparently Ubuntu doesn't think so. So what? Customization is one of the main reasons [why I use Linux][8], and this behavior can also be easily changed.
In this quick tutorial, I'll show you how to enable click to minimize on Ubuntu 18.04 and 16.04. I'll show both the command line and the GUI methods here.
### Enable click to minimize on Ubuntu using command line (recommended)
_This method is for Ubuntu 18.04 and 17.10 users with [GNOME desktop environment][1]_ .
The first option is using the terminal. I recommend this way of enabling minimize on click even if you are not comfortable with the command line.
It's not at all complicated. Open a terminal using the Ctrl+Alt+T shortcut or by searching for it in the menu. All you need to do is copy and paste the command below into the terminal.
```
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
```
There is no need to restart your system or anything of that sort. You can test the minimize on click behavior immediately afterwards.
If you do not like click to minimize behavior, you can set it back to default using the command below:
```
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
```
### Enable click to minimize on Ubuntu using GUI tool
You can do the same steps mentioned above using a GUI tool called [Dconf Editor][10]. It is a powerful tool that allows you to change many hidden aspects of your Linux desktop. I avoid recommending it because one wrong click here and there may screw up your desktop settings. So be careful while using this tool, keeping in mind that it works on single click and changes are applied immediately.
You can find and install Dconf Editor in the Ubuntu Software Center.
![dconf editor in Ubuntu software center](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/dconf-editor-ubuntu-800x250.png)
Once installed, launch Dconf Editor and go to org -> gnome -> shell -> extensions -> dash-to-dock. Scroll down a bit until you find click-action. Click on it to access the click action settings.
In here, turn off the “Use default value” option and change the “Custom Value” to 'minimize'.
![Enable minmize to click on Ubuntu using dconf editor](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/enable-minimize-click-dconf-800x425.png)
You can see that the minimize on click behavior has been applied instantly.
### Enable click to minimize on Ubuntu 16.04 Unity
If you are using the Unity desktop environment, you can easily do it using the Unity Tweak Tool. If you have not installed it already, look for Unity Tweak Tool in the Software Center and install it.
Once installed, launch Unity Tweak Tool and click on Launcher here.
![Enable minmize to click using Unity Tweak Tool](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/minimiz-click-ubuntu-unity-1.png)
Check the “Minimize single window application on click” option here.
![Enable minmize to click using Unity Tweak Tool](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/minimiz-click-ubuntu-unity.png)
That's all. The change takes effect right away.
### Did it work for you?
I hope this quick tip helped you to enable the minimize on click feature in Ubuntu. If you are using Ubuntu 18.04, I suggest reading [GNOME customization tips][11] for more such options.
If you have any questions or suggestions, please leave a comment. If it helped you, perhaps you could share this article on various social media platforms such as Reddit and Twitter.
#### 关于作者
I am a professional software developer, and founder of It's FOSS. I am an avid Linux lover and Open Source enthusiast. I use Ubuntu and believe in sharing knowledge. Apart from Linux, I love classic detective mysteries. I'm a huge fan of Agatha Christie's work.
--------------------------------------------------------------------------------
via: https://itsfoss.com/click-to-minimize-ubuntu/
作者:[Abhishek Prakash ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/abhishek/
[1]:https://www.gnome.org/
[2]:https://itsfoss.com/author/abhishek/
[3]:https://itsfoss.com/click-to-minimize-ubuntu/#comments
[4]:https://itsfoss.com/category/how-to/
[5]:https://itsfoss.com/tag/quick-tip/
[6]:https://itsfoss.com/tag/ubuntu-18-04/
[7]:https://www.ubuntu.com/
[8]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[9]:https://itsfoss.com/how-to-know-ubuntu-unity-version/
[10]:https://wiki.gnome.org/Projects/dconf
[11]:https://itsfoss.com/gnome-tricks-ubuntu/

View File

@ -0,0 +1,69 @@
KevinSJ translating
How CERN Is Using Linux and Open Source
============================================================
![CERN](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/atlas-cern.jpg?itok=IRLUYCNQ "CERN")
>CERN relies on open source technology to handle huge amounts of data generated by the Large Hadron Collider. The ATLAS (shown here) is a general-purpose detector that probes for fundamental particles. (Image courtesy: CERN)[Used with permission][2]
[CERN][3]
[CERN][6] really needs no introduction. Among other things, the European Organization for Nuclear Research created the World Wide Web and the Large Hadron Collider (LHC), the worlds largest particle accelerator, which was used in the discovery of the [Higgs boson][7]. Tim Bell, who is responsible for the organizations IT Operating Systems and Infrastructure group, says the goal of his team is “to provide the compute facility for 13,000 physicists around the world to analyze those collisions, understand what the universe is made of and how it works.”
CERN is conducting hardcore science, especially with the LHC, which [generates massive amounts of data][8] when its operational. “CERN currently stores about 200 petabytes of data, with over 10 petabytes of data coming in each month when the accelerator is running. This certainly produces extreme challenges for the computing infrastructure, regarding storing this large amount of data, as well as having the capability to process it in a reasonable timeframe. It puts pressure on the networking and storage technologies and the ability to deliver an efficient compute framework,” Bell said.
### [tim-bell-cern.png][4]
![Tim Bell](https://www.linux.com/sites/lcom/files/styles/floated_images/public/tim-bell-cern.png?itok=5eUOpip- "Tim Bell")
Tim Bell, CERN[Used with permission][1]Swapnil Bhartiya
The scale at which LHC operates and the amount of data it generates pose some serious challenges. But CERN is not new to such problems. Founded in 1954, CERN has been around for about 60 years. “We've always been facing computing challenges that are difficult problems to solve, but we have been working with open source communities to solve them,” Bell said. “Even in the 90s, when we invented the World Wide Web, we were looking to share this with the rest of humanity in order to be able to benefit from the research done at CERN and open source was the right vehicle to do that.”
### Using OpenStack and CentOS
Today, CERN is a heavy user of OpenStack, and Bell is one of the Board Members of the OpenStack Foundation. But CERN predates OpenStack. For several years, they have been using various open source technologies to deliver services through Linux servers.
“Over the past 10 years, we've found that rather than taking our problems ourselves, we find upstream open source communities with which we can work, who are facing similar challenges and then we contribute to those projects rather than inventing everything ourselves and then having to maintain it as well,” said Bell.
A good example is Linux itself. CERN used to be a Red Hat Enterprise Linux customer. But, back in 2004, they worked with Fermilab to  build their own Linux distribution called [Scientific Linux][9]. Eventually they realized that, because they were not modifying the kernel, there was no point in spending time spinning up their own distribution; so they migrated to CentOS. Because CentOS is a fully open source and community driven project, CERN could collaborate with the project and contribute to how CentOS is built and distributed.
CERN helps CentOS with infrastructure and they also organize CentOS DoJo at CERN where engineers can get together to improve the CentOS packaging.
In addition to OpenStack and CentOS, CERN is a heavy user of other open source projects, including Puppet for configuration management and Grafana and InfluxDB for monitoring, and it is involved in many more.
“We collaborate with around 170 labs around the world. So every time that we find an improvement in an open source project, other labs can easily take that and use it,” said Bell, “At the same time, we also learn from others. When large scale installations like eBay and Rackspace make changes to improve scalability of solutions, it benefits us and allows us to scale.”
### Solving realistic problems
Around 2012, CERN was looking at ways to scale computing for the LHC, but the challenge was people rather than technology. The number of staff that CERN employs is fixed. “We had to find ways in which we can scale the compute without requiring a large number of additional people in order to administer that,” Bell said. “OpenStack provided us with an automated API-driven, software-defined infrastructure.” OpenStack also allowed CERN to look at problems related to the delivery of services and then automate those, without having to scale the staff.
“We're currently running about 280,000 cores and 7,000 servers across two data centers in Geneva and in Budapest. We are  using software-defined infrastructure to automate everything, which allows us to continue to add additional servers while remaining within the same envelope of staff,” said Bell.
As time progresses, CERN will be dealing with even bigger challenges. The Large Hadron Collider has a roadmap out to 2035, including a number of significant upgrades. “We run the accelerator for three to four years and then have a period of 18 months or two years when we upgrade the infrastructure. This maintenance period allows us to also do some computing planning,” said Bell. CERN is also planning the High Luminosity Large Hadron Collider upgrade, which will allow for beams with higher luminosity. The upgrade would mean about 60 times more compute requirement compared to what CERN has today.
“With Moore's Law, we will maybe get one quarter of the way there, so we have to find ways under which we can be scaling the compute and the storage infrastructure correspondingly  and finding automation and solutions such as OpenStack will help that,” said Bell.
“When we started off the Large Hadron Collider and looked at how we would deliver the computing, it was clear that we couldn't put everything into the data center at CERN, so we devised a distributed grid structure, with tier zero at CERN and then a cascading structure around that,” said Bell. “There are around 12 large tier one centers and then 150 small universities and labs around the world. They receive samples of the data from the LHC in order to assist the physicists to understand and analyze the data.”
That structure means CERN is collaborating internationally, with hundreds of countries contributing toward the analysis of that data. It boils down to the fundamental principle that open source is not just about sharing code, its about collaboration among people to share knowledge and achieve what no single individual, organization, or company can achieve alone. Thats the Higgs boson of the open source world.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/5/how-cern-using-linux-open-source
作者:[SWAPNIL BHARTIYA ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://www.linux.com/licenses/category/used-permission
[3]:https://home.cern/about/experiments/atlas
[4]:https://www.linux.com/files/images/tim-bell-cernpng
[5]:https://www.linux.com/files/images/atlas-cernjpg
[6]:https://home.cern/
[7]:https://home.cern/topics/higgs-boson
[8]:https://home.cern/about/computing
[9]:https://www.scientificlinux.org/

View File

@ -0,0 +1,130 @@
IT 自动化的下一步是什么6 大趋势
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai_artificial_intelligence.png?itok=o0csm9l2)
我们最近介绍了[促进自动化的因素][1]、目前正在被人们采用的[趋势][2],以及那些刚开始将部分流程自动化的组织可以参考的[有用技巧][3]。
哦,我们还分享了在你的公司里[如何使用自动化的案例][4],以及[长期成功的关键][5]。
现在只有一个问题自动化的下一步是什么我们邀请了一些专家分享他们对[自动化][6]不远的将来的看法。以下是他们建议 IT 领导者密切关注的六大趋势。
### 1. 机器学习的成熟
关于[机器学习][7](或者类似的“自学习系统”)的讨论很多,但对绝大多数组织来说,实际执行这类项目仍然为时过早。不过预计这一点将会改变,机器学习将在下一波 IT 自动化浪潮中扮演至关重要的角色。
[Advanced Systems Concepts, Inc.][8] 公司工程总监 Mehul Amin 指出,机器学习是 IT 自动化接下来的关键增长领域之一。
“随着生成的数据量不断增长,自动化软件理应可以自我决策,否则这些就该是开发人员的责任了。”Amin 说,“例如,开发者构建需要执行的内容,但识别执行这些流程的最佳系统,则可能由系统内的软件分析来完成。”
把这个思路再延伸一下Amin 指出,机器学习可以让自动化系统在必要时提供额外的资源,以满足时间线或 SLA 的要求,同样也可以在不需要时释放这些资源,此外还有其他多种可能。
显然不只有 Amin 一个人这样认为。
[Sungard Availability Services][9] 公司首席架构师 Kiran Chitturi 表示,“IT自动化正在走向自我学习的方向” 。“系统将会能测试和监控自己,加强业务流程和软件交付能力。”
Chitturi 指出,自动化测试就是一个例子。脚本化测试已经被广泛采用,但这些自动化测试流程很快也将具备学习能力、演进得更快,例如在开发出新代码、或代码将对生产环境产生更广泛影响的时候。
### 2. 人工智能催生的自动化
上述原则同样适用于[人工智能][10],但它是一个独立的领域。可以预见,新兴的人工智能技术也将创造新的自动化机会。取决于你对人工智能的定义,机器学习可能在短期内对 IT 领域产生更大的影响,并且我们可能会看到这两个领域在定义和理解上有许多重叠之处。
[SolarWinds][11]公司技术负责人 Patrick Hubbard说“人工智能AI和机器学习的整合普遍被认为对未来几年的商业成功起至关重要的作用。”
### 3. 这并不意味着不再需要人力
让我们试着安慰一下那些不知所措的人:前两种趋势并不一定意味着我们将失去工作。
这很可能意味着各种角色的改变以及[全新角色][12]的创造。
但是在可预见的将来,至少,你不必需要机器人鞠躬。
“一台机器只能在给定的环境变量中运行,它不能自己选择引入新的变量,而在今天,只有人类可以这样做。”Hubbard 解释说,“但是对于 IT 专业人员来说,这将是一个需要培养人工智能和自动化技能的时代,例如对程序设计与编程的基本理解、管理人工智能和机器学习功能算法的能力,以及以更强的安全态势面对更复杂的网络攻击。”
Hubbard 分享了一些新工具或功能的例子,例如支持人工智能的安全软件,或者能远程发现石油管道维护需求的机器学习应用程序。两者都能提高效率和效果,但都不会取代负责信息安全或管道维护的人员。
“许多新功能仍需要人工监控”Hubbard 说。“例如,为了让机器确定一些‘预测’是否可能成为‘规律’,人为的管理是必要的。”
即使把机器学习和人工智能先放在一边,只看一般的 IT 自动化,同样的道理也成立,尤其是在软件开发生命周期中。
[Juniper Networks][13]公司自动化首席架构师 Matthew Oswalt 指出IT自动化增长的根本原因是它通过减少操作基础设施所需的人工工作量来创造直接价值。
“它还将操作工作流程表述为代码,而不再是容易过时的文档或口口相传的经验。”Oswalt 解释说,“操作人员仍然需要在[自动化]工具如何响应事件方面发挥积极作用。采用自动化的下一个阶段,是建立一个能够在整个 IT 范围内识别所发生的值得关注的事件、并以自主方式进行响应的系统。工程师可以使用事件驱动的自动化,以代码的形式提前定义他们的工作流程,而不是在凌晨 3 点爬起来应对基础设施的问题。他们可以依靠这个系统在任何时候都以同样的方式作出回应。”
### 4. 对自动化的焦虑将会减少
SolarWinds 公司的 Hubbard 指出,“自动化”一词本身就会带来大量的不确定性和担忧,而且不仅仅是在 IT 领域,而是跨越各个专业领域。他说这种担忧是合理的,但随之而来的一些恐惧可能被夸大了,甚至是被科技产业自己夸大的。现实反而可能成为这方面的镇静剂:当自动化的实际落地和实践帮助人们认识到上面第 3 条的时候,我们将会看到第 4 条的出现。
Hubbard 说:“今年我们可能会看到对自动化的焦虑有所减少,更多的组织开始接受人工智能和机器学习,把它们作为增强现有人力资源的一种方式。从历史上看,自动化通过降低完成小型任务所需的成本和时间,为更多的工作创造了空间,并让劳动力重新聚焦于那些无法自动化、需要人力的事情上。人工智能和机器学习也是如此。”
自动化还将减少围绕一个让 IT 领导者头疼的话题的焦虑:安全。正如[红帽][14]公司首席架构师 Matt Smith 最近[指出][15]的那样,自动化将越来越多地帮助 IT 部门降低与维护任务相关的安全风险。
他的建议是“首先在维护活动期间记录并自动化 IT 资产之间的交互。通过依靠自动化,您不仅可以消除那些历史上需要大量手工操作和精细手法的任务,还可以降低人为错误的风险,并展示当您的 IT 组织采纳变更和新的工作方法时可能发生的情况。最终,这将迅速减少对应用安全补丁的抵触。而且它还可以帮助您的企业在下一次重大安全事件中避免上头条。”
**[ 阅读全文: [12个企业安全坏习惯要打破。][16] ] **
### 5. 脚本和自动化工具将持续发展
许多组织把迈向更多自动化的第一步(通常是脚本或自动化工具,有时称为配置管理工具)视为“早期”工作。
但是随着各种自动化技术的使用不断深入,对这些工具的看法也在不断演变。
[DataVision][18] 首席运营官 Mark Abolafia 表示:“数据中心环境中存在很多重复性流程,容易出现人为错误,[Ansible][17] 等技术有助于缓解这些问题。通过 Ansible人们可以为一组操作编写特定的步骤并传入不同的变量例如地址等把过去需要人工干预、交付周期更长的冗长流程链自动化起来。”
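下面给出一个极简的示意(仅为说明思路,并非文中人物的原始方案;假设控制节点上已安装 Ansible其中的清单文件 `hosts.ini`、主机组 `web` 和 playbook 文件 `site.yml` 都是假设的名称):

```
# 用 ad-hoc 命令对 web 组的所有主机执行同一个检查(例如查看根分区磁盘占用)
ansible web -i hosts.ini -m command -a "df -h /"

# 更复杂的多步骤流程通常写成 playbook通过 -e 传入不同的变量(如版本号)后重复执行
ansible-playbook -i hosts.ini site.yml -e "app_version=1.2.3"
```

这样,原本需要人工逐台执行的步骤就变成了可以随时重放、并可带入不同变量的代码。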
**[想了解更多关于Ansible这个方面的知识吗阅读相关文章:[使用Ansible时的成功秘诀][19]。 ]**
另一个因素是:工具本身将继续变得更先进。
“使用先进的 IT 自动化工具,开发人员将能够在更短的时间内构建和自动化工作流程,减少容易出错的编码工作。”ASCI 公司的 Amin 说,“这些工具包括预先构建好、预先测试过的拖放式集成、API 作业、丰富的变量使用、引用功能以及对象修订历史记录。”
### 6. 自动化开创了新的指标机会
正如我们以前所说的那样IT 自动化不是万能的。它不会修复本已破碎的流程,也不会在其他方面成为解决组织一切问题的灵丹妙药。同样,自动化的持续推进也并不意味着不再需要衡量性能。
**[ 参见我们的相关文章 [DevOps指标你在衡量什么重要吗][20] ]**
实际上,自动化应该打开新的机会。
[Janeiro Digital][21] 公司架构副总裁 Josh Collins 说:“随着越来越多的开发活动(源代码管理、DevOps 流水线、工作项跟踪)转向 API 驱动的平台,把这些原始数据拼接在一起、描绘组织效率的机会和图景也随之出现。”
Collins 认为这是一种可能的新型“开发组织度量指标”。但不要误认为这意味着机器和算法可以突然预测IT所做的一切。
Collins 说:“无论是衡量单个资源还是整个团队,这些指标都可能非常有力,但应该结合大量的背景来解读。用这些数据去观察高层次的趋势、印证定性的观察,而不是机械地给你的团队打分。”
**想要获取更多这类内容IT 领导者可以[注册我们的每周电子邮件通讯][22]。**
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2018/3/what-s-next-it-automation-6-trends-watch
作者:[Kevin Casey][a]
译者:[MZqk](https://github.com/MZqk)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/kevin-casey
[1]:https://enterprisersproject.com/article/2017/12/5-factors-fueling-automation-it-now
[2]:https://enterprisersproject.com/article/2017/12/4-trends-watch-it-automation-expands
[3]:https://enterprisersproject.com/article/2018/1/getting-started-automation-6-tips
[4]:https://enterprisersproject.com/article/2018/1/how-make-case-it-automation
[5]:https://enterprisersproject.com/article/2018/1/it-automation-best-practices-7-keys-long-term-success
[6]:https://enterprisersproject.com/tags/automation
[7]:https://enterprisersproject.com/article/2018/2/how-spot-machine-learning-opportunity
[8]:https://www.advsyscon.com/en-us/
[9]:https://www.sungardas.com/en/
[10]:https://enterprisersproject.com/tags/artificial-intelligence
[11]:https://www.solarwinds.com/
[12]:https://enterprisersproject.com/article/2017/12/8-emerging-ai-jobs-it-pros
[13]:https://www.juniper.net/
[14]:https://www.redhat.com/en?intcmp=701f2000000tjyaAAA
[15]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break
[16]:https://enterprisersproject.com/article/2018/2/12-bad-enterprise-security-habits-break?sc_cid=70160000000h0aXAAQ
[17]:https://opensource.com/tags/ansible
[18]:https://datavision.com/
[19]:https://opensource.com/article/18/2/tips-success-when-getting-started-ansible?intcmp=701f2000000tjyaAAA
[20]:https://enterprisersproject.com/article/2017/7/devops-metrics-are-you-measuring-what-matters?sc_cid=70160000000h0aXAAQ
[21]:https://www.janeirodigital.com/
[22]:https://enterprisersproject.com/email-newsletter?intcmp=701f2000000tsjPAAQ

View File

@ -0,0 +1,156 @@
可以考虑的 9 个开源 ERP 系统
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)
拥有一定规模员工的企业需要大量的协调工作:定价、生产计划、会计和财务、管理开支、管理库存等等。把一堆互不相干的工具拼凑在一起去处理这些工作,既笨拙又低效。
那种方法没有任何弹性,要在各种各样互不相通的系统之间高效地移动数据也非常困难,同样,它还很难维护。
因此,大多数成长型企业都转而使用一个 [企业资源计划][1] (ERP) 系统。
在这个行业中的大咖有 Oracle、SAP、以及 Microsoft Dynamics。它们都提供了一个综合的系统但同时也很昂贵。如果你的企业支付不起如此昂贵的大系统或者你仅需要一个简单的系统怎么办呢你可以使用开源的产品来作为替代。
### 一个 ERP 系统中有什么东西
显然,你希望有一个满足你需要的系统。基于那些需要,更多的功能并不意味着就更好。但是,你的需要会根据你的业务的增长而变化的,因此,你希望能够找到一个 ERP 系统,它能够根据你新的需要而扩展它。那就意味着系统有额外的模块或者支持插件和附加功能。
大多数的开源 ERP 系统都是 web 应用程序。你可以下载并将它们安装到你的服务器上。但是,如果你不希望(或者没有相应技能或者人员)自己去维护系统,那么应该确保它们的应用程序提供托管版本。
最后,你还应该确保应用程序有良好的文档和支持 —— 要么是付费支持或者有一个活跃的用户社区。
有很多弹性很好的、功能丰富的、很划算的开源 ERP 系统。如果你正打算购买这样的系统,这里有我们挑选出来的 9 个。
### ADempiere
像大多数其它开源 ERP 解决方案一样,[ADempiere][2] 的目标客户是中小企业。它已经存在一段时间了,这个项目创建于 2006 年,是 Compiere ERP 软件的一个分支。
它的意大利语名字的意思是“实现”或者“满足”,它“涉及多个方面”的 ERP 特性旨在帮企业去满足各种需求。它在 ERP 中增加了供应链管理SCM和客户关系管理CRM功能能够让 ERP 套件在一个软件中去管理销售、采购、库存、以及帐务处理。它的最新版本是 v.3.9.0,更新了用户界面、销售点、人力资源、工资、以及其它的特性。
因为是一个跨平台的、基于 Java 的云解决方案ADempiere 可以运行在Linux、Unix、Windows、MacOS、智能手机、平板电脑上。它使用 [GPLv2][3] 授权。如果你想了解更多信息,这里有一个用于测试的 [demo][4],或者也可以在 GitHub 上查看它的 [源代码][5]。
### Apache OFBiz
[Apache OFBiz][6] 的业务相关的套件是构建在通用的架构上的,它允许企业根据自己的需要去定制 ERP。因此它是有内部开发资源的大中型企业去修改和集成它到它们现有的 IT 和业务流程的最佳套件。
OFBiz 是一个成熟的开源 ERP 系统;它的网站上说它是一个有十年历史的顶级 Apache 项目。可用的 [模块][7] 有帐务、生产制造、人力资源、存货管理、目录管理、客户关系管理、以及电子商务。你可以在它的 [demo 页面][8] 上试用电子商务的网上商店以及后端的 ERP 应用程序。
Apache OFBiz 的源代码能够在它的 [项目仓库][9] 中找到。它是用 Java 写的,它在 [Apache 2.0 license][10] 下可用。
### Dolibarr
[Dolibarr][11] 提供了中小型企业端到端的业务管理,从发票跟踪、合同、存货、订单、以及支付,到文档管理和电子化 POS 系统支持。它的全部功能封装在一个清晰的界面中。
如果你担心不会使用 Dolibarr[这里有一些关于它的文档][12]。
另外,还有一个 [在线演示][13]Dolibarr 也有一个 [插件商店][14],你可以在那是购买一些软件来扩展它的功能。你可以在 GitHub 上查看它的 [源代码][15];它在 [GPLv3][16] 或者任何它的最新版本许可下面使用。
### ERPNext
[ERPNext][17] 是这类开源项目中的其中一个;实际上它最初 [出现在 Opensource.com][18]。它被设计用于打破一个陈旧而昂贵的专用 ERP 系统的垄断局面。
ERPNext 适合于中小型企业。它包含的模块有帐务、存货管理、销售、采购、以及项目管理。ERPNext 是表单驱动的应用程序 — 你可以在一组字段中填入信息,然后让应用程序去完成剩余部分。整个套件非常易用。
如果你感兴趣,在你考虑参与之前,你可以请求获取一个 [demo][19],去 [下载它][20] 或者在托管服务上 [购买一个订阅][21]。
### Metasfresh
[Metasfresh][22] 的名字表示它承诺软件的代码始终保持“新鲜”。它自 2015 年以来每周发行一个更新版本,那时,它的代码是由创始人从 ADempiere 项目中 fork 的。与 ADempiere 一样,它是一个基于 Java 的开源 ERP目标客户是中小型企业。
虽然相比这里介绍的其它软件来说,它是一个很“年轻”的项目,但它早早就引起了一些人的注意,获得了很多积极的评价,比如被提名为“最佳开源”IT 创新奖的入围者。
Metasfresh 在自托管系统上或者在云上单用户使用时是免费的,或者可以按月交纳订阅费用。它的 [源代码][23] 在 GitHub 上,可以在遵守 [GPLv2][24] 许可的情况下使用,它的云版本是以 GPLv3 方式授权使用。
### Odoo
[Odoo][25] 是一个应用程序集成解决方案,它包含的模块有项目管理、帐单、存货管理、生产制造、以及采购。这些模块之间可以相互通讯,实现高效平滑地信息交换。
虽然 ERP 可能很复杂,但是 Odoo 通过简单、甚至简洁的界面使它变得很友好。这个界面让人联想到谷歌云盘,只显示你需要的功能。在你决定签订采购合同之前,你可以[先试用一下 Odoo][26]。
Odoo 是基于 web 的工具。按单个模块来订阅的话,每个模块每月需要支付 20 美元。你也可以 [下载它][27],或者可以从 GitHub 上获得 [源代码][28],它以 [LGPLv3][29] 方式授权。
### Opentaps
[Opentaps][30] 是专为大型业务设计的几个开源 ERP 解决方案之一,它的功能强大而灵活。这并不奇怪,因为它是在 Apache OFBiz 基础之上构建的。
你可以得到你所希望的模块组合,来帮你管理存货、生产制造、财务以及采购。它也有分析功能,帮你分析业务的各个方面,你可以借助这些信息把未来的计划做得更好。Opentaps 也包含一个强大的报表功能。
在它的基础之上,你还可以 [购买一些插件和附加模块][31] 去增强 Opentaps 的功能。包括与 Amazon Marketplace Services 和 FedEx 的集成等。在你 [下载 Opentaps][32] 之前,你可以到 [在线 demo][33] 上试用一下。它遵守 [GPLv3][34] 许可。
### WebERP
[WebERP][35] 是一个如它的名字所表示的那样:一个通过 Web 浏览器来使用的 ERP 系统。另外还需要的其它软件只有一个,那就是查看报告所使用的 PDF 阅读器。
具体来说,它是一个面向批发、分销、生产制造业务的账务和业务管理解决方案。它也可以与 [第三方的业务软件][36] 集成,包括多地点零售管理的销售点系统、电子商务模块、以及构建业务知识库的 wiki 软件。它是用 PHP 写的,并且它致力于成为低资源占用、高效、快速、以及平台无关的、普通商业用户易于使用的 ERP 系统。
WebERP 正在积极地进行开发,并且它有一个活跃的 [论坛][37],在那里你可以咨询问题或者学习关于如何使用这个应用程序的相关知识。你也可以试用一个 [demo][38],或者在 GitHub 上下载它的 [源代码][39](遵守 [GPLv2][40] 许可)
### xTuple PostBooks
如果你的生产制造、分销、电子商务业务已经从小规模成长起来,并且正在寻找一个适合成长型企业的 ERP 系统,那么你可以了解一下 [xTuple PostBooks][41]。它是一个围绕核心 ERP 与帐务功能构建的全面解决方案,带有 CRM 功能,并且可以添加存货、分销、采购以及供应商报告等模块。
xTuple 在通用公共属性许可证([CPAL][42])下使用,并且这个项目欢迎开发者去 fork 它,然后为基于存货的生产制造型企业开发其它的业务软件。它的基于 web 的核心是用 JavaScript 写的,它的 [源代码][43] 可以在 GitHub 上找到。你可以去在 xTuple 的网站上注册一个免费的 [demo][44] 去了解它。
还有许多其它的开源 ERP 可供你选择 — 另外你可以去了解的还有 [Tryton][45],它是用 Python 写的,并且使用的是 PostgreSQL 数据库引擎,或者基于 Java 的 [Axelor][46],它的好处是用户可以使用拖放界面来创建或者修改业务应用。如果还有在这里没有列出的你喜欢的开源 ERP 解决方案,请在下面的评论区共享出来。你也可以去查看我们的 [供应链管理工具][47] 榜单。
这篇文章是 [以前版本][48] 的一个更新版,它是由 Opensource.com 的主席 [Scott Nesbitt][49] 所写。
--------------------------------------------------------------------------------
via: https://opensource.com/tools/enterprise-resource-planning
作者:[Opensource.com][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com
[1]:http://en.wikipedia.org/wiki/Enterprise_resource_planning
[2]:http://www.adempiere.net/welcome
[3]:http://wiki.adempiere.net/License
[4]:http://www.adempiere.net/web/guest/demo
[5]:https://github.com/adempiere/adempiere
[6]:http://ofbiz.apache.org/
[7]:https://ofbiz.apache.org/business-users.html#UsrModules
[8]:http://ofbiz.apache.org/ofbiz-demos.html
[9]:http://ofbiz.apache.org/source-repositories.html
[10]:http://www.apache.org/licenses/LICENSE-2.0
[11]:http://www.dolibarr.org/
[12]:http://wiki.dolibarr.org/index.php/What_Dolibarr_can%27t_do
[13]:http://www.dolibarr.org/onlinedemo
[14]:http://www.dolistore.com/
[15]:https://github.com/Dolibarr/dolibarr
[16]:https://github.com/Dolibarr/dolibarr/blob/develop/COPYING
[17]:https://erpnext.com/
[18]:https://opensource.com/business/14/11/building-open-source-erp
[19]:https://frappe.erpnext.com/request-a-demo
[20]:https://erpnext.com/download
[21]:https://erpnext.com/pricing
[22]:http://metasfresh.com/en/
[23]:https://github.com/metasfresh/metasfresh
[24]:https://github.com/metasfresh/metasfresh/blob/master/LICENSE.md
[25]:https://www.odoo.com/
[26]:https://www.odoo.com/page/start
[27]:https://www.odoo.com/page/download
[28]:https://github.com/odoo
[29]:https://github.com/odoo/odoo/blob/11.0/LICENSE
[30]:http://www.opentaps.org/
[31]:http://shop.opentaps.org/
[32]:http://www.opentaps.org/products/download
[33]:http://www.opentaps.org/products/online-demo
[34]:https://www.gnu.org/licenses/agpl-3.0.html
[35]:http://www.weberp.org/
[36]:http://www.weberp.org/Links.html
[37]:http://www.weberp.org/forum/
[38]:http://www.weberp.org/weberp/
[39]:https://github.com/webERP-team/webERP
[40]:https://github.com/webERP-team/webERP#legal
[41]:https://xtuple.com/
[42]:https://xtuple.com/products/license-options#cpal
[43]:http://xtuple.github.io/
[44]:https://xtuple.com/free-demo
[45]:http://www.tryton.org/
[46]:https://www.axelor.com/
[47]:https://opensource.com/tools/supply-chain-management
[48]:https://opensource.com/article/16/3/top-4-open-source-erp-systems
[49]:https://opensource.com/users/scottnesbitt

View File

@ -0,0 +1,166 @@
你应该知道关于 Ubuntu 18.04 的一些事
======
[Ubuntu 18.04 版本][1] 即将到来。我可以在各种 Facebook 群组和论坛中看到许多来自 Ubuntu 用户的提问。我还在 Facebook 和 Instagram 上组织了 Q&A 会议,以了解 Ubuntu 用户对 Ubuntu 18.04 的想法。
我试图在这里回答关于 Ubuntu 18.04 的常见问题。如果您有任何疑问,我希望这能帮助您解决疑问。如果您仍有问题,请随时在下面的评论区提问。
### Ubuntu 18.04 中有什么值得期待
![Ubuntu 18.04 Frequently Asked Questions][2]
解释一下,这里的一些问答会受到我个人的影响。如果您是一位经验丰富或了解 Ubuntu 的用户,其中一些问题可能对您而言很简单。如果是这样的话,就请忽略这些问题。
#### 我能够在 Ubuntu 18.04 中安装 Unity 吗?
当然能够哦!
Canonical 公司知道有些人喜欢 Unity。这就是为什么它已经在 Universe 软件库LCTT 译注:社区维护的软件库)中提供了 Unity 7。但这是一个社区维护版官方并不直接参与开发。
但我的建议是使用默认的 GNOME除非您真的无法容忍它再去[在 Ubuntu 18.04 上安装 Unity][3]。
#### GNOME 是什么版本?
在这次发行的 Ubuntu 18.04 版本中GNOME 版本号是 3.28。
#### 我能够安装 vanilla GNOME 吗?
当然没问题!
有些 GNOME 用户可能不喜欢 Ubuntu 18.04 中类似 Unity 的定制风格。Ubuntu 的 mainLCTT 译注:官方支持的软件库)和 universe 软件库中提供了相应的安装包,能让您在 [Ubuntu 18.04 中安装 vanilla GNOME][4]。
#### GNOME中的内存泄漏已修复了吗
已经修复了。[GNOME 3.28 中臭名昭著的内存泄漏][5]已经被修复,并且 [Ubuntu 官方已经在测试这个修复程序][6]。
澄清一点,内存泄漏不是由 Ubuntu 系统引起的。它影响了所有使用 GNOME 3.28 的 Linux 发行版。GNOME 3.28.1 发布了一个新的补丁修复内存泄漏问题。
#### Ubuntu 18.04 将会被支持多久?
这是一个长期支持LTS版本与任何 LTS 版本一样,官方会支持五年。这意味着 Ubuntu 18.04 将在 2023 年 4 月之前能获得安全和维护更新。这对于除 Ubuntu Studio 之外的所有基于 Ubuntu 的 Linux 发行版也一样。
#### Ubuntu 18.04 什么时候会发布?
Ubuntu 18.04 LTS 在 4 月 26 日发布。 所有基于 Ubuntu 的 Linux 发行版,如 KubuntuLubuntuXubuntuBudgieMATE 等都会在同一天发布其 18.04 版本。
不过 [Ubuntu Studio 不会有 18.04 的 LTS 版本][7]。
#### 是否能从16.04/17.10升级到 Ubuntu 18.04?我可以从使用 Unity 的 Ubuntu 16.04 升级到使用 GNOME 的 Ubuntu 18.04 吗?
绝对没问题。当 Ubuntu 18.04 LTS 发布后,您可以很容易的升级到最新版。
如果您使用的是 Ubuntu 17.10,请确保在软件和更新->更新中,将“有新版本时通知我”设置为“适用任何新版本”。
![Get notified for a new version in Ubuntu][8]
如果您使用的是 Ubuntu 16.04,请确保在软件和更新->更新中,将“有新版本时通知我”设置为“适用长期支持版本”。
![Ubuntu 18.04 upgrade from Ubuntu 16.04][9]
然后您应该能获得有关新版本更新的系统通知。之后,升级到 Ubuntu 18.04 只需要点击几下鼠标而已。
即使 Ubuntu 16.04 使用的是 Unity但您仍然可以 [升级到使用 GNOME 的 Ubuntu 18.04][10]。
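如果您更习惯使用终端,也可以用下面的命令检查并触发升级(这只是一个补充示例,与上文介绍的图形方式效果相同;`do-release-upgrade` 是 Ubuntu 自带的发行版升级工具,是否提示新版本取决于“软件和更新”中的设置):

```
# 先把当前系统更新到最新状态
sudo apt update && sudo apt upgrade

# 检查是否有新的发行版可用,并开始升级
sudo do-release-upgrade
```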
#### 升级到 Ubuntu 18.04 意味着什么?我会丢失数据吗?
如果您使用的是 Ubuntu 17.10 或 Ubuntu 16.04,系统会提示您可升级到 Ubuntu 18.04。如果从互联网上下载 1.5 GB 的数据对您来说不成问题,则只需点击几下鼠标,即可在 30 分钟内升级到 Ubuntu 18.04。
您不需要通过 U 盘来重装系统。升级过程完成后,您将可以使用新的 Ubuntu 版本。
通常,您的数据和文档等在升级过程中是安全的。但是,对重要文档进行备份始终是一个好的习惯。
#### 我什么时候能升级到 Ubuntu 18.04
如果您使用的是 Ubuntu 17.10 并且正确设置(设置方法在之前提到的问题中),那么在 Ubuntu 18.04 发布的几天内应该会通知您升级到 Ubuntu 18.04。为避免 Ubuntu 服务器在发布日期负载量过大,因此不是每个人都会在同一天收到升级提示。
对于 Ubuntu 16.04 用户,可能需要几周时间才能正式收到 Ubuntu 18.04 升级提示。通常,这将在第一次发布 Ubuntu 18.04.1 之后提示。该版本修复了 18.04 中发现的新 bug。
#### 如果我升级到 Ubuntu 18.04,我可以降级到 17.10/16.04
抱歉,并不行。尽管升级到新版本很容易,但没有降级的选项。如果您想回到 Ubuntu 16.04,只能重新安装。
#### 我能在 32 位系统上使用 Ubuntu 18.04 吗?
可以,但最好不要这样做。
如果您已经在使用 32 位版本的 Ubuntu 16.04 或 17.10,您依旧可以升级到 Ubuntu 18.04。但是您找不到 32 位的 Ubuntu 18.04 ISO 镜像。换句话说,您无法全新安装 32 位版本的 Ubuntu 18.04。
有一个好消息是Ubuntu MATELubuntu 等其他官方版本仍然具有其新版本的 32 位 ISO 镜像。
无论如何,如果您使用一个 32 位系统,那么很可能您的计算机硬件性能过低。在这样的电脑上使用轻量级 [Ubuntu MATE][11] 或 [Lubuntu][12] 系统会更好。
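如果不确定自己的系统是 32 位还是 64 位,可以用下面的命令快速确认(仅作辅助参考):

```
# x86_64 表示 64 位内核i686/i386 表示 32 位内核
uname -m

# amd64 表示 64 位的 Ubuntu 安装i386 表示 32 位安装
dpkg --print-architecture
```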
#### 我可以在哪下载 Ubuntu 18.04
一旦发布了 18.04,您可以从其网站获得 Ubuntu 18.04 的 ISO 镜像。您既可以直接官网下载,也能用种子下载。其他官方版本将在其官方网站上提供下载。
#### 我应该重新安装 Ubuntu 18.04 还是从 16.04/17.10 升级上来?
如果您有重新安装的机会,建议备份您的数据并重新安装 Ubuntu 18.04。
从现有版本升级到 18.04 是一个方便的选择。不过,就我个人而言,它仍然保留了旧版本的依赖包。重新安装还是比较干净。
#### 对于重新安装来说,我应该安装 Ubuntu 16.04 还是 Ubuntu 18.04
如果您要在计算机上安装 Ubuntu请尽量使用 Ubuntu 18.04 而不是 16.04。
它们都是长期支持版本,会被支持很长一段时间。Ubuntu 16.04 会获得维护和安全更新直到 2021 年,而 18.04 则会到 2023 年。
不过,我建议您使用 Ubuntu 18.04。任何 LTS 版本都会在 [一段时间内获得硬件更新支持][13](我认为是两年半的时间内)。之后,它只获得维护更新。如果您有更新的硬件,您将在 18.04 获得更好的支持。
此外,许多应用程序开发人员将很快开始关注 Ubuntu 18.04。新创建的 PPA 可能仅在几个月内支持 18.04。所以使用 18.04 比 16.04 更好。
#### 安装打印机-扫描仪驱动程序比使用 CLI 安装会更容易吗?
在打印机方面,我不是专家,所以我的观点是基于我在这方面有限的知识。大多数新打印机都支持 [IPP协议][14],因此它们应该在 Ubuntu 18.04 中能够获到很好的支持。 然而对较旧的打印机我则无法保证。
#### Ubuntu 18.04 是否对 Realtek 和其他 WiFi 适配器有更好的支持?
抱歉,没有关于这部分的具体信息。
#### Ubuntu 18.04 的系统要求?
对于默认的 GNOME 版本,最好您应该有 [4 GB 的内存以便正常使用][15]。使用过去 8 年中发布的处理器也可以运行。但任何比这性能更差的硬件建议使用 [轻量级 Linux 发行版][16],例如 [Lubuntu][12]。
#### 有关 Ubuntu 18.04 的其它问题?
如果还有其他疑问,请随时在下方评论区留言。如果您认为应将其他信息添加到列表中,请告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-18-04-faq/
作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[wyxplus](https://github.com/wyxplus)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/ubuntu-18-04-release-features/
[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubuntu-18-04-faq-800x450.png
[3]:https://itsfoss.com/use-unity-ubuntu-17-10/
[4]:https://itsfoss.com/vanilla-gnome-ubuntu/
[5]:https://feaneron.com/2018/04/20/the-infamous-gnome-shell-memory-leak/
[6]:https://community.ubuntu.com/t/help-test-memory-leak-fixes-in-18-04-lts/5251
[7]:https://www.omgubuntu.co.uk/2018/04/ubuntu-studio-plans-to-reboot
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/03/upgrade-ubuntu-2.jpeg
[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/ubuntu-18-04-upgrade-settings-800x379.png
[10]:https://itsfoss.com/upgrade-ubuntu-version/
[11]:https://ubuntu-mate.org/
[12]:https://lubuntu.net/
[13]:https://www.ubuntu.com/info/release-end-of-life
[14]:https://www.pwg.org/ipp/everywhere.html
[15]:https://help.ubuntu.com/community/Installation/SystemRequirements
[16]:https://itsfoss.com/lightweight-linux-beginners/

View File

@ -0,0 +1,227 @@
Cron 任务入门指南
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs1-720x340.jpg)
**Cron** 是您可以在任何类 Unix 操作系统中找到的最有用的实用程序之一。它用于安排命令在特定时间执行,这些预定的命令或任务被称为 “Cron 任务”。Cron 通常用于运行计划备份、监视磁盘空间、定期删除不再需要的文件(例如日志文件)、运行系统维护任务等等。在本简要指南中,我们将了解 Linux 中 Cron 任务的基本用法。
### Cron 任务入门
cron 任务的典型格式是:
```
Minute(0-59) Hour(0-23) Day_of_month(1-31) Month(1-12) Day_of_week(0-6) Command_to_execute
```
只需记住 cron 任务的格式或打印下面的插图并将其放在你桌面上即可。
![][2]
在上图中,星号表示特定的时间块。
要显示当前登录用户的 `crontab` 文件的内容:
```
$ crontab -l
```
要编辑当前用户的 cron 任务,请执行以下操作:
```
$ crontab -e
```
如果这是你第一次编辑此文件,系统将要求你选择一个编辑器:
```
no crontab for sk - using an empty one
Select an editor. To change later, run 'select-editor'.
1. /bin/nano <---- easiest
2. /usr/bin/vim.basic
3. /usr/bin/vim.tiny
4. /bin/ed
Choose 1-4 [1]:
```
选择适合你的编辑器。下面是一个 crontab 文件的示例。
![][3]
在这个文件中,你需要添加你的 cron 任务。
要编辑其他用户(例如 ostechnix的 crontab请执行
```
$ crontab -u ostechnix -e
```
让我们看看一些例子。
要**每分钟**执行一次 cron 任务,需使用如下格式:
```
* * * * * <command-to-execute>
```
要每5分钟运行一次cron 任务请在crontab文件中添加以下内容。
```
*/5 * * * * <command-to-execute>
```
要在每 1/4 个小时每15分钟运行一次 cron 任务,请添加以下内容:
```
*/15 * * * * <command-to-execute>
```
要每小时的第30分钟运行一次 cron 任务,请运行:
```
30 * * * * <command-to-execute>
```
您还可以使用逗号定义多个时间间隔。例如,以下 cron 任务 每小时运行三次,分别在第 0, 5 和 10 分钟运行:
```
0,5,10 * * * * <command-to-execute>
```
每半小时运行一次 cron 任务:
```
*/30 * * * * <command-to-execute>
```
每小时运行一次:
```
0 * * * * <command-to-execute>
```
每2小时运行一次
```
0 */2 * * * <command-to-execute>
```
每天运行一次(在 00:00 运行):
```
0 0 * * * <command-to-execute>
```
每天凌晨3点运行
```
0 3 * * * <command-to-execute>
```
每周日运行:
```
0 0 * * SUN <command-to-execute>
```
或者使用:
```
0 0 * * 0 <command-to-execute>
```
它将在每周日的午夜 00:00 运行。
星期一至星期五每天运行一次,亦即每个工作日运行一次:
```
0 0 * * 1-5 <command-to-execute>
```
这项工作将于00:00开始。
每个月运行一次:
```
0 0 1 * * <command-to-execute>
```
于每月第1天的16:15运行
```
15 16 1 * * <command-to-execute>
```
每季度运行一次亦即每隔3个月的第1天运行
```
0 0 1 */3 * <command-to-execute>
```
在特定月份的特定时间运行:
```
5 0 * 4 * <command-to-execute>
```
它将在每年四月份的每天 00:05 运行。
每6个月运行
```
0 0 1 */6 * <command-to-execute>
```
这个定时任务将在每六个月的第一天的 00:00 运行。
每年运行:
```
0 0 1 1 * <command-to-execute>
```
这项 cron 任务将于 1 月份的第一天的 00:00 运行。
我们也可以使用以下字符串来定义任务。
  * `@reboot`:在每次启动时运行一次。
  * `@yearly`:每年运行一次。
  * `@annually`:(和 `@yearly` 一样)。
  * `@monthly`:每月运行一次。
  * `@weekly`:每周运行一次。
  * `@daily`:每天运行一次。
  * `@midnight`:(和 `@daily` 一样)。
  * `@hourly`:每小时运行一次。
例如,要在每次重新启动服务器时运行任务,请将此行添加到您的 crontab 文件中。
```
@reboot <command-to-execute>
```
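下面是一个更完整的示例草图(其中的脚本路径和日志路径均为假设),它表示每天凌晨 2:30 运行一次备份脚本,并把输出追加到日志文件中:

```
# 每天 02:30 执行一次备份脚本,标准输出和错误输出都追加到日志文件
30 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1
```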
要删除当前用户的所有 cron 任务:
```
$ crontab -r
```
还有一个名为 [crontab.guru][4] 的专业网站,专门用来学习 cron 任务。这个网站提供了很多 cron 任务的例子。
有关更多详细信息,请查看手册页。
```
$ man crontab
```
那么,就是这样。到此为止,您应该对 cron 任务以及如何使用它们有了一个基本的了解。后续还会介绍更多的优秀工具,敬请关注!
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者KevinSJ](https://github.com/KevinSJ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-job-format-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/cron-jobs-1.png
[4]:https://crontab.guru/

View File

@ -0,0 +1,55 @@
如何在 Linux 中找到你的 IP 地址
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/satellite_radio_location.jpg?itok=KJUKSB6x)
互联网协议IP无需介绍我们每天都在使用它。即使你不直接使用它当你在浏览器中输入 website-name.com 时,它也会查找该 URL 对应的 IP 地址,然后加载该网站。
我们将 IP 地址分为两类:私有地址和公有地址。私有 IP 地址是由你的无线路由器(以及公司内网)分配给设备的地址,范围是 10.x.x.x、172.16.x.x 到 172.31.x.x以及 192.168.x.x其中 x 为 0 到 255。公有 IP 地址,顾名思义,是“公开”的,你可以在世界上任何地方访问它。每个网站都有一个唯一的 IP 地址,任何人在任何地点都可以访问,这类地址就可被视为公有 IP 地址。
此外,还有两种类型的 IP 地址IPv4 和 IPv6。
IPv4 地址的格式为 x.x.x.x其中 x 为 0 到 255。总共有 2^32大约 40 亿)个可能的 IPv4 地址。
IPv6 地址使用更复杂的十六进制表示法。它总共有 128 位,这意味着有 2^128 个可能的 IPv6 地址340 后面跟着 36 个零IPv6 的引入是为了解决可预见的 IPv4 地址耗尽问题。
作为网络工程师,我建议不要与任何人分享你机器的公有 IP 地址。你的 WiFi 路由器有一个公有 IP也就是 WAN广域网IP 地址,连接到该 WiFi 的所有设备对外使用的都是同一个公有地址,而这些设备各自拥有上面所说的私有 IP 地址。例如,我的笔记本电脑的 IP 地址是 192.168.0.5,而我的手机是 192.168.0.8。这些是私有 IP 地址,但两者的公有 IP 地址是相同的。
可以使用以下命令来查找你计算机的公有 IP 地址:
1. `curl ifconfig.me`
2. `curl -4/-6 icanhazip.com`
3. `curl ipinfo.io/ip`
4. `curl api.ipify.org`
5. `curl checkip.dyndns.org`
6. `dig +short myip.opendns.com @resolver1.opendns.com`
7. `host myip.opendns.com resolver1.opendns.com`
8. `curl ident.me`
9. `curl bot.whatismyipaddress.com`
10. `curl ipecho.net/plain`
以下命令将为你提供接口的私有 IP 地址:
1. `ifconfig -a`
2. `ip addr (ip a)`
3. `hostname -I | awk '{print $1}'`
4. `ip route get 1.2.3.4 | awk '{print $7}'`
5. Fedora在 WiFi 设置中,点击你所连接的 WiFi 名称旁边的设置图标,即可同时看到 IPv4 和 IPv6 地址。
6. `nmcli -p device show`
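下面是一个把两者结合起来的小脚本示例(仅作演示;需要系统中有 `curl`、`hostname` 和 `awk`,公有 IP 借助了上文列出的第三方服务 ifconfig.me

```
#!/bin/bash
# 打印本机的第一个私有 IP 和公有 IP
private_ip=$(hostname -I | awk '{print $1}')
public_ip=$(curl -s ifconfig.me)

echo "私有 IP: ${private_ip}"
echo "公有 IP: ${public_ip}"
```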
_注意:其中一些工具可能需要根据你所使用的 Linux 发行版另行安装。另外,上面提到的一些命令需要借助第三方网站来获取 IP。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/how-find-ip-address-linux
作者:[Archit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/architmodi