Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-02-03 23:40:14 +08:00)

Commit b901817ce0: Merge branch 'master' of https://github.com/LCTT/TranslateProject
merge from LCTT
@ -1,37 +1,39 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10850-1.html)
|
||||
[#]: subject: (Build a game framework with Python using the module Pygame)
|
||||
[#]: via: (https://opensource.com/article/17/12/game-framework-python)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
使用 Python 和 Pygame 模块构建一个游戏框架
|
||||
======
|
||||
这系列的第一篇通过创建一个简单的骰子游戏来探究 Python。现在是来从零制作你自己的游戏的时间。
|
||||
|
||||
> 这系列的第一篇通过创建一个简单的骰子游戏来探究 Python。现在是来从零制作你自己的游戏的时间。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python2-header.png?itok=tEvOVo4A)
|
||||
|
||||
在我的 [这系列的第一篇文章][1] 中, 我已经讲解如何使用 Python 创建一个简单的,基于文本的骰子游戏。这次,我将展示如何使用 Python 和 Pygame 模块来创建一个图形化游戏。它将占用一些文章来得到一个确实完成一些东西的游戏,但是在这系列的结尾,你将有一个更好的理解,如何查找和学习新的 Python 模块和如何从其基础上构建一个应用程序。
|
||||
在我的[这系列的第一篇文章][1]中,我已经讲解了如何使用 Python 创建一个简单的、基于文本的骰子游戏。这次,我将展示如何使用 Python 模块 Pygame 来创建一个图形化游戏。它将需要几篇文章才能做出一个真正有些内容的游戏,但是到这系列的结尾,你将更好地理解如何查找和学习新的 Python 模块,以及如何在其基础上构建一个应用程序。
|
||||
|
||||
在开始前,你必须安装 [Pygame][2]。
|
||||
|
||||
### 安装新的 Python 模块
|
||||
|
||||
这里有一些方法来安装 Python 模块,但是最通用的两个是:
|
||||
有几种方法来安装 Python 模块,但是最通用的两个是:
|
||||
|
||||
* 从你的发行版的软件存储库
|
||||
* 使用 Python 的软件包管理器,pip
|
||||
* 使用 Python 的软件包管理器 `pip`
|
||||
|
||||
两个方法都工作很好,并且每一个都有它自己的一套优势。如果你是在 Linux 或 BSD 上开发,促使你的发行版的软件存储库确保自动及时更新。
|
||||
两个方法都工作得很好,并且每一个都有它自己的一套优势。如果你是在 Linux 或 BSD 上开发,可以利用你的发行版的软件存储库来自动而及时地更新。
|
||||
|
||||
然而,使用 Python 的内置软件包管理器给予你控制更新模块时间的能力。而且,它不是明确指定操作系统的,意味着,即使当你不是在你常用的开发机器上时,你也可以使用它。pip 的其它的优势是允许模块局部安装,如果你没有一台正在使用的计算机的权限,它是有用的。
|
||||
然而,使用 Python 的内置软件包管理器可以给予你控制更新模块时间的能力。而且,它不是特定于操作系统的,这意味着,即使当你不是在你常用的开发机器上时,你也可以使用它。`pip` 的其它的优势是允许本地安装模块,如果你没有正在使用的计算机的管理权限,这是有用的。
|
||||
|
||||
### 使用 pip
|
||||
|
||||
如果 Python 和 Python3 都安装在你的系统上,你想使用的命令很可能是 `pip3`,它区分来自Python 2.x 的 `pip` 的命令。如果你不确定,先尝试 `pip3`。
|
||||
如果 Python 和 Python3 都安装在你的系统上,你想使用的命令很可能是 `pip3`,以区别于 Python 2.x 的 `pip` 命令。如果你不确定,先尝试 `pip3`。
|
||||
|
||||
`pip` 命令有些像大多数 Linux 软件包管理器的工作。你可以使用 `search` 搜索 Pythin 模块,然后使用 `install` 安装它们。如果你没有你正在使用的计算机的权限来安装软件,你可以使用 `--user` 选项来仅仅安装模块到你的 home 目录。
|
||||
`pip` 命令有些像大多数 Linux 软件包管理器一样工作。你可以使用 `search` 搜索 Python 模块,然后使用 `install` 安装它们。如果你没有你正在使用的计算机的管理权限来安装软件,你可以使用 `--user` 选项来仅仅安装模块到你的家目录。
|
||||
|
||||
```
|
||||
$ pip3 search pygame
|
||||
@ -44,11 +46,11 @@ pygame_cffi (0.2.1) - A cffi-based SDL wrapper that copies the
|
||||
$ pip3 install Pygame --user
|
||||
```
|
||||
|
||||
Pygame 是一个 Python 模块,这意味着它仅仅是一套可以被使用在你的 Python 程序中库。换句话说,它不是一个你启动的程序,像 [IDLE][3] 或 [Ninja-IDE][4] 一样。
|
||||
Pygame 是一个 Python 模块,这意味着它仅仅是一套可以使用在你的 Python 程序中的库。换句话说,它不是一个像 [IDLE][3] 或 [Ninja-IDE][4] 一样可以让你启动的程序。
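安装完成后,可以先用下面这个小片段确认 Pygame 能被正常导入——这只是一个简单的示意,`pygame.version.ver` 是 Pygame 提供的版本字符串:

```
import pygame   # 如果这一行报错,说明 Pygame 还没有装好

print(pygame.version.ver)   # 打印当前安装的 Pygame 版本号
```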
|
||||
|
||||
### Pygame 新手入门
|
||||
|
||||
一个电子游戏需要一个故事背景;一个发生的地点。在 Python 中,有两种不同的方法来创建你的故事背景:
|
||||
一个电子游戏需要一个背景设定:故事发生的地点。在 Python 中,有两种不同的方法来创建你的故事背景:
|
||||
|
||||
* 设置一种背景颜色
|
||||
* 设置一张背景图片
|
||||
@ -57,15 +59,15 @@ Pygame 是一个 Python 模块,这意味着它仅仅是一套可以被使用
|
||||
|
||||
### 设置你的 Pygame 脚本
|
||||
|
||||
为了开始一个新的 Pygame 脚本,在计算机上创建一个文件夹。游戏的全部文件被放在这个目录中。在工程文件夹内部保持所需要的所有的文件来运行游戏是极其重要的。
|
||||
要开始一个新的 Pygame 工程,先在计算机上创建一个文件夹。游戏的全部文件被放在这个目录中。在你的工程文件夹内部保持所需要的所有的文件来运行游戏是极其重要的。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/project.jpg)
|
||||
|
||||
一个 Python 脚本以文件类型,你的姓名,和你想使用的协议开始。使用一个开放源码协议,以便你的朋友可以改善你的游戏并与你一起分享他们的更改:
|
||||
一个 Python 脚本以文件类型、你的姓名,和你想使用的许可证开始。使用一个开放源码许可证,以便你的朋友可以改善你的游戏并与你一起分享他们的更改:
|
||||
|
||||
```
|
||||
#!/usr/bin/env python3
|
||||
# Seth Kenlon 编写
|
||||
# by Seth Kenlon
|
||||
|
||||
## GPLv3
|
||||
# This program is free software: you can redistribute it and/or
|
||||
@ -75,14 +77,14 @@ Pygame 是一个 Python 模块,这意味着它仅仅是一套可以被使用
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful, but
|
||||
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
# General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
```
|
||||
|
||||
然后,你告诉 Python 你想使用的模块。一些模块是常见的 Python 库,当然,你想包括一个你刚刚安装的,Pygame 。
|
||||
然后,你告诉 Python 你想使用的模块。一些模块是常见的 Python 库,当然,你想包括一个你刚刚安装的 Pygame 模块。
|
||||
|
||||
```
|
||||
import pygame # 加载 pygame 关键字
|
||||
@ -90,7 +92,7 @@ import sys # 让 python 使用你的文件系统
|
||||
import os # 帮助 python 识别你的操作系统
|
||||
```
|
||||
|
||||
由于你将用这个脚本文件工作很多,在文件中制作成段落是有帮助的,以便你知道在哪里放原料。使用语句块注释来做这些,这些注释仅在看你的源文件代码时是可见的。在你的代码中创建三个语句块。
|
||||
由于你将用这个脚本文件做很多工作,在文件中分成段落是有帮助的,以便你知道在哪里放代码。你可以使用块注释来做这些,这些注释仅在看你的源文件代码时是可见的。在你的代码中创建三个块。
|
||||
|
||||
```
|
||||
'''
|
||||
@ -114,7 +116,7 @@ Main Loop
|
||||
|
||||
接下来,为你的游戏设置窗口大小。注意,不是每一个人都有大计算机屏幕,所以,最好使用一个适合大多数人的计算机的屏幕大小。
|
||||
|
||||
这里有一个方法来切换全屏模式,很多现代电子游戏做的方法,但是,由于你刚刚开始,保存它简单和仅设置一个大小。
|
||||
这里有一个方法来切换全屏模式,很多现代电子游戏都会这样做,但是,由于你刚刚开始,简单起见仅设置一个大小即可。
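如果你以后想试试全屏,大致可以像下面这样给 `set_mode()` 额外传入 `pygame.FULLSCREEN` 标志——这只是一个示意,本文接下来仍然使用固定大小的窗口:

```
import pygame

pygame.init()
# 窗口模式只传入大小;追加 pygame.FULLSCREEN 标志即可切换到全屏
world = pygame.display.set_mode([960, 720], pygame.FULLSCREEN)
```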
|
||||
|
||||
```
|
||||
'''
|
||||
@ -124,7 +126,7 @@ worldx = 960
|
||||
worldy = 720
|
||||
```
|
||||
|
||||
在一个脚本中使用 Pygame 引擎前,你需要一些基本的设置。你必需设置帧频,启动它的内部时钟,然后开始 (`init`) Pygame 。
|
||||
在脚本中使用 Pygame 引擎前,你需要一些基本的设置。你必须设置帧频,启动它的内部时钟,然后初始化(`init`)Pygame。
|
||||
|
||||
```
|
||||
fps = 40 # 帧频
|
||||
@ -137,17 +139,15 @@ pygame.init()
|
||||
|
||||
### 设置背景
|
||||
|
||||
在你继续前,打开一个图形应用程序,并为你的游戏世界创建一个背景。在你的工程目录中的 `images` 文件夹内部保存它为 `stage.png` 。
|
||||
在你继续前,打开一个图形应用程序,为你的游戏世界创建一个背景。在你的工程目录中的 `images` 文件夹内部保存它为 `stage.png` 。
|
||||
|
||||
这里有一些你可以使用的自由图形应用程序。
|
||||
|
||||
* [Krita][5] 是一个专业级绘图原料模拟器,它可以被用于创建漂亮的图片。如果你对电子游戏创建艺术作品非常感兴趣,你甚至可以购买一系列的[游戏艺术作品教程][6].
|
||||
* [Pinta][7] 是一个基本的,易于学习的绘图应用程序。
|
||||
* [Inkscape][8] 是一个矢量图形应用程序。使用它来绘制形状,线,样条曲线,和 Bézier 曲线。
|
||||
* [Krita][5] 是一个专业级绘图素材模拟器,它可以被用于创建漂亮的图片。如果你对创建电子游戏艺术作品非常感兴趣,你甚至可以购买一系列的[游戏艺术作品教程][6]。
|
||||
* [Pinta][7] 是一个基本的,易于学习的绘图应用程序。
|
||||
* [Inkscape][8] 是一个矢量图形应用程序。使用它来绘制形状、线、样条曲线和贝塞尔曲线。
|
||||
|
||||
|
||||
|
||||
你的图像不必很复杂,你可以以后回去更改它。一旦你有它,在你文件的 setup 部分添加这些代码:
|
||||
你的图像不必很复杂,你可以以后回去更改它。一旦有了它,在你文件的 Setup 部分添加这些代码:
|
||||
|
||||
```
|
||||
world = pygame.display.set_mode([worldx,worldy])
|
||||
@ -155,13 +155,13 @@ backdrop = pygame.image.load(os.path.join('images','stage.png').convert())
|
||||
backdropbox = world.get_rect()
|
||||
```
|
||||
|
||||
如果你仅仅用一种颜色来填充你的游戏的背景,你需要做的全部是:
|
||||
如果你仅仅用一种颜色来填充你的游戏的背景,你需要做的就是:
|
||||
|
||||
```
|
||||
world = pygame.display.set_mode([worldx,worldy])
|
||||
```
|
||||
|
||||
你也必需定义一个来使用的颜色。在你的 setup 部分,使用红,绿,蓝 (RGB) 的值来创建一些颜色的定义。
|
||||
你也必须定义颜色以使用。在你的 Setup 部分,使用红、绿、蓝 (RGB) 的值来创建一些颜色的定义。
|
||||
|
||||
```
|
||||
'''
|
||||
@ -173,13 +173,13 @@ BLACK = (23,23,23 )
|
||||
WHITE = (254,254,254)
|
||||
```
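把这几个颜色定义放在一起看,大致是下面这样——`BLUE` 的数值只是示例,可以按喜好调整,`BLACK` 和 `WHITE` 与上面一致:

```
BLUE  = (25, 25, 200)    # 示例数值,可按喜好调整
BLACK = (23, 23, 23)
WHITE = (254, 254, 254)
```

之后就可以用类似 `world.fill(BLUE)` 的语句来填充背景。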
|
||||
|
||||
在这点上,你能理论上启动你的游戏。问题是,它可能仅持续一毫秒。
|
||||
至此,你理论上可以启动你的游戏了。问题是,它可能仅持续了一毫秒。
|
||||
|
||||
为证明这一点,保存你的文件为 `your-name_game.py` (用你真实的名称替换 `your-name` )。然后启动你的游戏。
|
||||
为证明这一点,保存你的文件为 `your-name_game.py`(用你真实的名称替换 `your-name`)。然后启动你的游戏。
|
||||
|
||||
如果你正在使用 IDLE ,通过选择来自 Run 菜单的 `Run Module` 来运行你的游戏。
|
||||
如果你正在使用 IDLE,通过选择来自 “Run” 菜单的 “Run Module” 来运行你的游戏。
|
||||
|
||||
如果你正在使用 Ninja ,在左侧按钮条中单击 `Run file` 按钮。
|
||||
如果你正在使用 Ninja,在左侧按钮条中单击 “Run file” 按钮。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/ninja_run_0.png)
|
||||
|
||||
@ -189,27 +189,27 @@ WHITE = (254,254,254)
|
||||
$ python3 ./your-name_game.py
|
||||
```
|
||||
|
||||
如果你正在使用 Windows ,使用这命令:
|
||||
如果你正在使用 Windows,使用这命令:
|
||||
|
||||
```
|
||||
py.exe your-name_game.py
|
||||
```
|
||||
|
||||
你启动它,不过不要期望很多,因为你的游戏现在仅仅持续几毫秒。你可以在下一部分中修复它。
|
||||
启动它,不过不要期望很多,因为你的游戏现在仅仅持续几毫秒。你可以在下一部分中修复它。
|
||||
|
||||
### 循环
|
||||
|
||||
除非另有说明,一个 Python 脚本运行一次并仅一次。近来计算机的运行速度是非常快的,所以你的 Python 脚本运行时间少于1秒钟。
|
||||
除非另有说明,一个 Python 脚本运行一次并仅一次。近来计算机的运行速度是非常快的,所以你的 Python 脚本运行时间会少于 1 秒钟。
|
||||
|
||||
为了强制你的游戏保持足够长时间的打开和活跃状态,好让人看到它(更不用说玩它),需要使用一个 `while` 循环。为了使你的游戏保持打开,你可以把一个变量设置为某个值,然后告诉 `while` 循环只要该变量保持不变就一直循环下去。
|
||||
|
||||
这经常被称为一个"主循环",你可以使用术语 `main` 作为你的变量。在你的 setup 部分的任意位置添加这些代码:
|
||||
这经常被称为一个“主循环”,你可以使用术语 `main` 作为你的变量。在你的 Setup 部分的任意位置添加代码:
|
||||
|
||||
```
|
||||
main = True
|
||||
```
|
||||
|
||||
在主循环期间,使用 Pygame 关键字来检查是否在键盘上的按键已经被按下或释放。添加这些代码到你的主循环部分:
|
||||
在主循环期间,使用 Pygame 关键字来检查键盘上的按键是否已经被按下或释放。添加这些代码到你的主循环部分:
|
||||
|
||||
```
|
||||
'''
|
||||
@ -228,7 +228,7 @@ while main == True:
|
||||
main = False
|
||||
```
|
||||
|
||||
也在你的循环中,刷新你世界的背景。
|
||||
也是在你的循环中,刷新你世界的背景。
|
||||
|
||||
如果你使用一个图片作为背景:
|
||||
|
||||
@ -242,33 +242,33 @@ world.blit(backdrop, backdropbox)
|
||||
world.fill(BLUE)
|
||||
```
|
||||
|
||||
最后,告诉 Pygame 来刷新在屏幕上的所有内容并推进游戏的内部时钟。
|
||||
最后,告诉 Pygame 来重新刷新屏幕上的所有内容,并推进游戏的内部时钟。
|
||||
|
||||
```
|
||||
pygame.display.flip()
|
||||
clock.tick(fps)
|
||||
```
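把事件检查、背景刷新和时钟推进放到一起,主循环的整体结构大致如下——这只是把上文各个片段拼在一起的示意,细节以你自己的文件为准:

```
main = True

while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:          # 点击窗口的关闭按钮
            pygame.quit()
            main = False
            sys.exit()
        if event.type == pygame.KEYDOWN:
            if event.key == ord('q'):          # 按下 q 键退出
                pygame.quit()
                main = False
                sys.exit()

    world.blit(backdrop, backdropbox)          # 刷新背景(或改用 world.fill(BLUE))
    pygame.display.flip()                      # 刷新屏幕上的所有内容
    clock.tick(fps)                            # 推进游戏的内部时钟
```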
|
||||
|
||||
保存你的文件,再次运行它来查看曾经创建的最无趣的游戏。
|
||||
保存你的文件,再次运行它来查看你曾经创建的最无趣的游戏。
|
||||
|
||||
退出游戏,在你的键盘上按 `q` 键。
|
||||
|
||||
在这系列的 [下一篇文章][9] 中,我将向你演示,如何加强你当前空的游戏世界,所以,继续学习并创建一些将要使用的图形!
|
||||
在这系列的 [下一篇文章][9] 中,我将向你演示,如何加强你当前空空如也的游戏世界,所以,继续学习并创建一些将要使用的图形!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
通过: https://opensource.com/article/17/12/game-framework-python
|
||||
via: https://opensource.com/article/17/12/game-framework-python
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/17/10/python-101
|
||||
[1]: https://linux.cn/article-9071-1.html
|
||||
[2]: http://www.pygame.org/wiki/about
|
||||
[3]: https://en.wikipedia.org/wiki/IDLE
|
||||
[4]: http://ninja-ide.org/
|
@ -1,20 +1,21 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (cycoe)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10858-1.html)
|
||||
[#]: subject: (How to add a player to your Python game)
|
||||
[#]: via: (https://opensource.com/article/17/12/game-python-add-a-player)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
如何在你的 Python 游戏中添加一个玩家
|
||||
======
|
||||
用 Python 从头开始构建游戏的系列文章的第三部分。
|
||||
> 这是用 Python 从头开始构建游戏的系列文章的第三部分。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python3-game.png?itok=jG9UdwC3)
|
||||
|
||||
在 [这个系列的第一篇文章][1] 中,我解释了如何使用 Python 创建一个简单的基于文本的骰子游戏。在第二部分中,我向你们展示了如何从头开始构建游戏,即从 [创建游戏的环境][2] 开始。但是每个游戏都需要一名玩家,并且每个玩家都需要一个可操控的角色,这也就是我们接下来要在这个系列的第三部分中需要做的。
|
||||
|
||||
在 Pygame 中,玩家操控的图标或者化身被称作妖精。如果你现在还没有任何图像可用于玩家妖精,你可以使用 [Krita][3] 或 [Inkscape][4] 来自己创建一些图像。如果你对自己的艺术细胞缺乏自信,你也可以在 [OpenClipArt.org][5] 或 [OpenGameArt.org][6] 搜索一些现成的图像。如果你还未按照上一篇文章所说的单独创建一个 images 文件夹,那么你需要在你的 Python 项目目录中创建它。将你想要在游戏中使用的图片都放 images 文件夹中。
|
||||
在 Pygame 中,玩家操控的图标或者化身被称作<ruby>妖精<rt>sprite</rt></ruby>。如果你现在还没有任何可用于玩家妖精的图像,你可以使用 [Krita][3] 或 [Inkscape][4] 来自己创建一些图像。如果你对自己的艺术细胞缺乏自信,你也可以在 [OpenClipArt.org][5] 或 [OpenGameArt.org][6] 搜索一些现成的图像。如果你还未按照上一篇文章所说的单独创建一个 `images` 文件夹,那么你需要在你的 Python 项目目录中创建它。将你想要在游戏中使用的图片都放 `images` 文件夹中。
|
||||
|
||||
为了使你的游戏真正的刺激,你应该为你的英雄使用一张动态的妖精图片。这意味着你需要绘制更多的素材,并且它们要大不相同。最常见的动画就是走路循环,通过一系列的图像让你的妖精看起来像是在走路。走路循环最快捷粗糙的版本需要四张图像。
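作为参考,一个最简化的 Player 类大致可以这样写——这只是一个示意,帧图像的文件名(`hero1.png` 到 `hero4.png`)只是假设的例子,以你自己的素材为准:

```
class Player(pygame.sprite.Sprite):
    '''
    生成一个玩家妖精
    '''
    def __init__(self):
        pygame.sprite.Sprite.__init__(self)
        self.images = []
        for i in range(1, 5):   # 走路循环的四帧,文件名仅为示例
            img = pygame.image.load(os.path.join('images', 'hero' + str(i) + '.png')).convert()
            self.images.append(img)
        self.image = self.images[0]
        self.rect  = self.image.get_rect()
```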
|
||||
|
||||
@ -73,7 +74,7 @@ class Player(pygame.sprite.Sprite):
|
||||
|
||||
### 将玩家带入游戏世界
|
||||
|
||||
现在一个 Player 类已经创建好了,你需要使用它在你的游戏世界中生成一个玩家妖精。如果你不调用 Player 类,那它永远不会起作用,(游戏世界中)也就不会有玩家。你可以通过立马运行你的游戏来验证一下。游戏会像上一篇文章末尾看到的那样运行,并得到明确的结果:一个空荡荡的游戏世界。
|
||||
现在已经创建好了一个 Player 类,你需要使用它在你的游戏世界中生成一个玩家妖精。如果你不调用 Player 类,那它永远不会起作用,(游戏世界中)也就不会有玩家。你可以通过立马运行你的游戏来验证一下。游戏会像上一篇文章末尾看到的那样运行,并得到明确的结果:一个空荡荡的游戏世界。
|
||||
|
||||
为了将一个玩家妖精带到你的游戏世界,你必须通过调用 Player 类来生成一个妖精,并将它加入到 Pygame 的妖精组中。在如下的代码示例中,前三行是已经存在的代码,你需要在其后添加代码:
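添加的部分大致是下面这几行——初始坐标 0、0 只是示例值,这里假设 Player 类里像通常的 Pygame 妖精那样定义了 `self.rect`:

```
player = Player()                    # 生成玩家
player.rect.x = 0                    # 初始横坐标,示例值
player.rect.y = 0                    # 初始纵坐标,示例值
player_list = pygame.sprite.Group()  # 创建妖精组
player_list.add(player)              # 把玩家加入妖精组
```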
|
||||
|
||||
@ -106,11 +107,11 @@ player_list.add(player)
|
||||
|
||||
### 设置 alpha 通道
|
||||
|
||||
根据你如何创建你的玩家妖精,在它周围可能会有一个色块。你所看到的是 alpha 通道应该占据的空间。它本来是不可见的“颜色”,但 Python 现在还不知道要使它不可见。那么你所看到的,是围绕在妖精周围的边界区(或现代游戏术语中的“命中区”)内的空间。
|
||||
根据你如何创建你的玩家妖精,在它周围可能会有一个色块。你所看到的是 alpha 通道应该占据的空间。它本来是不可见的“颜色”,但 Python 现在还不知道要使它不可见。那么你所看到的,是围绕在妖精周围的边界区(或现代游戏术语中的“<ruby>命中区<rt>hit box</rt></ruby>”)内的空间。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/greenscreen.jpg)
|
||||
|
||||
你可以通过设置一个 alpha 通道和 RGB 值来告诉 Python 使哪种颜色不可见。如果你不知道你使用 alpha 通道的图像的 RGB 值,你可以使用 Krita 或 Inkscape 打开它,并使用一种独特的颜色,比如 #00ff00(差不多是“绿屏绿”)来填充图像周围的空白区域。记下颜色对应的十六进制值(此处为 #00ff00,绿屏绿)并将其作为 alpha 通道用于你的 Python 脚本。
|
||||
你可以通过设置一个 alpha 通道和 RGB 值来告诉 Python 使哪种颜色不可见。如果你不知道你使用 alpha 通道的图像的 RGB 值,你可以使用 Krita 或 Inkscape 打开它,并使用一种独特的颜色,比如 `#00ff00`(差不多是“绿屏绿”)来填充图像周围的空白区域。记下颜色对应的十六进制值(此处为 `#00ff00`,绿屏绿)并将其作为 alpha 通道用于你的 Python 脚本。
|
||||
|
||||
使用 alpha 通道需要在你的妖精生成相关代码中添加如下两行。类似第一行的代码已经存在于你的脚本中,你只需要添加另外两行:
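这几行代码大致是下面这个样子——图像文件名只是示例,`set_colorkey()` 会把 `ALPHA` 所指定的颜色渲染为透明:

```
img = pygame.image.load(os.path.join('images', 'hero1.png')).convert()
img.convert_alpha()       # 保留图像自带的 alpha 信息
img.set_colorkey(ALPHA)   # 将 ALPHA 指定的颜色设为不可见
```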
|
||||
|
||||
@ -126,11 +127,11 @@ player_list.add(player)
|
||||
ALPHA = (0, 255, 0)
|
||||
```
|
||||
|
||||
在以上示例代码中,**0,255,0** 被我们使用,它在 RGB 中所代表的值与 #00ff00 在十六进制中所代表的值相同。你可以通过一个优秀的图像应用程序,如 [GIMP][7]、Krita 或 Inkscape,来获取所有这些颜色值。或者,你可以使用一个优秀的系统级颜色选择器,如 [KColorChooser][8],来检测颜色。
|
||||
在以上示例代码中,`0,255,0` 被我们使用,它在 RGB 中所代表的值与 `#00ff00` 在十六进制中所代表的值相同。你可以通过一个优秀的图像应用程序,如 [GIMP][7]、Krita 或 Inkscape,来获取所有这些颜色值。或者,你可以使用一个优秀的系统级颜色选择器,如 [KColorChooser][8],来检测颜色。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/kcolor.png)
|
||||
|
||||
如果你的图像应用程序将你的妖精背景渲染成了其他的值,你可以按需调整 ``ALPHA`` 变量的值。不论你将 alpha 设为多少,最后它都将“不可见”。RGB 颜色值是非常严格的,因此如果你需要将 alpha 设为 000,但你又想将 000 用于你图像中的黑线,你只需要将图像中线的颜色设为 111。这样一来,(图像中的黑线)就足够接近黑色,但除了电脑以外没有人能看出区别。
|
||||
如果你的图像应用程序将你的妖精背景渲染成了其他的值,你可以按需调整 `ALPHA` 变量的值。不论你将 alpha 设为多少,最后它都将“不可见”。RGB 颜色值是非常严格的,因此如果你需要将 alpha 设为 000,但你又想将 000 用于你图像中的黑线,你只需要将图像中线的颜色设为 111。这样一来,(图像中的黑线)就足够接近黑色,但除了电脑以外没有人能看出区别。
|
||||
|
||||
运行你的游戏查看结果。
|
||||
|
||||
@ -145,14 +146,14 @@ via: https://opensource.com/article/17/12/game-python-add-a-player
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[cycoe](https://github.com/cycoe)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/17/10/python-101
|
||||
[2]: https://opensource.com/article/17/12/program-game-python-part-2-creating-game-world
|
||||
[1]: https://linux.cn/article-9071-1.html
|
||||
[2]: https://linux.cn/article-10850-1.html
|
||||
[3]: http://krita.org
|
||||
[4]: http://inkscape.org
|
||||
[5]: http://openclipart.org
|
@ -0,0 +1,596 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10848-1.html)
|
||||
[#]: subject: (TLP – An Advanced Power Management Tool That Improve Battery Life On Linux Laptop)
|
||||
[#]: via: (https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
TLP:一个可以延长 Linux 笔记本电池寿命的高级电源管理工具
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201905/13/094413iu77i8w75t80tq7h.jpg)
|
||||
|
||||
笔记本电池是针对 Windows 操作系统进行了高度优化的,当我在笔记本电脑中使用 Windows 操作系统时,我已经意识到这一点,但对于 Linux 来说却不一样。
|
||||
|
||||
多年来,Linux 在电池优化方面取得了很大进步,但我们仍然需要做一些必要的事情来改善 Linux 中笔记本电脑的电池寿命。
|
||||
|
||||
当我考虑延长电池寿命时,我没有多少选择,但我觉得 TLP 对我来说是一个更好的解决方案,所以我会继续使用它。
|
||||
|
||||
在本教程中,我们将详细讨论 TLP 以延长电池寿命。
|
||||
|
||||
我们之前在我们的网站上写过三篇关于 Linux [笔记本电池节电工具][1] 的文章:[PowerTOP][2] 和 [电池充电状态][3]。
|
||||
|
||||
### TLP
|
||||
|
||||
[TLP][4] 是一款自由开源的高级电源管理工具,可在不进行任何配置更改的情况下延长电池寿命。
|
||||
|
||||
由于它的默认配置已针对电池寿命进行了优化,因此你可能只需要安装,然后就忘记它吧。
|
||||
|
||||
此外,它可以高度定制化,以满足你的特定要求。TLP 是一个具有自动后台任务的纯命令行工具,它不包含 GUI。
|
||||
|
||||
TLP 适用于各种品牌的笔记本电脑。设置电池充电阈值仅适用于 IBM/Lenovo ThinkPad。
|
||||
|
||||
所有 TLP 设置都存储在 `/etc/default/tlp` 中。其默认配置提供了开箱即用的优化的节能设置。
|
||||
|
||||
以下 TLP 设置可用于自定义,如果需要,你可以相应地进行必要的更改。
|
||||
|
||||
### TLP 功能
|
||||
|
||||
* 内核笔记本电脑模式和脏缓冲区超时
|
||||
* 处理器频率调整,包括 “turbo boost”/“turbo core”
|
||||
* 限制最大/最小的 P 状态以控制 CPU 的功耗
|
||||
* HWP 能源性能提示
|
||||
* 用于多核/超线程的功率感知进程调度程序
|
||||
* 处理器性能与节能策略(`x86_energy_perf_policy`)
|
||||
* 硬盘高级电源管理级别(APM)和降速超时(按磁盘)
|
||||
* AHCI 链路电源管理(ALPM)与设备黑名单
|
||||
* PCIe 活动状态电源管理(PCIe ASPM)
|
||||
* PCI(e) 总线设备的运行时电源管理
|
||||
* Radeon 图形电源管理(KMS 和 DPM)
|
||||
* Wifi 省电模式
|
||||
* 关闭驱动器托架中的光盘驱动器
|
||||
* 音频省电模式
|
||||
* I/O 调度程序(按磁盘)
|
||||
* USB 自动暂停,支持设备黑名单/白名单(输入设备自动排除)
|
||||
* 在系统启动和关闭时启用或禁用集成的 wifi、蓝牙或 wwan 设备
|
||||
* 在系统启动时恢复无线电设备状态(从之前的关机时的状态)
|
||||
* 无线电设备向导:在网络连接/断开和停靠/取消停靠时切换无线电
|
||||
* 禁用 LAN 唤醒
|
||||
* 挂起/休眠后恢复集成的 WWAN 和蓝牙状态
|
||||
* 英特尔处理器的动态电源降低 —— 需要内核和 PHC-Patch 支持
|
||||
* 电池充电阈值 —— 仅限 ThinkPad
|
||||
* 重新校准电池 —— 仅限 ThinkPad
|
||||
|
||||
### 如何在 Linux 上安装 TLP
|
||||
|
||||
TLP 包在大多数发行版官方存储库中都可用,因此,使用发行版的 [包管理器][5] 来安装它。
|
||||
|
||||
对于 Fedora 系统,使用 [DNF 命令][6] 安装 TLP。
|
||||
|
||||
```
|
||||
$ sudo dnf install tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPad 需要一些附加软件包。
|
||||
|
||||
```
|
||||
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
|
||||
$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release.fc$(rpm -E %fedora).noarch.rpm
|
||||
$ sudo dnf install akmod-tp_smapi akmod-acpi_call kernel-devel
|
||||
```
|
||||
|
||||
安装 smartmontools 以在 tlp-stat 中显示 S.M.A.R.T. 数据。
|
||||
|
||||
```
|
||||
$ sudo dnf install smartmontools
|
||||
```
|
||||
|
||||
对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][7] 或 [APT 命令][8] 安装 TLP。
|
||||
|
||||
```
|
||||
$ sudo apt install tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPad 需要一些附加软件包。
|
||||
|
||||
```
|
||||
$ sudo apt-get install tp-smapi-dkms acpi-call-dkms
|
||||
```
|
||||
|
||||
安装 smartmontools 以在 tlp-stat 中显示 S.M.A.R.T. 数据。
|
||||
|
||||
```
|
||||
$ sudo apt-get install smartmontools
|
||||
```
|
||||
|
||||
当基于 Ubuntu 的系统的官方软件包过时时,请使用以下 PPA 存储库,该存储库提供最新版本。运行以下命令以使用 PPA 安装 TLP。
|
||||
|
||||
```
|
||||
$ sudo add-apt-repository ppa:linrunner/tlp
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install tlp
|
||||
```
|
||||
|
||||
对于基于 Arch Linux 的系统,使用 [Pacman 命令][9] 安装 TLP。
|
||||
|
||||
```
|
||||
$ sudo pacman -S tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPad 需要一些附加软件包。
|
||||
|
||||
```
|
||||
$ pacman -S tp_smapi acpi_call
|
||||
```
|
||||
|
||||
安装 smartmontools 以在 tlp-stat 中显示 S.M.A.R.T. 数据。
|
||||
|
||||
```
|
||||
$ sudo pacman -S smartmontools
|
||||
```
|
||||
|
||||
对于基于 Arch Linux 的系统,在启动时启用 TLP 和 TLP-Sleep 服务。
|
||||
|
||||
```
|
||||
$ sudo systemctl enable tlp.service
|
||||
$ sudo systemctl enable tlp-sleep.service
|
||||
```
|
||||
|
||||
对于基于 Arch Linux 的系统,你还应该屏蔽以下服务以避免冲突,并确保 TLP 的无线电设备切换选项的正确操作。
|
||||
|
||||
```
|
||||
$ sudo systemctl mask systemd-rfkill.service
|
||||
$ sudo systemctl mask systemd-rfkill.socket
|
||||
```
|
||||
|
||||
对于 RHEL/CentOS 系统,使用 [YUM 命令][10] 安装 TLP。
|
||||
|
||||
```
|
||||
$ sudo yum install tlp tlp-rdw
|
||||
```
|
||||
|
||||
安装 smartmontools 以在 tlp-stat 中显示 S.M.A.R.T. 数据。
|
||||
|
||||
```
|
||||
$ sudo yum install smartmontools
|
||||
```
|
||||
|
||||
对于 openSUSE Leap 系统,使用 [Zypper 命令][11] 安装 TLP。
|
||||
|
||||
```
|
||||
$ sudo zypper install TLP
|
||||
```
|
||||
|
||||
安装 smartmontools 以在 tlp-stat 中显示 S.M.A.R.T. 数据。
|
||||
|
||||
```
|
||||
$ sudo zypper install smartmontools
|
||||
```
|
||||
|
||||
成功安装 TLP 后,使用以下命令启动服务。
|
||||
|
||||
```
|
||||
$ systemctl start tlp.service
|
||||
```
|
||||
|
||||
### 使用方法
|
||||
|
||||
#### 显示电池信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -b
|
||||
或
|
||||
$ sudo tlp-stat --battery
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Battery Status
|
||||
/sys/class/power_supply/BAT0/manufacturer = SMP
|
||||
/sys/class/power_supply/BAT0/model_name = L14M4P23
|
||||
/sys/class/power_supply/BAT0/cycle_count = (not supported)
|
||||
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_full = 48850 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_now = 48850 [mWh]
|
||||
/sys/class/power_supply/BAT0/power_now = 0 [mW]
|
||||
/sys/class/power_supply/BAT0/status = Full
|
||||
|
||||
Charge = 100.0 [%]
|
||||
Capacity = 81.4 [%]
|
||||
```
|
||||
|
||||
#### 显示磁盘信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -d
|
||||
或
|
||||
$ sudo tlp-stat --disk
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Storage Devices
|
||||
/dev/sda:
|
||||
Model = WDC WD10SPCX-24HWST1
|
||||
Firmware = 02.01A02
|
||||
APM Level = 128
|
||||
Status = active/idle
|
||||
Scheduler = mq-deadline
|
||||
|
||||
Runtime PM: control = on, autosuspend_delay = (not available)
|
||||
|
||||
SMART info:
|
||||
4 Start_Stop_Count = 18787
|
||||
5 Reallocated_Sector_Ct = 0
|
||||
9 Power_On_Hours = 606 [h]
|
||||
12 Power_Cycle_Count = 1792
|
||||
193 Load_Cycle_Count = 25775
|
||||
194 Temperature_Celsius = 31 [°C]
|
||||
|
||||
|
||||
+++ AHCI Link Power Management (ALPM)
|
||||
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
|
||||
|
||||
+++ AHCI Host Controller Runtime Power Management
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
|
||||
```
|
||||
|
||||
#### 显示 PCI 设备信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -e
|
||||
或
|
||||
$ sudo tlp-stat --pcie
|
||||
```
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -e
|
||||
or
|
||||
$ sudo tlp-stat --pcie
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Runtime Power Management
|
||||
Device blacklist = (not configured)
|
||||
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
|
||||
|
||||
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
|
||||
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
|
||||
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
|
||||
|
||||
......
|
||||
```
|
||||
|
||||
#### 显示图形卡信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -g
|
||||
或
|
||||
$ sudo tlp-stat --graphics
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Intel Graphics
|
||||
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
|
||||
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
|
||||
/sys/module/i915/parameters/enable_psr = 0 (disabled)
|
||||
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
|
||||
```
|
||||
|
||||
#### 显示处理器信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -p
|
||||
或
|
||||
$ sudo tlp-stat --processor
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Processor
|
||||
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
|
||||
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
......
|
||||
|
||||
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
|
||||
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
|
||||
|
||||
x86_energy_perf_policy: program not installed.
|
||||
|
||||
/sys/module/workqueue/parameters/power_efficient = Y
|
||||
/proc/sys/kernel/nmi_watchdog = 0
|
||||
|
||||
+++ Undervolting
|
||||
PHC kernel not available.
|
||||
```
|
||||
|
||||
#### 显示系统数据信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -s
|
||||
或
|
||||
$ sudo tlp-stat --system
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ System Info
|
||||
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
|
||||
BIOS = CDCN35WW
|
||||
Release = "Manjaro Linux"
|
||||
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
|
||||
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
|
||||
Init system = systemd
|
||||
Boot mode = BIOS (CSM, Legacy)
|
||||
|
||||
+++ TLP Status
|
||||
State = enabled
|
||||
Last run = 11:04:00 IST, 596 sec(s) ago
|
||||
Mode = battery
|
||||
Power source = battery
|
||||
```
|
||||
|
||||
#### 显示温度和风扇速度信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -t
|
||||
或
|
||||
$ sudo tlp-stat --temp
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Temperatures
|
||||
CPU temp = 36 [°C]
|
||||
Fan speed = (not available)
|
||||
```
|
||||
|
||||
#### 显示 USB 设备数据信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -u
|
||||
或
|
||||
$ sudo tlp-stat --usb
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ USB
|
||||
Autosuspend = disabled
|
||||
Device whitelist = (not configured)
|
||||
Device blacklist = (not configured)
|
||||
Bluetooth blacklist = disabled
|
||||
Phone blacklist = disabled
|
||||
WWAN blacklist = enabled
|
||||
|
||||
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
|
||||
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
|
||||
|
||||
......
|
||||
```
|
||||
|
||||
#### 显示警告信息
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -w
|
||||
或
|
||||
$ sudo tlp-stat --warn
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
No warnings detected.
|
||||
```
|
||||
|
||||
#### 状态报告及配置和所有活动的设置
|
||||
|
||||
```
|
||||
$ sudo tlp-stat
|
||||
```
|
||||
|
||||
```
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Configured Settings: /etc/default/tlp
|
||||
TLP_ENABLE=1
|
||||
TLP_DEFAULT_MODE=AC
|
||||
TLP_PERSISTENT_DEFAULT=0
|
||||
DISK_IDLE_SECS_ON_AC=0
|
||||
DISK_IDLE_SECS_ON_BAT=2
|
||||
MAX_LOST_WORK_SECS_ON_AC=15
|
||||
MAX_LOST_WORK_SECS_ON_BAT=60
|
||||
|
||||
......
|
||||
|
||||
+++ System Info
|
||||
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
|
||||
BIOS = CDCN35WW
|
||||
Release = "Manjaro Linux"
|
||||
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
|
||||
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
|
||||
Init system = systemd
|
||||
Boot mode = BIOS (CSM, Legacy)
|
||||
|
||||
+++ TLP Status
|
||||
State = enabled
|
||||
Last run = 11:04:00 IST, 684 sec(s) ago
|
||||
Mode = battery
|
||||
Power source = battery
|
||||
|
||||
+++ Processor
|
||||
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
|
||||
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
|
||||
|
||||
......
|
||||
|
||||
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
|
||||
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
|
||||
|
||||
x86_energy_perf_policy: program not installed.
|
||||
|
||||
/sys/module/workqueue/parameters/power_efficient = Y
|
||||
/proc/sys/kernel/nmi_watchdog = 0
|
||||
|
||||
+++ Undervolting
|
||||
PHC kernel not available.
|
||||
|
||||
+++ Temperatures
|
||||
CPU temp = 42 [°C]
|
||||
Fan speed = (not available)
|
||||
|
||||
+++ File System
|
||||
/proc/sys/vm/laptop_mode = 2
|
||||
/proc/sys/vm/dirty_writeback_centisecs = 6000
|
||||
/proc/sys/vm/dirty_expire_centisecs = 6000
|
||||
/proc/sys/vm/dirty_ratio = 20
|
||||
/proc/sys/vm/dirty_background_ratio = 10
|
||||
|
||||
+++ Storage Devices
|
||||
/dev/sda:
|
||||
Model = WDC WD10SPCX-24HWST1
|
||||
Firmware = 02.01A02
|
||||
APM Level = 128
|
||||
Status = active/idle
|
||||
Scheduler = mq-deadline
|
||||
|
||||
Runtime PM: control = on, autosuspend_delay = (not available)
|
||||
|
||||
SMART info:
|
||||
4 Start_Stop_Count = 18787
|
||||
5 Reallocated_Sector_Ct = 0
|
||||
9 Power_On_Hours = 606 [h]
|
||||
12 Power_Cycle_Count = 1792
|
||||
193 Load_Cycle_Count = 25777
|
||||
194 Temperature_Celsius = 31 [°C]
|
||||
|
||||
|
||||
+++ AHCI Link Power Management (ALPM)
|
||||
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
|
||||
|
||||
+++ AHCI Host Controller Runtime Power Management
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
|
||||
|
||||
+++ PCIe Active State Power Management
|
||||
/sys/module/pcie_aspm/parameters/policy = powersave
|
||||
|
||||
+++ Intel Graphics
|
||||
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
|
||||
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
|
||||
/sys/module/i915/parameters/enable_psr = 0 (disabled)
|
||||
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
|
||||
|
||||
+++ Wireless
|
||||
bluetooth = on
|
||||
wifi = on
|
||||
wwan = none (no device)
|
||||
|
||||
hci0(btusb) : bluetooth, not connected
|
||||
wlp8s0(iwlwifi) : wifi, connected, power management = on
|
||||
|
||||
+++ Audio
|
||||
/sys/module/snd_hda_intel/parameters/power_save = 1
|
||||
/sys/module/snd_hda_intel/parameters/power_save_controller = Y
|
||||
|
||||
+++ Runtime Power Management
|
||||
Device blacklist = (not configured)
|
||||
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
|
||||
|
||||
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
|
||||
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
|
||||
|
||||
......
|
||||
|
||||
+++ USB
|
||||
Autosuspend = disabled
|
||||
Device whitelist = (not configured)
|
||||
Device blacklist = (not configured)
|
||||
Bluetooth blacklist = disabled
|
||||
Phone blacklist = disabled
|
||||
WWAN blacklist = enabled
|
||||
|
||||
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
|
||||
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
|
||||
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
|
||||
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
|
||||
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
|
||||
|
||||
+++ Battery Status
|
||||
/sys/class/power_supply/BAT0/manufacturer = SMP
|
||||
/sys/class/power_supply/BAT0/model_name = L14M4P23
|
||||
/sys/class/power_supply/BAT0/cycle_count = (not supported)
|
||||
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_full = 51690 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_now = 50140 [mWh]
|
||||
/sys/class/power_supply/BAT0/power_now = 12185 [mW]
|
||||
/sys/class/power_supply/BAT0/status = Discharging
|
||||
|
||||
Charge = 97.0 [%]
|
||||
Capacity = 86.2 [%]
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/check-laptop-battery-status-and-charging-state-in-linux-terminal/
|
||||
[2]: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/
|
||||
[3]: https://www.2daygeek.com/monitor-laptop-battery-charging-state-linux/
|
||||
[4]: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html
|
||||
[5]: https://www.2daygeek.com/category/package-management/
|
||||
[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -62,47 +62,42 @@ producer-------->| disk file |<-------consumer
|
||||
#include <stdlib.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
#include <string.h>
|
||||
|
||||
#define FileName "data.dat"
|
||||
#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][4](msg);
|
||||
[exit][5](-1); /* EXIT_FAILURE */
|
||||
perror(msg);
|
||||
exit(-1); /* EXIT_FAILURE */
|
||||
}
|
||||
|
||||
int main() {
|
||||
struct flock lock;
|
||||
lock.l_type = F_WRLCK; /* read/write (exclusive) lock */
|
||||
lock.l_type = F_WRLCK; /* read/write (exclusive versus shared) lock */
|
||||
lock.l_whence = SEEK_SET; /* base for seek offsets */
|
||||
lock.l_start = 0; /* 1st byte in file */
|
||||
lock.l_len = 0; /* 0 here means 'until EOF' */
|
||||
lock.l_pid = getpid(); /* process id */
|
||||
|
||||
int fd; /* file descriptor to identify a file within a process */
|
||||
if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
|
||||
report_and_exit("open to read failed...");
|
||||
if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
|
||||
report_and_exit("open failed...");
|
||||
|
||||
/* If the file is write-locked, we can't continue. */
|
||||
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
|
||||
if (lock.l_type != F_UNLCK)
|
||||
report_and_exit("file is still write locked...");
|
||||
if (fcntl(fd, F_SETLK, &lock) < 0) /** F_SETLK doesn't block, F_SETLKW does **/
|
||||
report_and_exit("fcntl failed to get lock...");
|
||||
else {
|
||||
write(fd, DataString, strlen(DataString)); /* populate data file */
|
||||
fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
|
||||
}
|
||||
|
||||
lock.l_type = F_RDLCK; /* prevents any writing during the reading */
|
||||
if (fcntl(fd, F_SETLK, &lock) < 0)
|
||||
report_and_exit("can't get a read-only lock...");
|
||||
|
||||
/* Read the bytes (they happen to be ASCII codes) one at a time. */
|
||||
int c; /* buffer for read bytes */
|
||||
while (read(fd, &c, 1) > 0) /* 0 signals EOF */
|
||||
write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
|
||||
|
||||
/* Release the lock explicitly. */
|
||||
/* Now release the lock explicitly. */
|
||||
lock.l_type = F_UNLCK;
|
||||
if (fcntl(fd, F_SETLK, &lock) < 0)
|
||||
report_and_exit("explicit unlocking failed...");
|
||||
|
||||
close(fd);
|
||||
return 0;
|
||||
close(fd); /* close the file: would unlock if needed */
|
||||
return 0; /* terminating the process would unlock as well */
|
||||
}
|
||||
```
|
||||
|
||||
@ -140,8 +135,8 @@ lock.l_type = F_UNLCK;
|
||||
#define FileName "data.dat"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][4](msg);
|
||||
[exit][5](-1); /* EXIT_FAILURE */
|
||||
perror(msg);
|
||||
exit(-1); /* EXIT_FAILURE */
|
||||
}
|
||||
|
||||
int main() {
|
||||
@ -240,37 +235,37 @@ This is the way the world ends...
|
||||
#include "shmem.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][4](msg);
|
||||
[exit][5](-1);
|
||||
perror(msg);
|
||||
exit(-1);
|
||||
}
|
||||
|
||||
int main() {
|
||||
int fd = shm_open(BackingFile, /* name from smem.h */
|
||||
O_RDWR | O_CREAT, /* read/write, create if needed */
|
||||
AccessPerms); /* access permissions (0644) */
|
||||
O_RDWR | O_CREAT, /* read/write, create if needed */
|
||||
AccessPerms); /* access permissions (0644) */
|
||||
if (fd < 0) report_and_exit("Can't open shared mem segment...");
|
||||
|
||||
ftruncate(fd, ByteSize); /* get the bytes */
|
||||
|
||||
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
|
||||
ByteSize, /* how many bytes */
|
||||
PROT_READ | PROT_WRITE, /* access protections */
|
||||
MAP_SHARED, /* mapping visible to other processes */
|
||||
fd, /* file descriptor */
|
||||
0); /* offset: start at 1st byte */
|
||||
ByteSize, /* how many bytes */
|
||||
PROT_READ | PROT_WRITE, /* access protections */
|
||||
MAP_SHARED, /* mapping visible to other processes */
|
||||
fd, /* file descriptor */
|
||||
0); /* offset: start at 1st byte */
|
||||
if ((caddr_t) -1 == memptr) report_and_exit("Can't get segment...");
|
||||
|
||||
[fprintf][7](stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
|
||||
[fprintf][7](stderr, "backing file: /dev/shm%s\n", BackingFile );
|
||||
fprintf(stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
|
||||
fprintf(stderr, "backing file: /dev/shm%s\n", BackingFile );
|
||||
|
||||
/* semahore code to lock the shared mem */
|
||||
/* semaphore code to lock the shared mem */
|
||||
sem_t* semptr = sem_open(SemaphoreName, /* name */
|
||||
O_CREAT, /* create the semaphore */
|
||||
AccessPerms, /* protection perms */
|
||||
0); /* initial value */
|
||||
O_CREAT, /* create the semaphore */
|
||||
AccessPerms, /* protection perms */
|
||||
0); /* initial value */
|
||||
if (semptr == (void*) -1) report_and_exit("sem_open");
|
||||
|
||||
[strcpy][8](memptr, MemContents); /* copy some ASCII bytes to the segment */
|
||||
strcpy(memptr, MemContents); /* copy some ASCII bytes to the segment */
|
||||
|
||||
/* increment the semaphore so that memreader can read */
|
||||
if (sem_post(semptr) < 0) report_and_exit("sem_post");
|
||||
@ -341,8 +336,8 @@ munmap(memptr, ByteSize); /* unmap the storage *
|
||||
#include "shmem.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][4](msg);
|
||||
[exit][5](-1);
|
||||
perror(msg);
|
||||
exit(-1);
|
||||
}
|
||||
|
||||
int main() {
|
||||
@ -351,24 +346,24 @@ int main() {
|
||||
|
||||
/* get a pointer to memory */
|
||||
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
|
||||
ByteSize, /* how many bytes */
|
||||
PROT_READ | PROT_WRITE, /* access protections */
|
||||
MAP_SHARED, /* mapping visible to other processes */
|
||||
fd, /* file descriptor */
|
||||
0); /* offset: start at 1st byte */
|
||||
ByteSize, /* how many bytes */
|
||||
PROT_READ | PROT_WRITE, /* access protections */
|
||||
MAP_SHARED, /* mapping visible to other processes */
|
||||
fd, /* file descriptor */
|
||||
0); /* offset: start at 1st byte */
|
||||
if ((caddr_t) -1 == memptr) report_and_exit("Can't access segment...");
|
||||
|
||||
/* create a semaphore for mutual exclusion */
|
||||
sem_t* semptr = sem_open(SemaphoreName, /* name */
|
||||
O_CREAT, /* create the semaphore */
|
||||
AccessPerms, /* protection perms */
|
||||
0); /* initial value */
|
||||
O_CREAT, /* create the semaphore */
|
||||
AccessPerms, /* protection perms */
|
||||
0); /* initial value */
|
||||
if (semptr == (void*) -1) report_and_exit("sem_open");
|
||||
|
||||
/* use semaphore as a mutex (lock) by waiting for writer to increment it */
|
||||
if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
|
||||
int i;
|
||||
for (i = 0; i < [strlen][6](MemContents); i++)
|
||||
for (i = 0; i < strlen(MemContents); i++)
|
||||
write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
|
||||
sem_post(semptr);
|
||||
}
|
||||
|
@ -87,8 +87,8 @@ world
|
||||
#define WriteEnd 1
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][6](msg);
|
||||
[exit][7](-1); /** failure **/
|
||||
perror(msg);
|
||||
exit(-1); /** failure **/
|
||||
}
|
||||
|
||||
int main() {
|
||||
@ -112,11 +112,11 @@ int main() {
|
||||
else { /*** parent ***/
|
||||
close(pipeFDs[ReadEnd]); /* parent writes, doesn't read */
|
||||
|
||||
write(pipeFDs[WriteEnd], msg, [strlen][8](msg)); /* write the bytes to the pipe */
|
||||
write(pipeFDs[WriteEnd], msg, strlen(msg)); /* write the bytes to the pipe */
|
||||
close(pipeFDs[WriteEnd]); /* done writing: generate eof */
|
||||
|
||||
wait(NULL); /* wait for child to exit */
|
||||
[exit][7](0); /* exit normally */
|
||||
exit(0); /* exit normally */
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
@ -249,7 +249,7 @@ bye, bye ## ditto
|
||||
```c
|
||||
#include <sys/types.h>
|
||||
#include <sys/stat.h>
|
||||
#include <fcntl.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
#include <time.h>
|
||||
#include <stdlib.h>
|
||||
@ -264,24 +264,24 @@ int main() {
|
||||
const char* pipeName = "./fifoChannel";
|
||||
mkfifo(pipeName, 0666); /* read/write for user/group/others */
|
||||
int fd = open(pipeName, O_CREAT | O_WRONLY); /* open as write-only */
|
||||
if (fd < 0) return -1; /** error **/
|
||||
|
||||
if (fd < 0) return -1; /* can't go on */
|
||||
|
||||
int i;
|
||||
for (i = 0; i < MaxLoops; i++) { /* write MaxWrites times */
|
||||
int j;
|
||||
for (j = 0; j < ChunkSize; j++) { /* each time, write ChunkSize bytes */
|
||||
int k;
|
||||
int chunk[IntsPerChunk];
|
||||
for (k = 0; k < IntsPerChunk; k++)
|
||||
chunk[k] = [rand][9]();
|
||||
write(fd, chunk, sizeof(chunk));
|
||||
for (k = 0; k < IntsPerChunk; k++)
|
||||
chunk[k] = rand();
|
||||
write(fd, chunk, sizeof(chunk));
|
||||
}
|
||||
usleep(([rand][9]() % MaxZs) + 1); /* pause a bit for realism */
|
||||
usleep((rand() % MaxZs) + 1); /* pause a bit for realism */
|
||||
}
|
||||
|
||||
close(fd); /* close pipe: generates an end-of-file */
|
||||
unlink(pipeName); /* unlink from the implementing file */
|
||||
[printf][10]("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);
|
||||
close(fd); /* close pipe: generates an end-of-stream marker */
|
||||
unlink(pipeName); /* unlink from the implementing file */
|
||||
printf("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -318,13 +318,12 @@ unlink(pipeName); /* unlink from the implementing file */
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
|
||||
|
||||
unsigned is_prime(unsigned n) { /* not pretty, but gets the job done efficiently */
|
||||
unsigned is_prime(unsigned n) { /* not pretty, but efficient */
|
||||
if (n <= 3) return n > 1;
|
||||
if (0 == (n % 2) || 0 == (n % 3)) return 0;
|
||||
|
||||
unsigned i;
|
||||
for (i = 5; (i * i) <= n; i += 6)
|
||||
for (i = 5; (i * i) <= n; i += 6)
|
||||
if (0 == (n % i) || 0 == (n % (i + 2))) return 0;
|
||||
|
||||
return 1; /* found a prime! */
|
||||
@ -332,25 +331,25 @@ unsigned is_prime(unsigned n) { /* not pretty, but gets the job done efficiently
|
||||
|
||||
int main() {
|
||||
const char* file = "./fifoChannel";
|
||||
int fd = open(file, O_RDONLY);
|
||||
int fd = open(file, O_RDONLY);
|
||||
if (fd < 0) return -1; /* no point in continuing */
|
||||
unsigned count = 0, total = 0, primes_count = 0;
|
||||
|
||||
while (1) {
|
||||
int next;
|
||||
int i;
|
||||
ssize_t count = read(fd, &next, sizeof(int));
|
||||
|
||||
ssize_t count = read(fd, &next, sizeof(int));
|
||||
if (0 == count) break; /* end of stream */
|
||||
else if (count == sizeof(int)) { /* read a 4-byte int value */
|
||||
total++;
|
||||
if (is_prime(next)) primes_count++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
close(fd); /* close pipe from read end */
|
||||
unlink(file); /* unlink from the underlying file */
|
||||
[printf][10]("Received ints: %u, primes: %u\n", total, primes_count);
|
||||
printf("Received ints: %u, primes: %u\n", total, primes_count);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -434,23 +433,23 @@ ID `qid` 在效果上是消息队列文件描述符的对应物。
|
||||
#### 示例 5. sender 程序
|
||||
|
||||
```c
|
||||
#include <stdio.h>
|
||||
#include <sys/ipc.h>
|
||||
#include <stdio.h>
|
||||
#include <sys/ipc.h>
|
||||
#include <sys/msg.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include "queue.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][6](msg);
|
||||
[exit][7](-1); /* EXIT_FAILURE */
|
||||
perror(msg);
|
||||
exit(-1); /* EXIT_FAILURE */
|
||||
}
|
||||
|
||||
int main() {
|
||||
key_t key = ftok(PathName, ProjectId);
|
||||
key_t key = ftok(PathName, ProjectId);
|
||||
if (key < 0) report_and_exit("couldn't get key...");
|
||||
|
||||
int qid = msgget(key, 0666 | IPC_CREAT);
|
||||
|
||||
int qid = msgget(key, 0666 | IPC_CREAT);
|
||||
if (qid < 0) report_and_exit("couldn't get queue id...");
|
||||
|
||||
char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
|
||||
@ -460,11 +459,11 @@ int main() {
|
||||
/* build the message */
|
||||
queuedMessage msg;
|
||||
msg.type = types[i];
|
||||
[strcpy][11](msg.payload, payloads[i]);
|
||||
strcpy(msg.payload, payloads[i]);
|
||||
|
||||
/* send the message */
|
||||
msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT); /* don't block */
|
||||
[printf][10]("%s sent as type %i\n", msg.payload, (int) msg.type);
|
||||
printf("%s sent as type %i\n", msg.payload, (int) msg.type);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
@ -481,21 +480,21 @@ msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT);
|
||||
#### 示例 6. receiver 程序
|
||||
|
||||
```c
|
||||
#include <stdio.h>
|
||||
#include <sys/ipc.h>
|
||||
#include <stdio.h>
|
||||
#include <sys/ipc.h>
|
||||
#include <sys/msg.h>
|
||||
#include <stdlib.h>
|
||||
#include "queue.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][6](msg);
|
||||
[exit][7](-1); /* EXIT_FAILURE */
|
||||
perror(msg);
|
||||
exit(-1); /* EXIT_FAILURE */
|
||||
}
|
||||
|
||||
int main() {
|
||||
|
||||
int main() {
|
||||
key_t key= ftok(PathName, ProjectId); /* key to identify the queue */
|
||||
if (key < 0) report_and_exit("key not gotten...");
|
||||
|
||||
|
||||
int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
|
||||
if (qid < 0) report_and_exit("no access to queue...");
|
||||
|
||||
@ -504,15 +503,15 @@ int main() {
|
||||
for (i = 0; i < MsgCount; i++) {
|
||||
queuedMessage msg; /* defined in queue.h */
|
||||
if (msgrcv(qid, &msg, sizeof(msg), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
|
||||
[puts][12]("msgrcv trouble...");
|
||||
[printf][10]("%s received as type %i\n", msg.payload, (int) msg.type);
|
||||
puts("msgrcv trouble...");
|
||||
printf("%s received as type %i\n", msg.payload, (int) msg.type);
|
||||
}
|
||||
|
||||
/** remove the queue **/
|
||||
if (msgctl(qid, IPC_RMID, NULL) < 0) /* NULL = 'no flags' */
|
||||
report_and_exit("trouble removing queue...");
|
||||
|
||||
return 0;
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -1,16 +1,18 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10852-1.html)
|
||||
[#]: subject: (Getting started with social media sentiment analysis in Python)
|
||||
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python)
|
||||
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
|
||||
|
||||
使用 Python 进行社交媒体情感分析入门
|
||||
======
|
||||
学习自然语言处理的基础知识并探索两个有用的 Python 包。
|
||||
![Raspberry Pi and Python][1]
|
||||
|
||||
> 学习自然语言处理的基础知识并探索两个有用的 Python 包。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201905/14/002943t6udxhhcq1zoxu15.jpg)
|
||||
|
||||
自然语言处理(NLP)是机器学习的一种,它解决了口语或书面语言和计算机辅助分析这些语言之间的相关性。日常生活中我们经历了无数的 NLP 创新,从写作帮助和建议到实时语音翻译,还有口译。
|
||||
|
||||
@ -20,25 +22,25 @@
|
||||
|
||||
### 自然语言和文本数据
|
||||
|
||||
合理的起点是从定义开始:“什么是自然语言?”它是我们人类相互交流的方式,沟通的主要方式是口语和文字。我们可以更进一步,只关注文本交流。毕竟,生活在 Siri, Alexa 等无处不在的时代,我们知道语音是一组与文本无关的计算。
|
||||
合理的起点是从定义开始:“什么是自然语言?”它是我们人类相互交流的方式,沟通的主要方式是口语和文字。我们可以更进一步,只关注文本交流。毕竟,生活在 Siri、Alexa 等无处不在的时代,我们知道语音是一组与文本无关的计算。
|
||||
|
||||
### 数据前景和挑战
|
||||
|
||||
我们只考虑使用文本数据,我们可以对语言和文本做什么呢?首先是语言,特别是英语,除了规则还有很多例外,含义的多样性和语境差异,这些都可能使人类口译员感到困惑,更不用说计算机翻译了。在小学,我们学习文章和标点符号,通过讲母语,我们获得了寻找直觉上表示唯一意义的词的能力。比如,出现诸如 "a"、"the" 和 "or" 之类的文章,它们在 NLP 中被称为 _停用词_,因为传统上 NLP 算法是在一个序列中找到这些词时意味着搜索停止。
|
||||
我们只考虑使用文本数据,我们可以对语言和文本做什么呢?首先是语言,特别是英语,除了规则还有很多例外,含义的多样性和语境差异,这些都可能使人类口译员感到困惑,更不用说计算机翻译了。在小学,我们学习文章和标点符号,通过讲母语,我们获得了寻找直觉上表示唯一意义的词的能力。比如,出现诸如 “a”、“the” 和 “or” 之类的文章,它们在 NLP 中被称为*停止词*,因为传统上 NLP 算法是在一个序列中找到这些词时意味着搜索停止。
|
||||
|
||||
由于我们的目标是自动将文本分类为情感类,因此我们需要一种以计算方式处理文本数据的方法。因此,我们必须考虑如何向机器表示文本数据。众所周知,利用和解释语言的规则很复杂,输入文本的大小和结构可能会有很大差异。我们需要将文本数据转换为数字数据,这是机器和数学的首选方式。这种转变属于 _特征提取_ 的范畴。
|
||||
由于我们的目标是自动将文本分类为情感类,因此我们需要一种以计算方式处理文本数据的方法。因此,我们必须考虑如何向机器表示文本数据。众所周知,利用和解释语言的规则很复杂,输入文本的大小和结构可能会有很大差异。我们需要将文本数据转换为数字数据,这是机器和数学的首选方式。这种转变属于*特征提取*的范畴。
|
||||
|
||||
在提取输入文本数据的数字表示形式后,一个改进可能是:给定一个文本输入体,为上面列出的文章确定一组向量统计数据,并根据这些数据对文档进行分类。例如,过多的副词可能会使撰稿人感到愤怒,或者过度使用停用词可能有助于识别带有内容填充的学期论文。诚然,这可能与我们情感分析的目标没有太大关系。
|
||||
在提取输入文本数据的数字表示形式后,一个改进可能是:给定一个文本输入体,为上面列出的文章确定一组向量统计数据,并根据这些数据对文档进行分类。例如,过多的副词可能会使撰稿人感到愤怒,或者过度使用停止词可能有助于识别带有内容填充的学期论文。诚然,这可能与我们情感分析的目标没有太大关系。
|
||||
|
||||
### 词袋
|
||||
|
||||
当你评估一个文本陈述是积极还是消极的时候,你使用哪些上下文来评估它的极性?(例如,文本中是否具有积极的、消极的或中性的情感)一种方式是隐含形容词:被称为 "disgusting" 的东西被认为是消极的,但如果同样的东西被称为 "beautiful",你会认为它是积极的。从定义上讲,俗语给人一种熟悉感,通常是积极的,而脏话可能是敌意的表现。文本数据也可以包括表情符号,它带有固定的情感。
|
||||
当你评估一个文本陈述是积极还是消极的时候,你使用哪些上下文来评估它的极性?(例如,文本中是否具有积极的、消极的或中性的情感)一种方式是隐含形容词:被称为 “disgusting”(恶心) 的东西被认为是消极的,但如果同样的东西被称为 “beautiful”(漂亮),你会认为它是积极的。从定义上讲,俗语给人一种熟悉感,通常是积极的,而脏话可能是敌意的表现。文本数据也可以包括表情符号,它带有固定的情感。
|
||||
|
||||
理解单个单词的极性影响为文本的[_词袋_][3](BoW) 模型提供了基础。它考虑一组单词或词汇表,并提取关于这些单词在输入文本中是否存在的度量。词汇表是通过考虑极性已知的文本形成的,称为 _标记的训练数据_。从这组标记数据中提取特征,然后分析特征之间的关系,并将标签与数据关联起来。
|
||||
理解单个单词的极性影响为文本的<ruby>[词袋][3]<rt>bag-of-words</rt></ruby>(BoW)模型提供了基础。它分析一组单词或词汇表,并提取关于这些单词在输入文本中是否存在的度量。词汇表是通过处理已知极性的文本形成的,这类文本称为*标记的训练数据*。从这组标记数据中提取特征,然后分析特征之间的关系,并将标记与数据关联起来。
|
||||
|
||||
“词袋”这个名称说明了它的用途:即不考虑空间位置或上下文的的单个词。词汇表通常是由训练集中出现的所有单词构建的,在训练结束后被删除。如果在训练之前没有清理停用词,那么停用词会因为其高频率和低语境而被移除。很少使用的单词也可以删除,因为一般情况下它们提供了缺失的信息。
|
||||
“词袋”这个名称说明了它的用途:即不考虑空间位置或上下文的单个词。词汇表通常是由训练集中出现的所有单词构建的,训练后往往会被修剪。如果在训练之前没有清理停止词,那么停止词会因为其高频率和低语境而被移除。很少使用的单词也可以删除,因为它们对一般的输入实例提供不了多少信息。
|
||||
|
||||
但是,重要的是要注意,你可以(并且应该)进一步考虑单词在单独的训练数据实例中使用之外的外观,称为[_词频_][4] (TF)。你还应该考虑输入数据的所有实例中的单词计数,通常,所有文档中的单词频率显著,这被称为[_逆文本频率指数_][5](IDF)。这些指标一定会在本主题的其他文章和软件包中提及,因此了解它们会有所帮助。
|
||||
但是,重要的是要注意,你可以(并且应该)进一步考虑单词在单个训练数据实例之外的出现情况,这称为<ruby>[词频][4]<rt>term frequency</rt></ruby>(TF)。你还应该考虑单词在所有输入数据实例中的计数,通常,在所有文档中都很少出现的词反而更重要,这被称为<ruby>[逆文本频率指数][5]<rt>inverse document frequency</rt></ruby>(IDF)。这些指标一定会在本主题系列的其他文章和软件包中提及,因此了解它们会有所帮助。
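如果想直观地感受一下词袋计数和 TF-IDF 加权的差别,可以用 scikit-learn 在一个小语料上试试——本文并没有用到 scikit-learn,下面只是一个假设性的示意:

```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "the rainy day was terrible",
    "the sunny day was wonderful",
]

bow = CountVectorizer()                     # 词袋:只统计每个词出现的次数
print(bow.fit_transform(corpus).toarray())

tfidf = TfidfVectorizer()                   # TF-IDF:按词频和逆文本频率加权
print(tfidf.fit_transform(corpus).toarray())
```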
|
||||
|
||||
词袋在许多文档分类应用程序中很有用。然而,在情感分析中,当缺乏情境意识的问题被利用时,事情就可以解决。考虑以下句子:
|
||||
|
||||
@ -46,32 +48,31 @@
|
||||
* 我讨厌下雨天,好事是今天是晴天。
|
||||
* 这不是生死攸关的问题。
|
||||
|
||||
这些短语的情感对于人类口译员来说是有难度的,而且通过严格关注单个词汇的实例,对于机器翻译来说也是困难的。
|
||||
|
||||
这些短语的情感对于人类口译员来说是有难度的,而且由于严格关注单个词汇的实例,对于机器翻译来说也是困难的。
|
||||
|
||||
在 NLP 中也可以考虑称为 _n-grams_ 的单词分组。一个二元组考虑两个相邻单词组成的组而不是(或除了)单个词袋。这应该可以缓解诸如上述“不喜欢”之类的情况,但由于缺乏语境意思,它仍然是个问题。此外,在上面的第二句中,下半句的情感语境可以被理解为否定前半部分。因此,这种方法中也会丢失上下文线索的空间局部性。从实用角度来看,使问题复杂化的是从给定输入文本中提取的特征的稀疏性。对于一个完整的大型词汇表,每个单词都有一个计数,可以将其视为一个整数向量。大多数文档的向量中都有大量的零计数,这给操作增加了不必要的空间和时间复杂度。虽然已经提出了许多用于降低这种复杂性的简便方法,但它仍然是一个问题。
|
||||
在 NLP 中也可以使用称为 “n-grams” 的单词分组。一个二元组考虑两个相邻单词组成的组而不是(或除了)单个词袋。这应该可以缓解诸如上述“不喜欢”之类的情况,但由于缺乏语境意思,它仍然是个问题。此外,在上面的第二句中,下半句的情感语境可以被理解为否定前半部分。因此,这种方法中也会丢失上下文线索的空间局部性。从实用角度来看,使问题复杂化的是从给定输入文本中提取的特征的稀疏性。对于一个完整的大型词汇表,每个单词都有一个计数,可以将其视为一个整数向量。大多数文档的向量中都有大量的零计数元素,这给操作增加了不必要的空间和时间复杂度。虽然已经提出了许多用于降低这种复杂性的简便方法,但它仍然是一个问题。
|
||||
|
||||
### 词嵌入
|
||||
|
||||
词嵌入是一种分布式表示,它允许具有相似含义的单词具有相似的表示。这是基于使用实值向量来与它们周围相关联。重点在于使用单词的方式,而不仅仅是它们的存在。此外,词嵌入的一个巨大语用优势是它们对密集向量的关注。通过摆脱具有相应数量的零值向量元素的单词计数模型,词嵌入在时间和存储方面提供了一个更有效的计算范例。
|
||||
<ruby>词嵌入<rt>Word embedding</rt></ruby>是一种分布式表示,它允许具有相似含义的单词具有相似的表示。这是基于使用实值向量来与它们周围相关联。重点在于使用单词的方式,而不仅仅是它们的存在与否。此外,词嵌入的一个巨大实用优势是它们关注于密集向量。通过摆脱具有相应数量的零值向量元素的单词计数模型,词嵌入在时间和存储方面提供了一个更有效的计算范例。
|
||||
|
||||
以下是两个优秀的词嵌入方法。
|
||||
|
||||
#### Word2vec
|
||||
|
||||
第一个是 [Word2vec][6],它是由 Google 开发的。随着你对 NLP 和情绪分析研究的深入,你可能会看到这种嵌入方法。它要么使用一个 _连续的词袋_(CBOW),要么使用一个 _连续的 skip-gram_ 模型。在 CBOW 中,一个单词的上下文是在训练中根据围绕它的单词来学习的。连续的 skip-gram 学习倾向于围绕给定的单词学习单词。虽然这可能超出了你需要解决的问题,但是如果你曾经面对必须生成自己的词嵌入情况,那么 Word2vec 的作者提倡使用 CBOW 方法来提高速度并评估频繁的单词,而 skip-gram 方法更适合嵌入稀有单词更重要的嵌入。
|
||||
第一个是 [Word2vec][6],它是由 Google 开发的。随着你对 NLP 和情绪分析研究的深入,你可能会看到这种嵌入方法。它要么使用一个<ruby>连续的词袋<rt>continuous bag of words</rt></ruby>(CBOW),要么使用一个连续 skip-gram 模型。在 CBOW 中,一个单词的上下文是在训练中根据围绕它的单词来学习的。连续 skip-gram 学习倾向于围绕给定的单词学习单词。虽然这可能超出了你需要解决的问题,但是如果你曾经面对必须生成自己的词嵌入情况,那么 Word2vec 的作者就提倡使用 CBOW 方法来提高速度并评估频繁的单词,而 skip-gram 方法更适合嵌入稀有单词更重要的嵌入。
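Word2vec 有不少现成的实现,例如 gensim 库——原文没有指定具体实现,下面只是一个假设性的小示意,其中 `sg=0` 对应 CBOW,`sg=1` 对应 skip-gram:

```
from gensim.models import Word2Vec

# 训练语料:已经分好词的句子列表
sentences = [
    ["i", "love", "sunny", "days"],
    ["i", "hate", "rainy", "days"],
]

model = Word2Vec(sentences, min_count=1, sg=0)   # sg=0 使用 CBOW,sg=1 使用 skip-gram
print(model.wv["sunny"])                         # “sunny” 对应的稠密实值向量
```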
|
||||
|
||||
#### GloVe
|
||||
|
||||
第二个是 [ _Global Vectors for Word Representation_][7(GloVe),它是斯坦福大学开发的。它是 Word2vec 方法的扩展,它试图将通过经典的全局文本统计特征提取获得的信息与 Word2vec 确定的本地上下文信息相结合。实际上,在一些应用程序中,GloVe 性能优于 Word2vec,而在另一些应用程序中则不如 Word2vec。最终,用于词嵌入的目标数据集将决定哪种方法最优。因此,最好了解它们的存在性和高级机制,因为你很可能会遇到它们。
|
||||
第二个是<ruby>[用于词表示的全局向量][7]<rt>Global Vectors for Word Representation</rt></ruby>(GloVe),它是斯坦福大学开发的。它是 Word2vec 方法的扩展,试图通过将经典的全局文本统计特征提取获得的信息与 Word2vec 确定的本地上下文信息相结合。实际上,在一些应用程序中,GloVe 性能优于 Word2vec,而在另一些应用程序中则不如 Word2vec。最终,用于词嵌入的目标数据集将决定哪种方法最优。因此,最好了解它们的存在性和高级机制,因为你很可能会遇到它们。
|
||||
|
||||
#### 创建和使用词嵌入
|
||||
|
||||
最后,知道如何获得词嵌入是有用的。在第 2 部分中,你将看到我们通过利用社区中其他人的实质性工作,可以说我们是站在了巨人的肩膀上。这是获取词嵌入的一种方法:即使用现有的经过训练和验证的模型。实际上,有无数的模型适用于英语和其他语言,一定会有一种模型可以满足你的应用程序,让你开箱即用!
|
||||
最后,知道如何获得词嵌入是有用的。在第 2 部分中,你将看到我们通过利用社区中其他人的实质性工作,站到了巨人的肩膀上。这是获取词嵌入的一种方法:即使用现有的经过训练和验证的模型。实际上,有无数的模型适用于英语和其他语言,一定会有一种模型可以满足你的应用程序,让你开箱即用!
|
||||
|
||||
如果没有的话,就开发工作而言,另一个极端是培训你自己的独立模型,而不考虑你的应用程序。实质上,你将获得大量标记的训练数据,并可能使用上述方法之一来训练模型。即使这样,你仍然只是在获取对输入文本数据的理解。然后,你需要为你应用程序开发一个特定的模型(例如,分析软件版本控制消息中的情感价值),这反过来又需要自己的时间和精力。
|
||||
如果没有的话,就开发工作而言,另一个极端是训练你自己的独立模型,而不考虑你的应用程序。实质上,你将获得大量标记的训练数据,并可能使用上述方法之一来训练模型。即使这样,你得到的仍然只是对输入文本数据的理解。然后,你需要为你的应用程序开发一个特定的模型(例如,分析软件版本控制消息中的情感价值),这反过来又需要自己的时间和精力。
|
||||
|
||||
你还可以为你的应用程序数据训练一个词嵌入,虽然这可以减少时间和精力,但这个词嵌入将是特定于应用程序的,这将会降低它的可重用性。
|
||||
你还可以对针对你的应用程序的数据训练一个词嵌入,虽然这可以减少时间和精力,但这个词嵌入将是特定于应用程序的,这将会降低它的可重用性。
|
||||
|
||||
### 可用的工具选项
|
||||
|
||||
@ -83,9 +84,9 @@
|
||||
|
||||
#### vaderSentiment
|
||||
|
||||
[vaderSentiment][10] 包提供了积极、消极和中性情绪的衡量标准。正如 [original paper][11] 的标题(“VADER:一个基于规则的社交媒体文本情感分析模型”)所示,这些模型是专门为社交媒体文本数据开发和调整的。VADER 接受了一组完整的人类标记数据的训练,包括常见的表情符号、UTF-8 编码的表情符号以及口语术语和缩写(例如 meh、lol、sux)。
|
||||
[vaderSentiment][10] 包提供了积极、消极和中性情绪的衡量标准。正如 [原论文][11] 的标题(《VADER:一个基于规则的社交媒体文本情感分析模型》)所示,这些模型是专门为社交媒体文本数据开发和调整的。VADER 接受了一组完整的人类标记过的数据的训练,包括常见的表情符号、UTF-8 编码的表情符号以及口语术语和缩写(例如 meh、lol、sux)。
|
||||
|
||||
对于给定的输入文本数据,vaderSentiment 返回一个极性分数百分比的三元组。它还提供了一个单个的评分标准,称为 _vaderSentiment 复合指标_。这是一个在 **[-1, 1]** 范围内的实值,其中对于分值大于 **0.05** 的情绪被认为是积极的,对于分值小于 **-0.05** 的被认为是消极的,否则为中性。
|
||||
对于给定的输入文本数据,vaderSentiment 返回一个极性分数百分比的三元组。它还提供了一个单个的评分标准,称为 *vaderSentiment 复合指标*。这是一个在 `[-1, 1]` 范围内的实值,其中对于分值大于 `0.05` 的情绪被认为是积极的,对于分值小于 `-0.05` 的被认为是消极的,否则为中性。
|
||||
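下面是一个最小的用法草图(假设已通过 `pip` 安装 vaderSentiment 包,示例文本是随手写的),演示如何取得极性分数三元组和复合指标,并按上述阈值粗略归类:

```
# vaderSentiment 复合指标的最小示例(假设:已安装 vaderSentiment,示例文本为虚构)
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("This commit is great, but the tests are meh :(")
print(scores)  # 形如 {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

compound = scores["compound"]
if compound > 0.05:
    print("积极")
elif compound < -0.05:
    print("消极")
else:
    print("中性")
```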
|
||||
在[第 2 部分][2]中,你将学习如何使用这些工具为你的设计添加情感分析功能。
|
||||
|
||||
@ -93,10 +94,10 @@
|
||||
|
||||
via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python
|
||||
|
||||
作者:[Michael McCune ][a]
|
||||
作者:[Michael McCune][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
96
published/20190430 Upgrading Fedora 29 to Fedora 30.md
Normal file
@ -0,0 +1,96 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10854-1.html)
|
||||
[#]: subject: (Upgrading Fedora 29 to Fedora 30)
|
||||
[#]: via: (https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/)
|
||||
[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/)
|
||||
|
||||
将 Fedora 29 升级到 Fedora 30
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Fedora 30 [已经发布了][2]。你可能希望将系统升级到最新版本的 Fedora。Fedora 工作站版本有图形化升级的方法。另外,Fedora 也提供了一个命令行方法,用于将 Fedora 29 升级到 Fedora 30。
|
||||
|
||||
### 将 Fedora 29 工作站版本升级到 Fedora 30
|
||||
|
||||
在发布不久后,桌面会显示一条通知告诉你可以升级。你可以单击通知启动 “GNOME 软件” 应用。或者你可以从 GNOME Shell 中选择“软件”。
|
||||
|
||||
在 “GNOME 软件” 中选择*更新*选项卡,你会看到一个页面,通知你可以更新到 Fedora 30。
|
||||
|
||||
如果你在屏幕上看不到任何内容,请尝试点击左上角的重新加载按钮。发布后,所有系统都可能需要一段时间才能看到可用的升级。
|
||||
|
||||
选择“下载”获取升级包。你可以继续做其他的事直到下载完成。然后使用 “GNOME 软件” 重启系统并应用升级。升级需要时间,因此你可以喝杯咖啡,稍后再回来。
|
||||
|
||||
### 使用命令行
|
||||
|
||||
如果你过去升级过 Fedora 版本,你可能熟悉 `dnf upgrade` 插件。这是从 Fedora 29 升级到 Fedora 30 的推荐和支持的方式。使用这个插件将使你的 Fedora 30 升级简单易行。
|
||||
|
||||
#### 1、更新软件并备份系统
|
||||
|
||||
在你执行任何操作之前,你需要确保在开始升级之前拥有 Fedora 29 的最新软件。要更新软件,请使用 “GNOME 软件” 或在终端中输入以下命令。
|
||||
|
||||
```
|
||||
sudo dnf upgrade --refresh
|
||||
```
|
||||
|
||||
此外,请确保在继续之前备份系统。关于备份的帮助,请参阅 Fedora Magazine 上的[备份系列][3]。
|
||||
|
||||
#### 2、安装 DNF 插件
|
||||
|
||||
接下来,打开终端并输入以下命令来安装插件:
|
||||
|
||||
```
|
||||
sudo dnf install dnf-plugin-system-upgrade
|
||||
```
|
||||
|
||||
#### 3、使用 DNF 开始更新
|
||||
|
||||
现在你的系统是最新的,完成了备份,并且已安装 DNF 插件,你可以在终端中使用以下命令开始升级:
|
||||
|
||||
```
|
||||
sudo dnf system-upgrade download --releasever=30
|
||||
```
|
||||
|
||||
此命令将开始在本地下载所有升级文件,为升级做准备。如果你在升级时因为软件包未更新、依赖错误或软件包过时而遇到问题,请在输入上面的命令时添加 `--allowerasing` 标志。这将允许 DNF 删除可能阻止系统升级的软件包。
|
||||
|
||||
#### 4、重启并升级
|
||||
|
||||
当前面的命令完成下载所有升级文件后,你的系统就可以重启了。要将系统引导至升级过程,请在终端中输入以下命令:
|
||||
|
||||
```
|
||||
sudo dnf system-upgrade reboot
|
||||
```
|
||||
|
||||
此后你的系统将重启。在许多版本之前,`fedup` 工具会在内核选择/引导页面上创建一个新选项。而使用 `dnf-plugin-system-upgrade` 包时,你的系统会使用当前 Fedora 29 安装的内核重启,这是正常的。在内核选择页面之后不久,系统就会开始升级过程。
|
||||
|
||||
现在可以休息一下了!完成后你的系统将重启,你就可以登录新升级的 Fedora 30 了。
|
||||
|
||||
![][4]
|
||||
|
||||
### 解决升级问题
|
||||
|
||||
升级系统时偶尔可能会出现意外问题。如果你遇到任何问题,请访问 [DNF 系统升级的维基页面][5],以获取有关出现问题时的故障排除的更多信息。
|
||||
|
||||
如果你在升级时遇到问题,并且在系统上安装了第三方仓库,那么可能需要在升级时禁用这些仓库。对于不是由 Fedora 提供的仓库,Fedora 不提供支持,请联系这些仓库的提供商寻求帮助。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/
|
||||
|
||||
作者:[Ryan Lerch][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/ryanlerch/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/29-30-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/announcing-fedora-30/
|
||||
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
|
||||
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
|
||||
[5]: https://fedoraproject.org/wiki/DNF_system_upgrade#Resolving_post-upgrade_issues
|
@ -0,0 +1,81 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10859-1.html)
|
||||
[#]: subject: (Write faster C extensions for Python with Cython)
|
||||
[#]: via: (https://opensource.com/article/19/5/python-cython)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez/users/foundjem/users/jugmac00)
|
||||
|
||||
使用 Cython 为 Python 编写更快的 C 扩展
|
||||
======
|
||||
|
||||
> 在我们这个包含了 7 个 PyPI 库的系列文章中学习解决常见的 Python 问题的方法。
|
||||
|
||||
![Hand drawing out the word "code"](https://img.linux.net.cn/data/attachment/album/201905/15/225506fnn2mz6l3u122n70.jpg)
|
||||
|
||||
Python 是当今使用最多的[流行编程语言][2]之一,因为:它是开源的,它有广泛的用途(例如 Web 编程、业务应用、游戏、科学编程等等),它有一个充满活力和专注的社区支持它。这个社区可以让我们在 [Python Package Index][3](PyPI)中有如此庞大、多样化的软件包,用以扩展和改进 Python 并解决不可避免的问题。
|
||||
|
||||
在本系列中,我们将介绍七个可以帮助你解决常见 Python 问题的 PyPI 库。首先是 [Cython][4],它是一种能够简化为 Python 编写 C 扩展的语言。
|
||||
|
||||
### Cython
|
||||
|
||||
使用 Python 很有趣,但有时,用它编写的程序可能很慢。所有的运行时动态调度会带来很大的代价:有时它比用 C 或 Rust 等系统语言编写的等效代码慢 10 倍。
|
||||
|
||||
将代码迁移到一种全新的语言可能会在成本和可靠性方面付出巨大代价:所有的手工重写工作都将不可避免地引入错误。我们可以两者兼得么?
|
||||
|
||||
为了练习一下优化,我们需要一些慢代码。还有什么比无意中写成指数级复杂度的斐波那契数列实现更慢的呢?
|
||||
|
||||
```
|
||||
def fib(n):
|
||||
if n < 2:
|
||||
return 1
|
||||
return fib(n-1) + fib(n-2)
|
||||
```
|
||||
|
||||
由于对 `fib` 的调用会导致两次再次调用,因此这种效率极低的算法需要很长时间才能执行。例如,在我的新笔记本电脑上,`fib(36)` 需要大约 4.5 秒。这个 4.5 秒会成为我们探索 Python 的 Cython 扩展能提供的帮助的基准。
|
||||
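如果你想在自己的机器上粗略复现这个基准,可以用标准库 `timeit` 测一次(下面只是一个示意,4.5 秒这个数字取决于具体硬件):

```
# 粗略测量纯 Python 版 fib(36) 的耗时(标准库 timeit;结果因机器而异)
import timeit

setup = """
def fib(n):
    if n < 2:
        return 1
    return fib(n-1) + fib(n-2)
"""

print(timeit.timeit("fib(36)", setup=setup, number=1), "秒")
```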
|
||||
使用 Cython 的正确方法是将其集成到 `setup.py` 中。然而,使用 `pyximport` 可以快速地进行尝试。让我们将 `fib` 代码放在 `fib.pyx` 中并使用 Cython 运行它。
|
||||
|
||||
```
|
||||
>>> import pyximport; pyximport.install()
|
||||
>>> import fib
|
||||
>>> fib.fib(36)
|
||||
```
|
||||
|
||||
只使用 Cython 而不*修改*代码,这个算法在我笔记本上花费的时间就减少到了大约 2.5 秒。几乎无需任何努力,运行时间就减少了将近 50%,这是一个不错的成果。
|
||||
|
||||
加把劲,我们可以让它变得更快。
|
||||
|
||||
```
|
||||
cpdef int fib(int n):
|
||||
if n < 2:
|
||||
return 1
|
||||
return fib(n - 1) + fib(n - 2)
|
||||
```
|
||||
|
||||
我们将 `fib` 中的代码变成用 `cpdef` 定义的函数,并添加了两个类型注释:它接受一个整数并返回一个整数。
|
||||
|
||||
这下变得快*多*了,大约只用了 0.05 秒。它快到让我开始怀疑测量结果中混入了噪声:在此之前,这些噪声被淹没在信号之中。
|
||||
|
||||
下次当你的 Python 代码花费太多 CPU 时间、让风扇狂转时,为何不看看 Cython 能否解决问题呢?
|
||||
|
||||
在本系列的下一篇文章中,我们将看一下 Black,一个自动纠正代码格式错误的项目。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/python-cython
|
||||
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez/users/moshez/users/foundjem/users/jugmac00
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code")
|
||||
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
|
||||
[3]: https://pypi.org/
|
||||
[4]: https://pypi.org/project/Cython/
|
@ -1,18 +1,18 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10864-1.html)
|
||||
[#]: subject: (Format Python however you like with Black)
|
||||
[#]: via: (https://opensource.com/article/19/5/python-black)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez/users/moshez)
|
||||
|
||||
使用 Black 随意格式化 Python
|
||||
使用 Black 自由格式化 Python
|
||||
======
|
||||
|
||||
> 在我们覆盖 7 个 PyPI 库的系列文章中了解解决 Python 问题的更多信息。
|
||||
|
||||
![OpenStack source code \(Python\) in VIM][1]
|
||||
![OpenStack source code \(Python\) in VIM](https://img.linux.net.cn/data/attachment/album/201905/16/220249ethkikh5h1uib5iy.jpg)
|
||||
|
||||
Python 是当今使用最多的[流行编程语言][2]之一,因为:它是开源的,它有广泛的用途(例如 Web 编程、业务应用、游戏、科学编程等等),它有一个充满活力和专注的社区支持它。这个社区可以让我们在 [Python Package Index][3](PyPI)中有如此庞大、多样化的软件包,用以扩展和改进 Python 并解决不可避免的问题。
|
||||
|
||||
@ -83,10 +83,10 @@ $ echo $?
|
||||
|
||||
via: https://opensource.com/article/19/5/python-black
|
||||
|
||||
作者:[Moshe Zadka ][a]
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
197
published/20190505 How To Create SSH Alias In Linux.md
Normal file
@ -0,0 +1,197 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10851-1.html)
|
||||
[#]: subject: (How To Create SSH Alias In Linux)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
如何在 Linux 中创建 SSH 别名
|
||||
======
|
||||
|
||||
![How To Create SSH Alias In Linux](https://img.linux.net.cn/data/attachment/album/201905/13/222910h2uwy06um3byr68r.jpg)
|
||||
|
||||
如果你经常通过 SSH 访问许多不同的远程系统,这个技巧将为你节省一些时间。你可以通过 SSH 为频繁访问的系统创建 SSH 别名,这样你就不必记住所有不同的用户名、主机名、SSH 端口号和 IP 地址等。此外,它避免了在 SSH 到 Linux 服务器时重复输入相同的用户名、主机名、IP 地址、端口号。
|
||||
|
||||
### 在 Linux 中创建 SSH 别名
|
||||
|
||||
在我知道这个技巧之前,我通常使用以下任意一种方式通过 SSH 连接到远程系统。
|
||||
|
||||
使用 IP 地址:
|
||||
|
||||
```
|
||||
$ ssh 192.168.225.22
|
||||
```
|
||||
|
||||
或使用端口号、用户名和 IP 地址:
|
||||
|
||||
```
|
||||
$ ssh -p 22 sk@192.168.225.22
|
||||
```
|
||||
|
||||
或使用端口号、用户名和主机名:
|
||||
|
||||
```
|
||||
$ ssh -p 22 sk@server.example.com
|
||||
```
|
||||
|
||||
这里
|
||||
|
||||
* `22` 是端口号,
|
||||
* `sk` 是远程系统的用户名,
|
||||
* `192.168.225.22` 是我远程系统的 IP,
|
||||
* `server.example.com` 是远程系统的主机名。
|
||||
|
||||
我相信大多数 Linux 新手和(或)管理员都会以这种方式通过 SSH 连接到远程系统。但是,如果你要通过 SSH 连接到多个不同的系统,要记住所有的主机名或 IP 地址以及用户名是很困难的,除非你把它们写在纸上或者保存在文本文件中。别担心!这可以通过为 SSH 连接创建别名(或快捷方式)来轻松解决。
|
||||
|
||||
我们可以用两种方法为 SSH 命令创建别名。
|
||||
|
||||
#### 方法 1 – 使用 SSH 配置文件
|
||||
|
||||
这是我创建别名的首选方法。
|
||||
|
||||
我们可以使用 SSH 默认配置文件来创建 SSH 别名。为此,编辑 `~/.ssh/config` 文件(如果此文件不存在,只需创建一个):
|
||||
|
||||
```
|
||||
$ vi ~/.ssh/config
|
||||
```
|
||||
|
||||
添加所有远程主机的详细信息,如下所示:
|
||||
|
||||
```
|
||||
Host webserver
|
||||
HostName 192.168.225.22
|
||||
User sk
|
||||
|
||||
Host dns
|
||||
HostName server.example.com
|
||||
User root
|
||||
|
||||
Host dhcp
|
||||
HostName 192.168.225.25
|
||||
User ostechnix
|
||||
Port 2233
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
*使用 SSH 配置文件在 Linux 中创建 SSH 别名*
|
||||
|
||||
将 `Host`、`Hostname`、`User` 和 `Port` 配置的值替换为你自己的值。添加所有远程主机的详细信息后,保存并退出该文件。
|
||||
|
||||
现在你可以使用以下命令通过 SSH 进入系统:
|
||||
|
||||
```
|
||||
$ ssh webserver
|
||||
$ ssh dns
|
||||
$ ssh dhcp
|
||||
```
|
||||
|
||||
就是这么简单!
|
||||
|
||||
看看下面的截图。
|
||||
|
||||
![][3]
|
||||
|
||||
*使用 SSH 别名访问远程系统*
|
||||
|
||||
看到了吗?我只使用别名(例如 `webserver`)来访问 IP 地址为 `192.168.225.22` 的远程系统。
|
||||
|
||||
请注意,这只适用于当前用户。如果要为所有用户(系统范围内)提供别名,请在 `/etc/ssh/ssh_config` 文件中添加以上行。
|
||||
|
||||
你还可以在 SSH 配置文件中添加许多其他内容。例如,如果你[已配置基于 SSH 密钥的身份验证][4],可以像下面这样指明 SSH 密钥文件的位置:
|
||||
|
||||
```
|
||||
Host ubuntu
|
||||
HostName 192.168.225.50
|
||||
User senthil
|
||||
IdentityFile ~/.ssh/id_rsa_remotesystem
|
||||
```
|
||||
|
||||
确保已使用你自己的值替换主机名、用户名和 SSH 密钥文件路径。
|
||||
|
||||
现在使用以下命令连接到远程服务器:
|
||||
|
||||
```
|
||||
$ ssh ubuntu
|
||||
```
|
||||
|
||||
这样,你可以添加希望通过 SSH 访问的任意多台远程主机,并使用别名快速访问它们。
|
||||
|
||||
#### 方法 2 – 使用 Bash 别名
|
||||
|
||||
这是创建 SSH 别名的一种应急变通的方法,可以加快通信的速度。你可以使用 [alias 命令][5]使这项任务更容易。
|
||||
|
||||
打开 `~/.bashrc` 或者 `~/.bash_profile` 文件,在其中加入如下别名定义:
|
||||
|
||||
```
|
||||
alias webserver='ssh sk@server.example.com'
|
||||
alias dns='ssh sk@server.example.com'
|
||||
alias dhcp='ssh sk@server.example.com -p 2233'
|
||||
alias ubuntu='ssh sk@server.example.com -i ~/.ssh/id_rsa_remotesystem'
|
||||
```
|
||||
|
||||
再次确保你已使用自己的值替换主机、主机名、端口号和 IP 地址。保存文件并退出。
|
||||
|
||||
然后,使用命令应用更改:
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
或者
|
||||
|
||||
```
|
||||
$ source ~/.bash_profile
|
||||
```
|
||||
|
||||
在此方法中,你甚至不需要使用 `ssh 别名` 这样的命令,只需像下面这样直接输入别名即可。
|
||||
|
||||
```
|
||||
$ webserver
|
||||
$ dns
|
||||
$ dhcp
|
||||
$ ubuntu
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
这两种方法非常简单,但对于经常通过 SSH 连接到多个不同系统的人来说非常有用,而且非常方便。使用适合你的上述任何一种方法,通过 SSH 快速访问远程 Linux 系统。
|
||||
|
||||
建议阅读:
|
||||
|
||||
* [允许或拒绝 SSH 访问 Linux 中的特定用户或组][7]
|
||||
* [如何在 Linux 上 SSH 到特定目录][8]
|
||||
* [如何在 Linux 中断开 SSH 会话][9]
|
||||
* [4 种方式在退出 SSH 会话后保持命令运行][10]
|
||||
* [SSLH – 共享相同端口的 HTTPS 和 SSH][11]
|
||||
|
||||
目前这就是全部了,希望它对你有帮助。更多好东西要来了,敬请关注!
|
||||
|
||||
干杯!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/ssh-alias-720x340.png
|
||||
[2]: http://www.ostechnix.com/wp-content/uploads/2019/04/Create-SSH-Alias-In-Linux.png
|
||||
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias.png
|
||||
[4]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
|
||||
[5]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias-1.png
|
||||
[7]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
|
||||
[8]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
|
||||
[9]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
|
||||
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
|
||||
[11]: https://www.ostechnix.com/sslh-share-port-https-ssh/
|
@ -0,0 +1,263 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10860-1.html)
|
||||
[#]: subject: (21 Best Kali Linux Tools for Hacking and Penetration Testing)
|
||||
[#]: via: (https://itsfoss.com/best-kali-linux-tools/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
用于黑客渗透测试的 21 个最佳 Kali Linux 工具
|
||||
======
|
||||
|
||||
> 这里是最好的 Kali Linux 工具列表,它们可以让你评估 Web 服务器的安全性,并帮助你执行黑客渗透测试。
|
||||
|
||||
如果你读过 [Kali Linux 点评][1],你就知道为什么它被认为是[最好的黑客渗透测试的 Linux 发行版][2]之一,而且名副其实。它带有许多工具,使你可以更轻松地测试、破解以及进行与数字取证相关的任何其他工作。
|
||||
|
||||
它是<ruby>道德黑客<rt>ethical hacker</rt></ruby>最推荐的 Linux 发行版之一。即使你不是黑客而是网站管理员 —— 你仍然可以利用其中某些工具轻松地扫描你的网络服务器或网页。
|
||||
|
||||
在任何一种情况下,无论你的目的是什么 —— 让我们来看看你应该使用的一些最好的 Kali Linux 工具。
|
||||
|
||||
*注意:这里不是所提及的所有工具都是开源的。*
|
||||
|
||||
### 用于黑客渗透测试的 Kali Linux 工具
|
||||
|
||||
![Kali Linux](https://img.linux.net.cn/data/attachment/album/201905/15/234125c22rx77mmz9m37zo.jpg)
|
||||
|
||||
Kali Linux 预装了几种类型的工具。如果你发现有的工具没有安装,只需下载并进行设置即可。这很简单。
|
||||
|
||||
#### 1、Nmap
|
||||
|
||||
![Kali Linux Nmap][4]
|
||||
|
||||
[Nmap][5] (即 “<ruby>网络映射器<rt>Network Mapper</rt></ruby>”)是 Kali Linux 上最受欢迎的信息收集工具之一。换句话说,它可以获取有关主机的信息:其 IP 地址、操作系统检测以及网络安全的详细信息(如开放的端口数量及其含义)。
|
||||
|
||||
它还提供防火墙规避和欺骗功能。
|
||||
|
||||
#### 2、Lynis
|
||||
|
||||
![Lynis Kali Linux Tool][6]
|
||||
|
||||
[Lynis][7] 是安全审计、合规性测试和系统强化的强大工具。当然,你也可以将其用于漏洞检测和渗透测试。
|
||||
|
||||
它会根据检测到的组件来扫描系统。例如,如果它检测到 Apache,它就会针对 Apache 运行相关的测试,以获取更精确的信息。
|
||||
|
||||
#### 3、WPScan
|
||||
|
||||
![][8]
|
||||
|
||||
WordPress 是[最好的开源 CMS][9]之一,而这个工具是最好的免费 WordPress 安全审计工具。它是免费的,但不是开源的。
|
||||
|
||||
如果你想知道一个 WordPress 博客是否在某种程度上容易受到攻击,[WPScan][10] 就是你的朋友。
|
||||
|
||||
此外,它还为你提供了所用的插件的详细信息。当然,一个安全性很好的博客可能不会暴露给你很多细节,但它仍然是 WordPress 安全扫描找到潜在漏洞的最佳工具。
|
||||
|
||||
#### 4、Aircrack-ng
|
||||
|
||||
![][11]
|
||||
|
||||
[Aircrack-ng][12] 是评估 WiFi 网络安全性的工具集合。它不仅限于监控和获取信息 —— 还包括攻破网络(WEP、WPA 1 和 WPA 2)的能力。
|
||||
|
||||
如果你忘记了自己的 WiFi 网络的密码,可以尝试使用它来重新获得访问权限。它还包括各种无线攻击能力,你可以使用它们来定位和监控 WiFi 网络以增强其安全性。
|
||||
|
||||
#### 5、Hydra
|
||||
|
||||
![][13]
|
||||
|
||||
如果你正在寻找一个有趣的工具来破解登录密码,[Hydra][14] 将是 Kali Linux 预装的最好的工具之一。
|
||||
|
||||
它可能不再被积极维护,但它现在放在 [GitHub][15] 上,所以你也可以为它做贡献。
|
||||
|
||||
#### 6、Wireshark
|
||||
|
||||
![][17]
|
||||
|
||||
[Wireshark][18] 是 Kali Linux 上最受欢迎的网络协议分析器。它也可以归类为用于网络嗅探的最佳 Kali Linux 工具之一。
|
||||
|
||||
它正在积极维护,所以我肯定会建议你试试它。
|
||||
|
||||
#### 7、Metasploit Framework
|
||||
|
||||
![][19]
|
||||
|
||||
[Metasploit Framework][20](MSF)是最常用的渗透测试框架。它提供两个版本:一个开源版,另外一个是其专业版。使用此工具,你可以验证漏洞、测试已知漏洞并执行完整的安全评估。
|
||||
|
||||
当然,免费版本不具备所有功能,所以如果你在意它们的区别,你应该在[这里][21]比较一下版本。
|
||||
|
||||
#### 8、Skipfish
|
||||
|
||||
![][22]
|
||||
|
||||
与 WPScan 类似,但它不仅仅专注于 WordPress。[Skipfish][23] 是一个 Web 应用扫描程序,可以为你提供几乎所有类型的 Web 应用程序的洞察信息。它快速且易于使用。此外,它的递归爬取方法使它更好用。
|
||||
|
||||
Skipfish 生成的报告可以用于专业的 Web 应用程序安全评估。
|
||||
|
||||
#### 9、Maltego
|
||||
|
||||
![][24]
|
||||
|
||||
[Maltego][25] 是一种令人印象深刻的数据挖掘工具,用于在线分析信息并连接信息点(如果有的话)。 根据这些信息,它创建了一个有向图,以帮助分析这些数据之间的链接。
|
||||
|
||||
请注意,这不是一个开源工具。
|
||||
|
||||
它已预装,但你必须注册才能选择要使用的版本。如果个人使用,社区版就足够了(只需要注册一个帐户),但如果想用于商业用途,则需要订阅 classic 或 XL 版本。
|
||||
|
||||
#### 10、Nessus
|
||||
|
||||
![Nessus][26]
|
||||
|
||||
如果你的计算机连接到了网络,Nessus 可以帮助你找到潜在攻击者可能利用的漏洞。当然,如果你是多台连接到网络的计算机的管理员,则可以使用它并保护这些计算机。
|
||||
|
||||
但是,它不再是免费的工具了,你可以从[官方网站][27]免费试用 7 天。
|
||||
|
||||
#### 11、Burp Suite Scanner
|
||||
|
||||
![][28]
|
||||
|
||||
[Burp Suite Scanner][29] 是一款出色的网络安全分析工具。与其它 Web 应用程序安全扫描程序不同,Burp 提供了 GUI 和一些高级工具。
|
||||
|
||||
社区版仅将功能限制为一些基本的手动工具。对于专业人士,你必须考虑升级。与前面的工具类似,这也不是开源的。
|
||||
|
||||
我使用过免费版本,但是如果你想了解更多细节,你应该查看他们[官方网站][29]上提供的功能。
|
||||
|
||||
#### 12、BeEF
|
||||
|
||||
![][30]
|
||||
|
||||
BeEF(<ruby>浏览器利用框架<rt>Browser Exploitation Framework</rt></ruby>)是另一个令人印象深刻的工具。它专为渗透测试人员量身定制,用于评估 Web 浏览器的安全性。
|
||||
|
||||
这是最好的 Kali Linux 工具之一,因为很多用户在谈论 Web 安全时希望了解并修复客户端的问题。
|
||||
|
||||
#### 13、Apktool
|
||||
|
||||
![][31]
|
||||
|
||||
[Apktool][32] 确实是 Kali Linux 上用于逆向工程 Android 应用程序的流行工具之一。当然,你应该正确利用它 —— 出于教育目的。
|
||||
|
||||
使用此工具,你可以自己尝试一下,并让原开发人员了解你的想法。你认为你会用它做什么?
|
||||
|
||||
#### 14、sqlmap
|
||||
|
||||
![][34]
|
||||
|
||||
如果你正在寻找一个开源渗透测试工具 —— [sqlmap][35] 是最好的之一。它可以自动化利用 SQL 注入漏洞的过程,并帮助你接管数据库服务器。
|
||||
|
||||
#### 15、John the Ripper
|
||||
|
||||
![John The Ripper][36]
|
||||
|
||||
[John the Ripper][37] 是 Kali Linux 上流行的密码破解工具。它也是自由开源的。但是,如果你对[社区增强版][37]不感兴趣,也可以选择用于商业用途的[专业版][38]。
|
||||
|
||||
#### 16、Snort
|
||||
|
||||
想要实时流量分析和数据包记录功能吗?[Snort][39] 可以鼎力支持你。即使它是一个开源的入侵防御系统,也有很多东西可以提供。
|
||||
|
||||
如果你还没有安装它,[官方网站][40]提及了安装过程。
|
||||
|
||||
#### 17、Autopsy Forensic Browser
|
||||
|
||||
![][41]
|
||||
|
||||
[Autopsy][42] 是一个数字取证工具,用于调查计算机上发生的事情。你甚至可以使用它从 SD 卡恢复图像。它也被执法人员使用。你可以阅读[文档][43]来探索可以用它做什么。
|
||||
|
||||
你还应该查看他们的 [GitHub 页面][44]。
|
||||
|
||||
#### 18、King Phisher
|
||||
|
||||
![King Phisher][45]
|
||||
|
||||
网络钓鱼攻击现在非常普遍。[King Phisher 工具][46]可以通过模拟真实的网络钓鱼攻击来帮助测试和提升用户意识。出于显而易见的原因,在模拟一个组织的服务器内容前,你需要获得许可。
|
||||
|
||||
#### 19、Nikto
|
||||
|
||||
![Nikto][47]
|
||||
|
||||
[Nikto][48] 是一款功能强大的 Web 服务器扫描程序 —— 这使其成为最好的 Kali Linux 工具之一。 它会检查存在潜在危险的文件/程序、过时的服务器版本等等。
|
||||
|
||||
#### 20、Yersinia
|
||||
|
||||
![][49]
|
||||
|
||||
[Yersinia][50] 是一个有趣的框架,用于在网络上执行第 2 层攻击(第 2 层是指 [OSI 模型][51]的数据链路层)。当然,如果你希望你的网络安全,则必须考虑所有七个层。但是,此工具侧重于第 2 层和各种网络协议,包括 STP、CDP,DTP 等。
|
||||
|
||||
#### 21、Social Engineering Toolkit (SET)
|
||||
|
||||
![][52]
|
||||
|
||||
如果你正在进行相当严格的渗透测试,那么这应该是你应该检查的最佳工具之一。社交工程是一个大问题,使用 [SET][53] 工具,你可以帮助防止此类攻击。
|
||||
|
||||
### 总结
|
||||
|
||||
实际上 Kali Linux 捆绑了很多工具。请参考 Kali Linux 的[官方工具列表页面][54]来查找所有内容。
|
||||
|
||||
你会发现其中一些是完全自由开源的,而有些则是专有解决方案(但是免费)。但是,出于商业目的,你应该始终选择高级版本。
|
||||
|
||||
我们可能错过了你最喜欢的某个 Kali Linux 工具。请在下面的评论部分告诉我们。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/best-kali-linux-tools/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/kali-linux-review/
|
||||
[2]: https://itsfoss.com/linux-hacking-penetration-testing/
|
||||
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/kali-linux-tools.jpg?resize=800%2C518&ssl=1
|
||||
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/kali-linux-nmap.jpg?resize=800%2C559&ssl=1
|
||||
[5]: https://nmap.org/
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/lynis-kali-linux-tool.jpg?resize=800%2C525&ssl=1
|
||||
[7]: https://cisofy.com/lynis/
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/wpscan-kali-linux.jpg?resize=800%2C545&ssl=1
|
||||
[9]: https://itsfoss.com/open-source-cms/
|
||||
[10]: https://wpscan.org/
|
||||
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/aircrack-ng-kali-linux-tool.jpg?resize=800%2C514&ssl=1
|
||||
[12]: https://www.aircrack-ng.org/
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/hydra-kali-linux.jpg?resize=800%2C529&ssl=1
|
||||
[14]: https://github.com/vanhauser-thc/thc-hydra
|
||||
[15]: https://github.com/vanhauser-thc/THC-Archive
|
||||
[16]: https://itsfoss.com/new-linux-distros-2013/
|
||||
[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/wireshark-network-analyzer.jpg?resize=800%2C556&ssl=1
|
||||
[18]: https://www.wireshark.org/
|
||||
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/metasploit-framework.jpg?resize=800%2C561&ssl=1
|
||||
[20]: https://github.com/rapid7/metasploit-framework
|
||||
[21]: https://www.rapid7.com/products/metasploit/download/editions/
|
||||
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/skipfish-kali-linux-tool.jpg?resize=800%2C515&ssl=1
|
||||
[23]: https://gitlab.com/kalilinux/packages/skipfish/
|
||||
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/maltego.jpg?resize=800%2C403&ssl=1
|
||||
[25]: https://www.paterva.com/web7/buy/maltego-clients.php
|
||||
[26]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/nessus.jpg?resize=800%2C456&ssl=1
|
||||
[27]: https://www.tenable.com/try
|
||||
[28]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/burp-suite-community-edition-800x582.jpg?resize=800%2C582&ssl=1
|
||||
[29]: https://portswigger.net/burp
|
||||
[30]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/beef-framework.jpg?resize=800%2C339&ssl=1
|
||||
[31]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/apktool.jpg?resize=800%2C504&ssl=1
|
||||
[32]: https://github.com/iBotPeaches/Apktool
|
||||
[33]: https://itsfoss.com/format-factory-alternative-linux/
|
||||
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/sqlmap.jpg?resize=800%2C528&ssl=1
|
||||
[35]: http://sqlmap.org/
|
||||
[36]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/john-the-ripper.jpg?ssl=1
|
||||
[37]: https://github.com/magnumripper/JohnTheRipper
|
||||
[38]: https://www.openwall.com/john/pro/
|
||||
[39]: https://www.snort.org/
|
||||
[40]: https://www.snort.org/#get-started
|
||||
[41]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/autopsy-forensic-browser.jpg?resize=800%2C319&ssl=1
|
||||
[42]: https://www.sleuthkit.org/autopsy/
|
||||
[43]: https://www.sleuthkit.org/autopsy/docs.php
|
||||
[44]: https://github.com/sleuthkit/autopsy
|
||||
[45]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/king-phisher.jpg?resize=800%2C626&ssl=1
|
||||
[46]: https://github.com/securestate/king-phisher
|
||||
[47]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/nikto.jpg?resize=800%2C511&ssl=1
|
||||
[48]: https://gitlab.com/kalilinux/packages/nikto/
|
||||
[49]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/yersinia.jpg?resize=800%2C516&ssl=1
|
||||
[50]: https://github.com/tomac/yersinia
|
||||
[51]: https://en.wikipedia.org/wiki/OSI_model
|
||||
[52]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/social-engineering-toolkit.jpg?resize=800%2C511&ssl=1
|
||||
[53]: https://www.trustedsec.com/social-engineer-toolkit-set/
|
||||
[54]: https://tools.kali.org/tools-listing
|
@ -0,0 +1,109 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (warmfrog)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10855-1.html)
|
||||
[#]: subject: (How to Use 7Zip in Ubuntu and Other Linux [Quick Tip])
|
||||
[#]: via: (https://itsfoss.com/use-7zip-ubuntu-linux/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
如何在 Ubuntu 和其他 Linux 发行版上使用 7Zip
|
||||
==============================================
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/201905/14/154515xqy7nbq6eyjzu7qj.jpg)
|
||||
|
||||
> 不能在 Linux 中提取 .7z 文件?学习如何在 Ubuntu 和其他 Linux 发行版中安装和使用 7zip。
|
||||
|
||||
[7Zip][1](更适当的写法是 7-Zip)是一种在 Windows 用户中广泛流行的归档格式。一个 7Zip 归档文件通常以 .7z 扩展名结尾。除了少量用于解压 rar 文件的代码之外,它大部分是开源的。
|
||||
|
||||
默认大多数 Linux 发行版不支持 7Zip。如果你试图提取它,你会看见这个错误:
|
||||
|
||||
> 不能打开这种文件类型
|
||||
|
||||
> 没有已安装的适用 7-zip 归档文件的命令。你想搜索一个命令来打开这个文件吗?
|
||||
|
||||
![][2]
|
||||
|
||||
不要担心,你可以轻松的在 Ubuntu 和其他 Linux 发行版中安装 7zip。
|
||||
|
||||
你会注意到的一个问题是:如果你试图用 [apt-get install 命令][3] 安装 7zip,会发现没有名为 7zip 的安装候选。这是因为在 Linux 中 7Zip 软件包的名字是 [p7zip][4],以字母 “p” 开头,而不是预期的数字 “7”。
|
||||
|
||||
让我们看一下如何在 Ubuntu 和其他 Linux 发行版中安装 7zip。
|
||||
|
||||
### 在 Ubuntu Linux 中安装 7Zip
|
||||
|
||||
你需要做的第一件事是安装 p7zip 包。你会在 Ubuntu 中发现 3 个包:p7zip、p7zip-full 和 p7zip-rar。
|
||||
|
||||
p7zip 和 p7zip-full 的不同在于:p7zip 是一个轻量级的版本,仅对 .7z 文件提供支持,而 p7zip-full 支持更多的 7z 压缩算法(例如用于音频文件)。
|
||||
|
||||
p7zip-rar 包则在 7z 之外提供了对 [RAR 文件][6] 的支持。
|
||||
|
||||
在大多数情况下安装 p7zip-full 就足够了,但是你可能想安装 p7zip-rar 来支持 rar 文件的解压。
|
||||
|
||||
p7zip 软件包位于 [Ubuntu 的 universe 仓库][7] 中,因此请确保已使用以下命令启用了该仓库:
|
||||
|
||||
```
|
||||
sudo add-apt-repository universe
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
然后,在 Ubuntu 和基于 Debian 的发行版中使用以下命令安装这些软件包:
|
||||
|
||||
```
|
||||
sudo apt install p7zip-full p7zip-rar
|
||||
```
|
||||
|
||||
很好!现在你的系统中就有了对 7zip 归档文件的支持。
|
||||
|
||||
### 在 Linux 中提取 7Zip 归档文件
|
||||
|
||||
安装了 7Zip 后,你就可以在 Linux 中通过图形用户界面或者命令行提取 7zip 文件。
|
||||
|
||||
在图形用户界面,你可以像提取其他压缩文件一样提取 .7z 文件。右击文件来提取它。
|
||||
|
||||
在终端中,你可以使用下列命令提取 .7z 归档文件:
|
||||
|
||||
```
|
||||
7z e file.7z
|
||||
```
|
||||
|
||||
### 在 Linux 中压缩文件为 7zip 归档格式
|
||||
|
||||
你可以在图形界面中将文件压缩为 7zip 归档格式。只需在文件或目录上右击,选择“压缩”,你会看到几种文件格式选项,选择 .7z 即可。
|
||||
|
||||
![7zip Archive Ubuntu][9]
|
||||
|
||||
或者,你也可以在命令行中进行压缩。可以使用如下命令:
|
||||
|
||||
```
|
||||
7z a 输出的文件名 要压缩的文件
|
||||
```
|
||||
|
||||
默认情况下,归档文件使用 .7z 扩展名。你也可以把输出文件的扩展名指定为 .zip,从而压缩为 zip 格式。
|
||||
|
||||
### 总结
|
||||
|
||||
就是这样。看,在 Linux 中使用 7zip 多简单?我希望你喜欢这个快速指南。如果你有问题或者建议,请随意在下方评论让我知道。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/use-7zip-ubuntu-linux/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[warmfrog](https://github.com/warmfrog)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.7-zip.org/
|
||||
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/07/Install_7zip_ubuntu_1.png?ssl=1
|
||||
[3]: https://itsfoss.com/apt-get-linux-guide/
|
||||
[4]: https://sourceforge.net/projects/p7zip/
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/7zip-linux.png?resize=800%2C450&ssl=1
|
||||
[6]: https://itsfoss.com/use-rar-ubuntu-linux/
|
||||
[7]: https://itsfoss.com/ubuntu-repositories/
|
||||
[8]: https://itsfoss.com/easily-share-files-linux-windows-mac-nitroshare/
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/7zip-archive-ubuntu.png?resize=800%2C239&ssl=1
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,89 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco boosts SD-WAN with multicloud-to-branch access system)
|
||||
[#]: via: (https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html#tk.rss_all)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco boosts SD-WAN with multicloud-to-branch access system
|
||||
======
|
||||
Cisco's SD-WAN Cloud onRamp for CoLocation can tie branch offices to private data centers in regional corporate headquarters via colocation facilities for shorter, faster, possibly more secure connections.
|
||||
![istock][1]
|
||||
|
||||
Cisco is looking to give traditional or legacy wide-area network users another reason to move to the [software-defined WAN world][2].
|
||||
|
||||
The company has rolled out an integrated hardware/software package called SD-WAN Cloud onRamp for CoLocation that lets customers tie distributed multicloud applications back to a local branch office or local private data center. The idea is that a cloud-to-branch link would be shorter, faster and possibly more secure that tying cloud-based applications directly all the way to the data center.
|
||||
|
||||
**More about SD-WAN**
|
||||
|
||||
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
|
||||
* [How to pick an off-site data-backup method][4]
|
||||
* [SD-Branch: What it is and why you’ll need it][5]
|
||||
* [What are the options for security SD-WAN?][6]
|
||||
|
||||
|
||||
|
||||
“With Cisco SD-WAN Cloud onRamp for CoLocation operating regionally, connections from colocation facilities to branches are set up and configured according to traffic loads (such as video vs web browsing vs email) SLAs (requirements for low latency/jitter), and Quality of Experience for optimizing cloud application performance,” wrote Anand Oswal, senior vice president of engineering, in Cisco’s Enterprise Networking Business in a [blog about the new service][7].
|
||||
|
||||
According to Oswal, each branch or private data center is equipped with a network interface that provides a secure tunnel to the regional colocation facility. In turn, the Cloud onRamp for CoLocation establishes secure tunnels to SaaS application platforms, multi-cloud platform services, and enterprise data centers, he stated.
|
||||
|
||||
Traffic is securely routed through the Cloud onRamp for CoLocation stack which includes security features such as application-aware firewalls, URL-filtering, intrusion detection/prevention, DNS-layer security, and Advanced Malware Protection (AMP) Threat Grid, as well as other network services such as load-balancing and Wide Area Application Services, Oswal wrote.
|
||||
|
||||
A typical use case for the package is an enterprise that has dozens of distributed branch offices, clustered around major cities, spread over several countries. The goal is to tie each branch to enterprise data center databases, SaaS applications, and multi-cloud services while meeting service level agreements and application quality of experience, Oswal stated.
|
||||
|
||||
“With virtualized Cisco SD-WAN running on regional colocation centers, the branch workforce has access to applications and data residing in AWS, Azure, and Google cloud platforms as well as SaaS providers such as Microsoft 365 and Salesforce—transparently and securely,” Oswal said. “Distributing SD-WAN features over a regional architecture also brings processing power closer to where data is being generated—at the cloud edge.”
|
||||
|
||||
The idea is that paths to designated SaaS applications will be monitored continuously for performance, and the application traffic will be dynamically routed to the best-performing path, without requiring human intervention, Oswal stated.
|
||||
|
||||
For a typical configuration, a region covering a target city uses a colocation IaaS provider that hosts the Cisco Cloud onRamp for CoLocation, which includes:
|
||||
|
||||
* Cisco vManage software that lets customers manage applications and provision, monitor and troubleshooting the WAN.
|
||||
* [Cisco Cloud Services Platform (CSP) 5000][8] The systems are x86 Linux Kernel-based Virtual Machine (KVM) software and hardware platforms for the data center, regional hub, and colocation Network Functions Virtualization (NFV). The platforms let enterprise IT teams or service providers deploy any Cisco or third-party network virtual service with Cisco’s [Network Services Orchestrator (NSO)][9] or any other northbound management and orchestration system.
|
||||
* The Cisco [Catalyst 9500 Series][10] aggregation switches. Based on an x86 CPU, the Catalyst 9500 Series is Cisco’s lead purpose-built fixed core and aggregation enterprise switching platform, built for security, IoT, and cloud. The switches come with a 4-core x86, 2.4-GHz CPU, 16-GB DDR4 memory, and 16-GB internal storage.
|
||||
|
||||
|
||||
|
||||
If the features of the package sound familiar, that’s because the [Cloud onRamp for CoLocation][11] package is the second generation of a similar SD-WAN package offered by Viptela which Cisco [bought in 2017][12].
|
||||
|
||||
SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.
|
||||
|
||||
It's a profoundly hot market with tons of players including [Cisco][13], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa. IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.
|
||||
|
||||
[SD-WAN][14] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.
|
||||
|
||||
"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Oswal said a Network World [article][15] earlier this year.
|
||||
|
||||
Join the Network World communities on [Facebook][16] and [LinkedIn][17] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html#tk.rss_all
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/02/istock-578801262-100750453-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html
|
||||
[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
|
||||
[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
|
||||
[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
|
||||
[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
|
||||
[7]: https://blogs.cisco.com/enterprise/cisco-sd-wan-cloud-onramp-for-colocation-multicloud
|
||||
[8]: https://www.cisco.com/c/en/us/products/collateral/switches/cloud-services-platform-5000/nb-06-csp-5k-data-sheet-cte-en.html#ProductOverview
|
||||
[9]: https://www.cisco.com/go/nso
|
||||
[10]: https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9500-series-switches/data_sheet-c78-738978.html
|
||||
[11]: https://www.networkworld.com/article/3207751/viptela-cloud-onramp-optimizes-cloud-access.html
|
||||
[12]: https://www.networkworld.com/article/3193784/cisco-grabs-up-sd-wan-player-viptela-for-610m.html?nsdr=true
|
||||
[13]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
|
||||
[14]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
|
||||
[15]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
|
||||
[16]: https://www.facebook.com/NetworkWorld/
|
||||
[17]: https://www.linkedin.com/company/network-world
|
74
sources/talk/20190507 SD-WAN is Critical for IoT.md
Normal file
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (SD-WAN is Critical for IoT)
|
||||
[#]: via: (https://www.networkworld.com/article/3393445/sd-wan-is-critical-for-iot.html#tk.rss_all)
|
||||
[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/)
|
||||
|
||||
SD-WAN is Critical for IoT
|
||||
======
|
||||
|
||||
![istock][1]
|
||||
|
||||
The Internet of Things (IoT) is everywhere and its use is growing fast. IoT is used by local governments to build smart cities. It’s used to build smart businesses. And, consumers are benefitting as it’s built into smart homes and smart cars. Industry analyst first estimates that over 20 billion IoT devices will be connected by 2020. That’s a 2.5x increase from the more than 8 billion connected devices in 2017*.
|
||||
|
||||
Manufacturing companies have the highest IoT spend to date of industries while the health care market is experiencing the highest IoT growth. By 2020, 50 percent of IoT spending will be driven by manufacturing, transportation and logistics, and utilities.
|
||||
|
||||
IoT growth is being fueled by the promise of analytical data insights that will ultimately yield greater efficiencies and enhanced customer satisfaction. The top use cases driving IoT growth are self-optimizing production, predictive maintenance and automated inventory management.
|
||||
|
||||
From a high-level view, the IoT architecture includes sensors that collect and transmit data (i.e. temperature, speed, humidity, video feed, pressure, IR, proximity, etc.) from “things” like cars, trucks, machines, etc. that are connected over the internet. Data collected is then analyzed, translating raw data into actionable information. Businesses can then act on this information. And at more advanced levels, machine learning and AI algorithms learn and adapt to this information and automatically respond at a system level.
|
||||
|
||||
IDC estimates that by 2025, over 75 billion IoT devices* will be connected. By that time, nearly a quarter of the world’s projected 163 zettabytes* (163 trillion gigabytes) of data will have been created in real-time, and the vast majority of that data will have been created by IoT devices. This massive amount of data will drive an exponential increase in traffic on the network infrastructure requiring massive scalability. Also, this increasing amount of data will require tremendous processing power to mine it and transform it into actionable intelligence. In parallel, security risks will continue to increase as there will be many more potential entry points onto the network. Lastly, management of the overall infrastructure will require better orchestration of policies as well as the means to streamline on-going operations.
|
||||
|
||||
### **How does SD-WAN enable IoT business initiatives?**
|
||||
|
||||
There are three key elements that an [SD-WAN][2] platform must include:
|
||||
|
||||
1. **Visibility** : Real-time visibility into the network is key. It takes the guesswork out of rapid problem resolution, enabling organizations to run more efficiently by accelerating troubleshooting and applying preventive measures. Furthermore, a CIO is able to pull metrics and see bandwidth consumed by any IoT application.
|
||||
2. **Security** : IoT traffic must be isolated from other application traffic. IT must prevent – or at least reduce – the possible attack surface that may be exposed to IoT device traffic. Also, the network must continue delivering other application traffic in the event of a melt down on a WAN link caused by a DDoS attack.
|
||||
3. **Agility** : With the increased number of connected devices, applications and users, a comprehensive, intelligent and centralized orchestration approach that continuously adapts to deliver the best experience to the business and users is critical to success.
|
||||
|
||||
|
||||
|
||||
### Key Silver Peak EdgeConnect SD-WAN capabilities for IoT
|
||||
|
||||
1\. Silver Peak has an [embedded real-time visibility engine][3] allowing IT to gain complete observability into the performance attributes of the network and applications in real-time. The [EdgeConnect][4] SD-WAN appliances deployed in branch offices send information to the centralized [Unity Orchestrator™][5]. Orchestrator collects the data and presents it in a comprehensive management dashboard via customizable widgets. These widgets provide a wealth of operational data including a health heatmap for every SD-WAN appliance deployed, flow counts, active tunnels, logical topologies, top talkers, alarms, bandwidth consumed by each application and location, latency and jitter and much more. Furthermore, the platform maintains weeks’ worth of data with context allowing IT to playback and see what has transpired at a specific time and location, similar to a DVR.
|
||||
|
||||
![Click to read Solution Brief][6]
|
||||
|
||||
2\. The second set of key capabilities center around security and end-to-end zone-based segmentation. An IoT traffic zone may be created on the LAN or branch side. IoT traffic is then mapped all the way across the WAN to the data center or cloud where the data will be processed. Zone-based segmentation is accomplished in a simplified and automated way within the Orchestrator GUI. In cases where further traffic inspection is required, IT can simply service chain to another security service. There are several key benefits realized by this approach. IT can easily and quickly apply segmentation policies; segmentation mitigates the attack surface; and IT can save on additional security investments.
|
||||
|
||||
![***Click to read Solution Brief ***][7]
|
||||
|
||||
3\. EdgeConnect employs machine learning at the global level where with internet sensors and third-party sensors feed into the cloud portal software. The software tracks the geolocation of all IP addresses and IP reputation, distributing signals down to the Unity Orchestrator running in each individual customer’s enterprise. In turn, it is speaking to the edge devices sitting in the branch offices. There, distributed learning is done by looking at the first packet, making an inference based on the first packet what the application is. So, if seeing that 100 times now, every time packets come from that particular IP address and turns out to be an IoT, we can make an inference that IP belongs to IoT application. In parallel, we’re using a mix of traditional techniques to validate the identification of the application. All this combined other multi-level intelligence enables simple and automated policy orchestration across a large number of devices and applications.
|
||||
|
||||
![***Click to read Solution Brief ***][8]
|
||||
|
||||
SD-WAN plays a foundational role as businesses continue to embrace IoT, but choosing the right SD-WAN platform is even more critical to ensuring businesses are ultimately able to fully optimize their operations.
|
||||
|
||||
* Source: [IDC][9]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3393445/sd-wan-is-critical-for-iot.html#tk.rss_all
|
||||
|
||||
作者:[Rami Rammaha][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Rami-Rammaha/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/05/istock-1019172426-100795551-large.jpg
|
||||
[2]: https://www.silver-peak.com/sd-wan/sd-wan-explained
|
||||
[3]: https://www.silver-peak.com/resource-center/simplify-sd-wan-operations-greater-visibility
|
||||
[4]: https://www.silver-peak.com/products/unity-edge-connect
|
||||
[5]: https://www.silver-peak.com/products/unity-orchestrator
|
||||
[6]: https://images.idgesg.net/images/article/2019/05/1_simplify-100795554-large.jpg
|
||||
[7]: https://images.idgesg.net/images/article/2019/05/2_centralize-100795555-large.jpg
|
||||
[8]: https://images.idgesg.net/images/article/2019/05/3_increase-100795558-large.jpg
|
||||
[9]: https://www.information-age.com/data-forecast-grow-10-fold-2025-123465538/
|
@ -0,0 +1,56 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Server shipments to pick up in the second half of 2019)
|
||||
[#]: via: (https://www.networkworld.com/article/3393167/server-shipments-to-pick-up-in-the-second-half-of-2019.html#tk.rss_all)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
Server shipments to pick up in the second half of 2019
|
||||
======
|
||||
Server sales slowed in anticipation of the new Intel Xeon processors, but they are expected to start up again before the end of the year.
|
||||
![Thinkstock][1]
|
||||
|
||||
Global server shipments are not expected to return to growth momentum until the third quarter or even the fourth quarter of 2019, according to Taiwan-based tech news site DigiTimes, which cited unnamed server supply chain sources. The one bright spot remains cloud providers like Amazon, Google, and Facebook, which continue their buying binge.
|
||||
|
||||
Normally I’d be reluctant to cite such a questionable source, but given most of the OEMs and ODMs are based in Taiwan and DigiTimes (the article is behind a paywall so I cannot link) has shown it has connections to them, I’m inclined to believe them.
|
||||
|
||||
Quanta Computer chairman Barry Lam told the publication that Quanta's shipments of cloud servers have risen steadily, compared to sharp declines in shipments of enterprise servers. Lam continued that enterprise servers command only 1-2% of the firm's total server shipments.
|
||||
|
||||
**[ Also read:[Gartner: IT spending to drop due to falling equipment prices][2] ]**
|
||||
|
||||
[Server shipments began to slow down in the first quarter][3] thanks in part to the impending arrival of second-generation Xeon Scalable processors from Intel. And since it takes a while to get parts and qualify them, this quarter won’t be much better.
|
||||
|
||||
In its latest quarterly earnings, Intel's data center group (DCG) said sales declined 6% year over year, the first decline of its kind since the first quarter of 2012 and reversing an average growth of over 20% in the past.
|
||||
|
||||
[The Osbourne Effect][4] wasn’t the sole reason. An economic slowdown in China and the trade war, which will add significant tariffs to Chinese-made products, are also hampering sales.
|
||||
|
||||
DigiTimes says Inventec, Intel's largest server motherboard supplier, expects shipments of enterprise server motherboards to further lose steams for the rest of the year, while sales of data center servers are expected to grow 10-15% on year in 2019.
|
||||
|
||||
**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][5] ]**
|
||||
|
||||
It went on to say server shipments may concentrate in the second half or even the fourth quarter of the year, while cloud-based data center servers for the cloud giants will remain positive as demand for edge computing, new artificial intelligence (AI) applications, and the proliferation of 5G applications begin in 2020.
|
||||
|
||||
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3393167/server-shipments-to-pick-up-in-the-second-half-of-2019.html#tk.rss_all
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.techhive.com/images/article/2017/04/2_data_center_servers-100718306-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3391062/it-spending-to-drop-due-to-falling-equipment-prices-gartner-predicts.html
|
||||
[3]: https://www.networkworld.com/article/3332144/server-sales-projected-to-slow-while-memory-prices-drop.html
|
||||
[4]: https://en.wikipedia.org/wiki/Osborne_effect
|
||||
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
|
||||
[6]: https://www.facebook.com/NetworkWorld/
|
||||
[7]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,66 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Some IT pros say they have too much data)
|
||||
[#]: via: (https://www.networkworld.com/article/3393205/some-it-pros-say-they-have-too-much-data.html#tk.rss_all)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
Some IT pros say they have too much data
|
||||
======
|
||||
IT professionals have too many data sources to count, and they spend a huge amount of time wrestling that data into usable condition, a survey from Ivanti finds.
|
||||
![Getty Images][1]
|
||||
|
||||
A new survey has found that a growing number of IT professionals have too many data sources to even count, and they are spending more and more time just wrestling that data into usable condition.
|
||||
|
||||
Ivanti, an IT asset management firm, [surveyed 400 IT professionals on their data situation][2] and found IT faces numerous challenges when it comes to siloes, data, and implementation. The key takeaway is data overload is starting to overwhelm IT managers and data lakes are turning into data oceans.
|
||||
|
||||
**[ Read also:[Understanding mass data fragmentation][3] | Get daily insights [Sign up for Network World newsletters][4] ]**
|
||||
|
||||
Among the findings from Ivanti's survey:
|
||||
|
||||
* Fifteen percent of IT professionals say they have too many data sources to count, and 37% of professionals said they have about 11-25 different sources for data.
|
||||
* More than half of IT professionals (51%) report they have to work with their data for days, weeks or more before it's actionable.
|
||||
* Only 10% of respondents said the data they receive is actionable within minutes.
|
||||
* One in three respondents said they have the resources to act on their data, but more than half (52%) said they only sometimes have the resources.
|
||||
|
||||
|
||||
|
||||
“It’s clear from the results of this survey that IT professionals are in need of a more unified approach when working across organizational departments and resulting silos,” said Duane Newman, vice president of product management at Ivanti, in a statement.
|
||||
|
||||
### The problem with siloed data
|
||||
|
||||
The survey found siloed data represents a number of problems and challenges. Three key priorities suffer the most: automation (46%), user productivity and troubleshooting (42%), and customer experience (41%). The survey also found onboarding/offboarding suffers the least (20%) due to siloes, so apparently HR and IT are getting things right.
|
||||
|
||||
In terms of what they want from real-time insight, about 70% of IT professionals said their security status was the top priority over other issues. Respondents were least interested in real-time insights around warranty data.
|
||||
|
||||
### Data lake method a recipe for disaster
|
||||
|
||||
I’ve been immersed in this subject for other publications for some time now. Too many companies are hoovering up data for the sake of collecting it with little clue as to what they will do with it later. One thing you have to say about data warehouses, the schema on write at least forces you to think about what you are collecting and how you might use it because you have to store it away in a usable form.
|
||||
|
||||
The new data lake method is schema on read, meaning you filter/clean it when you read it into an application, and that’s just a recipe for disaster. If you are looking at data collected a month or a year ago, do you even know what it all is? Now you have to apply schema to data and may not even remember collecting it.
|
||||
|
||||
Too many people think more data is good when it isn’t. You just drown in it. When you reach a point of having too many data sources to count, you’ve gone too far and are not going to get insight. You’re going to get overwhelmed. Collect data you know you can use. Otherwise you are wasting petabytes of disk space.
|
||||
|
||||
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3393205/some-it-pros-say-they-have-too-much-data.html#tk.rss_all
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/03/database_data-center_futuristic-technology-100752012-large.jpg
|
||||
[2]: https://www.ivanti.com/blog/survey-it-professionals-data-sources
|
||||
[3]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
|
||||
[4]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco adds AMP to SD-WAN for ISR/ASR routers)
|
||||
[#]: via: (https://www.networkworld.com/article/3394597/cisco-adds-amp-to-sd-wan-for-israsr-routers.html#tk.rss_all)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco adds AMP to SD-WAN for ISR/ASR routers
|
||||
======
|
||||
Cisco SD-WAN now sports Advanced Malware Protection on its popular edge routers, adding to their routing, segmentation, security, policy and orchestration capabilities.
|
||||
![vuk8691 / Getty Images][1]
|
||||
|
||||
Cisco has added support for Advanced Malware Protection (AMP) to its million-plus ISR/ASR edge routers, in an effort to [reinforce branch and core network malware protection][2] across the SD-WAN.
|
||||
|
||||
Cisco last year added its Viptela SD-WAN technology to the IOS XE version 16.9.1 software that runs its core ISR/ASR routers such as the ISR models 1000, 4000 and ASR 5000, in use by organizations worldwide. Cisco bought Viptela in 2017.
|
||||
|
||||
**More about SD-WAN**
|
||||
|
||||
* [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][3]
|
||||
* [How to pick an off-site data-backup method][4]
|
||||
* [SD-Branch: What it is and why you’ll need it][5]
|
||||
* [What are the options for security SD-WAN?][6]
|
||||
|
||||
|
||||
|
||||
The release of Cisco IOS XE offered an instant upgrade path for creating cloud-controlled SD-WAN fabrics to connect distributed offices, people, devices and applications operating on the installed base, Cisco said. At the time Cisco said that Cisco SD-WAN on edge routers builds a secure virtual IP fabric by combining routing, segmentation, security, policy and orchestration.
|
||||
|
||||
With the recent release of [IOS-XE SD-WAN 16.11][7], Cisco has brought AMP and other enhancements to its SD-WAN.
|
||||
|
||||
“Together with Cisco Talos [Cisco’s security-intelligence arm], AMP imbues your SD-WAN branch, core and campuses locations with threat intelligence from millions of worldwide users, honeypots, sandboxes, and extensive industry partnerships,” wrote Cisco’s Patrick Vitalone, a product marketing manager, in a [blog][8] about the security portion of the new software. “In total, AMP identifies more than 1.1 million unique malware samples a day." When AMP in Cisco SD-WAN spots malicious behavior, it automatically blocks it, he wrote.
|
||||
|
||||
The idea is to use integrated preventative engines, exploit prevention and intelligent signature-based antivirus to stop malicious attachments and fileless malware before they execute, Vitalone wrote.
|
||||
|
||||
AMP support is added to a menu of security features already included in the SD-WAN software including support for URL filtering, [Cisco Umbrella][9] DNS security, Snort Intrusion Prevention, the ability to segment users across the WAN and embedded platform security, including the [Cisco Trust Anchor][10] module.
|
||||
|
||||
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][11] ]**
|
||||
|
||||
The software also supports [SD-WAN Cloud onRamp for CoLocation][12], which lets customers tie distributed multicloud applications back to a local branch office or local private data center. That way a cloud-to-branch link would be shorter, faster and possibly more secure than tying cloud-based applications directly to the data center.
|
||||
|
||||
“The idea that this kind of security technology is now integrated into Cisco’s SD-WAN offering is critical for Cisco and customers looking to evaluate SD-WAN offerings,” said Lee Doyle, principal analyst at Doyle Research.
|
||||
|
||||
IOS-XE SD-WAN 16.11 is available now.
|
||||
|
||||
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3394597/cisco-adds-amp-to-sd-wan-for-israsr-routers.html#tk.rss_all
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/09/shimizu_island_el_nido_palawan_philippines_by_vuk8691_gettyimages-155385042_1200x800-100773533-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3285728/what-are-the-options-for-securing-sd-wan.html
|
||||
[3]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
|
||||
[4]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
|
||||
[5]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
|
||||
[6]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true
|
||||
[7]: https://www.cisco.com/c/en/us/td/docs/routers/sdwan/release/notes/xe-16-11/sd-wan-rel-notes-19-1.html
|
||||
[8]: https://blogs.cisco.com/enterprise/enabling-amp-in-cisco-sd-wan
|
||||
[9]: https://www.networkworld.com/article/3167837/cisco-umbrella-cloud-service-shapes-security-for-cloud-mobile-resources.html
|
||||
[10]: https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/trustworthy-technologies-datasheet.pdf
|
||||
[11]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
|
||||
[12]: https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html
|
||||
[13]: https://www.facebook.com/NetworkWorld/
|
||||
[14]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (When it comes to uptime, not all cloud providers are created equal)
|
||||
[#]: via: (https://www.networkworld.com/article/3394341/when-it-comes-to-uptime-not-all-cloud-providers-are-created-equal.html#tk.rss_all)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
When it comes to uptime, not all cloud providers are created equal
|
||||
======
|
||||
Cloud uptime is critical today, but vendor-provided data can be confusing. Here's an analysis of how AWS, Google Cloud and Microsoft Azure compare.
|
||||
![Getty Images][1]
|
||||
|
||||
The cloud is not just important; it's mission-critical for many companies. More and more IT and business leaders I talk to look at public cloud as a core component of their digital transformation strategies — using it as part of their hybrid cloud or public cloud implementation.
|
||||
|
||||
That raises the bar on cloud reliability, as a cloud outage means important services are not available to the business. If this is a business-critical service, the company may not be able to operate while that key service is offline.
|
||||
|
||||
Because of the growing importance of the cloud, it’s critical that buyers have visibility into the reliability numbers for the cloud providers. The challenge is that the cloud providers don't disclose disruptions in a consistent manner. In fact, some reports are confusing to the point where it’s difficult to glean any kind of meaningful conclusion.
|
||||
|
||||
**[ RELATED:[What IT pros need to know about Azure Stack][2] and [Which cloud performs better, AWS, Azure or Google?][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]**
|
||||
|
||||
### Reported cloud outage times don't always reflect actual downtime
|
||||
|
||||
Microsoft Azure and Google Cloud Platform (GCP) both typically provide information on date and time, but only high-level data on the services affected and sparse information on regional impact. The problem with that is it’s difficult to get a sense of overall reliability. For instance, if Azure reports a one-hour outage that impacts five services in three regions, the website might show just a single hour. In actuality, that’s 15 hours of total downtime.
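To make that arithmetic explicit, here is the same back-of-the-envelope calculation as a quick shell sketch, using the hypothetical numbers from the example above:

```
# 1 reported hour x 5 affected services x 3 affected regions
hours=1; services=5; regions=3
echo "$((hours * services * regions)) service-hours of downtime"   # prints: 15 service-hours of downtime
```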
|
||||
|
||||
Between Azure, GCP and Amazon Web Services (AWS), [Azure is the most obscure][5], as it provides the least amount of detail. [GCP does a better job of providing detail][6] at the service level but tends to be obscure with regional information. Sometimes it’s very clear as to what services are unavailable, and other times it’s not.
|
||||
|
||||
[AWS has the most granular reporting][7], as it shows every service in every region. If an incident occurs that impacts three services, all three of those services would light up red. If those were unavailable for one hour, AWS would record three hours of downtime.
|
||||
|
||||
Another inconsistency between the cloud providers is the amount of historical downtime data that is available. At one time, all three of the cloud vendors provided a one-year view into outages. GCP and AWS still do this, but Azure moved to only a [90-day view][5] sometime over the past year.
|
||||
|
||||
### Azure has significantly higher downtime than GCP and AWS
|
||||
|
||||
The next obvious question is: Who has the most downtime? To answer that, I worked with a third-party firm that has continually collected downtime information directly from the vendor websites. I have personally reviewed the information and can validate its accuracy. Based on the vendors’ own reported numbers, from the beginning of 2018 through May 3, 2019, AWS leads the pack with only 338 hours of downtime, followed closely by GCP at 361. Microsoft Azure has a whopping total of 1,934 hours of self-reported downtime.
|
||||
|
||||
![][8]
|
||||
|
||||
A few points on these numbers. First, this is an aggregation of the self-reported data from the vendors' websites, which isn’t the “true” number, as regional information or service granularity is sometimes obscured. If a service is unavailable for an hour and is reported as one hour on the website but spanned five regions, then five hours should correctly have been counted. But for this calculation, we used only one hour because that is what was self-reported.
|
||||
|
||||
Because of this, the numbers are most favorable to Microsoft because they provide the least amount of regional information. The numbers are least favorable to AWS because they provide the most granularity. Also, I believe AWS has the most services in most regions, so they have more opportunities for an outage.
|
||||
|
||||
We had considered normalizing the data, but that would require a significant amount of work to deconstruct the downtime on a per-service, per-region basis. I may choose to do that in the future, but for now, the vendor-reported view is a good indicator of relative performance.
|
||||
|
||||
Another important point is that only infrastructure-as-a-service (IaaS) offerings were used to calculate downtime. If Google Street View or Bing Maps went down, most businesses would not care, so it would have been unfair to roll those numbers in.
|
||||
|
||||
### SLAs do not correlate to reliability
|
||||
|
||||
Given the importance of cloud services today, I would like to see every cloud provider post a 12-month running total of downtime somewhere on their website so customers can do an “apples to apples” comparison. This obviously isn’t the only factor used in determining which cloud provider to use, but it is one of the more critical ones.
|
||||
|
||||
Also, buyers should be aware that there is a big difference between service-level agreements (SLAs) and downtime. A cloud operator can promise anything they want, even provide a 100% SLA, but that just means they need to reimburse the business when a service isn’t available. Most IT leaders I have talked to say the few bucks they get back when a service is out is a mere fraction of what the outage actually cost them.
|
||||
|
||||
### Measure twice and cut once to minimize business disruption
|
||||
|
||||
If you’re reading this and you’re researching cloud services, it’s important to not just make the easy decision of buying for convenience. Many companies look at Azure because Microsoft gives away Azure credits as part of the Enterprise Agreement (EA). I’ve interviewed several companies that took the path of least resistance, but they wound up disappointed with availability and then switched to AWS or GCP later, which can have a disruptive effect.
|
||||
|
||||
I’m certainly not saying to not buy Microsoft Azure, but it is important to do your homework to understand the historical performance of the services you’re considering in the regions you need them. The information on the vendor websites may not tell the full picture, so it’s important to do the necessary due diligence to ensure you understand what you’re buying before you buy it.
|
||||
|
||||
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3394341/when-it-comes-to-uptime-not-all-cloud-providers-are-created-equal.html#tk.rss_all
|
||||
|
||||
作者:[Zeus Kerravala][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/cloud_comput_connect_blue-100787048-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3208029/azure-stack-microsoft-s-private-cloud-platform-and-what-it-pros-need-to-know-about-it
|
||||
[3]: https://www.networkworld.com/article/3319776/the-network-matters-for-public-cloud-performance.html
|
||||
[4]: https://www.networkworld.com/newsletters/signup.html
|
||||
[5]: https://azure.microsoft.com/en-us/status/history/
|
||||
[6]: https://status.cloud.google.com/incident/appengine/19008
|
||||
[7]: https://status.aws.amazon.com/
|
||||
[8]: https://images.idgesg.net/images/article/2019/05/public-cloud-downtime-100795948-large.jpg
|
||||
[9]: https://www.facebook.com/NetworkWorld/
|
||||
[10]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,58 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Supermicro moves production from China)
|
||||
[#]: via: (https://www.networkworld.com/article/3394404/supermicro-moves-production-from-china.html#tk.rss_all)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
Supermicro moves production from China
|
||||
======
|
||||
Supermicro was cleared of any activity related to the Chinese government and secret chips in its motherboards, but it is taking no chances and is moving its facilities.
|
||||
![Frank Schwichtenberg \(CC BY 4.0\)][1]
|
||||
|
||||
Server maker Supermicro, based in Fremont, California, is reportedly moving production out of China over customer concerns that the Chinese government had secretly inserted chips for spying into its motherboards.
|
||||
|
||||
The claims were made by Bloomberg late last year in a story that cited more than 100 sources in government and private industry, including Apple and Amazon Web Services (AWS). However, Apple CEO Tim Cook and AWS CEO Andy Jassy denied the claims and called for Bloomberg to retract the article. And a few months later, the third-party investigations firm Nardello & Co examined the claims and [cleared Supermicro][2] of any surreptitious activity.
|
||||
|
||||
At first it seemed like Supermicro was weathering the storm, but the story did have a negative impact. Server sales have fallen since the Bloomberg story, and the company is forecasting a near 10% decline in total revenues for the March quarter compared to the previous three months.
|
||||
|
||||
**[ Also read:[Who's developing quantum computers][3] ]**
|
||||
|
||||
And now, Nikkei Asian Review reports that despite the strong rebuttals, some customers remain cautious about the company's products. To address those concerns, Nikkei says Supermicro has told suppliers to [move production out of China][4], citing industry sources familiar with the matter.
|
||||
|
||||
It also has the side benefit of mitigating against the U.S.-China trade war, which is only getting worse. Since the tariffs are on the dollar amount of the product, that can quickly add up even for a low-end system, as Serve The Home noted in [this analysis][5].
|
||||
|
||||
Supermicro is the world's third-largest server maker by shipments, selling primarily to cloud providers like Amazon and Facebook. It does its own assembly in its Fremont facility but outsources motherboard production to numerous suppliers, mostly in China and Taiwan.
|
||||
|
||||
"We have to be more self-reliant [to build in-house manufacturing] without depending only on those outsourcing partners whose production previously has mostly been in China," an executive told Nikkei.
|
||||
|
||||
Nikkei notes that roughly 90% of the motherboards shipped worldwide in 2017 were made in China, but that percentage dropped to less than 50% in 2018, according to Digitimes Research, a tech supply chain specialist based in Taiwan.
|
||||
|
||||
Supermicro just held a groundbreaking ceremony for an 800,000-square-foot manufacturing plant in Taiwan and is expanding its San Jose, California, plant as well. So, they must be anxious to be free of China if they are willing to expand in one of the most expensive real estate markets in the world.
|
||||
|
||||
A Supermicro spokesperson said via email, “We have been expanding our manufacturing capacity for many years to meet increasing customer demand. We are currently constructing a new Green Computing Park building in Silicon Valley, where we are the only Tier 1 solutions vendor manufacturing in Silicon Valley, and we proudly broke ground this week on a new manufacturing facility in Taiwan. To support our continued global growth, we look forward to expanding in Europe as well.”
|
||||
|
||||
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3394404/supermicro-moves-production-from-china.html#tk.rss_all
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/05/supermicro_-_x11sae__cebit_2016_01-100796121-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3326828/investigator-finds-no-evidence-of-spy-chips-on-super-micro-motherboards.html
|
||||
[3]: https://www.networkworld.com/article/3275385/who-s-developing-quantum-computers.html
|
||||
[4]: https://asia.nikkei.com/Economy/Trade-war/Server-maker-Super-Micro-to-ditch-made-in-China-parts-on-spy-fears
|
||||
[5]: https://www.servethehome.com/how-tariffs-hurt-intel-xeon-d-atom-and-amd-epyc-3000/
|
||||
[6]: https://www.facebook.com/NetworkWorld/
|
||||
[7]: https://www.linkedin.com/company/network-world
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,4 +1,3 @@
|
||||
liujing97 is translating
|
||||
Working with data streams on the Linux command line
|
||||
======
|
||||
Learn to connect data streams from one utility to another using STDIO.
|
||||
|
@ -1,745 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (TLP – An Advanced Power Management Tool That Improve Battery Life On Linux Laptop)
|
||||
[#]: via: (https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
TLP – An Advanced Power Management Tool That Improves Battery Life On Linux Laptops
|
||||
======
|
||||
|
||||
Laptop batteries are highly optimized for Windows, which I realized when I was using Windows on my laptop, but the same is not true for Linux.
|
||||
|
||||
Over the years, Linux has improved a lot in terms of battery optimization, but we still need to do a few things to improve laptop battery life on Linux.
|
||||
|
||||
When I thought about battery life, I found a few options, but I felt TLP was the best solution for me, so I'm going with it.
|
||||
|
||||
In this tutorial, we are going to discuss TLP in detail to improve battery life.
|
||||
|
||||
We have previously written three articles on our site about **[laptop battery saving utilities][1]** for Linux: **[PowerTOP][2]** and **[Battery Charging State][3]**.
|
||||
|
||||
### What is TLP?
|
||||
|
||||
[TLP][4] is a free, open source, advanced power management tool that improves your battery life without requiring any configuration changes.
|
||||
|
||||
It comes with a default configuration already optimized for battery life, so you may just install it and forget about it.
|
||||
|
||||
Also, it is highly customizable to fulfill your specific requirements. TLP is a pure command line tool with automated background tasks. It does not contain a GUI.
|
||||
|
||||
TLP runs on every laptop brand. Setting the battery charge thresholds is available for IBM/Lenovo ThinkPads only.
|
||||
|
||||
All TLP settings are stored in `/etc/default/tlp`. The default configuration provides optimized power saving out of the box.
|
||||
|
||||
The following TLP settings are available for customization; make the necessary changes accordingly if you want them. A sample customization is sketched after the feature list below.
|
||||
|
||||
### TLP Features
|
||||
|
||||
* Kernel laptop mode and dirty buffer timeouts
|
||||
* Processor frequency scaling including “turbo boost” / “turbo core”
|
||||
* Limit max/min P-state to control power dissipation of the CPU
|
||||
* HWP energy performance hints
|
||||
* Power aware process scheduler for multi-core/hyper-threading
|
||||
* Processor performance versus energy savings policy (x86_energy_perf_policy)
|
||||
  * Hard disk advanced power management level (APM) and spin down timeout (per disk)
|
||||
* AHCI link power management (ALPM) with device blacklist
|
||||
* PCIe active state power management (PCIe ASPM)
|
||||
* Runtime power management for PCI(e) bus devices
|
||||
* Radeon graphics power management (KMS and DPM)
|
||||
* Wifi power saving mode
|
||||
* Power off optical drive in drive bay
|
||||
* Audio power saving mode
|
||||
* I/O scheduler (per disk)
|
||||
* USB autosuspend with device blacklist/whitelist (input devices excluded automatically)
|
||||
* Enable or disable integrated wifi, bluetooth or wwan devices upon system startup and shutdown
|
||||
* Restore radio device state on system startup (from previous shutdown).
|
||||
* Radio device wizard: switch radios upon network connect/disconnect and dock/undock
|
||||
* Disable Wake On LAN
|
||||
* Integrated WWAN and bluetooth state is restored after suspend/hibernate
|
||||
  * Undervolting of Intel processors – requires kernel with PHC-Patch
|
||||
* Battery charge thresholds – ThinkPads only
|
||||
* Recalibrate battery – ThinkPads only
|
||||
|
||||
|
||||
|
||||
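As a rough example of such a customization, the snippet below shows a few `/etc/default/tlp` overrides; the values are purely illustrative, the variable names are taken from TLP's default configuration, and the charge-threshold lines apply to ThinkPads only. After editing the file, re-run `sudo tlp start` to apply the changes.

```
# Illustrative /etc/default/tlp overrides (example values, not recommendations)
CPU_HWP_ON_AC=balance_performance
CPU_HWP_ON_BAT=power              # prefer power saving on battery
WIFI_PWR_ON_BAT=on                # enable wifi power saving on battery
USB_AUTOSUSPEND=1                 # autosuspend idle USB devices
# ThinkPads only: hold the battery between 75% and 80% charge
START_CHARGE_THRESH_BAT0=75
STOP_CHARGE_THRESH_BAT0=80
```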
### How to Install TLP in Linux
|
||||
|
||||
The TLP package is available in most distributions' official repositories, so use your distribution's **[Package Manager][5]** to install it.
|
||||
|
||||
For **`Fedora`** systems, use the **[DNF Command][6]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo dnf install tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPads require additional packages.
|
||||
|
||||
```
|
||||
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
|
||||
$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release.fc$(rpm -E %fedora).noarch.rpm
|
||||
$ sudo dnf install akmod-tp_smapi akmod-acpi_call kernel-devel
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo dnf install smartmontools
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][7]** or **[APT Command][8]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo apt install tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPads require additional packages.
|
||||
|
||||
```
|
||||
$ sudo apt-get install tp-smapi-dkms acpi-call-dkms
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo apt-get install smartmontools
|
||||
```
|
||||
|
||||
If the official package for Ubuntu-based systems becomes outdated, you can use the following PPA repository, which provides an up-to-date version. Run the following commands to install TLP using the PPA.
|
||||
|
||||
```
|
||||
$ sudo add-apt-repository ppa:linrunner/tlp    # TLP upstream PPA
$ sudo apt-get update
$ sudo apt-get install tlp tlp-rdw
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo pacman -S tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPads require additional packages.
|
||||
|
||||
```
|
||||
$ sudo pacman -S tp_smapi acpi_call
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo pacman -S smartmontools
|
||||
```
|
||||
|
||||
Enable TLP & TLP-Sleep service on boot for Arch Linux based systems.
|
||||
|
||||
```
|
||||
$ sudo systemctl enable tlp.service
|
||||
$ sudo systemctl enable tlp-sleep.service
|
||||
```
|
||||
|
||||
You should also mask the following services to avoid conflicts and assure proper operation of TLP’s radio device switching options for Arch Linux based systems.
|
||||
|
||||
```
|
||||
$ sudo systemctl mask systemd-rfkill.service
|
||||
$ sudo systemctl mask systemd-rfkill.socket
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][10]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo yum install tlp tlp-rdw
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo yum install smartmontools
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** systems, use the **[Zypper Command][11]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo zypper install TLP
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo zypper install smartmontools
|
||||
```
|
||||
|
||||
After TLP is successfully installed, use the following command to start the service.
|
||||
|
||||
```
|
||||
$ sudo systemctl start tlp.service
|
||||
```
|
||||
|
||||
To show battery information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -b
|
||||
or
|
||||
$ sudo tlp-stat --battery
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Battery Status
|
||||
/sys/class/power_supply/BAT0/manufacturer = SMP
|
||||
/sys/class/power_supply/BAT0/model_name = L14M4P23
|
||||
/sys/class/power_supply/BAT0/cycle_count = (not supported)
|
||||
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_full = 48850 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_now = 48850 [mWh]
|
||||
/sys/class/power_supply/BAT0/power_now = 0 [mW]
|
||||
/sys/class/power_supply/BAT0/status = Full
|
||||
|
||||
Charge = 100.0 [%]
|
||||
Capacity = 81.4 [%]
|
||||
```
|
||||
|
||||
To show disk information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -d
|
||||
or
|
||||
$ sudo tlp-stat --disk
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Storage Devices
|
||||
/dev/sda:
|
||||
Model = WDC WD10SPCX-24HWST1
|
||||
Firmware = 02.01A02
|
||||
APM Level = 128
|
||||
Status = active/idle
|
||||
Scheduler = mq-deadline
|
||||
|
||||
Runtime PM: control = on, autosuspend_delay = (not available)
|
||||
|
||||
SMART info:
|
||||
4 Start_Stop_Count = 18787
|
||||
5 Reallocated_Sector_Ct = 0
|
||||
9 Power_On_Hours = 606 [h]
|
||||
12 Power_Cycle_Count = 1792
|
||||
193 Load_Cycle_Count = 25775
|
||||
194 Temperature_Celsius = 31 [°C]
|
||||
|
||||
|
||||
+++ AHCI Link Power Management (ALPM)
|
||||
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
|
||||
|
||||
+++ AHCI Host Controller Runtime Power Management
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
|
||||
```
|
||||
|
||||
To show PCI device information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -e
|
||||
or
|
||||
$ sudo tlp-stat --pcie
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Runtime Power Management
|
||||
Device blacklist = (not configured)
|
||||
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
|
||||
|
||||
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
|
||||
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
|
||||
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
|
||||
/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me)
|
||||
/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci)
|
||||
/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel)
|
||||
/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus)
|
||||
/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau)
|
||||
/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci)
|
||||
/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi)
|
||||
/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168)
|
||||
/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme)
|
||||
```
|
||||
|
||||
To show graphics card information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -g
|
||||
or
|
||||
$ sudo tlp-stat --graphics
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Intel Graphics
|
||||
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
|
||||
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
|
||||
/sys/module/i915/parameters/enable_psr = 0 (disabled)
|
||||
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
|
||||
```
|
||||
|
||||
To show Processor information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -p
|
||||
or
|
||||
$ sudo tlp-stat --processor
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Processor
|
||||
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
|
||||
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
|
||||
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
|
||||
|
||||
x86_energy_perf_policy: program not installed.
|
||||
|
||||
/sys/module/workqueue/parameters/power_efficient = Y
|
||||
/proc/sys/kernel/nmi_watchdog = 0
|
||||
|
||||
+++ Undervolting
|
||||
PHC kernel not available.
|
||||
```
|
||||
|
||||
To show system data information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -s
|
||||
or
|
||||
$ sudo tlp-stat --system
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ System Info
|
||||
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
|
||||
BIOS = CDCN35WW
|
||||
Release = "Manjaro Linux"
|
||||
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
|
||||
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
|
||||
Init system = systemd
|
||||
Boot mode = BIOS (CSM, Legacy)
|
||||
|
||||
+++ TLP Status
|
||||
State = enabled
|
||||
Last run = 11:04:00 IST, 596 sec(s) ago
|
||||
Mode = battery
|
||||
Power source = battery
|
||||
```
|
||||
|
||||
To show temperatures and fan speed information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -t
|
||||
or
|
||||
$ sudo tlp-stat --temp
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Temperatures
|
||||
CPU temp = 36 [°C]
|
||||
Fan speed = (not available)
|
||||
```
|
||||
|
||||
To show USB device data information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -u
|
||||
or
|
||||
$ sudo tlp-stat --usb
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ USB
|
||||
Autosuspend = disabled
|
||||
Device whitelist = (not configured)
|
||||
Device blacklist = (not configured)
|
||||
Bluetooth blacklist = disabled
|
||||
Phone blacklist = disabled
|
||||
WWAN blacklist = enabled
|
||||
|
||||
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
|
||||
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
|
||||
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
|
||||
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
|
||||
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
|
||||
```
|
||||
|
||||
To show warnings.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -w
|
||||
or
|
||||
$ sudo tlp-stat --warn
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
No warnings detected.
|
||||
```
|
||||
|
||||
To show a status report with the configuration and all active settings.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Configured Settings: /etc/default/tlp
|
||||
TLP_ENABLE=1
|
||||
TLP_DEFAULT_MODE=AC
|
||||
TLP_PERSISTENT_DEFAULT=0
|
||||
DISK_IDLE_SECS_ON_AC=0
|
||||
DISK_IDLE_SECS_ON_BAT=2
|
||||
MAX_LOST_WORK_SECS_ON_AC=15
|
||||
MAX_LOST_WORK_SECS_ON_BAT=60
|
||||
CPU_HWP_ON_AC=balance_performance
|
||||
CPU_HWP_ON_BAT=balance_power
|
||||
SCHED_POWERSAVE_ON_AC=0
|
||||
SCHED_POWERSAVE_ON_BAT=1
|
||||
NMI_WATCHDOG=0
|
||||
ENERGY_PERF_POLICY_ON_AC=performance
|
||||
ENERGY_PERF_POLICY_ON_BAT=power
|
||||
DISK_DEVICES="sda sdb"
|
||||
DISK_APM_LEVEL_ON_AC="254 254"
|
||||
DISK_APM_LEVEL_ON_BAT="128 128"
|
||||
SATA_LINKPWR_ON_AC="med_power_with_dipm max_performance"
|
||||
SATA_LINKPWR_ON_BAT="med_power_with_dipm max_performance"
|
||||
AHCI_RUNTIME_PM_TIMEOUT=15
|
||||
PCIE_ASPM_ON_AC=performance
|
||||
PCIE_ASPM_ON_BAT=powersave
|
||||
RADEON_POWER_PROFILE_ON_AC=default
|
||||
RADEON_POWER_PROFILE_ON_BAT=low
|
||||
RADEON_DPM_STATE_ON_AC=performance
|
||||
RADEON_DPM_STATE_ON_BAT=battery
|
||||
RADEON_DPM_PERF_LEVEL_ON_AC=auto
|
||||
RADEON_DPM_PERF_LEVEL_ON_BAT=auto
|
||||
WIFI_PWR_ON_AC=off
|
||||
WIFI_PWR_ON_BAT=on
|
||||
WOL_DISABLE=Y
|
||||
SOUND_POWER_SAVE_ON_AC=0
|
||||
SOUND_POWER_SAVE_ON_BAT=1
|
||||
SOUND_POWER_SAVE_CONTROLLER=Y
|
||||
BAY_POWEROFF_ON_AC=0
|
||||
BAY_POWEROFF_ON_BAT=0
|
||||
BAY_DEVICE="sr0"
|
||||
RUNTIME_PM_ON_AC=on
|
||||
RUNTIME_PM_ON_BAT=auto
|
||||
RUNTIME_PM_DRIVER_BLACKLIST="amdgpu nouveau nvidia radeon pcieport"
|
||||
USB_AUTOSUSPEND=0
|
||||
USB_BLACKLIST_BTUSB=0
|
||||
USB_BLACKLIST_PHONE=0
|
||||
USB_BLACKLIST_PRINTER=1
|
||||
USB_BLACKLIST_WWAN=1
|
||||
RESTORE_DEVICE_STATE_ON_STARTUP=0
|
||||
|
||||
+++ System Info
|
||||
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
|
||||
BIOS = CDCN35WW
|
||||
Release = "Manjaro Linux"
|
||||
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
|
||||
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
|
||||
Init system = systemd
|
||||
Boot mode = BIOS (CSM, Legacy)
|
||||
|
||||
+++ TLP Status
|
||||
State = enabled
|
||||
Last run = 11:04:00 IST, 684 sec(s) ago
|
||||
Mode = battery
|
||||
Power source = battery
|
||||
|
||||
+++ Processor
|
||||
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
|
||||
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
|
||||
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
|
||||
|
||||
x86_energy_perf_policy: program not installed.
|
||||
|
||||
/sys/module/workqueue/parameters/power_efficient = Y
|
||||
/proc/sys/kernel/nmi_watchdog = 0
|
||||
|
||||
+++ Undervolting
|
||||
PHC kernel not available.
|
||||
|
||||
+++ Temperatures
|
||||
CPU temp = 42 [°C]
|
||||
Fan speed = (not available)
|
||||
|
||||
+++ File System
|
||||
/proc/sys/vm/laptop_mode = 2
|
||||
/proc/sys/vm/dirty_writeback_centisecs = 6000
|
||||
/proc/sys/vm/dirty_expire_centisecs = 6000
|
||||
/proc/sys/vm/dirty_ratio = 20
|
||||
/proc/sys/vm/dirty_background_ratio = 10
|
||||
|
||||
+++ Storage Devices
|
||||
/dev/sda:
|
||||
Model = WDC WD10SPCX-24HWST1
|
||||
Firmware = 02.01A02
|
||||
APM Level = 128
|
||||
Status = active/idle
|
||||
Scheduler = mq-deadline
|
||||
|
||||
Runtime PM: control = on, autosuspend_delay = (not available)
|
||||
|
||||
SMART info:
|
||||
4 Start_Stop_Count = 18787
|
||||
5 Reallocated_Sector_Ct = 0
|
||||
9 Power_On_Hours = 606 [h]
|
||||
12 Power_Cycle_Count = 1792
|
||||
193 Load_Cycle_Count = 25777
|
||||
194 Temperature_Celsius = 31 [°C]
|
||||
|
||||
|
||||
+++ AHCI Link Power Management (ALPM)
|
||||
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
|
||||
|
||||
+++ AHCI Host Controller Runtime Power Management
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
|
||||
|
||||
+++ PCIe Active State Power Management
|
||||
/sys/module/pcie_aspm/parameters/policy = powersave
|
||||
|
||||
+++ Intel Graphics
|
||||
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
|
||||
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
|
||||
/sys/module/i915/parameters/enable_psr = 0 (disabled)
|
||||
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
|
||||
|
||||
+++ Wireless
|
||||
bluetooth = on
|
||||
wifi = on
|
||||
wwan = none (no device)
|
||||
|
||||
hci0(btusb) : bluetooth, not connected
|
||||
wlp8s0(iwlwifi) : wifi, connected, power management = on
|
||||
|
||||
+++ Audio
|
||||
/sys/module/snd_hda_intel/parameters/power_save = 1
|
||||
/sys/module/snd_hda_intel/parameters/power_save_controller = Y
|
||||
|
||||
+++ Runtime Power Management
|
||||
Device blacklist = (not configured)
|
||||
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
|
||||
|
||||
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
|
||||
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
|
||||
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
|
||||
/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me)
|
||||
/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci)
|
||||
/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel)
|
||||
/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus)
|
||||
/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau)
|
||||
/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci)
|
||||
/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi)
|
||||
/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168)
|
||||
/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme)
|
||||
|
||||
+++ USB
|
||||
Autosuspend = disabled
|
||||
Device whitelist = (not configured)
|
||||
Device blacklist = (not configured)
|
||||
Bluetooth blacklist = disabled
|
||||
Phone blacklist = disabled
|
||||
WWAN blacklist = enabled
|
||||
|
||||
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
|
||||
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
|
||||
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
|
||||
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
|
||||
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
|
||||
|
||||
+++ Battery Status
|
||||
/sys/class/power_supply/BAT0/manufacturer = SMP
|
||||
/sys/class/power_supply/BAT0/model_name = L14M4P23
|
||||
/sys/class/power_supply/BAT0/cycle_count = (not supported)
|
||||
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_full = 51690 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_now = 50140 [mWh]
|
||||
/sys/class/power_supply/BAT0/power_now = 12185 [mW]
|
||||
/sys/class/power_supply/BAT0/status = Discharging
|
||||
|
||||
Charge = 97.0 [%]
|
||||
Capacity = 86.2 [%]
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/check-laptop-battery-status-and-charging-state-in-linux-terminal/
|
||||
[2]: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/
|
||||
[3]: https://www.2daygeek.com/monitor-laptop-battery-charging-state-linux/
|
||||
[4]: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html
|
||||
[5]: https://www.2daygeek.com/category/package-management/
|
||||
[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -1,196 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Virtual filesystems in Linux: Why we need them and how they work)
|
||||
[#]: via: (https://opensource.com/article/19/3/virtual-filesystems-linux)
|
||||
[#]: author: (Alison Chariken )
|
||||
|
||||
Virtual filesystems in Linux: Why we need them and how they work
|
||||
======
|
||||
Virtual filesystems are the magic abstraction that makes the "everything is a file" philosophy of Linux possible.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ)
|
||||
|
||||
What is a filesystem? According to early Linux contributor and author [Robert Love][1], "A filesystem is a hierarchical storage of data adhering to a specific structure." However, this description applies equally well to VFAT (Virtual File Allocation Table), Git, and [Cassandra][2] (a [NoSQL database][3]). So what distinguishes a filesystem?
|
||||
|
||||
### Filesystem basics
|
||||
|
||||
The Linux kernel requires that for an entity to be a filesystem, it must also implement the **open()** , **read()** , and **write()** methods on persistent objects that have names associated with them. From the point of view of [object-oriented programming][4], the kernel treats the generic filesystem as an abstract interface, and these big-three functions are "virtual," with no default definition. Accordingly, the kernel's default filesystem implementation is called a virtual filesystem (VFS).
|
||||
|
||||
|
||||
![][5]
|
||||
If we can open(), read(), and write(), it is a file as this console session shows.
|
||||
|
||||
VFS underlies the famous observation that in Unix-like systems "everything is a file." Consider how weird it is that the tiny demo above featuring the character device /dev/console actually works. The image shows an interactive Bash session on a virtual teletype (tty). Sending a string into the virtual console device makes it appear on the virtual screen. VFS has other, even odder properties. For example, it's [possible to seek in them][6].
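As a small shell-level sketch of that demo (assuming you are at, or watching, the system console, since that is where the string will appear):

```
$ echo 'hello from the VFS' | sudo tee /dev/console
```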
|
||||
|
||||
The familiar filesystems like ext4, NFS, and /proc all provide definitions of the big-three functions in a C-language data structure called [file_operations][7] . In addition, particular filesystems extend and override the VFS functions in the familiar object-oriented way. As Robert Love points out, the abstraction of VFS enables Linux users to blithely copy files to and from foreign operating systems or abstract entities like pipes without worrying about their internal data format. On behalf of userspace, via a system call, a process can copy from a file into the kernel's data structures with the read() method of one filesystem, then use the write() method of another kind of filesystem to output the data.
|
||||
|
||||
The function definitions that belong to the VFS base type itself are found in the [fs/*.c files][8] in kernel source, while the subdirectories of fs/ contain the specific filesystems. The kernel also contains filesystem-like entities such as cgroups, /dev, and tmpfs, which are needed early in the boot process and are therefore defined in the kernel's init/ subdirectory. Note that cgroups, /dev, and tmpfs do not call the file_operations big-three functions, but directly read from and write to memory instead.
|
||||
|
||||
The diagram below roughly illustrates how userspace accesses various types of filesystems commonly mounted on Linux systems. Not shown are constructs like pipes, dmesg, and POSIX clocks that also implement struct file_operations and whose accesses therefore pass through the VFS layer.
|
||||
|
||||
![How userspace accesses various types of filesystems][9]
|
||||
VFS is a "shim layer" between system calls and implementors of specific file_operations like ext4 and procfs. The file_operations functions can then communicate either with device-specific drivers or with memory accessors. tmpfs, devtmpfs and cgroups don't make use of file_operations but access memory directly.
|
||||
|
||||
VFS's existence promotes code reuse, as the basic methods associated with filesystems need not be re-implemented by every filesystem type. Code reuse is a widely accepted software engineering best practice! Alas, if the reused code [introduces serious bugs][10], then all the implementations that inherit the common methods suffer from them.
|
||||
|
||||
### /tmp: A simple tip
|
||||
|
||||
An easy way to find out what VFSes are present on a system is to type **mount | grep -v sd | grep -v :/** , which will list all mounted filesystems that are not resident on a disk and not NFS on most computers. One of the listed VFS mounts will assuredly be /tmp, right?
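In copy-and-paste form (the two grep filters are only a rough heuristic, so the exact list will differ from machine to machine):

```
$ mount | grep -v sd | grep -v :/
```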
|
||||
|
||||
![Man with shocked expression][11]
|
||||
Everyone knows that keeping /tmp on a physical storage device is crazy! credit: <https://tinyurl.com/ybomxyfo>
|
||||
|
||||
Why is keeping /tmp on storage inadvisable? Because the files in /tmp are temporary(!), and storage devices are slower than memory, where tmpfs are created. Further, physical devices are more subject to wear from frequent writing than memory is. Last, files in /tmp may contain sensitive information, so having them disappear at every reboot is a feature.
|
||||
|
||||
Unfortunately, installation scripts for some Linux distros still create /tmp on a storage device by default. Do not despair should this be the case with your system. Follow simple instructions on the always excellent [Arch Wiki][12] to fix the problem, keeping in mind that memory allocated to tmpfs is not available for other purposes. In other words, a system with a gigantic tmpfs with large files in it can run out of memory and crash. Another tip: when editing the /etc/fstab file, be sure to end it with a newline, as your system will not boot otherwise. (Guess how I know.)
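For reference, a minimal /etc/fstab line for a tmpfs-backed /tmp might look like the sketch below; the size= value is an assumption to tune against your RAM, and remember that trailing newline:

```
tmpfs   /tmp   tmpfs   defaults,noatime,size=2G   0 0
```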
|
||||
|
||||
### /proc and /sys
|
||||
|
||||
Besides /tmp, the VFSes with which most Linux users are most familiar are /proc and /sys. (/dev relies on shared memory and has no file_operations). Why two flavors? Let's have a look in more detail.
|
||||
|
||||
The procfs offers a snapshot into the instantaneous state of the kernel and the processes that it controls for userspace. In /proc, the kernel publishes information about the facilities it provides, like interrupts, virtual memory, and the scheduler. In addition, /proc/sys is where the settings that are configurable via the [sysctl command][13] are accessible to userspace. Status and statistics on individual processes are reported in /proc/<PID> directories.
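A quick illustration of that correspondence: the same kernel tunable read through the sysctl command and directly from procfs (the value reported will vary by system):

```
$ sysctl vm.swappiness
$ cat /proc/sys/vm/swappiness
```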
|
||||
|
||||
![Console][14]
|
||||
/proc/meminfo is an empty file that nonetheless contains valuable information.
|
||||
|
||||
The behavior of /proc files illustrates how unlike on-disk filesystems VFS can be. On the one hand, /proc/meminfo contains the information presented by the command **free**. On the other hand, it's also empty! How can this be? The situation is reminiscent of a famous article written by Cornell University physicist N. David Mermin in 1985 called "[Is the moon there when nobody looks?][15] Reality and the quantum theory." The truth is that the kernel gathers statistics about memory when a process requests them from /proc, and there actually is nothing in the files in /proc when no one is looking. As [Mermin said][16], "It is a fundamental quantum doctrine that a measurement does not, in general, reveal a preexisting value of the measured property." (The answer to the question about the moon is left as an exercise.)
|
||||
|
||||
![Full moon][17]
|
||||
The files in /proc are empty when no process accesses them. ([Source][18])
|
||||
|
||||
The apparent emptiness of procfs makes sense, as the information available there is dynamic. The situation with sysfs is different. Let's compare how many files of at least one byte in size there are in /proc versus /sys.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/virtualfilesystems_6-filesize.png)
|
||||
|
||||
Procfs has precisely one, namely the exported kernel configuration, which is an exception since it needs to be generated only once per boot. On the other hand, /sys has lots of larger files, most of which comprise one page of memory. Typically, sysfs files contain exactly one number or string, in contrast to the tables of information produced by reading files like /proc/meminfo.
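A rough way to reproduce that comparison yourself is sketched below; treat the numbers with care, since a few special files such as /proc/kcore report unusual sizes:

```
$ sudo find /proc -type f -size +0c 2>/dev/null | wc -l
$ sudo find /sys  -type f -size +0c 2>/dev/null | wc -l
```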
|
||||
|
||||
The purpose of sysfs is to expose the readable and writable properties of what the kernel calls "kobjects" to userspace. The only purpose of kobjects is reference-counting: when the last reference to a kobject is deleted, the system will reclaim the resources associated with it. Yet, /sys constitutes most of the kernel's famous "[stable ABI to userspace][19]" which [no one may ever, under any circumstances, "break."][20] That doesn't mean the files in sysfs are static, which would be contrary to reference-counting of volatile objects.
|
||||
|
||||
The kernel's stable ABI instead constrains what can appear in /sys, not what is actually present at any given instant. Listing the permissions on files in sysfs gives an idea of how the configurable, tunable parameters of devices, modules, filesystems, etc. can be set or read. Logic compels the conclusion that procfs is also part of the kernel's stable ABI, although the kernel's [documentation][19] doesn't state so explicitly.
|
||||
|
||||
![Console][21]
|
||||
Files in sysfs describe exactly one property each for an entity and may be readable, writable or both. The "0" in the file reveals that the SSD is not removable.
|
||||
|
||||
### Snooping on VFS with eBPF and bcc tools
|
||||
|
||||
The easiest way to learn how the kernel manages sysfs files is to watch it in action, and the simplest way to watch on ARM64 or x86_64 is to use eBPF. eBPF (extended Berkeley Packet Filter) consists of a [virtual machine running inside the kernel][22] that privileged users can query from the command line. Kernel source tells the reader what the kernel can do; running eBPF tools on a booted system shows instead what the kernel actually does.
|
||||
|
||||
Happily, getting started with eBPF is pretty easy via the [bcc][23] tools, which are available as [packages from major Linux distros][24] and have been [amply documented][25] by Brendan Gregg. The bcc tools are Python scripts with small embedded snippets of C, meaning anyone who is comfortable with either language can readily modify them. At this count, [there are 80 Python scripts in bcc/tools][26], making it highly likely that a system administrator or developer will find an existing one relevant to her/his needs.
|
||||
|
||||
To get a very crude idea about what work VFSes are performing on a running system, try the simple [vfscount][27] or [vfsstat][28], which show that dozens of calls to vfs_open() and its friends occur every second.
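A hedged invocation sketch: on Fedora the bcc tools typically land in /usr/share/bcc/tools, but the path and script names vary by distribution:

```
$ sudo /usr/share/bcc/tools/vfsstat    # one line of VFS call counts per second
$ sudo /usr/share/bcc/tools/vfscount   # per-function call counts until Ctrl-C
```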
|
||||
|
||||
![Console - vfsstat.py][29]
|
||||
vfsstat.py is a Python script with an embedded C snippet that simply counts VFS function calls.
|
||||
|
||||
For a less trivial example, let's watch what happens in sysfs when a USB stick is inserted on a running system.
|
||||
|
||||
![Console when USB is inserted][30]
|
||||
Watch with eBPF what happens in /sys when a USB stick is inserted, with simple and complex examples.
|
||||
|
||||
In the first simple example above, the [trace.py][31] bcc tools script prints out a message whenever the sysfs_create_files() command runs. We see that sysfs_create_files() was started by a kworker thread in response to the USB stick insertion, but what file was created? The second example illustrates the full power of eBPF. Here, trace.py is printing the kernel backtrace (-K option) plus the name of the file created by sysfs_create_files(). The snippet inside the single quotes is some C source code, including an easily recognizable format string, that the provided Python script [induces a LLVM just-in-time compiler][32] to compile and execute inside an in-kernel virtual machine. The full sysfs_create_files() function signature must be reproduced in the second command so that the format string can refer to one of the parameters. Making mistakes in this C snippet results in recognizable C-compiler errors. For example, if the **-I** parameter is omitted, the result is "Failed to compile BPF text." Developers who are conversant with either C or Python will find the bcc tools easy to extend and modify.
|
||||
|
||||
When the USB stick is inserted, the kernel backtrace appears showing that PID 7711 is a kworker thread that created a file called "events" in sysfs. A corresponding invocation with sysfs_remove_files() shows that removal of the USB stick results in removal of the events file, in keeping with the idea of reference counting. Watching sysfs_create_link() with eBPF during USB stick insertion (not shown) reveals that no fewer than 48 symbolic links are created.
|
||||
|
||||
What is the purpose of the events file anyway? Using [cscope][33] to find the function [__device_add_disk()][34] reveals that it calls disk_add_events(), and either "media_change" or "eject_request" may be written to the events file. Here, the kernel's block layer is informing userspace about the appearance and disappearance of the "disk." Consider how quickly informative this method of investigating how USB stick insertion works is compared to trying to figure out the process solely from the source.
|
||||
|
||||
### Read-only root filesystems make embedded devices possible
|
||||
|
||||
Assuredly, no one shuts down a server or desktop system by pulling out the power plug. Why? Because mounted filesystems on the physical storage devices may have pending writes, and the data structures that record their state may become out of sync with what is written on the storage. When that happens, system owners will have to wait at next boot for the [fsck filesystem-recovery tool][35] to run and, in the worst case, will actually lose data.
|
||||
|
||||
Yet, aficionados will have heard that many IoT and embedded devices like routers, thermostats, and automobiles now run Linux. Many of these devices almost entirely lack a user interface, and there's no way to "unboot" them cleanly. Consider jump-starting a car with a dead battery where the power to the [Linux-running head unit][36] goes up and down repeatedly. How is it that the system boots without a long fsck when the engine finally starts running? The answer is that embedded devices rely on [a read-only root fileystem][37] (ro-rootfs for short).
|
||||
|
||||
![Photograph of a console][38]
|
||||
ro-rootfs are why embedded systems don't frequently need to fsck. Credit (with permission): <https://tinyurl.com/yxoauoub>
|
||||
|
||||
A ro-rootfs offers many advantages that are less obvious than incorruptibility. One is that malware cannot write to /usr or /lib if no Linux process can write there. Another is that a largely immutable filesystem is critical for field support of remote devices, as support personnel possess local systems that are nominally identical to those in the field. Perhaps the most important (but also most subtle) advantage is that ro-rootfs forces developers to decide during a project's design phase which system objects will be immutable. Dealing with ro-rootfs may often be inconvenient or even painful, as [const variables in programming languages][39] often are, but the benefits easily repay the extra overhead.
|
||||
|
||||
Creating a read-only rootfs does require some additional amount of effort for embedded developers, and that's where VFS comes in. Linux needs files in /var to be writable, and in addition, many popular applications that embedded systems run will try to create configuration dot-files in $HOME. One solution for configuration files in the home directory is typically to pregenerate them and build them into the rootfs. For /var, one approach is to mount it on a separate writable partition while / itself is mounted as read-only. Using bind or overlay mounts is another popular alternative.
|
||||
|
||||
### Bind and overlay mounts and their use by containers
|
||||
|
||||
Running **[man mount][40]** is the best place to learn about bind and overlay mounts, which give embedded developers and system administrators the power to create a filesystem in one path location and then provide it to applications at a second one. For embedded systems, the implication is that it's possible to store the files in /var on an unwritable flash device but overlay- or bind-mount a path in a tmpfs onto the /var path at boot so that applications can scrawl there to their heart's delight. At next power-on, the changes in /var will be gone. Overlay mounts provide a union between the tmpfs and the underlying filesystem and allow apparent modification to an existing file in a ro-rootfs, while bind mounts can make new empty tmpfs directories show up as writable at ro-rootfs paths. While overlayfs is a proper filesystem type, bind mounts are implemented by the [VFS namespace facility][41].
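As a sketch of the /var case (the directory names under /run are assumptions): a tmpfs supplies the writable upper layer, and an overlay mount stacks it on top of the read-only /var:

```
$ sudo mkdir -p /run/var-rw
$ sudo mount -t tmpfs tmpfs /run/var-rw
$ sudo mkdir /run/var-rw/upper /run/var-rw/work
$ sudo mount -t overlay overlay \
    -o lowerdir=/var,upperdir=/run/var-rw/upper,workdir=/run/var-rw/work /var
```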
|
||||
|
||||
Based on the description of overlay and bind mounts, no one will be surprised that [Linux containers][42] make heavy use of them. Let's spy on what happens when we employ [systemd-nspawn][43] to start up a container by running bcc's mountsnoop tool:
|
||||
|
||||
![Console - system-nspawn invocation][44]
|
||||
The system-nspawn invocation fires up the container while mountsnoop.py runs.
|
||||
|
||||
And let's see what happened:
|
||||
|
||||
![Console - Running mountsnoop][45]
|
||||
Running mountsnoop during the container "boot" reveals that the container runtime relies heavily on bind mounts. (Only the beginning of the lengthy output is displayed)
|
||||
|
||||
Here, systemd-nspawn is providing selected files in the host's procfs and sysfs to the container at paths in its rootfs. Besides the MS_BIND flag that sets bind-mounting, some of the other flags that the "mount" system call invokes determine the relationship between changes in the host namespace and in the container. For example, the bind-mount can either propagate changes in /proc and /sys to the container, or hide them, depending on the invocation.
|
||||
|
||||
### Summary
|
||||
|
||||
Understanding Linux internals can seem an impossible task, as the kernel itself contains a gigantic amount of code, leaving aside Linux userspace applications and the system-call interface in C libraries like glibc. One way to make progress is to read the source code of one kernel subsystem with an emphasis on understanding the userspace-facing system calls and headers plus major kernel internal interfaces, exemplified here by the file_operations table. The file operations are what makes "everything is a file" actually work, so getting a handle on them is particularly satisfying. The kernel C source files in the top-level fs/ directory constitute its implementation of virtual filesystems, which are the shim layer that enables broad and relatively straightforward interoperability of popular filesystems and storage devices. Bind and overlay mounts via Linux namespaces are the VFS magic that makes containers and read-only root filesystems possible. In combination with a study of source code, the eBPF kernel facility and its bcc interface makes probing the kernel simpler than ever before.
|
||||
|
||||
Much thanks to [Akkana Peck][46] and [Michael Eager][47] for comments and corrections.
|
||||
|
||||
Alison Chaiken will present [Virtual filesystems: why we need them and how they work][48] at the 17th annual Southern California Linux Expo ([SCaLE 17x][49]) March 7-10 in Pasadena, Calif.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/virtual-filesystems-linux
|
||||
|
||||
作者:[Alison Chaiken][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.pearson.com/us/higher-education/program/Love-Linux-Kernel-Development-3rd-Edition/PGM202532.html
|
||||
[2]: http://cassandra.apache.org/
|
||||
[3]: https://en.wikipedia.org/wiki/NoSQL
|
||||
[4]: http://lwn.net/Articles/444910/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_1-console.png (Console)
|
||||
[6]: https://lwn.net/Articles/22355/
|
||||
[7]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/fs.h
|
||||
[8]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs
|
||||
[9]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_2-shim-layer.png (How userspace accesses various types of filesystems)
|
||||
[10]: https://lwn.net/Articles/774114/
|
||||
[11]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_3-crazy.jpg (Man with shocked expression)
|
||||
[12]: https://wiki.archlinux.org/index.php/Tmpfs
|
||||
[13]: http://man7.org/linux/man-pages/man8/sysctl.8.html
|
||||
[14]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_4-proc-meminfo.png (Console)
|
||||
[15]: http://www-f1.ijs.si/~ramsak/km1/mermin.moon.pdf
|
||||
[16]: https://en.wikiquote.org/wiki/David_Mermin
|
||||
[17]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_5-moon.jpg (Full moon)
|
||||
[18]: https://commons.wikimedia.org/wiki/Moon#/media/File:Full_Moon_Luc_Viatour.jpg
|
||||
[19]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/ABI/stable
|
||||
[20]: https://lkml.org/lkml/2012/12/23/75
|
||||
[21]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_7-sysfs.png (Console)
|
||||
[22]: https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
|
||||
[23]: https://github.com/iovisor/bcc
|
||||
[24]: https://github.com/iovisor/bcc/blob/master/INSTALL.md
|
||||
[25]: http://brendangregg.com/ebpf.html
|
||||
[26]: https://github.com/iovisor/bcc/tree/master/tools
|
||||
[27]: https://github.com/iovisor/bcc/blob/master/tools/vfscount_example.txt
|
||||
[28]: https://github.com/iovisor/bcc/blob/master/tools/vfsstat.py
|
||||
[29]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_8-vfsstat.png (Console - vfsstat.py)
|
||||
[30]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_9-ebpf.png (Console when USB is inserted)
|
||||
[31]: https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt
|
||||
[32]: https://events.static.linuxfound.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
|
||||
[33]: http://northstar-www.dartmouth.edu/doc/solaris-forte/manuals/c/user_guide/cscope.html
|
||||
[34]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/block/genhd.c#n665
|
||||
[35]: http://www.man7.org/linux/man-pages/man8/fsck.8.html
|
||||
[36]: https://wiki.automotivelinux.org/_media/eg-rhsa/agl_referencehardwarespec_v0.1.0_20171018.pdf
|
||||
[37]: https://elinux.org/images/1/1f/Read-only_rootfs.pdf
|
||||
[38]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_10-code.jpg (Photograph of a console)
|
||||
[39]: https://www.meetup.com/ACCU-Bay-Area/events/drpmvfytlbqb/
|
||||
[40]: http://man7.org/linux/man-pages/man8/mount.8.html
|
||||
[41]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/sharedsubtree.txt
|
||||
[42]: https://coreos.com/os/docs/latest/kernel-modules.html
|
||||
[43]: https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html
|
||||
[44]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_11-system-nspawn.png (Console - system-nspawn invocation)
|
||||
[45]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_12-mountsnoop.png (Console - Running mountsnoop)
|
||||
[46]: http://shallowsky.com/
|
||||
[47]: http://eagercon.com/
|
||||
[48]: https://www.socallinuxexpo.org/scale/17x/presentations/virtual-filesystems-why-we-need-them-and-how-they-work
|
||||
[49]: https://www.socallinuxexpo.org/
|
@ -1,132 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Automate backups with restic and systemd)
|
||||
[#]: via: (https://fedoramagazine.org/automate-backups-with-restic-and-systemd/)
|
||||
[#]: author: (Link Dupont https://fedoramagazine.org/author/linkdupont/)
|
||||
|
||||
Automate backups with restic and systemd
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Timely backups are important. So much so that [backup software][2] is a common topic of discussion, even [here on the Fedora Magazine][3]. This article demonstrates how to automate backups with **restic** using only systemd unit files.
|
||||
|
||||
For an introduction to restic, be sure to check out our article [Use restic on Fedora for encrypted backups][4]. Then read on for more details.
|
||||
|
||||
Two systemd services are required in order to automate taking snapshots and keeping data pruned. The first service runs the _backup_ command on a regular schedule. The second service takes care of pruning old data.
|
||||
|
||||
If you’re not familiar with systemd at all, there’s never been a better time to learn. Check out [the series on systemd here at the Magazine][5], starting with this primer on unit files:
|
||||
|
||||
> [systemd unit file basics][6]
|
||||
|
||||
If you haven’t installed restic already, note it’s in the official Fedora repositories. To install use this command [with sudo][7]:
|
||||
|
||||
```
|
||||
$ sudo dnf install restic
|
||||
```
|
||||
|
||||
### Backup
|
||||
|
||||
First, create the _~/.config/systemd/user/restic-backup.service_ file. Copy and paste the text below into the file for best results.
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Restic backup service
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=restic backup --verbose --one-file-system --tag systemd.timer $BACKUP_EXCLUDES $BACKUP_PATHS
|
||||
ExecStartPost=restic forget --verbose --tag systemd.timer --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
|
||||
EnvironmentFile=%h/.config/restic-backup.conf
|
||||
```
|
||||
|
||||
This service references an environment file in order to load secrets (such as _RESTIC_PASSWORD_ ). Create the _~/.config/restic-backup.conf_ file. Copy and paste the content below for best results. This example uses BackBlaze B2 buckets. Adjust the ID, key, repository, and password values accordingly.
|
||||
|
||||
```
|
||||
BACKUP_PATHS="/home/rupert"
|
||||
BACKUP_EXCLUDES="--exclude-file /home/rupert/.restic_excludes --exclude-if-present .exclude_from_backup"
|
||||
RETENTION_DAYS=7
|
||||
RETENTION_WEEKS=4
|
||||
RETENTION_MONTHS=6
|
||||
RETENTION_YEARS=3
|
||||
B2_ACCOUNT_ID=XXXXXXXXXXXXXXXXXXXXXXXXX
|
||||
B2_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
|
||||
RESTIC_REPOSITORY=b2:XXXXXXXXXXXXXXXXXX:/
|
||||
RESTIC_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
|
||||
```
|
||||
|
||||
Now that the service is installed, reload systemd: _systemctl --user daemon-reload_. Try running the service manually to create a backup: _systemctl --user start restic-backup_.
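In fenced form, those two commands are:

```
$ systemctl --user daemon-reload
$ systemctl --user start restic-backup
```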
|
||||
|
||||
Because the service is a _oneshot_ , it will run once and exit. After verifying that the service runs and creates snapshots as desired, set up a timer to run this service regularly. For example, to run the _restic-backup.service_ daily, create _~/.config/systemd/user/restic-backup.timer_ as follows. Again, copy and paste this text:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Backup with restic daily
|
||||
[Timer]
|
||||
OnCalendar=daily
|
||||
Persistent=true
|
||||
[Install]
|
||||
WantedBy=timers.target
|
||||
```
|
||||
|
||||
Enable it by running this command:
|
||||
|
||||
```
|
||||
$ systemctl --user enable --now restic-backup.timer
|
||||
```
|
||||
|
||||
### Prune
|
||||
|
||||
While the main service runs the _forget_ command to only keep snapshots within the keep policy, the data is not actually removed from the restic repository. The _prune_ command inspects the repository and current snapshots, and deletes any data not associated with a snapshot. Because _prune_ can be a time-consuming process, it is not necessary to run every time a backup is run. This is the perfect scenario for a second service and timer. First, create the file _~/.config/systemd/user/restic-prune.service_ by copying and pasting this text:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Restic backup service (data pruning)
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=restic prune
|
||||
EnvironmentFile=%h/.config/restic-backup.conf
|
||||
```
|
||||
|
||||
Similarly to the main _restic-backup.service_ , _restic-prune_ is a oneshot service and can be run manually. Once the service has been set up, create and enable a corresponding timer at _~/.config/systemd/user/restic-prune.timer_ :
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Prune data from the restic repository monthly
|
||||
[Timer]
|
||||
OnCalendar=monthly
|
||||
Persistent=true
|
||||
[Install]
|
||||
WantedBy=timers.target
|
||||
```
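Assuming you want the same enable-and-start step as for the backup timer, the command is analogous:

```
$ systemctl --user enable --now restic-prune.timer
```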
|
||||
|
||||
That’s it! Restic will now run daily and prune data monthly.
|
||||
|
||||
* * *
|
||||
|
||||
_Photo by _[ _Samuel Zeller_][8]_ on _[_Unsplash_][9]_._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/automate-backups-with-restic-and-systemd/
|
||||
|
||||
作者:[Link Dupont][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/linkdupont/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/restic-systemd-816x345.jpg
|
||||
[2]: https://restic.net/
|
||||
[3]: https://fedoramagazine.org/?s=backup
|
||||
[4]: https://fedoramagazine.org/use-restic-encrypted-backups/
|
||||
[5]: https://fedoramagazine.org/series/systemd-series/
|
||||
[6]: https://fedoramagazine.org/systemd-getting-a-grip-on-units/
|
||||
[7]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[8]: https://unsplash.com/photos/JuFcQxgCXwA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[9]: https://unsplash.com/search/photos/archive?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -1,96 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Upgrading Fedora 29 to Fedora 30)
|
||||
[#]: via: (https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/)
|
||||
[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/)
|
||||
|
||||
Upgrading Fedora 29 to Fedora 30
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Fedora 30 [is available now][2]. You’ll likely want to upgrade your system to the latest version of Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 29 to Fedora 30.
|
||||
|
||||
### Upgrading Fedora 29 Workstation to Fedora 30
|
||||
|
||||
Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell.
|
||||
|
||||
Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 30 is Now Available.
|
||||
|
||||
If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.
|
||||
|
||||
Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.
|
||||
|
||||
### Using the command line
|
||||
|
||||
If you’ve upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 29 to Fedora 30. Using this plugin will make your upgrade to Fedora 30 simple and easy.
|
||||
|
||||
##### 1\. Update software and back up your system
|
||||
|
||||
Before you do anything, you will want to make sure you have the latest software for Fedora 29 before beginning the upgrade process. To update your software, use _GNOME Software_ or enter the following command in a terminal.
|
||||
|
||||
```
|
||||
sudo dnf upgrade --refresh
|
||||
```
|
||||
|
||||
Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][3] on the Fedora Magazine.
|
||||
|
||||
##### 2\. Install the DNF plugin
|
||||
|
||||
Next, open a terminal and type the following command to install the plugin:
|
||||
|
||||
```
|
||||
sudo dnf install dnf-plugin-system-upgrade
|
||||
```
|
||||
|
||||
##### 3\. Start the update with DNF
|
||||
|
||||
Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:
|
||||
|
||||
```
|
||||
sudo dnf system-upgrade download --releasever=30
|
||||
```
|
||||
|
||||
This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the _--allowerasing_ flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
|
||||
|
||||
##### 4\. Reboot and upgrade
|
||||
|
||||
Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:
|
||||
|
||||
```
|
||||
sudo dnf system-upgrade reboot
|
||||
```
|
||||
|
||||
Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 29; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.
|
||||
|
||||
Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 30 system.
|
||||
|
||||
![][4]
|
||||
|
||||
### Resolving upgrade problems
|
||||
|
||||
On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade wiki page][5] for more information on troubleshooting in the event of a problem.
|
||||
|
||||
If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.
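For example, the dnf config-manager plugin can disable a repository for the duration of the upgrade; the repository id below is a placeholder:

```
$ dnf repolist                                               # find the repository id
$ sudo dnf config-manager --set-disabled example-thirdparty  # hypothetical repo id
```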
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/
|
||||
|
||||
作者:[Ryan Lerch][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/ryanlerch/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/29-30-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/announcing-fedora-30/
|
||||
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
|
||||
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
|
||||
[5]: https://fedoraproject.org/wiki/DNF_system_upgrade#Resolving_post-upgrade_issues
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
136
sources/tech/20190505 -Review- Void Linux, a Linux BSD Hybrid.md
Normal file
136
sources/tech/20190505 -Review- Void Linux, a Linux BSD Hybrid.md
Normal file
@ -0,0 +1,136 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: ([Review] Void Linux, a Linux BSD Hybrid)
|
||||
[#]: via: (https://itsfoss.com/void-linux/)
|
||||
[#]: author: (John Paul https://itsfoss.com/author/john/)
|
||||
|
||||
[Review] Void Linux, a Linux BSD Hybrid
|
||||
======
|
||||
|
||||
There are distros that follow the crowd and there are others that try to make their own path through the tall weed. Today, we’ll be looking at a small distro that looks to challenge how a distro should work. We’ll be looking at Void Linux.
|
||||
|
||||
### What is Void Linux?
|
||||
|
||||
[Void Linux][1] is a “general purpose operating system, based on the monolithic Linux kernel. Its package system allows you to quickly install, update and remove software; software is provided in binary packages or can be built directly from sources with the help of the XBPS source packages collection.”
|
||||
|
||||
![Void Linux Neofetch][2]
|
||||
|
||||
Like Solus, Void Linux is written from scratch and does not depend on any other operating system. It is a rolling release. Unlike the majority of Linux distros, Void does not use [systemd][3]. Instead, it uses [runit][4]. Another thing that separates Void from the rest of Linux distros is the fact that they use LibreSSL instead of OpenSSL. Void also offers support for the [musl C library][5]. In fact, when you download a .iso file, you can choose between `glibc` and `musl`.
|
||||
|
||||
The homegrown package manager that Void uses is named X Binary Package System (or xbps). According to the [Void wiki][6], xbps has the following features:
|
||||
|
||||
* Supports multiple local and remote repositories (HTTP/HTTPS/FTP).
|
||||
* RSA signed remote repositories
|
||||
* SHA256 hashes for package metadata, files, and binary packages
|
||||
* Supports package states (a la dpkg) to mitigate broken package installs/updates
|
||||
* Ability to resume partial package install/updates
|
||||
* Ability to unpack only files that have been modified in package updates
|
||||
* Ability to use virtual packages
|
||||
* Ability to check for incompatible shared libraries in reverse dependencies
|
||||
* Ability to replace packages
|
||||
* Ability to put packages on hold (to never update them)
|
||||
* Ability to preserve/update configuration files
|
||||
* Ability to force reinstallation of any installed package
|
||||
* Ability to downgrade any installed package
|
||||
* Ability to execute pre/post install/remove/update scriptlets
|
||||
* Ability to check package integrity: missing files, hashes, missing or unresolved (reverse)dependencies, dangling or modified symlinks, etc.
|
||||
|
||||
|
||||
|
||||
#### System Requirements
|
||||
|
||||
According to the [Void Linux download page][7], the system requirements differ based on the architecture you choose. 64-bit images require “EM64T CPU, 96MB RAM, 350MB disk, Ethernet/WiFi for network installation”. 32-bit images require “Pentium 4 CPU (SSE2), 96MB RAM, 350MB disk, Ethernet / WiFi for network installation”. The [Void Linux handbook][8] recommends 700 MB for storage and also notes that “Flavor installations require more resources. How much more depends on the flavor.”
|
||||
|
||||
Void also supports ARM devices. You can download [ready to boot images][9] for Raspberry Pi and several other [Raspberry Pi alternatives][10].
|
||||
|
||||
|
||||
|
||||
### Void Linux Installation
|
||||
|
||||
NOTE: you can install [Void Linux][7] either from a live image or with a net installer. I used a live image.
|
||||
|
||||
I was able to successfully install Void Linux on my Dell Latitude D630. This laptop has an Intel Centrino Duo Core processor running at 2.00 GHz, NVIDIA Quadro NVS 135M graphics chip, and 4 GB of RAM.
|
||||
|
||||
![Void Linux Mate][12]
|
||||
|
||||
After I `dd`ed the 800 MB Void Linux MATE image to my thumb drive and inserted it, I booted my computer. I was very quickly presented with a vanilla MATE desktop. To start installing Void, I opened up a terminal and typed `sudo void-installer`. After using the default password `voidlinux`, the installer started. The installer reminded me a little bit of the terminal Debian installer, but it was laid out more like FreeBSD. It was divided into keyboard, network, source, hostname, locale, timezone, root password, user account, bootloader, partition, and filesystems sections.
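A sketch of those two steps, with the image filename and /dev/sdX as placeholders for your own download and thumb drive:

```
$ sudo dd if=void-live-x86_64-mate.iso of=/dev/sdX bs=4M status=progress
$ sudo void-installer
```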
|
||||
|
||||
Most of the sections were self-explanatory. In the source section, you could choose whether to install the packages from the local image or grab them from the web. I chose local because I did not want to eat up bandwidth or take longer than I had to. The partition and filesystems sections are usually handled automatically by most installers, but not on Void. In this case, the first section allows you to use `cfdisk` to create partitions and the second allows you to specify what filesystems will be used in those partitions. I followed the partition layout on [this page][13].
|
||||
|
||||
If you install Void Linux from the local image, you definitely need to update your system. The [Void wiki][14] recommends running `xbps-install -Suv` until there are no more updates to install. It would probably be a good idea to reboot between batches of updates.
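In other words, repeat the update command until it reports nothing left to do:

```
$ sudo xbps-install -Suv
```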
|
||||
|
||||
### Experience with Void Linux
|
||||
|
||||
So far in my Linux journey, Void Linux has been by far the most difficult. It feels more like I’m [using a BSD than a Linux distro][15]. (I guess that should not be surprising since Void was created by a former [NetBSD][16] developer who wanted to experiment with his own package manager.) The steps in the command line installer are closer to that of [FreeBSD][17] than Debian.
|
||||
|
||||
Once Void was installed and updated, I went to work installing apps. Unfortunately, I ran into an issue with missing applications. Most of these applications come preinstalled on other distros. I had to install wget, unzip, git, nano, and LibreOffice, to name just a few.
|
||||
|
||||
Void does not come with a graphical package manager. There are three unofficial frontends for the xbps package manager and [one is based on qt][18]. I ran into issues getting one of the Bash-based tools to work. It hadn’t been updated in 4-5 years.
|
||||
|
||||
![Octoxbps][19]
|
||||
|
||||
The xbps package manager is quite interesting. It downloads each package along with its signature to verify it. You can see the [terminal print out][20] from when I installed Mcomix. Xbps does not use the naming convention found in most package managers (i.e., `apt install` or `pacman -R`); instead, it uses `xbps-install`, `xbps-query`, and `xbps-remove`. Luckily, the Void wiki has a [page][21] showing which xbps commands correspond to apt or dnf commands.
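A few rough equivalents as a sketch (the package name is just an example):

```
$ sudo xbps-install -S firefox   # install a package, roughly 'apt install firefox'
$ xbps-query -Rs firefox         # search the remote repositories
$ sudo xbps-remove -R firefox    # remove it along with now-unneeded dependencies
```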
|
||||
|
||||
|
||||
|
||||
The main repo for Void is located in Germany, so I decided to switch to a more local server to ease the burden on that server and to download packages quicker. Switching to a local mirror took a couple of tries because the documentation was not very clear. Documentation for Void is located in two different places: the [wiki][23] and the [handbook][24]. For me, the wiki’s [explanation][25] was confusing and I ran into issues. So, I searched for an answer on DuckDuckGo. From there I stumbled upon the [handbook’s instructions][26], which were much clearer. (The handbook is not linked on the Void Linux website and I had to stumble across it via search.)
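For the record, the handbook’s approach boils down to copying the repository configuration into /etc/xbps.d and editing the repository= URL; the file name and mirror URL below are assumptions, so adjust them to your setup:

```
$ sudo mkdir -p /etc/xbps.d
$ sudo cp /usr/share/xbps.d/*-repository-*.conf /etc/xbps.d/
$ sudoedit /etc/xbps.d/00-repository-main.conf   # point repository= at a closer mirror
$ sudo xbps-install -S                           # resync package indexes from the new mirror
```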
|
||||
|
||||
One of the nice things about Void is the speed of the system once everything was installed. It had the quickest boot time I have ever encountered. Overall, the system was very responsive. I did not run into any system crashes.
|
||||
|
||||
### Final Thoughts
|
||||
|
||||
Void Linux took more work to get to a useable state than any other distro I have tried. Even the BSDs I tried felt more polished than Void. I think the tagline “General purpose Linux” is misleading. It should be “Linux with hackers and tinkerers in mind”. Personally, I prefer using distros that are ready for me to use after installing. While it is an interesting combination of Linux and BSD ideas, I don’t think I’ll add Void to my short list of go-to distros.
|
||||
|
||||
If you like tinkering with your Linux system or like building it from scratch, give [Void Linux][7] a try.
|
||||
|
||||
Have you ever used Void Linux? What is your favorite Debian-based distro? Please let us know in the comments below.
|
||||
|
||||
If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][27].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/void-linux/
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://voidlinux.org/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Void-Linux-Neofetch.png?resize=800%2C562&ssl=1
|
||||
[3]: https://en.wikipedia.org/wiki/Systemd
|
||||
[4]: http://smarden.org/runit/
|
||||
[5]: https://www.musl-libc.org/
|
||||
[6]: https://wiki.voidlinux.org/XBPS
|
||||
[7]: https://voidlinux.org/download/
|
||||
[8]: https://docs.voidlinux.org/installation/base-requirements.html
|
||||
[9]: https://voidlinux.org/download/#download-ready-to-boot-images-for-arm
|
||||
[10]: https://itsfoss.com/raspberry-pi-alternatives/
|
||||
[11]: https://itsfoss.com/nomadbsd/
|
||||
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/Void-Linux-Mate.png?resize=800%2C640&ssl=1
|
||||
[13]: https://wiki.voidlinux.org/Disks#Filesystems
|
||||
[14]: https://wiki.voidlinux.org/Post_Installation#Updates
|
||||
[15]: https://itsfoss.com/why-use-bsd/
|
||||
[16]: https://itsfoss.com/netbsd-8-release/
|
||||
[17]: https://www.freebsd.org/
|
||||
[18]: https://github.com/aarnt/octoxbps
|
||||
[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/OctoXBPS.jpg?resize=800%2C534&ssl=1
|
||||
[20]: https://pastebin.com/g31n1bFT
|
||||
[21]: https://wiki.voidlinux.org/Rosetta_stone
|
||||
[22]: https://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/
|
||||
[23]: https://wiki.voidlinux.org/
|
||||
[24]: https://docs.voidlinux.org/
|
||||
[25]: https://wiki.voidlinux.org/XBPS#Official_Repositories
|
||||
[26]: https://docs.voidlinux.org/xbps/repositories/mirrors/changing.html
|
||||
[27]: http://reddit.com/r/linuxusersgroup
|
@ -1,209 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Create SSH Alias In Linux)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Create SSH Alias In Linux
|
||||
======
|
||||
|
||||
![How To Create SSH Alias In Linux][1]
|
||||
|
||||
If you frequently access a lot of different remote systems via SSH, this trick will save you some time. You can create SSH aliases for frequently-accessed systems. This way you need not remember all the different usernames, hostnames, SSH port numbers and IP addresses. Additionally, it avoids the need to repeatedly type the same username, hostname, IP address and port number whenever you SSH into a Linux server.
|
||||
|
||||
### Create SSH Alias In Linux
|
||||
|
||||
Before I knew this trick, I usually connected to a remote system over SSH using one of the following ways.
|
||||
|
||||
Using IP address:
|
||||
|
||||
```
|
||||
$ ssh 192.168.225.22
|
||||
```
|
||||
|
||||
Or using port number, username and IP address:
|
||||
|
||||
```
|
||||
$ ssh -p 22 sk@192.168.225.22
|
||||
```
|
||||
|
||||
Or using port number, username and hostname:
|
||||
|
||||
```
|
||||
$ ssh -p 22 sk@server.example.com
|
||||
```
|
||||
|
||||
Here,
|
||||
|
||||
* **22** is the port number,
|
||||
* **sk** is the username of the remote system,
|
||||
* **192.168.225.22** is the IP of my remote system,
|
||||
* **server.example.com** is the hostname of remote system.
|
||||
|
||||
|
||||
|
||||
I believe most newbie Linux users and/or admins SSH into a remote system this way. However, if you SSH into multiple different systems, remembering all of the hostnames or IP addresses and usernames is a bit difficult, unless you write them down on paper or save them in a text file. No worries! This can be easily solved by creating an alias (or shortcut) for SSH connections.
|
||||
|
||||
We can create aliases for SSH commands using two methods.
|
||||
|
||||
##### Method 1 – Using SSH Config File
|
||||
|
||||
This is my preferred way of creating aliases.
|
||||
|
||||
We can use the SSH default configuration file to create SSH aliases. To do so, edit the **~/.ssh/config** file (if this file doesn’t exist, just create one):
|
||||
|
||||
```
|
||||
$ vi ~/.ssh/config
|
||||
```
|
||||
|
||||
Add all of your remote hosts details like below:
|
||||
|
||||
```
|
||||
Host webserver
|
||||
HostName 192.168.225.22
|
||||
User sk
|
||||
|
||||
Host dns
|
||||
HostName server.example.com
|
||||
User root
|
||||
|
||||
Host dhcp
|
||||
HostName 192.168.225.25
|
||||
User ostechnix
|
||||
Port 2233
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
Create SSH Alias In Linux Using SSH Config File
|
||||
|
||||
Replace the values of **Host**, **HostName**, **User** and **Port** with your own. Once you have added the details of all remote hosts, save and exit the file.
|
||||
|
||||
Now you can SSH into the systems with commands:
|
||||
|
||||
```
|
||||
$ ssh webserver
|
||||
|
||||
$ ssh dns
|
||||
|
||||
$ ssh dhcp
|
||||
```
|
||||
|
||||
It is as simple as that.
|
||||
|
||||
Have a look at the following screenshot.
|
||||
|
||||
![][3]
|
||||
|
||||
Access remote system using SSH alias
|
||||
|
||||
See? I only used the alias name (i.e., **webserver**) to access my remote system that has the IP address **192.168.225.22**.
|
||||
|
||||
Please note that this applies to the current user only. If you want to make the aliases available for all users (system-wide), add the above lines to the **/etc/ssh/ssh_config** file.
|
||||
|
||||
You can also add plenty of other options in the SSH config file. For example, if you have [**configured SSH key-based authentication**][4], specify the SSH key file location as below.
|
||||
|
||||
```
|
||||
Host ubuntu
|
||||
HostName 192.168.225.50
|
||||
User senthil
|
||||
IdentityFile ~/.ssh/id_rsa_remotesystem
|
||||
```
|
||||
|
||||
Make sure you have replaced the hostname, username and SSH key file path with your own.
|
||||
|
||||
Now connect to the remote server with command:
|
||||
|
||||
```
|
||||
$ ssh ubuntu
|
||||
```
|
||||
|
||||
This way you can add as many remote hosts as you want to access over SSH and quickly reach them using their alias names.
|
||||
|
||||
##### Method 2 – Using Bash aliases
|
||||
|
||||
This is a quick and dirty way to create SSH aliases for faster access. You can use the [**alias command**][5] to make this task much easier.
|
||||
|
||||
Open the **~/.bashrc** or **~/.bash_profile** file:
|
||||
|
||||
Add an alias for each SSH connection, one per line, like below.
|
||||
|
||||
```
|
||||
alias webserver='ssh sk@192.168.225.22'
|
||||
alias dns='ssh root@server.example.com'
|
||||
alias dhcp='ssh ostechnix@192.168.225.25 -p 2233'
|
||||
alias ubuntu='ssh senthil@192.168.225.50 -i ~/.ssh/id_rsa_remotesystem'
|
||||
```
|
||||
|
||||
Again, make sure you have replaced the host, hostname, port number and IP address with your own. Save the file and exit.
|
||||
|
||||
Then, apply the changes using command:
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ source ~/.bash_profile
|
||||
```
|
||||
|
||||
In this method, you don’t even need to use the “ssh alias-name” command. Instead, just type the alias name like below.
|
||||
|
||||
```
|
||||
$ webserver
|
||||
$ dns
|
||||
$ dhcp
|
||||
$ ubuntu
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
These two methods are very simple, yet useful and much more convenient for those who often SSH into multiple different systems. Use whichever of the aforementioned methods suits you to quickly access your remote Linux systems over SSH.
|
||||
|
||||
* * *
|
||||
|
||||
**Suggested read:**
|
||||
|
||||
* [**Allow Or Deny SSH Access To A Particular User Or Group In Linux**][7]
|
||||
* [**How To SSH Into A Particular Directory On Linux**][8]
|
||||
* [**How To Stop SSH Session From Disconnecting In Linux**][9]
|
||||
* [**4 Ways To Keep A Command Running After You Log Out Of The SSH Session**][10]
|
||||
* [**SSLH – Share A Same Port For HTTPS And SSH**][11]
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/ssh-alias-720x340.png
|
||||
[2]: http://www.ostechnix.com/wp-content/uploads/2019/04/Create-SSH-Alias-In-Linux.png
|
||||
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias.png
|
||||
[4]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
|
||||
[5]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias-1.png
|
||||
[7]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
|
||||
[8]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
|
||||
[9]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
|
||||
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
|
||||
[11]: https://www.ostechnix.com/sslh-share-port-https-ssh/
|
@ -0,0 +1,199 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Use udica to build SELinux policy for containers)
|
||||
[#]: via: (https://fedoramagazine.org/use-udica-to-build-selinux-policy-for-containers/)
|
||||
[#]: author: (Lukas Vrabec https://fedoramagazine.org/author/lvrabec/)
|
||||
|
||||
Use udica to build SELinux policy for containers
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
While modern IT environments move towards Linux containers, the need to secure these environments is as relevant as ever. Containers are a process isolation technology. While containers can be a defense mechanism, they only excel when combined with SELinux.
|
||||
|
||||
Fedora SELinux engineering built a new standalone tool, **udica** , to generate SELinux policy profiles for containers by automatically inspecting them. This article focuses on why _udica_ is needed in the container world, and how it makes SELinux and containers work better together. You’ll find examples of SELinux separation for containers that let you avoid turning protection off because the generic SELinux type _container_t_ is too tight. With _udica_ you can easily customize the policy with limited SELinux policy writing skills.
|
||||
|
||||
### SELinux technology
|
||||
|
||||
SELinux is a security technology that brings proactive security to Linux systems. It’s a labeling system that assigns a label to all _subjects_ (processes and users) and _objects_ (files, directories, sockets, etc.). These labels are then used in a security policy that controls access throughout the system. It’s important to mention that what’s not allowed in an SELinux security policy is denied by default. The policy rules are enforced by the kernel. This security technology has been in use on Fedora for several years. A real example of such a rule is:
|
||||
|
||||
```
|
||||
allow httpd_t httpd_log_t: file { append create getattr ioctl lock open read setattr };
|
||||
```
|
||||
|
||||
The rule allows any process labeled as _httpd_t_ to create, append, read, and lock files labeled as _httpd_log_t_. Using the _ps_ command, you can list all processes with their labels:
|
||||
|
||||
```
|
||||
$ ps -efZ | grep httpd
|
||||
system_u:system_r:httpd_t:s0 root 13911 1 0 Apr14 ? 00:05:14 /usr/sbin/httpd -DFOREGROUND
|
||||
...
|
||||
```
|
||||
|
||||
To see which objects are labeled as httpd_log_t, use _semanage_:
|
||||
|
||||
```
|
||||
# semanage fcontext -l | grep httpd_log_t
|
||||
/var/log/httpd(/.*)? all files system_u:object_r:httpd_log_t:s0
|
||||
/var/log/nginx(/.*)? all files system_u:object_r:httpd_log_t:s0
|
||||
...
|
||||
```
|
||||
|
||||
The SELinux security policy for Fedora is shipped in the _selinux-policy_ RPM package.
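If you'd like to check what is installed and whether enforcement is on for your own machine, a quick look is enough (output omitted here, since it depends on your release):

```
$ rpm -q selinux-policy
$ getenforce
```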
|
||||
|
||||
### SELinux vs. containers
|
||||
|
||||
In Fedora, the _container-selinux_ RPM package provides a generic SELinux policy for all containers started by engines like _podman_ or _docker_. Its main purposes are to protect the host system against a container process, and to separate containers from each other. For instance, containers confined by SELinux with the process type _container_t_ can only read/execute files in _/usr_ and write to files of the _container_file_t_ type on the host file system. To prevent attacks by containers on each other, Multi-Category Security (MCS) is used.
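To make the MCS part concrete: each container is started with its own randomly assigned pair of MCS categories appended to its label, so two containers that both run as _container_t_ still cannot reach each other's processes or files. The category numbers below are purely illustrative; yours will differ:

```
$ ps -efZ | grep container_t
system_u:system_r:container_t:s0:c140,c257 ... /usr/bin/sleep 1000
system_u:system_r:container_t:s0:c371,c718 ... /usr/bin/sleep 1000
```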
|
||||
|
||||
Using only one generic policy for containers is problematic, because of the huge variety of container usage. On one hand, the default container type (_container_t_) is often too strict. For example:
|
||||
|
||||
* [Fedora Silverblue][2] needs containers to read/write a user’s home directory
|
||||
* [Fluentd][3] project needs containers to be able to read logs in the _/var/log_ directory
|
||||
|
||||
|
||||
|
||||
On the other hand, the default container type could be too loose for certain use cases:
|
||||
|
||||
* It has no SELinux network controls — all container processes can bind to any network port
|
||||
* It has no SELinux control on [Linux capabilities][4] — all container processes can use all capabilities
|
||||
|
||||
|
||||
|
||||
There is one solution to handle both use cases: write a custom SELinux security policy for the container. This can be tricky, because SELinux expertise is required. For this purpose, the _udica_ tool was created.
|
||||
|
||||
### Introducing udica
|
||||
|
||||
Udica generates SELinux security profiles for containers. Its concept is based on the “block inheritance” feature inside the [common intermediate language][5] (CIL) supported by SELinux userspace. The tool creates a policy that combines:
|
||||
|
||||
* Rules inherited from specified CIL blocks (templates), and
|
||||
* Rules discovered by inspecting the container JSON file, which contains mountpoint and port definitions
|
||||
|
||||
|
||||
|
||||
You can load the final policy immediately, or move it to another system to load into the kernel. Here’s an example, using a container that:
|
||||
|
||||
* Mounts _/home_ as read only
|
||||
* Mounts _/var/spool_ as read/write
|
||||
* Exposes port _tcp/21_
|
||||
|
||||
|
||||
|
||||
The container starts with this command:
|
||||
|
||||
```
|
||||
# podman run -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
|
||||
```
|
||||
|
||||
The default container type (_container_t_) doesn’t allow any of these three actions. To prove it, you could use the _sesearch_ tool to check whether the relevant _allow_ rules are present on the system:
|
||||
|
||||
```
|
||||
# sesearch -A -s container_t -t home_root_t -c dir -p read
|
||||
```
|
||||
|
||||
There’s no _allow_ rule present that lets a process labeled as _container_t_ access a directory labeled _home_root_t_ (like the _/home_ directory). The same situation occurs with _/var/spool_, which is labeled _var_spool_t_:
|
||||
|
||||
```
|
||||
# sesearch -A -s container_t -t var_spool_t -c dir -p read
|
||||
```
|
||||
|
||||
On the other hand, the default policy completely allows network access.
|
||||
|
||||
```
|
||||
# sesearch -A -s container_t -t port_type -c tcp_socket
|
||||
allow container_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };
|
||||
allow sandbox_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };
|
||||
```
|
||||
|
||||
### Securing the container
|
||||
|
||||
It would be great to restrict this access and allow the container to bind only to TCP port _21_ or to ports with the same SELinux label. Imagine you find an example container using _podman ps_ whose ID is _37a3635afb8f_:
|
||||
|
||||
```
|
||||
# podman ps -q
|
||||
37a3635afb8f
|
||||
```
|
||||
|
||||
You can now inspect the container and pass the inspection file to the _udica_ tool. The name for the new policy is _my_container_.
|
||||
|
||||
```
|
||||
# podman inspect 37a3635afb8f > container.json
|
||||
# udica -j container.json my_container
|
||||
Policy my_container with container id 37a3635afb8f created!
|
||||
|
||||
Please load these modules using:
|
||||
# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}
|
||||
|
||||
Restart the container with: "--security-opt label=type:my_container.process" parameter
|
||||
```
|
||||
|
||||
That’s it! You just created a custom SELinux security policy for the example container. Now you can load this policy into the kernel and make it active. The _udica_ output above even tells you the command to use:
|
||||
|
||||
```
|
||||
# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}
|
||||
```
|
||||
|
||||
Now you must restart the container to allow the container engine to use the new custom policy:
|
||||
|
||||
```
|
||||
# podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
|
||||
```
|
||||
|
||||
The example container is now running in the newly created _my_container.process_ SELinux process type:
|
||||
|
||||
```
|
||||
# ps -efZ | grep my_container.process
|
||||
unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434 1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
|
||||
system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305 0 13:49 pts/0 00:00:00 bash
|
||||
```
|
||||
|
||||
### Seeing the results
|
||||
|
||||
The command _sesearch_ now shows _allow_ rules for accessing _/home_ and _/var/spool_:
|
||||
|
||||
```
|
||||
# sesearch -A -s my_container.process -t home_root_t -c dir -p read
|
||||
allow my_container.process home_root_t:dir { getattr ioctl lock open read search };
|
||||
# sesearch -A -s my_container.process -t var_spool_t -c dir -p read
|
||||
allow my_container.process var_spool_t:dir { add_name getattr ioctl lock open read remove_name search write }
|
||||
```
|
||||
|
||||
The new custom SELinux policy also allows _my_container.process_ to bind only to TCP/UDP ports labeled the same as TCP port 21:
|
||||
|
||||
```
|
||||
# semanage port -l | grep 21 | grep ftp
|
||||
ftp_port_t tcp 21, 989, 990
|
||||
# sesearch -A -s my_container.process -c tcp_socket -p name_bind
|
||||
allow my_container.process ftp_port_t:tcp_socket name_bind;
|
||||
```
|
||||
|
||||
### Conclusion
|
||||
|
||||
The _udica_ tool helps you create SELinux policies for containers based on an inspection file without any SELinux expertise required. Now you can increase the security of containerized environments. Sources are available on [GitHub][6], and an RPM package is available in Fedora repositories for Fedora 28 and later.
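To try the workflow above on a Fedora 28+ machine, installing the packaged tool is all the setup needed (the package is named udica in the Fedora repositories):

```
$ sudo dnf install udica
```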
|
||||
|
||||
* * *
|
||||
|
||||
*Photo by [Samuel Zeller][7] on [Unsplash][8].*
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/use-udica-to-build-selinux-policy-for-containers/
|
||||
|
||||
作者:[Lukas Vrabec][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/lvrabec/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/udica-816x345.jpg
|
||||
[2]: https://silverblue.fedoraproject.org
|
||||
[3]: https://www.fluentd.org
|
||||
[4]: http://man7.org/linux/man-pages/man7/capabilities.7.html
|
||||
[5]: https://en.wikipedia.org/wiki/Common_Intermediate_Language
|
||||
[6]: https://github.com/containers/udica
|
||||
[7]: https://unsplash.com/photos/KVG-XMOs6tw?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[8]: https://unsplash.com/search/photos/lockers?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,140 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Innovations on the Linux desktop: A look at Fedora 30's new features)
|
||||
[#]: via: (https://opensource.com/article/19/5/fedora-30-features)
|
||||
[#]: author: (Anderson Silva https://opensource.com/users/ansilva/users/marcobravo/users/alanfdoss/users/ansilva)
|
||||
|
||||
Innovations on the Linux desktop: A look at Fedora 30's new features
|
||||
======
|
||||
Learn about some of the highlights in the latest version of Fedora
|
||||
Linux.
|
||||
![Fedora Linux distro on laptop][1]
|
||||
|
||||
The latest version of Fedora Linux was released at the end of April. As a full-time Fedora user since its original release back in 2003 and an active contributor since 2007, I always find it satisfying to see new features and advancements in the community.
|
||||
|
||||
If you want a TL;DR version of what's changed in [Fedora 30][2], feel free to ignore this article and jump straight to Fedora's [ChangeSet][3] wiki page. Otherwise, keep on reading to learn about some of the highlights in the new version.
|
||||
|
||||
### Upgrade vs. fresh install
|
||||
|
||||
I upgraded my Lenovo ThinkPad T series from Fedora 29 to 30 using the [DNF system upgrade instructions][4], and so far it is working great!
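For anyone who hasn't done an upgrade this way before, the linked instructions boil down to a handful of commands (reproduced here from memory; defer to the wiki page if anything differs):

```
$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=30
$ sudo dnf system-upgrade reboot
```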
|
||||
|
||||
I also had the chance to do a fresh install on another ThinkPad, and it was a nice surprise to see a new boot screen on Fedora 30—it even picked up the Lenovo logo. I did not see this new and improved boot screen on the upgrade above; it was only on the fresh install.
|
||||
|
||||
![Fedora 30 boot screen][5]
|
||||
|
||||
### Desktop changes
|
||||
|
||||
If you are a GNOME user, you'll be happy to know that Fedora 30 comes with the latest version, [GNOME 3.32][6]. It has an improved on-screen keyboard (handy for touch-screen laptops), brand new icons for core applications, and a new "Applications" panel under Settings that allows users to gain a bit more control on GNOME default handlers, access permissions, and notifications. Version 3.32 also improves Google Drive performance so that Google files and calendar appointments will be integrated with GNOME.
|
||||
|
||||
![Applications panel in GNOME Settings][7]
|
||||
|
||||
The new Applications panel in GNOME Settings
|
||||
|
||||
Fedora 30 also introduces two new desktop environments: Pantheon and Deepin. Pantheon is [ElementaryOS][8]'s default desktop environment and can be installed with a simple:
|
||||
|
||||
|
||||
```
|
||||
$ sudo dnf groupinstall "Pantheon Desktop"
|
||||
```
|
||||
|
||||
I haven't used Pantheon yet, but I do use [Deepin][9]. Installation is simple; just run:
|
||||
|
||||
|
||||
```
|
||||
$ sudo dnf install deepin-desktop
|
||||
```
|
||||
|
||||
then log out of GNOME and log back in, choosing "Deepin" by clicking on the gear icon on the login screen.
|
||||
|
||||
![Deepin desktop on Fedora 30][10]
|
||||
|
||||
Deepin desktop on Fedora 30
|
||||
|
||||
Deepin appears as a very polished, user-friendly desktop environment that allows you to control many aspects of your environment with a click of a button. So far, the only issue I've had is that it can take a few extra seconds to complete login and return control to your mouse pointer. Other than that, it is brilliant! It is the first desktop environment I've used that seems to do high dots per inch (HiDPI) properly—or at least close to correctly.
|
||||
|
||||
### Command line
|
||||
|
||||
Fedora 30 upgrades the Bourne Again Shell (aka Bash) to version 5.0.x. If you want to find out about every change since its last stable version (4.4), read this [description][11]. I do want to mention that three new environment variables have been introduced in Bash 5:
|
||||
|
||||
|
||||
```
|
||||
$ echo $EPOCHSECONDS
|
||||
1556636959
|
||||
$ echo $EPOCHREALTIME
|
||||
1556636968.012369
|
||||
$ echo $BASH_ARGV0
|
||||
bash
|
||||
```
|
||||
|
||||
Fedora 30 also updates the [Fish shell][12], a colorful shell with auto-suggestion, which can be very helpful for beginners. Fedora 30 comes with [Fish version 3][13], and you can even [try it out in a browser][14] without having to install it on your machine.
|
||||
|
||||
(Note that Fish shell is not the same as guestfish for mounting virtual machine images, which comes with the libguestfs-tools package.)
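If you'd rather install it locally than use the browser demo, the Fedora package is (as far as I know) simply named fish, so trying it out looks like this:

```
$ sudo dnf install fish
$ fish        # start a fish session; type exit to drop back to Bash
```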
|
||||
|
||||
### Development
|
||||
|
||||
Fedora 30 brings updates to the following languages: [C][15], [Boost (C++)][16], [Erlang][17], [Go][18], [Haskell][19], [Python][20], [Ruby][21], and [PHP][22].
|
||||
|
||||
Regarding these updates, the most important thing to know is that Python 2 is deprecated in Fedora 30. The community and Fedora leadership are requesting that all package maintainers that still depend on Python 2 port their packages to Python 3 as soon as possible, as the plan is to remove virtually all Python 2 packages in Fedora 31.
|
||||
|
||||
### Containers
|
||||
|
||||
If you would like to run Fedora as an immutable OS for a container, kiosk, or appliance-like environment, check out [Fedora Silverblue][23]. It brings you all of Fedora's technology managed by [rpm-ostree][24], which is a hybrid image/package system that allows automatic updates and easy rollbacks for developers. It is a great option for anyone who wants to learn more and play around with [Flatpak deployments][25].
|
||||
|
||||
Fedora Atomic is no longer available under Fedora 30, but you can still [download it][26]. If your jam is containers, don't despair: even though Fedora Atomic is gone, a brand new [Fedora CoreOS][27] is under development and should be going live soon!
|
||||
|
||||
### What else is new?
|
||||
|
||||
As of Fedora 30, **/usr/bin/gpg** points to [GnuPG][28] v2 by default, and [NFS][29] server configuration is now located at **/etc/nfs.conf** instead of **/etc/sysconfig/nfs**.
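A quick way to confirm both changes on your own system is to ask the binary for its version (the first line of output should report GnuPG 2.x) and to look at the new configuration file:

```
$ gpg --version | head -n 1
$ cat /etc/nfs.conf
```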
|
||||
|
||||
There have also been a [few changes][30] for installation and boot time.
|
||||
|
||||
Last but not least, check out [Fedora Spins][31] for a spin of Fedora that defaults to your favorite window manager and [Fedora Labs][32] for functionally curated software bundles built on Fedora 30 (e.g., astronomy, security, and gaming).
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/fedora-30-features
|
||||
|
||||
作者:[Anderson Silva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva/users/marcobravo/users/alanfdoss/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fedora_on_laptop_lead.jpg?itok=XMc5wo_e (Fedora Linux distro on laptop)
|
||||
[2]: https://getfedora.org/
|
||||
[3]: https://fedoraproject.org/wiki/Releases/30/ChangeSet
|
||||
[4]: https://fedoraproject.org/wiki/DNF_system_upgrade#How_do_I_use_it.3F
|
||||
[5]: https://opensource.com/sites/default/files/uploads/fedora30_fresh-boot.jpg (Fedora 30 boot screen)
|
||||
[6]: https://help.gnome.org/misc/release-notes/3.32/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/fedora10_gnome.png (Applications panel in GNOME Settings)
|
||||
[8]: https://elementary.io/
|
||||
[9]: https://www.deepin.org/en/dde/
|
||||
[10]: https://opensource.com/sites/default/files/uploads/fedora10_deepin.png (Deepin desktop on Fedora 30)
|
||||
[11]: https://git.savannah.gnu.org/cgit/bash.git/tree/NEWS
|
||||
[12]: https://fishshell.com/
|
||||
[13]: https://fishshell.com/release_notes.html
|
||||
[14]: https://rootnroll.com/d/fish-shell/
|
||||
[15]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_C/
|
||||
[16]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Boost/
|
||||
[17]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Erlang/
|
||||
[18]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Go/
|
||||
[19]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Haskell/
|
||||
[20]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Python/
|
||||
[21]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Ruby/
|
||||
[22]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Web/
|
||||
[23]: https://silverblue.fedoraproject.org/
|
||||
[24]: https://rpm-ostree.readthedocs.io/en/latest/
|
||||
[25]: https://flatpak.org/setup/Fedora/
|
||||
[26]: https://getfedora.org/en/atomic/
|
||||
[27]: https://coreos.fedoraproject.org/
|
||||
[28]: https://gnupg.org/index.html
|
||||
[29]: https://en.wikipedia.org/wiki/Network_File_System
|
||||
[30]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/sysadmin/Installation/
|
||||
[31]: https://spins.fedoraproject.org
|
||||
[32]: https://labs.fedoraproject.org/
|
@ -0,0 +1,79 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why startups should release their code as open source)
|
||||
[#]: via: (https://opensource.com/article/19/5/startups-release-code)
|
||||
[#]: author: (Clément Flipo https://opensource.com/users/cl%C3%A9ment-flipo)
|
||||
|
||||
Why startups should release their code as open source
|
||||
======
|
||||
Dokit wondered whether giving away its knowledge as open source was a
|
||||
bad business decision, but that choice has been the foundation of its
|
||||
success.
|
||||
![open source button on keyboard][1]
|
||||
|
||||
It's always hard to recall exactly how a project started, but sometimes that can help you understand that project more clearly. When I think about it, our platform for creating user guides and documentation, [Dokit][2], came straight out of my childhood. Growing up in a house where my toys were Meccano and model airplane kits, the idea of making things, taking individual pieces and putting them together to create a new whole, was always a fundamental part of what it meant to play. My father worked for a DIY company, so there were always signs of building, repair, and instruction manuals around the house. When I was young, my parents sent me to join the Boy Scouts, where we made tables, tents and mud ovens, which helped foster my enjoyment of shared learning that I later found in the open source movement.
|
||||
|
||||
The art of repairing things and recycling products that I learned in childhood became part of what I did for a job. Then it became my ambition to take the reassuring feel of learning how to make and do and repair at home or in a group—but put it online. That inspired Dokit's creation.
|
||||
|
||||
### The first months
|
||||
|
||||
It hasn't always been easy, but since founding our company in 2017, I've realized that the biggest and most worthwhile goals are generally always difficult. If we were to achieve our plan to revolutionize the way [old-fashioned manuals and user guides are created and published][3], and maximize our impact in what we knew all along would be a niche market, we knew that a guiding mission was crucial to how we organized everything else. It was from there that we reached our first big decision: to [quickly launch a proof of concept using an existing open source framework][4], MediaWiki, and from there to release all of our code as open source.
|
||||
|
||||
In retrospect, this decision was made easier by the fact that [MediaWiki][5] was already up and running. With 15,000 developers already active around the world and on a platform that included 90% of the features we needed to meet our minimum viable product (MVP), things would have no doubt been harder without support from the engine that made its name by powering Wikipedia. Confluence, a documentation platform in use by many enterprises, offers some good features, but in the end, it was an easy choice between the two.
|
||||
|
||||
Placing our faith in the community, we put the first version of our platform straight onto GitHub. The excitement of watching the world's makers start using our platform, even before we'd done any real advertising, felt like an early indication that we were on the right track. Although the [maker and Fablab movements][6] encourage users to share instructions, and even sets out this expectation in the [Fablab charter][7] (as stated by MIT), in reality, there is a lack of real documentation.
|
||||
|
||||
The first and most significant reason people like using our platform is that it responds to the very real problem of poor documentation inside an otherwise great movement—one that we knew could be even better. To us, it felt a bit like we were repairing a gap in the community of makers and DIY. Within a year of our launch, Fablabs, [Wikifab][8], [Open Source Ecology][9], [Les Petits Debrouillards][10], [Ademe][11], and [Low-Tech Lab][12] had installed our tool on their servers for creating step-by-step tutorials.
|
||||
|
||||
Before even putting out a press release, one of our users, Wikifab, began to get praise in national media as "[the Wikipedia of DIY][13]." In just two years, we've seen hundreds of communities launched on their own Dokits, ranging from the fun to the funny to the more formal product guides. Again, the power of the community is the force we want to harness, and it's constantly amazing to see projects—ranging from wind turbines to pet feeders—develop engaging product manuals using the platform we started.
|
||||
|
||||
### Opening up open source
|
||||
|
||||
Looking back at such a successful first two years, it's clear to us that our choice to use open source was fundamental to how we got where we are as fast as we did. The ability to gather feedback in open source is second-to-none. If a piece of code didn't work, [someone could tell us right away][14]. Why wait on appointments with consultants if you can learn along with those who are already using the service you created?
|
||||
|
||||
The level of engagement from the community also revealed the potential (including the potential interest) in our market. [Paris has a good and growing community of developers][15], but open source took us from a pool of a few thousand locally, and brought us to millions of developers all around the world who could become a part of what we were trying to make happen. The open availability of our code also proved reassuring to our users and customers who felt safe that, even if our company went away, the code wouldn't.
|
||||
|
||||
If that was most of what we thought might happen as a result of using open source, there were also surprises along the way. By adopting an open method, we found ourselves gaining customers, reputation, and perfectly targeted advertising that we didn't have to pay for out of our limited startup budget. We found that the availability of our code helped improve our recruitment process because we were able to test candidates using our code before we made hires, and this also helped simplify the onboarding journey for those we did hire.
|
||||
|
||||
In what we see as a mixture of embarrassment and solidarity, the totally public nature of developers creating code in an open setting also helped drive up quality. People can share feedback with one another, but the public nature of the work also seems to encourage people to do their best. In the spirit of constant improvement and of continually building and rebuilding how Dokit works, supporting the community is something that we know we'd like to do more of and get better at in future.
|
||||
|
||||
### Where to next?
|
||||
|
||||
Even with the faith we've always had in what we were doing, and seeing the great product manuals that have been developed using our software, it never stops being exciting to see our project grow, and we're certain that the future has good things in store.
|
||||
|
||||
In the early days, we found ourselves living a lot under the fear of distributing our knowledge for free. In reality, it was the opposite—open source gave us the ability to very rapidly build a startup that was sustainable from the beginning. Dokit is a platform designed to give its users the confidence to build, assemble, repair, and create entirely new inventions with the support of a community. In hindsight, we found we were doing the same thing by using open source to build a platform.
|
||||
|
||||
Just like when doing a repair or assembling a physical product, it's only when you have confidence in your methods that things truly begin to feel right. Now, at the beginning of our third year, we're starting to see growing global interest as the industry responds to [new generations of customers who want to use, reuse, and assemble products][16] that respond to changing homes and lifestyles. By providing the support of an online community, we think we're helping to create circumstances in which people feel more confident in doing things for themselves.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/startups-release-code
|
||||
|
||||
作者:[Clément Flipo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/cl%C3%A9ment-flipo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx (open source button on keyboard)
|
||||
[2]: https://dokit.io/
|
||||
[3]: https://dokit.io/9-reasons-to-stop-writing-your-user-manuals-or-work-instructions-with-word-processors/
|
||||
[4]: https://medium.com/@gofloaters/5-cheap-ways-to-build-your-mvp-71d6170d5250
|
||||
[5]: https://en.wikipedia.org/wiki/MediaWiki
|
||||
[6]: https://en.wikipedia.org/wiki/Maker_culture
|
||||
[7]: http://fab.cba.mit.edu/about/charter/
|
||||
[8]: https://wikifab.org/
|
||||
[9]: https://www.opensourceecology.org/
|
||||
[10]: http://www.lespetitsdebrouillards.org/
|
||||
[11]: https://www.ademe.fr/en
|
||||
[12]: http://lowtechlab.org/
|
||||
[13]: https://www.20minutes.fr/magazine/economie-collaborative-mag/2428995-20160919-pour-construire-leurs-meubles-eux-memes-ils-creent-le-wikipedia-du-bricolage
|
||||
[14]: https://opensource.guide/how-to-contribute/
|
||||
[15]: https://www.rudebaguette.com/2013/03/here-are-the-details-on-the-new-developer-school-that-xavier-niel-is-launching-tomorrow/?lang=en
|
||||
[16]: https://www.inc.com/ari-zoldan/why-now-is-the-best-time-to-start-a-diy-home-based.html
|
@ -0,0 +1,85 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 essential values for the DevOps mindset)
|
||||
[#]: via: (https://opensource.com/article/19/5/values-devops-mindset)
|
||||
[#]: author: (Brent Aaron Reed https://opensource.com/users/brentaaronreed/users/wpschaub/users/wpschaub/users/wpschaub/users/cobiacomm/users/marcobravo/users/brentaaronreed)
|
||||
|
||||
5 essential values for the DevOps mindset
|
||||
======
|
||||
People and process take more time but are more important than any
|
||||
technology "silver bullet" in solving business problems.
|
||||
![human head, brain outlined with computer hardware background][1]
|
||||
|
||||
Many IT professionals today struggle with adapting to change and disruption. Are you struggling with just trying to keep the lights on, so to speak? Do you feel overwhelmed? This is not uncommon. Today, the status quo is not enough, so IT constantly tries to re-invent itself.
|
||||
|
||||
With over 30 years of combined IT experience, we have witnessed how important people and relationships are to IT's ability to be effective and help the business thrive. However, most of the time, our conversations about IT solutions start with technology rather than people and process. The propensity to look for a "silver bullet" to address business and IT challenges is far too common. But you can't just buy innovation, DevOps, or effective teams and ways of working; they need to be nurtured, supported, and guided.
|
||||
|
||||
With disruption so prevalent and there being such a critical demand for speed of change, we need both discipline and guardrails. The five essential values for the DevOps mindset, described below, will support the practices that will get us there. These values are not new ideas; they are refactored as we've learned from our experience. Some of the values may be interchangeable; they are flexible, and they guide the overall principles that support these five values like pillars.
|
||||
|
||||
![5 essential values for the DevOps mindset][2]
|
||||
|
||||
### 1\. Feedback from stakeholders is essential
|
||||
|
||||
How do we know if we are creating more value for us than for our stakeholders? We need persistent quality data to analyze, inform, and drive better decisions. Relevant information from trusted sources is vital for any business to thrive. We need to listen to and understand what our stakeholders are saying—and not saying—and we need to implement changes in a way that enables us to adjust our thinking—and our processes and technologies—and adapt them as needed to delight our stakeholders. Too often, we see little change, or lots of change for the wrong reasons, because of incorrect information (data). Therefore, aligning change to our stakeholders' feedback is an essential value and helps us focus on what is most important to making our company successful.
|
||||
|
||||
> Focus on our stakeholders and their feedback rather than simply changing for the sake of change.
|
||||
|
||||
### 2\. Improve beyond the limits of today's processes
|
||||
|
||||
We want our products and services to continuously delight our customers—our most important stakeholders—therefore, we need to improve continually. This is not only about quality; it could also mean costs, availability, relevance, and many other goals and factors. Creating repeatable processes or utilizing a common framework is great—they can improve governance and a host of other issues—however, that should not be our end goal. As we look for ways to improve, we must adjust our processes, complemented by the right tech and tools. There may be reasons to throw out a "so-called" framework because not doing so could add waste—or worse, simply "cargo culting" (doing something with no value or purpose).
|
||||
|
||||
> Strive to always innovate and improve beyond repeatable processes and frameworks.
|
||||
|
||||
### 3\. No new silos to break down silos
|
||||
|
||||
Silos and DevOps are incompatible. We see this all the time: an IT director brings in so-called "experts" to implement agile and DevOps, and what do they do? These "experts" create a new problem on top of the existing problem, which is another silo added to an IT department and a business riddled with silos. Creating "DevOps" titles goes against the very principles of agile and DevOps, which are based on the concept of breaking down silos. In both agile and DevOps, teamwork is essential, and if you don't work in a self-organizing team, you're doing neither of them.
|
||||
|
||||
> Inspire and share collaboratively instead of becoming a hero or creating a silo.
|
||||
|
||||
### 4\. Knowing your customer means cross-organization collaboration
|
||||
|
||||
No part of the business is an independent entity because they all have stakeholders, and the primary stakeholder is always the customer. "The customer is always right" (or the king, as I like to say). The point is, without the customer, there really is no business, and to stay in business today, we need to "differentiate" from our competitors. We also need to know how our customers feel about us and what they want from us. Knowing what the customer wants is imperative and requires timely feedback to ensure the business addresses these primary stakeholders' needs and concerns quickly and responsibly.
|
||||
|
||||
![Minimize time spent with build-measure-learn process][3]
|
||||
|
||||
Whether it comes from an idea, a concept, an assumption, or direct stakeholder feedback, we need to identify and measure the feature or service our product delivers by using the explore, build, test, deliver lifecycle. Fundamentally, this means that we need to be "plugged in" across the whole organization. There are no borders in continuous innovation, learning, and DevOps. Thus, when we measure across the enterprise, we can understand the whole and take actionable, meaningful steps to improve.
|
||||
|
||||
> Measure performance across the organization, not just in a line of business.
|
||||
|
||||
### 5\. Inspire adoption through enthusiasm
|
||||
|
||||
Not everyone is driven to learn, adapt, and change; however, just like smiles can be infectious, so can learning and wanting to be part of a culture of change. Adapting and evolving within a culture of learning provides a natural mechanism for a group of people to learn and pass on information (i.e., cultural transmission). Learning styles, attitudes, methods, and processes continually evolve so we can improve upon them. The next step is to apply what was learned and improved and share the information with colleagues. Learning does not happen automatically; it takes effort, evaluation, discipline, awareness, and especially communication; unfortunately, these are things that tools and automation alone will not provide. Review your processes, automation, tool strategies, and implementation work, make it transparent, and collaborate with your colleagues on reuse and improvement.
|
||||
|
||||
> Promote a culture of learning through lean quality deliverables, not just tools and automation.
|
||||
|
||||
### Summary
|
||||
|
||||
![Continuous goals of DevOps mindset][4]
|
||||
|
||||
As our companies adopt DevOps, we continue to champion these five values over any book, website, or automation software. It takes time to adopt this mindset, and this is very different than what we used to do as sysadmins. It's a wholly new way of working that will take many years to mature. Do these principles align with your own? Share them in the comments or on our website, [Agents of chaos][5].
|
||||
|
||||
* * *
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/values-devops-mindset
|
||||
|
||||
作者:[Brent Aaron Reed][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brentaaronreed/users/wpschaub/users/wpschaub/users/wpschaub/users/cobiacomm/users/marcobravo/users/brentaaronreed
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X (human head, brain outlined with computer hardware background)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/devops_mindset_values.png (5 essential values for the DevOps mindset)
|
||||
[3]: https://opensource.com/sites/default/files/uploads/devops_mindset_minimze-time.jpg (Minimize time spent with build-measure-learn process)
|
||||
[4]: https://opensource.com/sites/default/files/uploads/devops_mindset_continuous.png (Continuous goals of DevOps mindset)
|
||||
[5]: http://agents-of-chaos.org
|
@ -0,0 +1,138 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (A day in the life of an open source performance engineering team)
|
||||
[#]: via: (https://opensource.com/article/19/5/life-performance-engineer)
|
||||
[#]: author: (Aakarsh Gopi https://opensource.com/users/aakarsh/users/portante/users/anaga/users/gameloid)
|
||||
|
||||
A day in the life of an open source performance engineering team
|
||||
======
|
||||
Collaborating with the community enables performance engineering to
|
||||
address the confusion and complexity that come with working on a broad
|
||||
spectrum of products.
|
||||
![Team checklist and to dos][1]
|
||||
|
||||
In today's world, open source software solutions are a collaborative effort of the community. Can a performance engineering team operate the same way, by collaborating with the community to address the confusion and complexity that come with working on a broad spectrum of products?
|
||||
|
||||
To answer that question, we need to explore some basic questions:
|
||||
|
||||
* What does a performance engineering team do?
|
||||
* How does a performance engineering team fulfill its responsibilities?
|
||||
* How are open source tools developed or leveraged for performance analysis?
|
||||
|
||||
|
||||
|
||||
The term "performance engineering" has different meanings, which causes difficulty in figuring out a performance engineering team's responsibilities. Adding to the confusion, a team may be charged with working on a broad spectrum of products, ranging from an operating system like RHEL, whose performance can be significantly impacted by hardware components (CPU caches, network interface controllers, disk technologies, etc.), to something much higher up in the stack like Kubernetes, which comes with the added challenges of operating at scale without compromising on performance.
|
||||
|
||||
Performance engineering has progressed a lot since the days of running manual A/B testing and single-system benchmarks. Now, these teams test cloud infrastructures and add machine learning classifiers as a component in the CI/CD pipeline for identifying performance regression in releases of products.
|
||||
|
||||
### What does a performance engineering team do?
|
||||
|
||||
A performance engineering team is generally responsible for the following (among other things):
|
||||
|
||||
* Identifying potential performance issues
|
||||
* Identifying any scale issues that could occur
|
||||
* Developing tuning guides and/or tools that would enable the user to achieve the most out of a product
|
||||
* Developing guides and/or working with customers to help with capacity planning
|
||||
* Providing customers with performance expectations for different use cases of the product
|
||||
|
||||
|
||||
|
||||
The mission of our specific team is to:
|
||||
|
||||
* Establish performance and scale leadership of the Red Hat portfolio; the scope includes component level, system, and solution analysis
|
||||
* Collaborate with engineering, product management, product marketing, and Customer Experience and Engagement (CEE), as well as hardware and software partners
|
||||
* Deliver public-facing guidance, internal enablement, and continuous integration tests
|
||||
|
||||
|
||||
|
||||
Our team fulfills our mission in the following ways:
|
||||
|
||||
* We work with product teams to set performance goals and develop performance tests to run against those products deployed to see how they measure up to those goals.
|
||||
* We also work to re-run performance tests to ensure there are no regressions in behaviors.
|
||||
* We develop open source tooling to achieve our product performance goals, making them available to the communities where the products are derived to re-create what we do.
|
||||
* We work to be transparent and open about how we do performance engineering; sharing these methods and approaches benefits communities, allowing them to reuse our work, and benefits us by leveraging the work they contribute with these tools.
|
||||
|
||||
|
||||
|
||||
### How does a performance engineering team fulfill its responsibilities?
|
||||
|
||||
Meeting these responsibilities requires collaboration with other teams, such as product management, development, QA/QE, documentation, and consulting, and with the communities.
|
||||
|
||||
_Collaboration_ allows a team to be successful by pulling together team members' diverse knowledge and experience. A performance engineering team builds tools to share their knowledge both within the team and with the community, furthering the value of collaboration.
|
||||
|
||||
Our performance engineering team achieves success through:
|
||||
|
||||
* **Collaboration:** _Intra_-team collaboration is as important as _inter_-team collaboration for our performance engineering team
|
||||
* Most performance engineers tend to create a niche for themselves in one or more sub-disciplines of performance engineering via tooling, performance analysis, systems knowledge, systems configuration, and such. Our team is composed of engineers with knowledge of setting up/configuring systems across the product stack, those who know how a configuration option would affect the system's performance, and so on. Our team's success is heavily reliant on effective collaboration between performance engineers on the team.
|
||||
* Our team works closely with other organizations at various levels within Red Hat and the communities where our products are derived.
|
||||
* **Knowledge:** To understand the performance implications of configuration and/or system changes, deep knowledge of the product alone is not sufficient.
|
||||
* Our team has the knowledge to cover performance across all levels of the stack:
|
||||
* Hardware setup and configuration
|
||||
* Networking and scale considerations
|
||||
* Operating system setup and configuration (Linux kernel, userspace stack)
|
||||
* Storage sub-systems (Ceph)
|
||||
* Cloud infrastructure (OpenStack, RHV)
|
||||
* Cloud deployments (OpenShift/Kubernetes)
|
||||
* Product architectures
|
||||
* Software technologies (databases like Postgres; software-defined networking and storage)
|
||||
* Product interactions with the underlying hardware
|
||||
* Tooling to monitor and accomplish repeatable benchmarking
|
||||
* **Tooling:** The differentiator for our performance engineering team is the data collected through its tools to help tackle performance analysis complexity in the environments where our products are deployed.
|
||||
|
||||
|
||||
|
||||
### How are open source tools developed or leveraged for performance analysis?
|
||||
|
||||
Tooling is no longer a luxury but a need for today's performance engineering teams. With today's product solutions being so complex (and increasing in complexity as more solutions are composed to solve ever-larger problems), we need tools to help us run performance test suites in a repeatable manner, collect data about those runs, and help us distill that data so it becomes understandable and usable.
|
||||
|
||||
Yet, no performance engineering team is judged on how performance analysis is done, but rather on the results achieved from this analysis.
|
||||
|
||||
This tension can be resolved by collaboratively developing tools. A performance engineering team can't spend all its time developing tools, since that would prevent it from effectively collecting data. By developing its tools in a collaborative manner, a team can leverage work from the community to make further progress while still generating the result by which they will be measured.
|
||||
|
||||
Tooling is the backbone of our performance engineering team, and we strive to use the tools already available upstream. When no tools are available in the community that fit our needs, we've built tools that help us achieve our goals and made them available to the community. Open sourcing our tools has helped us immensely because we receive contributions from our competitors and partners, allowing us to solve problems collectively through collaboration.
|
||||
|
||||
![Performance Engineering Tools][2]
|
||||
|
||||
The following are some of the tools our team has contributed to and relies upon for our work:
|
||||
|
||||
* **[Perf-c2c][3]:** Is your performance impacted by false sharing in CPU caches? The perf-c2c tool can help you tackle this problem by helping you inspect the cache lines where false sharing is detected and understand the readers/writers accessing those cache lines along with the offsets where those accesses occurred. You can read more about this tool on [Joe Mario's blog][4]. (See the short usage sketch after this list.)
|
||||
* **[Pbench][5]:** Do you repeat the same steps when collecting data about performance, but fail to do it consistently? Or do you find it difficult to compare results with others because you're collecting different configuration data? Pbench is a tool that attempts to standardize the way data is collected for performance so comparisons and historical reviews are much easier. Pbench is at the heart of our tooling efforts, as most of the other tools consume it in some form. Pbench is a Swiss Army Knife, as it allows the user to run benchmarks such as fio, uperf, or custom, user-defined tests while gathering metrics through tools such as sar, iostat, and pidstat, standardizing the methods of collecting configuration data about the environment. Pbench provides a dashboard UI to help review and analyze the data collected.
|
||||
* **[Browbeat][6]:** Do you want to monitor a complex environment such as an OpenStack cluster while running tests? Browbeat is the solution, and its power lies in its ability to collect comprehensive data, ranging from logs to system metrics, about an OpenStack cluster while it orchestrates workloads. Browbeat can also monitor the OpenStack cluster while users run test/workloads of their choice either manually or through their own automation.
|
||||
* **[Ripsaw][7]:** Do you want to compare the performance of different Kubernetes distros against the same platform? Do you want to compare the performance of the same Kubernetes distros deployed on different platforms? Ripsaw is a relatively new tool created to run workloads through Kubernetes native calls using the Ansible operator framework to provide solutions to the above questions. Ripsaw's unique selling point is that it can run against any kind of Kubernetes distribution, thus it would run the same against a Kubernetes cluster, on Minikube, or on an OpenShift cluster deployed on OpenStack or bare metal.
|
||||
* **[ClusterLoader][8]:** Ever wondered how an OpenShift component would perform under different cluster states? If you are looking for an answer that can stress the cluster, ClusterLoader will help. The team has generalized the tool so it can be used with any Kubernetes distro. It is currently hosted in the [perf-tests repository][9].
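
As a small illustration of the first tool in that list, a minimal perf-c2c session on the system under test might look like the following; the ten-second sleep is just a placeholder for whatever workload you want to sample:

```
# perf c2c record -a -- sleep 10
# perf c2c report
```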
|
||||
|
||||
|
||||
|
||||
### Bottom line
|
||||
|
||||
Given the scale and speed at which products are evolving, performance engineering teams need to build tooling to help them keep up with products' evolution and diversification.
|
||||
|
||||
Open source-based software solutions are a collaborative effort of the community. Our performance engineering team operates in the same way, collaborating with the community to address the confusion and complexity that comes with working on a broad spectrum of products. By developing our tools in a collaborative manner and using tools from the community, we are leveraging the community's work to make progress, while still generating the results we are measured on.
|
||||
|
||||
_Collaboration_ is our key to accomplish our goals and ensure the success of our team.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/life-performance-engineer
|
||||
|
||||
作者:[Aakarsh Gopi][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/aakarsh/users/portante/users/anaga/users/gameloid
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/performanceengineeringtools.png (Performance Engineering Tools)
|
||||
[3]: http://man7.org/linux/man-pages/man1/perf-c2c.1.html
|
||||
[4]: https://joemario.github.io/blog/2016/09/01/c2c-blog/
|
||||
[5]: https://github.com/distributed-system-analysis/pbench
|
||||
[6]: https://github.com/openstack/browbeat
|
||||
[7]: https://github.com/cloud-bulldozer/ripsaw
|
||||
[8]: https://github.com/openshift/origin/tree/master/test/extended/cluster
|
||||
[9]: https://github.com/kubernetes/perf-tests/tree/master/clusterloader
|
@ -0,0 +1,162 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Query freely available exchange rate data with ExchangeRate-API)
|
||||
[#]: via: (https://opensource.com/article/19/5/exchange-rate-data)
|
||||
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
|
||||
|
||||
Query freely available exchange rate data with ExchangeRate-API
|
||||
======
|
||||
In this interview, ExchangeRate-API's founder explains why exchange rate
|
||||
data should be freely accessible to developers who want to build useful
|
||||
stuff.
|
||||
![scientific calculator][1]
|
||||
|
||||
Last year, [I wrote about][2] using the Groovy programming language to access foreign exchange rate data from an API to simplify my expense records. I showed how two exchange rate sites, [fixer.io][3] and apilayer.net (now [apilayer.com][4]), could provide the data I needed, allowing me to convert between Indian rupees (INR) and Canadian dollars (CAD) using the former, and Chilean pesos (CLP) and Canadian dollars using the latter.
|
||||
|
||||
Recently, David over at [ExchangeRate-API.com][5] reached out to me to say, "the free API you mentioned (Fixer) has been bought by CurrencyLayer and had its no-signup/unlimited access deprecated." He also told me, "I run a free API called ExchangeRate-API.com that has the same JSON format as the original Fixer, doesn't require any signup, and allows unlimited requests."
|
||||
|
||||
After exchanging a few emails, we decided to turn our conversation into an interview. Below the interview, you can find scripts and usage instructions. (The interview has been edited slightly for clarity.)
|
||||
|
||||
### About ExchangeRate-API
|
||||
|
||||
_**Chris:** How is ExchangeRate-API different from other online exchange-rate services? What motivates you to provide this service?_
|
||||
|
||||
**David:** When I started ExchangeRate-API with a friend in 2010, we built and released it for free because we really needed this service for another project and couldn't find one despite extensive googling. There are now around 20 such APIs offering quite a few different approaches. Over the years, I've tried a number of different approaches, but offering quality data for free has always proven the most popular. I'm also motivated by the thought that this data should be freely accessible to developers who want to build useful stuff even if they don't have a budget.
|
||||
|
||||
Thus, the main difference with our currency conversion API is that it's unlimited and requires no signup. This also makes starting to use it really fast—you literally just copy the endpoint URL and you're good to go.
|
||||
|
||||
There are one or two other free and unlimited APIs, but these typically just serve the daily reference rates provided by the European Central Bank. ExchangeRate-API collects the public reference rates from a number of central banks and then blends them to reduce the risk of outlying values. It also does acceptance checking to ensure the rates aren't wildly wrong (for instance an inverted data capture recording US dollars to CLP instead of CLP to USD) and weights different sources based on their historical accuracy. This makes the service quite reliable. I'm currently working on a transparency project to compare and show the accuracy of this public reference rate blend against a proprietary data source so potential users can make more informed decisions on what type of currency data service is right for them.
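(For readers who want to try it while reading: a request is a single unauthenticated HTTP GET with the base currency code in the path. The URL below is an assumed example of the endpoint format; check ExchangeRate-API.com for the currently documented path.)

```
$ curl https://api.exchangerate-api.com/v4/latest/USD
```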
|
||||
|
||||
_**Chris:** I'm delighted that you've included Canadian dollars and Indian rupees, as that is one problem I need to solve. I'm sad to see that you don't have Chilean pesos (another problem I need to solve). Can you tell us how you select the list of currencies? Do you anticipate adding other currencies to your list?_
|
||||
|
||||
**David:** Since my main aim for this service is to offer stable and reliable exchange rate data, I only include currencies when there is more than one data source for that currency code. For instance, after you mentioned that you're looking for CLP data, I added the daily reference rates published by the Central Bank of Chile to our system. If I can find another source that includes CLP, it would be included in our list of supported currencies, but until then, unfortunately not. The goal is to support as many currencies as possible.
|
||||
|
||||
One thing to note is that, for some currencies, the service has the minimum two sources, but a few currency pairs (for instance USD/EUR) are included in almost every set of public reference rates. The transparent accuracy project I mentioned will hopefully make this difference clear so that users can understand why our USD/EUR rate might be more accurate than less common pairs like CLP/INR and also the degree of variance in accuracy between the pairs. It will take some work to make showing this information quick and easy to understand.
|
||||
|
||||
### The API's architecture
|
||||
|
||||
_**Chris:** Can you tell us a bit about your API's architecture? Do you use open source components to deliver your service?_
|
||||
|
||||
**David:** I exclusively use open source software to run ExchangeRate-API. I'm definitely an open source enthusiast and am always getting friends to switch to open source, explaining licenses, and donating when I can to the projects I use most. I also try to email maintainers of projects I use to say thanks, but I don't do this enough.
|
||||
|
||||
The stack is currently Ubuntu LTS, MariaDB, Nginx, PHP 7, and Memcached. I also use Bootstrap and Picnic open source CSS frameworks. I use Let's Encrypt for HTTPS certificates via the Electronic Frontier Foundation's open source ACME client, [Certbot][6]. The service makes extensive use of classic tools like UFW/iptables, cURL, OpenSSH, and Git.
|
||||
|
||||
My approach is typically to keep everything as simple as possible while using the tried-and-tested open source building blocks. For a project that aims to _always_ be available for users to convert currencies, this feels like the best route to reliability. I love reading about innovative new projects that could be useful for a project like this (for example, CockroachDB), but I wouldn't use them until they are considered really bulletproof. Obviously, things like [Heartbleed][7] show that there are risks with "boring" projects too—but I think these are easier to manage than the potential for unknown risks with newer, cutting-edge projects.
|
||||
|
||||
In terms of the infrastructure setup, I've steadily built and improved the system over the last nine years, and it now comprises roughly three tiers. The main cluster runs on Amazon Web Services (AWS) and consists of Ubuntu EC2 servers and a high-availability MariaDB relational database service (RDS) instance. The EC2 instances are spread across multiple AWS Availability Zones and fronted by the managed AWS Elastic Load Balancing (ELB) service. Between the RDS database instance with automated cross-zone failover and the ELB-fronted EC2 instances spread across availability zones, this setup is exceptionally available. It is, however, only in one locale. So I've set up a second tier of virtual private server (VPS) instances in different geographic locations to reduce latency and distribute the load away from the more expensive AWS infrastructure. These are currently with Linode, but I have also used DigitalOcean and Vultr recently.
|
||||
|
||||
Finally, this is all protected behind Cloudflare. With a free service, it's inevitable that some users will choose to abuse the system, and Cloudflare is an amazing product that's vital to ExchangeRate-API. Our servers can be protected and our users get low-latency, in-region caches. Cloudflare is set up with both the load balancing and traffic steering products to reduce latency and instantly shift traffic from unhealthy parts of the infrastructure to available origins.
|
||||
|
||||
With this very redundant approach, there hasn't been downtime as a result of infrastructure problems or user load for around three years. The few periods of degraded service experienced in this time are all due to issues with code, deployment strategy, or config mistakes. The setup currently handles hundreds of millions of requests per month with low load levels and manageable costs, so there's plenty of room for growth.
|
||||
|
||||
The actual application code is PHP with heavy use of Memcached. Memcached is an amazing open source project started by Brad Fitzpatrick in 2003. It's not particularly glamorous, but it is an incredibly reliable and performant distributed in-memory key value store.
|
||||
|
||||
### Engaging with the open source community
|
||||
|
||||
_**Chris:** There is an impressive amount of open source in your configuration. How do you engage with the broader community of users in these projects?_
|
||||
|
||||
**David:** I really struggle with the best way to be a good open source citizen while running a side project SaaS. I've considered building an open source library of some sort and releasing it, but I haven't thought of something that hasn't already been done and that I would be able to make the time commitment to reliably maintain. I'd only start a project like this if I could be confident I'd have the time to ensure users who choose the project wouldn't suddenly find themselves depending on abandonware. I've also looked into contributing to the projects that ExchangeRate-API depends on, but since I only use the biggest, most established options, I lack the expertise to make a meaningful contribution to such serious projects.
|
||||
|
||||
I'm currently working on a new "Pro" plan for the service and I'm going to set a percentage of this income to donate to my open source dependencies. This still feels like a bandage though—answering this question makes me realize I need to put more time into starting an open source project that calls ExchangeRate-API home!
|
||||
|
||||
### Looking ahead
|
||||
|
||||
_**Chris:** We can only query the latest exchange rate, but it appears that you may be offering historical rates sometime later this year. Can you tell us more about the technical challenges with serving up historical data?_
|
||||
|
||||
**David:** There is a dataset of historical rates blended with the same algorithm from multiple central bank reference sets. However, I stopped new signups for it due to some issues with data quality. The dataset reaches back to 1990, and a few of the earlier periods need better data validation. As such, I'm building a better system for checking and comparing the data as it's ingested, as well as adding an additional data source. The plan is to have a clean, more comprehensively verified dataset available later this year.
|
||||
|
||||
In terms of the technical side of things, historical data is slightly more complex than live data. Compared to the live dataset (which is just a few bytes) the historical data is millions of database rows. This data was originally served from the database infrastructure with a long time-to-live (TTL) intermediary-caching layer. This was largely performant but struggled in situations where users wanted to dump the entire dataset as fast as the network could handle it. If the cache was sufficiently warm, this was fine, but if reboots, new server deployments, etc. had taken place recently, these big request sets would "miss" enough on the cache that the database would have problematic load spikes.
|
||||
|
||||
Obviously, the goal is an infrastructure that can handle even aggressive use cases with normal performance, so the new historical rates dataset will be accompanied by a preemptive in-memory cache rather than a request-driven one. Thankfully, RAM is cheap these days, and putting a couple hundred megabytes of data entirely into RAM is a plausible approach even for a small project like ExchangeRate-API.com.
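To make the difference between the two caching strategies concrete, here is a minimal Python sketch. The real service is PHP with Memcached, so this is only an illustration, and `fetch_day_from_db()` is a hypothetical stand-in for a database query.

```
def fetch_day_from_db(date):
    """Hypothetical placeholder for a query against the historical table."""
    return {"base": "USD", "date": date, "rates": {"EUR": 0.9, "INR": 69.8}}

ALL_DATES = ["2019-04-24", "2019-04-25", "2019-04-26"]  # tiny sample range

# Request-driven cache: populated lazily, so a cold cache sends every
# miss straight to the database (bulk dumps can cause load spikes).
lazy_cache = {}

def get_rates_lazy(date):
    if date not in lazy_cache:
        lazy_cache[date] = fetch_day_from_db(date)
    return lazy_cache[date]

# Preemptive cache: the whole dataset is loaded into RAM at startup,
# so even "dump everything" requests never touch the database.
preloaded_cache = {date: fetch_day_from_db(date) for date in ALL_DATES}

def get_rates_preloaded(date):
    return preloaded_cache[date]

print(get_rates_lazy("2019-04-26"))
print(get_rates_preloaded("2019-04-25"))
```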
|
||||
|
||||
_**Chris:** It sounds like you've been through quite a few iterations of this service to get to where it is today! Where do you see it going in the next few years?_
|
||||
|
||||
**David:** I'd aim for it to have reached coverage of every world currency so that anyone looking for this sort of software can easily and programmatically get the exchange rates they need for free.
|
||||
|
||||
I'd also definitely like to have an affordable Pro plan that really resonates with users. Getting this right would mean better infrastructure and lower latency for free users as well.
|
||||
|
||||
Finally, I'd like to have some sort of useful open source library under the ExchangeRate-API banner. Starting a small project that finds an enthusiastic community would be really rewarding. It's great to run something that's free-as-in-beer, but it would be even better if part of it was free-as-in-speech, as well.
|
||||
|
||||
### How to use the service
|
||||
|
||||
It's easy enough to test out the service using **wget**, as follows:
|
||||
|
||||
|
||||
```
|
||||
clh@marseille:~$ wget -O - https://api.exchangerate-api.com/v4/latest/INR
|
||||
--2019-04-26 13:48:23--  https://api.exchangerate-api.com/v4/latest/INR
|
||||
Resolving api.exchangerate-api.com (api.exchangerate-api.com)... 2606:4700:20::681a:c80, 2606:4700:20::681a:d80, 104.26.13.128, ...
|
||||
Connecting to api.exchangerate-api.com (api.exchangerate-api.com)|2606:4700:20::681a:c80|:443... connected.
|
||||
HTTP request sent, awaiting response... 200 OK
|
||||
Length: unspecified [application/json]
|
||||
Saving to: ‘STDOUT’
|
||||
|
||||
- [<=>
|
||||
] 0 --.-KB/s {"base":"INR","date":"2019-04-26","time_last_updated":1556236800,"rates":{"INR":1,"AUD":0.020343,"BRL":0.056786,"CAD":0.019248,"CHF":0.014554,"CNY":0.096099,"CZK":0.329222,"DKK":0.095497,"EUR":0.012789,"GBP":0.011052,"HKD":0.111898,"HUF":4.118615,"IDR":199.61769,"ILS":0.051749,"ISK":1.741659,"JPY":1.595527,"KRW":16.553091,"MXN":0.272383,"MYR":0.058964,"NOK":0.123365,"NZD":0.02161,"PEN":0.047497,"PHP":0.744974,"PLN":0.054927,"RON":0.060923,"RUB":0.921808,"SAR":0.053562,"SEK":0.135226,"SGD":0.019442,"THB":0.457501,"TRY":0- [ <=> ] 579 --.-KB/s in 0s
|
||||
|
||||
2019-04-26 13:48:23 (15.5 MB/s) - written to stdout [579]
|
||||
|
||||
clh@marseille:~$
|
||||
```
|
||||
|
||||
The result is returned as a JSON payload, giving conversion rates from Indian rupees (the currency I requested in the URL) to all the currencies handled by ExchangeRate-API.
|
||||
|
||||
The Groovy shell can access the API:
|
||||
|
||||
|
||||
```
|
||||
clh@marseille:~$ groovysh
|
||||
Groovy Shell (2.5.3, JVM: 1.8.0_212)
|
||||
Type ':help' or ':h' for help.
|
||||
----------------------------------------------------------------------------------------------------------------------------------
|
||||
groovy:000> import groovy.json.JsonSlurper
|
||||
===> groovy.json.JsonSlurper
|
||||
groovy:000> result = (new JsonSlurper()).parse(
|
||||
groovy:001> new InputStreamReader((new URL('https://api.exchangerate-api.com/v4/latest/INR')).newInputStream())
|
||||
groovy:002> )
|
||||
===> [base:INR, date:2019-04-26, time_last_updated:1556236800, rates:[INR:1, AUD:0.020343, BRL:0.056786, CAD:0.019248, CHF:0.014554, CNY:0.096099, CZK:0.329222, DKK:0.095497, EUR:0.012789, GBP:0.011052, HKD:0.111898, HUF:4.118615, IDR:199.61769, ILS:0.051749, ISK:1.741659, JPY:1.595527, KRW:16.553091, MXN:0.272383, MYR:0.058964, NOK:0.123365, NZD:0.02161, PEN:0.047497, PHP:0.744974, PLN:0.054927, RON:0.060923, RUB:0.921808, SAR:0.053562, SEK:0.135226, SGD:0.019442, THB:0.457501, TRY:0.084362, TWD:0.441385, USD:0.014255, ZAR:0.206271]]
|
||||
groovy:000>
|
||||
```
|
||||
|
||||
The same JSON payload is returned as a result of the Groovy JSON slurper operating on the URL. Of course, since this is Groovy, the JSON is converted into a Map, so you can do stuff like this:
|
||||
|
||||
|
||||
```
|
||||
groovy:000> println result.base
|
||||
INR
|
||||
===> null
|
||||
groovy:000> println result.date
|
||||
2019-04-26
|
||||
===> null
|
||||
groovy:000> println result.rates.CAD
|
||||
0.019248
|
||||
===> null
|
||||
```
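The same request works from Python's standard library as well; this is a small sketch based on the endpoint and JSON fields shown above:

```
import json
from urllib.request import urlopen

with urlopen("https://api.exchangerate-api.com/v4/latest/INR") as response:
    data = json.load(response)

print(data["base"])           # INR
print(data["date"])           # e.g. 2019-04-26
print(data["rates"]["CAD"])   # rupees-to-Canadian-dollars rate
```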
|
||||
|
||||
And that's it!
|
||||
|
||||
Do you use ExchangeRate-API or a similar service? Share how you use exchange rate data in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/exchange-rate-data
|
||||
|
||||
作者:[Chris Hermansen ][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/clhermansen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c (scientific calculator)
|
||||
[2]: https://opensource.com/article/18/3/groovy-calculate-foreign-exchange
|
||||
[3]: https://fixer.io/
|
||||
[4]: https://apilayer.com/
|
||||
[5]: https://www.exchangerate-api.com/
|
||||
[6]: https://certbot.eff.org/
|
||||
[7]: https://en.wikipedia.org/wiki/Heartbleed
|
@ -0,0 +1,96 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 open source hardware products for the great outdoors)
|
||||
[#]: via: (https://opensource.com/article/19/5/hardware-outdoors)
|
||||
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg/users/aliciagibb)
|
||||
|
||||
5 open source hardware products for the great outdoors
|
||||
======
|
||||
Here's some equipment you can buy or make yourself for hitting the great outdoors, no generators or batteries required.
|
||||
![Tree clouds][1]
|
||||
|
||||
When people think about open source hardware, they often think about the general category of electronics that can be soldered and needs batteries. While there are [many][2] fantastic open source pieces of electronics, the overall category of open source hardware is much broader. This month we take a look at open source hardware that you can take out into the world, no power outlet or batteries required.
|
||||
|
||||
### Hummingbird Hammocks
|
||||
|
||||
[Hummingbird Hammocks][3] offers an entire line of open source camping gear. You can set up an open source [rain tarp][4]...
|
||||
|
||||
![An open source rain tarp from Hummingbird Hammocks][5]
|
||||
|
||||
...with open source [friction adjusters][6]
|
||||
|
||||
![Open source friction adjusters from Hummingbird Hammocks.][7]
|
||||
|
||||
Open source friction adjusters from Hummingbird Hammocks.
|
||||
|
||||
...over your open source [hammock][8]
|
||||
|
||||
![An open source hammock from Hummingbird Hammocks.][9]
|
||||
|
||||
An open source hammock from Hummingbird Hammocks.
|
||||
|
||||
...hung with open source [tree straps][10].
|
||||
|
||||
![Open source tree straps from Hummingbird Hammocks.][11]
|
||||
|
||||
Open source tree straps from Hummingbird Hammocks.
|
||||
|
||||
The design for each of these items is fully documented, so you can even use them as a starting point for making your own outdoor gear (if you are willing to trust friction adjusters you design yourself).
|
||||
|
||||
### Openfoil
|
||||
|
||||
[Openfoil][12] is an open source hydrofoil for kitesurfing. Hydrofoils are attached to the bottom of kiteboards and allow the rider to rise out of the water. This aspect of the design makes riding in low wind situations and with smaller kites easier. It can also reduce the amount of noise the board makes on the water, making for a quieter experience. Because this hydrofoil is open source you can customize it to your needs and adventure tolerance.
|
||||
|
||||
![Openfoil, an open source hydrofoil for kitesurfing.][13]
|
||||
|
||||
Openfoil, an open source hydrofoil for kitesurfing.
|
||||
|
||||
### Solar water heater
|
||||
|
||||
If you prefer your outdoors-ing a bit closer to home, you could build this open source [solar water heater][14] created by the [Anisa Foundation][15]. This appliance focuses energy from the sun to heat water that can then be used in your home, letting you reduce your carbon footprint without having to give up long, hot showers. Of course, you can also [monitor its temperature][16] over the internet if you need to feel connected.
|
||||
|
||||
![An open source solar water heater from the Anisa Foundation.][17]
|
||||
|
||||
An open source solar water heater from the Anisa Foundation.
|
||||
|
||||
### Wrapping up
|
||||
|
||||
As these projects make clear, open source hardware is more than just electronics. You can take it with you to the woods, to the beach, or just to your roof. Next month we’ll talk about open source instruments and musical gear. Until then, [certify][18] your open source hardware!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/hardware-outdoors
|
||||
|
||||
作者:[Michael Weinberg][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mweinberg/users/aliciagibb
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_tree_clouds.png?itok=b_ftihhP (Tree clouds)
|
||||
[2]: https://certification.oshwa.org/list.html
|
||||
[3]: https://hummingbirdhammocks.com/
|
||||
[4]: https://certification.oshwa.org/us000102.html
|
||||
[5]: https://opensource.com/sites/default/files/uploads/01-hummingbird_hammocks_rain_tarp.png (An open source rain tarp from Hummingbird Hammocks)
|
||||
[6]: https://certification.oshwa.org/us000105.html
|
||||
[7]: https://opensource.com/sites/default/files/uploads/02-hummingbird_hammocks_friction_adjusters_400_px.png (Open source friction adjusters from Hummingbird Hammocks.)
|
||||
[8]: https://certification.oshwa.org/us000095.html
|
||||
[9]: https://opensource.com/sites/default/files/uploads/03-hummingbird_hammocks_hammock_400_px.png (An open source hammock from Hummingbird Hammocks.)
|
||||
[10]: https://certification.oshwa.org/us000098.html
|
||||
[11]: https://opensource.com/sites/default/files/uploads/04-hummingbird_hammocks_tree_straps_400_px_0.png (Open source tree straps from Hummingbird Hammocks.)
|
||||
[12]: https://certification.oshwa.org/fr000004.html
|
||||
[13]: https://opensource.com/sites/default/files/uploads/05-openfoil-original_size.png (Openfoil, an open source hydrofoil for kitesurfing.)
|
||||
[14]: https://certification.oshwa.org/mx000002.html
|
||||
[15]: http://www.fundacionanisa.org/index.php?lang=en
|
||||
[16]: https://thingspeak.com/channels/72565
|
||||
[17]: https://opensource.com/sites/default/files/uploads/06-solar_water_heater_500_px.png (An open source solar water heater from the Anisa Foundation.)
|
||||
[18]: https://certification.oshwa.org/
|
432
sources/tech/20190510 Check storage performance with dd.md
Normal file
@ -0,0 +1,432 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Check storage performance with dd)
|
||||
[#]: via: (https://fedoramagazine.org/check-storage-performance-with-dd/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
Check storage performance with dd
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
This article includes some example commands to show you how to get a _rough_ estimate of hard drive and RAID array performance using the _dd_ command. Accurate measurements would have to take into account things like [write amplification][2] and [system call overhead][3], which this guide does not. For a tool that might give more accurate results, you might want to consider using [hdparm][4].
|
||||
|
||||
To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. **WARNING**: The _write_ tests will destroy any data on the block devices against which they are run. **Do not run them against any device that contains data you want to keep!**
|
||||
|
||||
### Four tests
|
||||
|
||||
Below are four example dd commands that can be used to test the performance of a block device:
|
||||
|
||||
1. One process reading from $MY_DISK:
|
||||
|
||||
```
|
||||
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
|
||||
```
|
||||
|
||||
2. One process writing to $MY_DISK:
|
||||
|
||||
```
|
||||
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
|
||||
```
|
||||
|
||||
3. Two processes reading concurrently from $MY_DISK:
|
||||
|
||||
```
|
||||
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
|
||||
```
|
||||
|
||||
4. Two processes writing concurrently to $MY_DISK:
|
||||
|
||||
```
|
||||
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
– The _iflag=nocache_ and _oflag=direct_ parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from [RAM][5] rather than the hard drive.
|
||||
|
||||
– The values for the _bs_ and _count_ parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.
|
||||
|
||||
– The _null_ and _zero_ devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.
|
||||
|
||||
– The _skip=200_ parameter on the second dd command in the concurrent read and write tests is to ensure that the two copies of dd are operating on different areas of the hard drive.
|
||||
|
||||
### 16 examples
|
||||
|
||||
Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:
|
||||
|
||||
1. MY_DISK=/dev/sda2 (used in examples 1-X)
|
||||
2. MY_DISK=/dev/sdb2 (used in examples 2-X)
|
||||
3. MY_DISK=/dev/md/stripped (used in examples 3-X)
|
||||
4. MY_DISK=/dev/md/mirrored (used in examples 4-X)
|
||||
|
||||
|
||||
|
||||
A video demonstration of these tests being run on a PC is provided at the end of this guide.
|
||||
|
||||
Begin by putting your computer into _rescue_ mode to reduce the chances that disk I/O from background services might randomly affect your test results. **WARNING**: This will shut down all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your _root_ password to get into rescue mode. The _passwd_ command, when run as the root user, will prompt you to (re)set your root account password.
|
||||
|
||||
```
|
||||
$ sudo -i
|
||||
# passwd
|
||||
# setenforce 0
|
||||
# systemctl rescue
|
||||
```
|
||||
|
||||
You might also want to temporarily disable logging to disk:
|
||||
|
||||
```
|
||||
# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
|
||||
# systemctl restart systemd-journald.service
|
||||
```
|
||||
|
||||
If you have a swap device, it can be temporarily disabled and used to perform the following tests:
|
||||
|
||||
```
|
||||
# swapoff -a
|
||||
# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
|
||||
# mdadm --stop /dev/md/swap
|
||||
# mdadm --zero-superblock $MY_DEVS
|
||||
```
|
||||
|
||||
#### Example 1-1 (reading from sda)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s
|
||||
```
|
||||
|
||||
#### Example 1-2 (writing to sda)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 1-3 (reading concurrently from sda)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s
|
||||
```
|
||||
|
||||
#### Example 1-4 (writing concurrently to sda)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
|
||||
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-1 (reading from sdb)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-2 (writing to sdb)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-3 (reading concurrently from sdb)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s
|
||||
```
|
||||
|
||||
#### Example 2-4 (writing concurrently to sdb)
|
||||
|
||||
```
|
||||
# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
|
||||
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-1 (reading from RAID0)
|
||||
|
||||
```
|
||||
# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
|
||||
# MY_DISK=/dev/md/stripped
|
||||
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-2 (writing to RAID0)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/stripped
|
||||
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-3 (reading concurrently from RAID0)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/stripped
|
||||
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s
|
||||
```
|
||||
|
||||
#### Example 3-4 (writing concurrently to RAID0)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/stripped
|
||||
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-1 (reading from RAID1)
|
||||
|
||||
```
|
||||
# mdadm --stop /dev/md/stripped
|
||||
# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
|
||||
# MY_DISK=/dev/md/mirrored
|
||||
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-2 (writing to RAID1)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/mirrored
|
||||
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-3 (reading concurrently from RAID1)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/mirrored
|
||||
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s
|
||||
```
|
||||
|
||||
#### Example 4-4 (writing concurrently to RAID1)
|
||||
|
||||
```
|
||||
# MY_DISK=/dev/md/mirrored
|
||||
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
|
||||
```
|
||||
|
||||
```
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
|
||||
200+0 records in
|
||||
200+0 records out
|
||||
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s
|
||||
```
|
||||
|
||||
#### Restore your swap device and journald configuration
|
||||
|
||||
```
|
||||
# mdadm --stop /dev/md/stripped /dev/md/mirrored
|
||||
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
|
||||
# mkswap /dev/md/swap
|
||||
# swapon -a
|
||||
# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
|
||||
# systemctl restart systemd-journald.service
|
||||
# reboot
|
||||
```
|
||||
|
||||
### Interpreting the results
|
||||
|
||||
Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.
|
||||
|
||||
Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half the drive's bandwidth (60 MB/s).
|
||||
|
||||
The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal) but it would be three times as likely to suffer a [catastrophic failure][6].
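As a rough back-of-the-envelope illustration (not part of the original measurements), that scaling can be written out in a few lines; the per-drive failure probability used here is an invented figure:

```
single_drive_mbps = 125       # measured throughput of one drive (above)
p_drive_failure = 0.05        # assumed annual failure probability per drive

for n in (1, 2, 3):
    throughput = n * single_drive_mbps
    p_array_loss = 1 - (1 - p_drive_failure) ** n
    print(f"{n}-drive RAID0: ~{throughput} MB/s, "
          f"P(losing the array) = {p_array_loss:.3f}")
```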
|
||||
|
||||
The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk, except in the case where multiple processes are reading concurrently (example 4-3). When multiple processes are reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently, for example, when a process accesses a large number of files in the background while you are using a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost [if a drive fails][7].
|
||||
|
||||
### Video demo
|
||||
|
||||
Testing storage throughput using dd
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology ([SMART][8]). If your drive supports it, the _smartctl_ command can be used to query your hard drive for its internal statistics:
|
||||
|
||||
```
|
||||
# smartctl --health /dev/sda
|
||||
# smartctl --log=error /dev/sda
|
||||
# smartctl -x /dev/sda
|
||||
```
|
||||
|
||||
Another way that you might be able to tune your PC for better performance is by changing your [I/O scheduler][9]. Linux systems support several I/O schedulers and the current default for Fedora systems is the [multiqueue][10] variant of the [deadline][11] scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.
|
||||
|
||||
To view which I/O scheduler your drives are using, issue the following command:
|
||||
|
||||
```
|
||||
$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done
|
||||
```
|
||||
|
||||
You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:
|
||||
|
||||
```
|
||||
# echo bfq > /sys/block/sda/queue/scheduler
|
||||
```
|
||||
|
||||
You can make your changes permanent by creating a [udev rule][12] for your drive. The following example shows how to create a udev rule that will set all [rotational drives][13] to use the [BFQ][14] I/O scheduler:
|
||||
|
||||
```
|
||||
# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
|
||||
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
|
||||
END
|
||||
```
|
||||
|
||||
Here is another example that sets all [solid-state drives][15] to use the [NOOP][16] I/O scheduler:
|
||||
|
||||
```
|
||||
# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
|
||||
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
|
||||
END
|
||||
```
|
||||
|
||||
Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.
|
||||
|
||||
* * *
|
||||
|
||||
_Photo by [James Donovan][17] on [Unsplash][18]._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/check-storage-performance-with-dd/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/glb/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dd-performance-816x345.jpg
|
||||
[2]: https://www.ibm.com/developerworks/community/blogs/ibmnas/entry/misalignment_can_be_twice_the_cost1?lang=en
|
||||
[3]: https://eklitzke.org/efficient-file-copying-on-linux
|
||||
[4]: https://en.wikipedia.org/wiki/Hdparm
|
||||
[5]: https://en.wikipedia.org/wiki/Random-access_memory
|
||||
[6]: https://blog.elcomsoft.com/2019/01/why-ssds-die-a-sudden-death-and-how-to-deal-with-it/
|
||||
[7]: https://www.computerworld.com/article/2484998/ssds-do-die--as-linus-torvalds-just-discovered.html
|
||||
[8]: https://en.wikipedia.org/wiki/S.M.A.R.T.
|
||||
[9]: https://en.wikipedia.org/wiki/I/O_scheduling
|
||||
[10]: https://lwn.net/Articles/552904/
|
||||
[11]: https://en.wikipedia.org/wiki/Deadline_scheduler
|
||||
[12]: http://www.reactivated.net/writing_udev_rules.html
|
||||
[13]: https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics
|
||||
[14]: http://algo.ing.unimo.it/people/paolo/disk_sched/
|
||||
[15]: https://en.wikipedia.org/wiki/Solid-state_drive
|
||||
[16]: https://en.wikipedia.org/wiki/Noop_scheduler
|
||||
[17]: https://unsplash.com/photos/0ZBRKEG_5no?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[18]: https://unsplash.com/search/photos/speed?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -0,0 +1,180 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Keeping an open source project alive when people leave)
|
||||
[#]: via: (https://opensource.com/article/19/5/code-missing-community-management)
|
||||
[#]: author: (Rodrigo Duarte Sousa https://opensource.com/users/rodrigods/users/tellesnobrega)
|
||||
|
||||
Keeping an open source project alive when people leave
|
||||
======
|
||||
How to find out what's done, what's not, and what's missing.
|
||||
![][1]
|
||||
|
||||
Suppose you wake up one day and decide to finally use that recipe video you keep watching all over social media. You get the ingredients, organize the necessary utensils, and start to follow the recipe steps. You cut this, cut that, then start heating the oven at the same time you put butter and onions in a pan. Then, your phone reminds you: you have a dinner appointment with your boss, and you're already late! You turn off everything and leave immediately, stopping the cooking process somewhere near the end.
|
||||
|
||||
Some minutes later, your roommate arrives at home ready to have dinner and finds only the _ongoing work_ in the kitchen. They have the following options:
|
||||
|
||||
1. Clean up the mess and start cooking something from scratch.
|
||||
2. Order dinner and don’t bother to cook and/or fix the mess you left.
|
||||
3. Start cooking “around” the mess you left, which will probably take more time since most of the utensils are dirty and there isn’t much space left in the kitchen.
|
||||
|
||||
|
||||
|
||||
If you left the printed version of the recipe somewhere, your roommate also has a fourth option. They could finish what you started! The problem is that they have no idea what's missing. It is not like you crossed out each completed step. Their best bet is either to call you or to examine all of your _changes_ to infer what is missing.
|
||||
|
||||
In this example, the kitchen is like a software project, the utensils are the code, and the recipe is a new feature being implemented. Leaving something behind is not usually doable in a company's private project since you're accountable for your work and—in a scenario where you need to leave—it's almost certain that there is someone tracking/following the project, so they avoid having a "single point of failure." With open source projects, though, this continuity rarely happens. So how can we in the open source community deal with legacy, unfinished code, or code that is completed but no one dares touch it?
|
||||
|
||||
### Knowledge legacy in open source projects
|
||||
|
||||
We have always felt that open source is one of the best ways for an inexperienced software engineer to improve her skills. For many, open source projects offer their first hands-on experience with particular tools: [version control systems][2], [unit][3] and [integration][4] tests, [continuous delivery][5], [code reviews][6], [feature planning][7], [bug reporting/fixing][8], and more.
|
||||
|
||||
In addition to learning opportunities, we can also view open source projects as a career opportunity—many senior engineers in the community get paid to be there, and you can add your contributions to your resume. That’s pretty cool. There's nothing like learning while improving your resume and getting potential employers' attention so you can pay your rent.
|
||||
|
||||
Is this whole situation an infinite loop where everyone wins? The answer is obviously no. This post focuses on one of the main issues that arise in any project: the [bus/truck factor][9]. In the open source context, specifically, when people experience major changes such as a new job or other more personal factors, they tend to leave the community. We will first describe the problems that can arise from people leaving their _recipes_ unfinished by using [OpenStack][10] as an example. Then, we'll try to discuss some ideas to try to mitigate the issues.
|
||||
|
||||
### Common problems
|
||||
|
||||
In the past few years, we've seen a lot of changes in the [OpenStack][11] community, where some projects lost a portion of their active contributors. These losses led to incomplete work and even to finished modules without clear maintainers. Below are other examples of what happens when people suddenly leave. While this article uses OpenStack terms, such as “specs,” these issues easily apply to software development in general:
|
||||
|
||||
* **Broken documentation:** A new API or setting either wasn't documented, or it was documented but not implemented.
|
||||
* **Hard to resolve knowledge deficits:** For example, a new requirement and/or feature requires part of the code to be refactored but no one has the necessary expertise.
|
||||
* **Incomplete features:** What are the missing tasks required for each feature? Which tasks were completed?
|
||||
* **Debugging drama:** If the person who wrote the code isn't there, it can take a lot of engineering hours just to decrypt, so to speak, the code path that needs to be fixed.
|
||||
|
||||
|
||||
|
||||
To illustrate, we will use the [Project Tree Deletion][12] feature. Project Tree Deletion is a tiny feature that one of us proposed more than three years ago and couldn’t complete. Basically, the main goal was to enable an OpenStack user/operator to erase a whole branch of projects without having to manually disable/delete every single one of them, starting from the leaves. Very straightforward, right? The PTD spec has been merged and has the following _work items_:
|
||||
|
||||
* Update API spec documentation.
|
||||
* Add new rules to the file **policy.json**.
|
||||
* Add new endpoints to mirror the new features.
|
||||
* Implement the new deletion/disabling behavior for the project’s hierarchy.
|
||||
|
||||
|
||||
|
||||
What about the sequence of steps (roadmap) to get these work items done? How do we know where to start and what to tackle next? Are there any logical dependencies between the work items?
|
||||
|
||||
Also, how do we know which work has been completed (if any)? One of the things that we do is look in the [blueprint][13] and/or the new [bug tracker][14], for example:
|
||||
|
||||
* Recursive deletion and project disabling: <https://review.openstack.org/148730>(merged)
|
||||
* API changes for Reseller: <https://review.openstack.org/153007>(merged)
|
||||
* Add parent_id to GET /projects: <https://review.openstack.org/166326>(merged)
|
||||
* Manager support for project cascade update: <https://review.openstack.org/243584>(merged)
|
||||
* API support for cascade update: <https://review.openstack.org/243585>(abandoned)
|
||||
* Manager support for project delete cascade: <https://review.openstack.org/244149>(merged)
|
||||
* API support for project cascade delete: <https://review.openstack.org/244248>(abandoned)
|
||||
* Add backend support for deleting a projects list: <https://review.openstack.org/245916>(merged)
|
||||
* Test list project hierarchy is correct for a large tree: <https://review.openstack.org/277512>(merged)
|
||||
* Fix cascade operations documentation: <https://review.openstack.org/274836>(merged)
|
||||
* Revert “Fix cascade operations documentation”: <https://review.openstack.org/286716>(merged)
|
||||
* Remove the APIs from the doc that aren't supported yet: <https://review.openstack.org/368570>(merged)
|
||||
|
||||
|
||||
|
||||
Here we can see a lot of merged patches, but also that some were abandoned, and that some include the words Revert and Remove in their titles. Now we have strong evidence that this work is not completed, but at least some work was started to clean it up and avoid exposing something incomplete in the service API. Let’s dig a little bit deeper and look at the [_current_ delete project code][15].
|
||||
|
||||
There, we can see an added **cascade** argument (“cascade” suggests deleting related things together, so this argument must be related to the proposed feature), and that the function has a special block to handle the possible values of **cascade**:
|
||||
|
||||
|
||||
```
|
||||
def _delete_project(self, project, initiator=None, cascade=False):
|
||||
|
||||
if cascade:
|
||||
# Getting reversed project's subtrees list, i.e. from the leaves
|
||||
# to the root, so we do not break parent_id FK.
|
||||
subtree_list = self.list_projects_in_subtree(project_id)
|
||||
subtree_list.reverse()
|
||||
if not self._check_whole_subtree_is_disabled(
|
||||
project_id, subtree_list=subtree_list):
|
||||
raise exception.ForbiddenNotSecurity(
|
||||
_('Cannot delete project %(project_id)s since its subtree '
|
||||
'contains enabled projects.')
|
||||
% {'project_id': project_id})
|
||||
|
||||
project_list = subtree_list + [project]
|
||||
projects_ids = [x['id'] for x in project_list]
|
||||
|
||||
ret = self.driver.delete_projects_from_ids(projects_ids)
|
||||
for prj in project_list:
|
||||
self._post_delete_cleanup_project(prj['id'], prj, initiator)
|
||||
else:
|
||||
ret = self.driver.delete_project(project_id)
|
||||
self._post_delete_cleanup_project(project_id, project, initiator)
|
||||
```
|
||||
|
||||
What about the callers of this function? Do they use **cascade** at all? If we search for it, we only find occurrences in the backend tests:
|
||||
|
||||
|
||||
```
|
||||
$ git grep "delete_project" | grep "cascade" | grep -v "def"
|
||||
keystone/tests/unit/resource/test_backends.py: PROVIDERS.resource_api.delete_project(root_project['id'], cascade=True)
|
||||
keystone/tests/unit/resource/test_backends.py: PROVIDERS.resource_api.delete_project(p1['id'], cascade=True)
|
||||
```
|
||||
|
||||
We can also confirm this finding by looking at the [delete projects API implementation][16].
|
||||
|
||||
So it seems that we have a problem here: something simple that I started was left behind a very long time ago. How could the community or I have prevented this from happening?
|
||||
|
||||
From the example above, one of the most apparent problems is the lack of a clear roadmap and list of completed tasks somewhere. To follow the actual implementation status, we had to dig into the blueprint/bug comments and the code.
|
||||
|
||||
Based on this issue, we can sketch an idea: for each new feature, we need a roadmap stored somewhere to reflect the implementation status. Once the roadmap is defined within a spec, we can track each step as a [Launchpad][17] entry, for example, and have a better view of the progress status of that spec.
|
||||
|
||||
Of course, these steps won’t prevent unfinished projects and they add a little bit of process, but following them can give a better view of what's missing so someone else from the community could finish or even revert what's there.
|
||||
|
||||
### That’s not all
|
||||
|
||||
What about other aspects of the project besides feature completion? We shouldn’t expect that every person on the core team is an expert in every single project module. This issue highlights another very important aspect of any open source community: mentoring.
|
||||
|
||||
New people come to the community all the time, and many have an incentive to keep coming back, as we discussed earlier. However, are our current community members willing to mentor them? How many times have you participated as a mentor in a program such as [Outreachy][18] or [Google Summer of Code][19], or taken time to answer questions in the project’s chat?
|
||||
|
||||
We also know that people eventually move on to other open source communities, so we have the chance of not leaving what we learned behind. We can always transmit that knowledge directly to those who are currently interested and actively asking questions, or indirectly, by writing documentation, blog posts, giving talks, and so forth.
|
||||
|
||||
In order to have a healthy open source community, knowledge can’t be dominated by a few people. We need to make an effort to have as many people capable of moving the project forward as possible. Also, a key aspect of mentoring is not only related to coding, but also to leadership skills. Preparing people to take on roles like Project Team Lead, joining the Technical Committee, and so on is crucial if we intend to see the community grow even when we're not around anymore.
|
||||
|
||||
Needless to say, mentoring is also an important skill for climbing the engineering ladder in most companies. Consider that another motivation.
|
||||
|
||||
### To conclude
|
||||
|
||||
Open source should not be treated as only a means to an end. Collaboration is a crucial part of these projects and, alongside mentoring, should always be treated as a first-class citizen in any open source community. And, of course, we will fix the unfinished spec used as this article's example.
|
||||
|
||||
If you are part of an open source community, it is your responsibility to focus on sharing your knowledge while you are still around. Chances are that no one is going to tell you to do so; it should be part of the routine of any open source collaborator.
|
||||
|
||||
What are other ways of sharing knowledge? What are your thoughts and ideas about the issue?
|
||||
|
||||
_This original article was posted on [rodrigods][20]._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/code-missing-community-management
|
||||
|
||||
作者:[Rodrigo Duarte Sousa][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rodrigods/users/tellesnobrega
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_question_B.png?itok=f88cyt00
|
||||
[2]: https://en.wikipedia.org/wiki/Version_control
|
||||
[3]: https://en.wikipedia.org/wiki/Unit_testing
|
||||
[4]: https://en.wikipedia.org/wiki/Integration_testing
|
||||
[5]: https://en.wikipedia.org/wiki/Continuous_delivery
|
||||
[6]: https://en.wikipedia.org/wiki/Code_review
|
||||
[7]: https://www.agilealliance.org/glossary/sprint-planning/
|
||||
[8]: https://www.softwaretestinghelp.com/how-to-write-good-bug-report/
|
||||
[9]: https://en.wikipedia.org/wiki/Bus_factor
|
||||
[10]: https://www.openstack.org/
|
||||
[11]: /resources/what-is-openstack
|
||||
[12]: https://review.opendev.org/#/c/148730/35
|
||||
[13]: https://blueprints.launchpad.net/keystone/+spec/project-tree-deletion
|
||||
[14]: https://bugs.launchpad.net/keystone/+bug/1816105
|
||||
[15]: https://github.com/openstack/keystone/blob/master/keystone/resource/core.py#L475-L519
|
||||
[16]: https://github.com/openstack/keystone/blob/master/keystone/api/projects.py#L202-L214
|
||||
[17]: https://launchpad.net
|
||||
[18]: https://www.outreachy.org/
|
||||
[19]: https://summerofcode.withgoogle.com/
|
||||
[20]: https://blog.rodrigods.com/knowledge-legacy-the-issue-of-passing-the-baton/
|
594
sources/tech/20190510 Learn to change history with git rebase.md
Normal file
@ -0,0 +1,594 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Learn to change history with git rebase!)
|
||||
[#]: via: (https://git-rebase.io/)
|
||||
[#]: author: (git-rebase https://git-rebase.io/)
|
||||
|
||||
Learn to change history with git rebase!
|
||||
======
|
||||
One of Git's core value-adds is the ability to edit history. Unlike version control systems that treat the history as a sacred record, in git we can change history to suit our needs. This gives us a lot of powerful tools and allows us to curate a good commit history in the same way we use refactoring to uphold good software design practices. These tools can be a little bit intimidating to the novice or even intermediate git user, but this guide will help to demystify the powerful git-rebase.
|
||||
|
||||
```
|
||||
A word of caution: changing the history of public, shared, or stable branches is generally advised against. Editing the history of feature branches and personal forks is fine, and editing commits that you haven't pushed yet is always okay. Use git push -f to force push your changes to a personal fork or feature branch after editing your commits.
|
||||
```
|
||||
|
||||
Despite the scary warning, it's worth noting that everything covered in this guide is a non-destructive operation. It's actually pretty difficult to permanently lose data in git. Fixing things when you make mistakes is covered at the end of this guide.
|
||||
|
||||
### Setting up a sandbox
|
||||
|
||||
We don't want to mess up any of your actual repositories, so throughout this guide we'll be working with a sandbox repo. Run these commands to get started:
|
||||
|
||||
```
|
||||
git init /tmp/rebase-sandbox
|
||||
cd /tmp/rebase-sandbox
|
||||
git commit --allow-empty -m"Initial commit"
|
||||
```
|
||||
|
||||
If you run into trouble, just run rm -rf /tmp/rebase-sandbox and run these steps again to start over. Each step of this guide can be run on a fresh sandbox, so it's not necessary to re-do every task.
|
||||
|
||||
|
||||
### Amending your last commit
|
||||
|
||||
Let's start with something simple: fixing your most recent commit. Let's add a file to our sandbox - and make a mistake:
|
||||
|
||||
```
|
||||
echo "Hello wrold!" >greeting.txt
|
||||
git add greeting.txt
|
||||
git commit -m"Add greeting.txt"
|
||||
```
|
||||
|
||||
Fixing this mistake is pretty easy. We can just edit the file and commit with `--amend`, like so:
|
||||
|
||||
```
|
||||
echo "Hello world!" >greeting.txt
|
||||
git commit -a --amend
|
||||
```
|
||||
|
||||
Specifying `-a` automatically stages (i.e. `git add`'s) all files that git already knows about, and `--amend` will squash the changes into the most recent commit. Save and quit your editor (you have a chance to change the commit message now if you'd like). You can see the fixed commit by running `git show`:
|
||||
|
||||
```
|
||||
commit f5f19fbf6d35b2db37dcac3a55289ff9602e4d00 (HEAD -> master)
|
||||
Author: Drew DeVault
|
||||
Date: Sun Apr 28 11:09:47 2019 -0400
|
||||
|
||||
Add greeting.txt
|
||||
|
||||
diff --git a/greeting.txt b/greeting.txt
|
||||
new file mode 100644
|
||||
index 0000000..cd08755
|
||||
--- /dev/null
|
||||
+++ b/greeting.txt
|
||||
@@ -0,0 +1 @@
|
||||
+Hello world!
|
||||
```
|
||||
|
||||
### Fixing up older commits
|
||||
|
||||
Amending only works for the most recent commit. What happens if you need to correct an older commit? Let's start by setting up our sandbox accordingly:
|
||||
|
||||
```
|
||||
echo "Hello!" >greeting.txt
|
||||
git add greeting.txt
|
||||
git commit -m"Add greeting.txt"
|
||||
|
||||
echo "Goodbye world!" >farewell.txt
|
||||
git add farewell.txt
|
||||
git commit -m"Add farewell.txt"
|
||||
```
|
||||
|
||||
Looks like `greeting.txt` is missing "world". Let's write a commit normally which fixes that:
|
||||
|
||||
```
|
||||
echo "Hello world!" >greeting.txt
|
||||
git commit -a -m"fixup greeting.txt"
|
||||
```
|
||||
|
||||
So now the files look correct, but our history could be better - let's use the new commit to "fixup" the last one. For this, we need to introduce a new tool: the interactive rebase. We're going to edit the last three commits this way, so we'll run `git rebase -i HEAD~3` (`-i` for interactive). This'll open your text editor with something like this:
|
||||
|
||||
```
|
||||
pick 8d3fc77 Add greeting.txt
|
||||
pick 2a73a77 Add farewell.txt
|
||||
pick 0b9d0bb fixup greeting.txt
|
||||
|
||||
# Rebase f5f19fb..0b9d0bb onto f5f19fb (3 commands)
|
||||
#
|
||||
# Commands:
|
||||
# p, pick <commit> = use commit
|
||||
# f, fixup <commit> = like "squash", but discard this commit's log message
|
||||
```
|
||||
|
||||
This is the rebase plan, and by editing this file you can instruct git on how to edit history. I've trimmed the summary to just the details relevant to this part of the rebase guide, but feel free to skim the full summary in your text editor.
|
||||
|
||||
When we save and close our editor, git is going to remove all of these commits from its history, then execute each line one at a time. By default, it's going to pick each commit, summoning it from the heap and adding it to the branch. If we don't edit this file at all, we'll end up right back where we started, picking every commit as-is. We're going to use one of my favorite features now: fixup. Edit the third line to change the operation from "pick" to "fixup" and move it to immediately after the commit we want to "fix up":
|
||||
|
||||
```
|
||||
pick 8d3fc77 Add greeting.txt
|
||||
fixup 0b9d0bb fixup greeting.txt
|
||||
pick 2a73a77 Add farewell.txt
|
||||
```
|
||||
|
||||
**Tip**: We can also abbreviate this with just "f" to speed things up next time.
|
||||
|
||||
Save and quit your editor - git will run these commands. We can check the log to verify the result:
|
||||
|
||||
```
|
||||
$ git log -2 --oneline
|
||||
fcff6ae (HEAD -> master) Add farewell.txt
|
||||
a479e94 Add greeting.txt
|
||||
```
|
||||
|
||||
### Squashing several commits into one
|
||||
|
||||
As you work, you may find it useful to write lots of commits as you reach small milestones or fix bugs in previous commits. However, it may be useful to "squash" these commits together, to make a cleaner history before merging your work into master. For this, we'll use the "squash" operation. Let's start by writing a bunch of commits - just copy and paste this if you want to speed it up:
|
||||
|
||||
```
|
||||
git checkout -b squash
|
||||
for c in H e l l o , ' ' w o r l d; do
|
||||
echo "$c" >>squash.txt
|
||||
git add squash.txt
|
||||
git commit -m"Add '$c' to squash.txt"
|
||||
done
|
||||
```
|
||||
|
||||
That's a lot of commits to make a file that says "Hello, world"! Let's start another interactive rebase to squash them together. Note that we checked out a branch to try this on, first. Because of that, we can quickly rebase all of the commits since we branched by using `git rebase -i master`. The result:
|
||||
|
||||
```
|
||||
pick 1e85199 Add 'H' to squash.txt
|
||||
pick fff6631 Add 'e' to squash.txt
|
||||
pick b354c74 Add 'l' to squash.txt
|
||||
pick 04aaf74 Add 'l' to squash.txt
|
||||
pick 9b0f720 Add 'o' to squash.txt
|
||||
pick 66b114d Add ',' to squash.txt
|
||||
pick dc158cd Add ' ' to squash.txt
|
||||
pick dfcf9d6 Add 'w' to squash.txt
|
||||
pick 7a85f34 Add 'o' to squash.txt
|
||||
pick c275c27 Add 'r' to squash.txt
|
||||
pick a513fd1 Add 'l' to squash.txt
|
||||
pick 6b608ae Add 'd' to squash.txt
|
||||
|
||||
# Rebase 1af1b46..6b608ae onto 1af1b46 (12 commands)
|
||||
#
|
||||
# Commands:
|
||||
# p, pick <commit> = use commit
|
||||
# s, squash <commit> = use commit, but meld into previous commit
|
||||
```
|
||||
|
||||
**Tip** : your local master branch evolves independently of the remote master branch, and git stores the remote branch as `origin/master`. Combined with this trick, `git rebase -i origin/master` is often a very convenient way to rebase all of the commits which haven't been merged upstream yet!
|
||||
|
||||
We're going to squash all of these changes into the first commit. To do this, change every "pick" operation to "squash", except for the first line, like so:
|
||||
|
||||
```
|
||||
pick 1e85199 Add 'H' to squash.txt
|
||||
squash fff6631 Add 'e' to squash.txt
|
||||
squash b354c74 Add 'l' to squash.txt
|
||||
squash 04aaf74 Add 'l' to squash.txt
|
||||
squash 9b0f720 Add 'o' to squash.txt
|
||||
squash 66b114d Add ',' to squash.txt
|
||||
squash dc158cd Add ' ' to squash.txt
|
||||
squash dfcf9d6 Add 'w' to squash.txt
|
||||
squash 7a85f34 Add 'o' to squash.txt
|
||||
squash c275c27 Add 'r' to squash.txt
|
||||
squash a513fd1 Add 'l' to squash.txt
|
||||
squash 6b608ae Add 'd' to squash.txt
|
||||
```
|
||||
|
||||
When you save and close your editor, git will think about this for a moment, then open your editor again to revise the final commit message. You'll see something like this:
|
||||
|
||||
```
|
||||
# This is a combination of 12 commits.
|
||||
# This is the 1st commit message:
|
||||
|
||||
Add 'H' to squash.txt
|
||||
|
||||
# This is the commit message #2:
|
||||
|
||||
Add 'e' to squash.txt
|
||||
|
||||
# This is the commit message #3:
|
||||
|
||||
Add 'l' to squash.txt
|
||||
|
||||
# This is the commit message #4:
|
||||
|
||||
Add 'l' to squash.txt
|
||||
|
||||
# This is the commit message #5:
|
||||
|
||||
Add 'o' to squash.txt
|
||||
|
||||
# This is the commit message #6:
|
||||
|
||||
Add ',' to squash.txt
|
||||
|
||||
# This is the commit message #7:
|
||||
|
||||
Add ' ' to squash.txt
|
||||
|
||||
# This is the commit message #8:
|
||||
|
||||
Add 'w' to squash.txt
|
||||
|
||||
# This is the commit message #9:
|
||||
|
||||
Add 'o' to squash.txt
|
||||
|
||||
# This is the commit message #10:
|
||||
|
||||
Add 'r' to squash.txt
|
||||
|
||||
# This is the commit message #11:
|
||||
|
||||
Add 'l' to squash.txt
|
||||
|
||||
# This is the commit message #12:
|
||||
|
||||
Add 'd' to squash.txt
|
||||
|
||||
# Please enter the commit message for your changes. Lines starting
|
||||
# with '#' will be ignored, and an empty message aborts the commit.
|
||||
#
|
||||
# Date: Sun Apr 28 14:21:56 2019 -0400
|
||||
#
|
||||
# interactive rebase in progress; onto 1af1b46
|
||||
# Last commands done (12 commands done):
|
||||
# squash a513fd1 Add 'l' to squash.txt
|
||||
# squash 6b608ae Add 'd' to squash.txt
|
||||
# No commands remaining.
|
||||
# You are currently rebasing branch 'squash' on '1af1b46'.
|
||||
#
|
||||
# Changes to be committed:
|
||||
# new file: squash.txt
|
||||
#
|
||||
```
|
||||
|
||||
This defaults to a combination of all of the commit messages which were squashed, but leaving it like this is almost always not what you want. The old commit messages may be useful for reference when writing the new one, though.
|
||||
|
||||
**Tip** : the "fixup" command you learned about in the previous section can be used for this purpose, too - but it discards the messages of the squashed commits.
|
||||
|
||||
Let's delete everything and replace it with a better commit message, like this:
|
||||
|
||||
```
|
||||
Add squash.txt with contents "Hello, world"
|
||||
|
||||
# Please enter the commit message for your changes. Lines starting
|
||||
# with '#' will be ignored, and an empty message aborts the commit.
|
||||
#
|
||||
# Date: Sun Apr 28 14:21:56 2019 -0400
|
||||
#
|
||||
# interactive rebase in progress; onto 1af1b46
|
||||
# Last commands done (12 commands done):
|
||||
# squash a513fd1 Add 'l' to squash.txt
|
||||
# squash 6b608ae Add 'd' to squash.txt
|
||||
# No commands remaining.
|
||||
# You are currently rebasing branch 'squash' on '1af1b46'.
|
||||
#
|
||||
# Changes to be committed:
|
||||
# new file: squash.txt
|
||||
#
|
||||
```
|
||||
|
||||
Save and quit your editor, then examine your git log - success!
|
||||
|
||||
```
|
||||
commit c785f476c7dff76f21ce2cad7c51cf2af00a44b6 (HEAD -> squash)
|
||||
Author: Drew DeVault
|
||||
Date: Sun Apr 28 14:21:56 2019 -0400
|
||||
|
||||
Add squash.txt with contents "Hello, world"
|
||||
```
|
||||
|
||||
Before we move on, let's pull our changes into the master branch and get rid of this scratch one. We can use `git rebase` like we use `git merge`, but it avoids making a merge commit:
|
||||
|
||||
```
|
||||
git checkout master
|
||||
git rebase squash
|
||||
git branch -D squash
|
||||
```
|
||||
|
||||
We generally prefer to avoid using git merge unless we're actually merging unrelated histories. If you have two divergent branches, a git merge is useful to have a record of when they were... merged. In the course of your normal work, rebase is often more appropriate.
|
||||
|
||||
### Splitting one commit into several
|
||||
|
||||
Sometimes the opposite problem happens - one commit is just too big. Let's look into splitting it up. This time, let's write some actual code. Start with a simple C program2 (you can still copy+paste this snippet into your shell to do this quickly):
|
||||
|
||||
```
|
||||
cat <<EOF >main.c
|
||||
int main(int argc, char *argv[]) {
|
||||
return 0;
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
We'll commit this first.
|
||||
|
||||
```
|
||||
git add main.c
|
||||
git commit -m"Add C program skeleton"
|
||||
```
|
||||
|
||||
Next, let's extend the program a bit:
|
||||
|
||||
```
|
||||
cat <<EOF >main.c
|
||||
#include <stdio.h>
|
||||
|
||||
const char *get_name() {
|
||||
static char buf[128];
|
||||
scanf("%s", buf);
|
||||
return buf;
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[]) {
|
||||
printf("What's your name? ");
|
||||
const char *name = get_name();
|
||||
printf("Hello, %s!\n", name);
|
||||
return 0;
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
After we commit this, we'll be ready to learn how to split it up.
|
||||
|
||||
```
|
||||
git commit -a -m"Flesh out C program"
|
||||
```
|
||||
|
||||
The first step is to start an interactive rebase. Let's rebase both commits with `git rebase -i HEAD~2`, giving us this rebase plan:
|
||||
|
||||
```
|
||||
pick 237b246 Add C program skeleton
|
||||
pick b3f188b Flesh out C program
|
||||
|
||||
# Rebase c785f47..b3f188b onto c785f47 (2 commands)
|
||||
#
|
||||
# Commands:
|
||||
# p, pick <commit> = use commit
|
||||
# e, edit <commit> = use commit, but stop for amending
|
||||
```
|
||||
|
||||
Change the second commit's command from "pick" to "edit", then save and close your editor. Git will think about this for a second, then present you with this:
|
||||
|
||||
```
|
||||
Stopped at b3f188b... Flesh out C program
|
||||
You can amend the commit now, with
|
||||
|
||||
git commit --amend
|
||||
|
||||
Once you are satisfied with your changes, run
|
||||
|
||||
git rebase --continue
|
||||
```
|
||||
|
||||
We could follow these instructions to add new changes to the commit, but instead let's do a "soft reset"3 by running `git reset HEAD^`. If you run `git status` after this, you'll see that it un-commits the latest commit and adds its changes to the working tree:
|
||||
|
||||
```
|
||||
Last commands done (2 commands done):
|
||||
pick 237b246 Add C program skeleton
|
||||
edit b3f188b Flesh out C program
|
||||
No commands remaining.
|
||||
You are currently splitting a commit while rebasing branch 'master' on 'c785f47'.
|
||||
(Once your working directory is clean, run "git rebase --continue")
|
||||
|
||||
Changes not staged for commit:
|
||||
(use "git add ..." to update what will be committed)
|
||||
(use "git checkout -- ..." to discard changes in working directory)
|
||||
|
||||
modified: main.c
|
||||
|
||||
no changes added to commit (use "git add" and/or "git commit -a")
|
||||
```
|
||||
|
||||
To split this up, we're going to do an interactive commit. This allows us to selectively commit only specific changes from the working tree. Run `git commit -p` to start this process, and you'll be presented with the following prompt:
|
||||
|
||||
```
|
||||
diff --git a/main.c b/main.c
|
||||
index b1d9c2c..3463610 100644
|
||||
--- a/main.c
|
||||
+++ b/main.c
|
||||
@@ -1,3 +1,14 @@
|
||||
+#include <stdio.h>
|
||||
+
|
||||
+const char *get_name() {
|
||||
+ static char buf[128];
|
||||
+ scanf("%s", buf);
|
||||
+ return buf;
|
||||
+}
|
||||
+
|
||||
int main(int argc, char *argv[]) {
|
||||
+ printf("What's your name? ");
|
||||
+ const char *name = get_name();
|
||||
+ printf("Hello, %s!\n", name);
|
||||
return 0;
|
||||
}
|
||||
Stage this hunk [y,n,q,a,d,s,e,?]?
|
||||
```
|
||||
|
||||
Git has presented you with just one "hunk" (i.e. a single change) to consider committing. This one is too big, though - let's use the "s" command to "split" up the hunk into smaller parts.
|
||||
|
||||
```
|
||||
Split into 2 hunks.
|
||||
@@ -1 +1,9 @@
|
||||
+#include <stdio.h>
|
||||
+
|
||||
+const char *get_name() {
|
||||
+ static char buf[128];
|
||||
+ scanf("%s", buf);
|
||||
+ return buf;
|
||||
+}
|
||||
+
|
||||
int main(int argc, char *argv[]) {
|
||||
Stage this hunk [y,n,q,a,d,j,J,g,/,e,?]?
|
||||
```
|
||||
|
||||
**Tip** : If you're curious about the other options, press "?" to summarize them.
|
||||
|
||||
This hunk looks better - a single, self-contained change. Let's hit "y" to answer the question (and stage that "hunk"), then "q" to "quit" the interactive session and proceed with the commit. Your editor will pop up to ask you to enter a suitable commit message.
|
||||
|
||||
```
|
||||
Add get_name function to C program
|
||||
|
||||
# Please enter the commit message for your changes. Lines starting
|
||||
# with '#' will be ignored, and an empty message aborts the commit.
|
||||
#
|
||||
# interactive rebase in progress; onto c785f47
|
||||
# Last commands done (2 commands done):
|
||||
# pick 237b246 Add C program skeleton
|
||||
# edit b3f188b Flesh out C program
|
||||
# No commands remaining.
|
||||
# You are currently splitting a commit while rebasing branch 'master' on 'c785f47'.
|
||||
#
|
||||
# Changes to be committed:
|
||||
# modified: main.c
|
||||
#
|
||||
# Changes not staged for commit:
|
||||
# modified: main.c
|
||||
#
|
||||
```
|
||||
|
||||
Save and close your editor, then we'll make the second commit. We could do another interactive commit, but since we just want to include the rest of the changes in this commit we'll just do this:
|
||||
|
||||
```
|
||||
git commit -a -m"Prompt user for their name"
|
||||
git rebase --continue
|
||||
```
|
||||
|
||||
That last command tells git that we're done editing this commit, and to continue to the next rebase command. That's it! Run `git log` to see the fruits of your labor:
|
||||
|
||||
```
|
||||
$ git log -3 --oneline
|
||||
fe19cc3 (HEAD -> master) Prompt user for their name
|
||||
659a489 Add get_name function to C program
|
||||
237b246 Add C program skeleton
|
||||
```
|
||||
|
||||
### Reordering commits
|
||||
|
||||
This one is pretty easy. Let's start by setting up our sandbox:
|
||||
|
||||
```
|
||||
echo "Goodbye now!" >farewell.txt
|
||||
git add farewell.txt
|
||||
git commit -m"Add farewell.txt"
|
||||
|
||||
echo "Hello there!" >greeting.txt
|
||||
git add greeting.txt
|
||||
git commit -m"Add greeting.txt"
|
||||
|
||||
echo "How're you doing?" >inquiry.txt
|
||||
git add inquiry.txt
|
||||
git commit -m"Add inquiry.txt"
|
||||
```
|
||||
|
||||
The git log should now look like this:
|
||||
|
||||
```
|
||||
f03baa5 (HEAD -> master) Add inquiry.txt
|
||||
a4cebf7 Add greeting.txt
|
||||
90bb015 Add farewell.txt
|
||||
```
|
||||
|
||||
Clearly, this is all out of order. Let's do an interactive rebase of the past 3 commits to resolve this. Run `git rebase -i HEAD~3` and this rebase plan will appear:
|
||||
|
||||
```
|
||||
pick 90bb015 Add farewell.txt
|
||||
pick a4cebf7 Add greeting.txt
|
||||
pick f03baa5 Add inquiry.txt
|
||||
|
||||
# Rebase fe19cc3..f03baa5 onto fe19cc3 (3 commands)
|
||||
#
|
||||
# Commands:
|
||||
# p, pick <commit> = use commit
|
||||
#
|
||||
# These lines can be re-ordered; they are executed from top to bottom.
|
||||
```
|
||||
|
||||
The fix is now straightforward: just reorder these lines in the order you wish for the commits to appear. Should look something like this:
|
||||
|
||||
```
|
||||
pick a4cebf7 Add greeting.txt
|
||||
pick f03baa5 Add inquiry.txt
|
||||
pick 90bb015 Add farewell.txt
|
||||
```
|
||||
|
||||
Save and close your editor and git will do the rest for you. Note that it's possible to end up with conflicts when you do this in practice - see the "Resolving conflicts" section below for help.
|
||||
|
||||
### git pull --rebase
|
||||
|
||||
If you've been writing some commits on a branch which has been updated upstream, normally `git pull` will create a merge commit. In this respect, `git pull`'s behavior by default is equivalent to:
|
||||
|
||||
```
|
||||
git fetch origin
|
||||
git merge origin/master
|
||||
```
|
||||
|
||||
There's another option, which is often more useful and leads to a much cleaner history: `git pull --rebase`. Unlike the merge approach, this is equivalent to the following:
|
||||
|
||||
```
|
||||
git fetch origin
|
||||
git rebase origin/master
|
||||
```
|
||||
|
||||
The merge approach is simpler and easier to understand, but the rebase approach is almost always what you want to do if you understand how to use git rebase. If you like, you can set it as the default behavior like so:
|
||||
|
||||
```
|
||||
git config --global pull.rebase true
|
||||
```
|
||||
|
||||
When you do this, technically you're applying the procedure we discuss in the next section... so let's explain what it means to do that deliberately, too.
|
||||
|
||||
### Using git rebase to... rebase
|
||||
|
||||
Ironically, the feature of git rebase that I use the least is the one it's named for: rebasing branches. Say you have the following branches:
|
||||
|
||||
```
|
||||
o--o--o--o--> master
|
||||
\--o--o--> feature-1
|
||||
\--o--> feature-2
|
||||
```
|
||||
|
||||
It turns out feature-2 doesn't depend on any of the changes in feature-1, so you can just base it off of master. The fix is thus:
|
||||
|
||||
```
|
||||
git checkout feature-2
|
||||
git rebase master
|
||||
```
|
||||
|
||||
The non-interactive rebase does the default operation for all implicated commits ("pick")4, which simply rolls your history back to the last common ancestor and replays the commits from both branches. Your history now looks like this:
|
||||
|
||||
```
|
||||
o--o--o--o--> master
|
||||
| \--o--> feature-2
|
||||
\--o--o--> feature-1
|
||||
```
|
||||
|
||||
### Resolving conflicts
|
||||
|
||||
The details on resolving merge conflicts are beyond the scope of this guide - keep your eye out for another guide for this in the future. Assuming you're familiar with resolving conflicts in general, here are the specifics that apply to rebasing.
|
||||
|
||||
|
||||
|
||||
Sometimes you'll get a merge conflict when doing a rebase, which you can handle just like any other merge conflict. Git will set up the conflict markers in the affected files, `git status` will show you what you need to resolve, and you can mark files as resolved with `git add` or `git rm`. However, in the context of a git rebase, there are two options you should be aware of.
|
||||
|
||||
The first is how you complete the conflict resolution. Rather than `git commit` like you'll use when addressing conflicts that arise from `git merge`, the appropriate command for rebasing is `git rebase --continue`. However, there's another option available to you: `git rebase --skip`. This will skip the commit you're working on, and it won't be included in the rebase. This is most common when doing a non-interactive rebase, when git doesn't realize that a commit it's pulled from the "other" branch is an updated version of the commit that it conflicts with on "our" branch.
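As a quick illustration (a sketch of the typical flow, not output from the sandbox above; the file name `greeting.txt` is just an example), handling a conflict that appears mid-rebase usually looks like this:

```
# git stops and tells you which commit could not be applied; inspect it
git status

# edit the conflicted file(s) to resolve the conflict markers, then mark them resolved
git add greeting.txt

# continue replaying the remaining commits
git rebase --continue

# ...or, if this commit should simply be dropped, skip it instead
# git rebase --skip

# ...or abandon the rebase entirely and restore the branch to its original state
# git rebase --abort
```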
|
||||
|
||||
### Help! I broke it!
|
||||
|
||||
No doubt about it - rebasing can be hard sometimes. If you've made a mistake and in so doing lost commits which you needed, then `git reflog` is here to save the day. Running this command will show you every operation which changed a ref, or reference - that is, branches and tags. Each line shows you what the old reference pointed to, and you can `git cherry-pick`, `git checkout`, `git show`, or use any other operation on git commits once thought lost.
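For example (a sketch - the hashes and reflog entries shown here are illustrative), recovering a commit lost in a botched rebase might look like this:

```
# list recent operations that moved HEAD or other refs
git reflog
# a479e94 HEAD@{0}: rebase -i (finish): returning to refs/heads/master
# 0b9d0bb HEAD@{1}: commit: fixup greeting.txt

# inspect the commit you thought was lost
git show HEAD@{1}

# recover it, for example by cherry-picking it onto the current branch...
git cherry-pick HEAD@{1}

# ...or by pointing a new branch at it
git branch rescued HEAD@{1}
```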
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://git-rebase.io/
|
||||
|
||||
作者:[git-rebase][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://git-rebase.io/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,81 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Blockchain 2.0 – Introduction To Hyperledger Fabric [Part 10])
|
||||
[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
Blockchain 2.0 – Introduction To Hyperledger Fabric [Part 10]
|
||||
======
|
||||
|
||||
![Hyperledger Fabric][1]
|
||||
|
||||
### Hyperledger Fabric
|
||||
|
||||
The [**Hyperledger project**][2] is an umbrella organization of sorts, featuring many different modules and systems under development. Among the most popular of these individual sub-projects is the **Hyperledger Fabric**. This post will explore the features that would make Fabric almost indispensable in the near future once blockchain systems start proliferating into mainstream use. Towards the end, we will also take a quick look at what developers and enthusiasts need to know regarding the technicalities of the Hyperledger Fabric.
|
||||
|
||||
### Inception
|
||||
|
||||
In the usual fashion for the Hyperledger project, Fabric was “donated” to the organization by one of its core members, **IBM**, which was previously its principal developer. The technology platform shared by IBM was put to joint development at the Hyperledger project, with contributions from over 100 member companies and institutions.
|
||||
|
||||
Currently at **v1.4** of its LTS release, Fabric has come a long way and is now seen as the go-to enterprise solution for managing business data. The core vision that surrounds the Hyperledger project inevitably permeates into Fabric as well. The Hyperledger Fabric system carries forward all the enterprise-ready and scalable features that are hard-coded into all projects under the Hyperledger organization.
|
||||
|
||||
### Highlights Of Hyperledger Fabric
|
||||
|
||||
Hyperledger Fabric offers a wide variety of features and standards that are built around the mission of supporting fast development and modular architectures. Furthermore, compared to its competitors (primarily **Ripple** and [**Ethereum**][3]), Fabric takes an explicit stance toward closed and [**permissioned blockchains**][4]. Its core objective here is to develop a set of tools which will aid blockchain developers in creating customized solutions, not to create a standalone ecosystem or a product.
|
||||
|
||||
Some of the highlights of the Hyperledger Fabric are given below:
|
||||
|
||||
* **Permissioned blockchain systems**
|
||||
|
||||
|
||||
|
||||
This is a category where other platforms such as Ethereum and Ripple differ quite a lot with Hyperledger Fabric. The Fabric by default is a tool designed to implement a private permissioned blockchain. Such blockchains cannot be accessed by everyone and the nodes working to offer consensus or to verify transactions are chosen by a central authority. This might be important for some applications such as banking and insurance, where transactions have to be verified by the central authority rather than participants.
|
||||
|
||||
* **Confidential and controlled information flow**
|
||||
|
||||
|
||||
|
||||
The Fabric has built-in permission systems that will restrict information flow within a specific group or to certain individuals, as the case may be. Unlike a public blockchain, where anyone and everyone who runs a node will have a copy of and selective access to data stored in the blockchain, the admin of the system can choose how and with whom to share access to the information. There are also subsystems which will encrypt the stored data to better security standards compared to the existing competition.
|
||||
|
||||
* **Plug and play architecture**
|
||||
|
||||
|
||||
|
||||
Hyperledger Fabric has a plug-and-play type architecture. Individual components of the system may be chosen for implementation, and components of the system that developers don’t see a use for may be discarded. The Fabric takes a highly modular and customizable route to development rather than the one-size-fits-all approach taken by its competitors. This is especially attractive for firms and companies looking to build a lean system fast. This, combined with the interoperability of the Fabric with other Hyperledger components, implies that developers and designers now have access to a diverse set of standardized tools instead of having to pull code from different sources and integrate them afterwards. It also presents a rather fail-safe way to build robust modular systems.
|
||||
|
||||
* **Smart contracts and chaincode**
|
||||
|
||||
|
||||
|
||||
A distributed application running on a blockchain is called a [**smart contract**][5]. While the smart contract term is more or less associated with the Ethereum platform, chaincode is the name given to the same in the Hyperledger camp. Apart from possessing all the benefits of **DApps**, what sets Hyperledger chaincode apart is the fact that it may be written in multiple high-level programming languages. It supports [**Go**][6] and **JavaScript** out of the box, and many others after integration with appropriate compiler modules. Though this might not mean much at this point, it means that if existing talent can be used for ongoing blockchain projects, it has the potential to save companies billions of dollars in personnel training and management in the long run. Developers can code in languages they’re comfortable with to start building applications on the Hyperledger Fabric, and need not learn nor train in platform-specific languages and syntax. This presents a flexibility which current competitors of the Hyperledger Fabric do not offer.
|
||||
|
||||
* The Hyperledger Fabric is a back-end driver platform and is mainly aimed at integration projects where a blockchain or another distributed ledger technology is required. As such, it does not provide any user-facing services except for minor scripting capabilities. (Think of it as being more like a scripting language.)
|
||||
* Hyperledger Fabric supports building sidechains for specific use-cases. In case the developer wishes to isolate a set of users or participants to a specific part or functionality of the application, they may do so by implementing side-chains. Side-chains are blockchains that are derived from a main parent, but form a different chain after their initial block. This block which gives rise to the new chain will stay immune to further changes in the new chain, and the new chain remains immutable even if new information is added to the original chain. This functionality will aid in scaling the platform being developed and usher in user-specific and case-specific processing capabilities.
|
||||
* The previous feature also means that not all users will have an “exact” copy of all the data in the blockchain as is expected usually from public chains. Participating nodes will have a copy of data that is only relevant to them. For instance, consider an application similar to PayTM in India. The app has wallet functionality as well as an e-commerce end. However, not all its wallet users use PayTM to shop online. In this scenario, only active shoppers will have the corresponding chain of transactions on the PayTM e-commerce site, whereas the wallet users will just have a copy of the chain that stores wallet transactions. This flexible architecture for data storage and retrieval is important while scaling, since massive singular blockchains have been shown to increase lead times for processing transactions. The chain can be kept lean and well categorised this way.
|
||||
|
||||
|
||||
|
||||
We will look at other modules under the Hyperledger Project in detail in upcoming posts.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
|
||||
[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
|
||||
[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
|
||||
[5]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
|
||||
[6]: https://www.ostechnix.com/install-go-language-linux/
|
@ -0,0 +1,141 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Check Whether The Given Package Is Installed Or Not On Debian/Ubuntu System?
|
||||
======
|
||||
|
||||
We have recently published an article about bulk package installation.
|
||||
|
||||
While doing that, I struggled to get the installed package information, so I did a quick Google search and found a few methods for it.
|
||||
|
||||
I would like to share them on our website so that they will be helpful for others too.
|
||||
|
||||
There are numerous ways we can achieve this.
|
||||
|
||||
I have added seven ways to achieve this; you can choose whichever method you prefer.
|
||||
|
||||
Those methods are listed below.
|
||||
|
||||
* **`apt-cache Command:`** apt-cache command is used to query the APT cache or package metadata.
|
||||
* **`apt Command:`** APT is a powerful command-line tool for installing, downloading, removing, searching and managing packages on Debian based systems.
|
||||
* **`dpkg-query Command:`** dpkg-query is a tool to query the dpkg database.
|
||||
* **`dpkg Command:`** dpkg is a package manager for Debian based systems.
|
||||
* **`which Command:`** The which command returns the full path of the executable that would have been executed when the command had been entered in terminal.
|
||||
* **`whereis Command:`** The whereis command used to search the binary, source, and man page files for a given command.
|
||||
* **`locate Command:`** locate command works faster than the find command because it uses updatedb database, whereas the find command searches in the real system.
|
||||
|
||||
|
||||
|
||||
### Method-1 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt-cache Command?
|
||||
|
||||
apt-cache command is used to query the APT cache or package metadata from APT’s internal database.
|
||||
|
||||
It will search for and display information about the given package: whether the package is installed or not, the installed package version, and the source repository information.
|
||||
|
||||
The output below clearly shows that the `nano` package is already installed on the system, since the “Installed” field shows the installed version of the nano package.
|
||||
|
||||
```
|
||||
# apt-cache policy nano
|
||||
nano:
|
||||
Installed: 2.9.3-2
|
||||
Candidate: 2.9.3-2
|
||||
Version table:
|
||||
*** 2.9.3-2 500
|
||||
500 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
|
||||
100 /var/lib/dpkg/status
|
||||
```
|
||||
|
||||
### Method-2 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using apt Command?
|
||||
|
||||
APT is a powerful command-line tool for installing, downloading, removing, searching, and managing packages, as well as querying information about them, providing low-level access to all features of the libapt-pkg library. It also contains some less-used command-line utilities related to package management.
|
||||
|
||||
```
|
||||
# apt -qq list nano
|
||||
nano/bionic,now 2.9.3-2 amd64 [installed]
|
||||
```
|
||||
|
||||
### Method-3 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg-query Command?
|
||||
|
||||
dpkg-query is a tool to show information about packages listed in the dpkg database.
|
||||
|
||||
In the output below, the first column shows `ii`, which means the given package is already installed on the system.
|
||||
|
||||
```
|
||||
# dpkg-query --list | grep -i nano
|
||||
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
|
||||
```
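If you need to perform this check from a shell script rather than interactively, one common approach (shown here as a sketch; the package name `nano` is just an example) is to query the package status with dpkg-query and test for the “install ok installed” state:

```
#!/bin/sh
pkg="nano"

# dpkg-query prints the Status field, e.g. "install ok installed" for installed packages
if dpkg-query -W -f='${Status}\n' "$pkg" 2>/dev/null | grep -q "install ok installed"; then
    echo "$pkg is installed"
else
    echo "$pkg is NOT installed"
fi
```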
|
||||
|
||||
### Method-4 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using dpkg Command?
|
||||
|
||||
DPKG stands for Debian Package is a tool to install, build, remove and manage Debian packages, but unlike other package management systems, it cannot automatically download and install packages or their dependencies.
|
||||
|
||||
In the output below, the first column shows `ii`, which means the given package is already installed on the system.
|
||||
|
||||
```
|
||||
# dpkg -l | grep -i nano
|
||||
ii nano 2.9.3-2 amd64 small, friendly text editor inspired by Pico
|
||||
```
|
||||
|
||||
### Method-5 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using which Command?
|
||||
|
||||
The which command returns the full path of the executable that would have been executed when the command had been entered in terminal.
|
||||
|
||||
It’s very useful when you want to create a desktop shortcut or symbolic link for executable files.
|
||||
|
||||
The which command searches only the directories listed in the current user’s PATH environment variable, not those of all users. That is, when you are logged in to your own account, you cannot find files that are only on the root user’s PATH.
|
||||
|
||||
If the following output shows the location of the given package’s binary or executable file, then the package is already installed on the system. If not, the package is not installed.
|
||||
|
||||
```
|
||||
# which nano
|
||||
/bin/nano
|
||||
```
|
||||
|
||||
### Method-6 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using whereis Command?
|
||||
|
||||
The whereis command is used to search for the binary, source, and man page files for a given command.
|
||||
|
||||
If the following output shows the location of the given package’s binary or executable file, then the package is already installed on the system. If not, the package is not installed.
|
||||
|
||||
```
|
||||
# whereis nano
|
||||
nano: /bin/nano /usr/share/nano /usr/share/man/man1/nano.1.gz /usr/share/info/nano.info.gz
|
||||
```
|
||||
|
||||
### Method-7 : How To Check Whether The Given Package Is Installed Or Not On Ubuntu System Using locate Command?
|
||||
|
||||
The locate command works faster than the find command because it uses the updatedb database, whereas the find command searches the real filesystem.
|
||||
|
||||
It uses a database rather than hunting individual directory paths to get a given file.
|
||||
|
||||
The locate command isn’t pre-installed in most distributions, so use your distribution’s package manager to install it.
|
||||
|
||||
The database is updated regularly through cron, but we can also update it manually.
|
||||
|
||||
If the following output shows the location of the given package’s binary or executable file, then the package is already installed on the system. If not, the package is not installed.
|
||||
|
||||
```
|
||||
# locate --basename '\nano'
|
||||
/usr/bin/nano
|
||||
/usr/share/nano
|
||||
/usr/share/doc/nano
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-check-whether-the-given-package-is-installed-or-not-on-ubuntu-debian-system/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
243
sources/tech/20190513 How To Set Password Complexity On Linux.md
Normal file
243
sources/tech/20190513 How To Set Password Complexity On Linux.md
Normal file
@ -0,0 +1,243 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Set Password Complexity On Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-set-password-complexity-policy-on-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Set Password Complexity On Linux?
|
||||
======
|
||||
|
||||
User management is one of the important tasks of Linux system administration.
|
||||
|
||||
There are many aspects involved in this, and implementing a strong password policy is one of them.
|
||||
|
||||
Navigate to the following URL, if you would like to **[generate a strong password on Linux][1]**.
|
||||
|
||||
It will restrict unauthorized access to systems.
|
||||
|
||||
Everybody knows that Linux is secure by default; however, we need to make the necessary tweaks to make it even more secure.
|
||||
|
||||
Insecure passwords can lead to security breaches, so take additional care with this.
|
||||
|
||||
Navigate to the following URL, if you would like to see the **[password strength and score][2]** of the generated strong password.
|
||||
|
||||
In this article, we will teach you how to implement the best password security policy on Linux.
|
||||
|
||||
On most Linux systems, we can use PAM (the “pluggable authentication module”) to enforce password policy.
|
||||
|
||||
The file can be found in the following location.
|
||||
|
||||
For Red Hat-based systems it is `/etc/pam.d/system-auth`, and for Debian-based systems it is `/etc/pam.d/common-password`.
|
||||
|
||||
The default password aging details can be found in the `/etc/login.defs` file.
|
||||
|
||||
I have trimmed this file for better understanding.
|
||||
|
||||
```
|
||||
# vi /etc/login.defs
|
||||
|
||||
PASS_MAX_DAYS 99999
|
||||
PASS_MIN_DAYS 0
|
||||
PASS_MIN_LEN 5
|
||||
PASS_WARN_AGE 7
|
||||
```
|
||||
|
||||
**Details:**
|
||||
|
||||
* **`PASS_MAX_DAYS:`**` ` Maximum number of days a password may be used.
|
||||
* **`PASS_MIN_DAYS:`**` ` Minimum number of days allowed between password changes.
|
||||
* **`PASS_MIN_LEN:`**` ` Minimum acceptable password length.
|
||||
* **`PASS_WARN_AGE:`**` ` Number of days warning given before a password expires.
|
||||
|
||||
|
||||
|
||||
We will show you how to implement the following eleven password policies in Linux.
|
||||
|
||||
* Password Max days
|
||||
* Password Min days
|
||||
* Password warning days
|
||||
* Password history or Deny Re-Used Passwords
|
||||
* Password minimum length
|
||||
* Minimum upper case characters
|
||||
* Minimum lower case characters
|
||||
* Minimum digits in password
|
||||
* Minimum other characters (Symbols)
|
||||
* Account lock – retries
|
||||
* Account unlock time
|
||||
|
||||
|
||||
|
||||
### What Is Password Max days?
|
||||
|
||||
This parameter limits the maximum number of days a password can be used. It’s mandatory for user to change his/her account password before expiry.
|
||||
|
||||
If they forget to change it, they will not be allowed to log in to the system, and they will need to work with the admin team to regain access.
|
||||
|
||||
It can be set in `/etc/login.defs` file. I’m going to set `90 days`.
|
||||
|
||||
```
|
||||
# vi /etc/login.defs
|
||||
|
||||
PASS_MAX_DAYS 90
|
||||
```
|
||||
|
||||
### What Is Password Min days?
|
||||
|
||||
This parameter sets the minimum number of days that must pass before a password can be changed again.
|
||||
|
||||
For example, if this parameter is set to 15 and a user changes their password today, they won’t be able to change it again until 15 days from now.
|
||||
|
||||
It can be set in `/etc/login.defs` file. I’m going to set `15 days`.
|
||||
|
||||
```
|
||||
# vi /etc/login.defs
|
||||
|
||||
PASS_MIN_DAYS 15
|
||||
```
|
||||
|
||||
### What Is Password Warning Days?
|
||||
|
||||
This parameter controls the password warning days: it will warn the user when the password is going to expire.
|
||||
|
||||
A warning will be given to the user regularly until the warning period ends. This helps the user change their password before it expires; otherwise they will need to work with the admin team to unlock the account.
|
||||
|
||||
It can be set in `/etc/login.defs` file. I’m going to set `10 days`.
|
||||
|
||||
```
|
||||
# vi /etc/login.defs
|
||||
|
||||
PASS_WARN_AGE 10
|
||||
```
|
||||
|
||||
**Note:** All the above parameters are only applicable to new accounts, not to existing accounts.
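For existing accounts, the same aging values can be applied per user with the `chage` command (a sketch; `magesh` is a placeholder username, and the values match the ones set above):

```
# chage -l magesh

# chage -M 90 -m 15 -W 10 magesh
```

Here `chage -l` lists the current aging settings for the user, while `-M`, `-m` and `-W` set the maximum days, minimum days and warning days respectively.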
|
||||
|
||||
### What Is Password History Or Deny Re-Used Passwords?
|
||||
|
||||
This parameter controls the password history: it keeps a history of the passwords used (the number of previous passwords which cannot be reused).
|
||||
|
||||
When users try to set a new password, it will check the password history and warn them if they try to reuse an old password.
|
||||
|
||||
It can be set in the `/etc/pam.d/system-auth` file. I’m going to keep a history of `5` passwords.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok remember=5
|
||||
```
|
||||
|
||||
### What Is Password Minimum Length?
|
||||
|
||||
This parameter sets the minimum password length. When users set a new password, it is checked against this parameter, and the user is warned if they try to set a password shorter than that.
|
||||
|
||||
It can be set in `/etc/pam.d/system-auth` file. I’m going to set `12` character for minimum password length.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12
|
||||
```
|
||||
|
||||
**try_first_pass retry=3** : Allow users to set a good password before the passwd command aborts.
|
||||
|
||||
### Set Minimum Upper Case Characters?
|
||||
|
||||
This parameter sets how many upper case characters should be included in the password. This is a password-strengthening parameter, which increases the password strength.
|
||||
|
||||
When the users set a new password, it will check against this parameter and warn the user if they are not including any upper case characters in the password.
|
||||
|
||||
It can be set in the `/etc/pam.d/system-auth` file. I’m going to require at least `1` upper case character.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 ucredit=-1
|
||||
```
|
||||
|
||||
### Set Minimum Lower Case Characters?
|
||||
|
||||
This parameter sets how many lower case characters should be included in the password. This is a password-strengthening parameter, which increases the password strength.
|
||||
|
||||
When the users set a new password, it will check against this parameter and warn the user if they are not including any lower case characters in the password.
|
||||
|
||||
It can be set in the `/etc/pam.d/system-auth` file. I’m going to require at least `1` lower case character.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 lcredit=-1
|
||||
```
|
||||
|
||||
### Set Minimum Digits In Password?
|
||||
|
||||
This parameter sets how many digits should be included in the password. This is a password-strengthening parameter, which increases the password strength.
|
||||
|
||||
When the users set a new password, it will check against this parameter and warn the user if they are not including any digits in the password.
|
||||
|
||||
It can be set in the `/etc/pam.d/system-auth` file. I’m going to require at least `1` digit.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 dcredit=-1
|
||||
```
|
||||
|
||||
### Set Minimum Other Characters (Symbols) In Password?
|
||||
|
||||
This parameter sets how many symbols should be included in the password. This is a password-strengthening parameter, which increases the password strength.
|
||||
|
||||
When the users set a new password, it will check against this parameter and warn the user if they are not including any Symbol in the password.
|
||||
|
||||
It can be set in the `/etc/pam.d/system-auth` file. I’m going to require at least `1` symbol.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 ocredit=-1
|
||||
```
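Putting the previous settings together, a single `pam_cracklib` line that enforces all of the character-class requirements shown above would look something like this (a sketch assembled from the fragments above; adjust the values to match your own policy):

```
# vi /etc/pam.d/system-auth

password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
```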
|
||||
|
||||
### Set Account Lock?
|
||||
|
||||
This parameter controls failed login attempts: it locks the user account after the given number of failed attempts is reached.
|
||||
|
||||
It can be set in `/etc/pam.d/system-auth` file.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
auth required pam_tally2.so onerr=fail audit silent deny=5
|
||||
account required pam_tally2.so
|
||||
```
|
||||
|
||||
### Set Account Unlock Time?
|
||||
|
||||
This parameter sets the unlock time for a user account that has been locked after consecutive failed authentications.
|
||||
|
||||
It unlocks the locked user account after the given time has passed. Here it sets the time (900 seconds = 15 minutes) for which the account should remain locked.
|
||||
|
||||
It can be set in `/etc/pam.d/system-auth` file.
|
||||
|
||||
```
|
||||
# vi /etc/pam.d/system-auth
|
||||
|
||||
auth required pam_tally2.so onerr=fail audit silent deny=5 unlock_time=900
|
||||
account required pam_tally2.so
|
||||
```
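When an account does get locked this way, the failure counter can be inspected and cleared manually with the `pam_tally2` utility (a sketch; `magesh` is a placeholder username):

```
# pam_tally2 --user=magesh

# pam_tally2 --user=magesh --reset
```

The first command shows the number of recorded failures for the user; the second resets the counter and unlocks the account immediately, without waiting for the `unlock_time` to elapse.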
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-set-password-complexity-policy-on-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/5-ways-to-generate-a-random-strong-password-in-linux-terminal/
|
||||
[2]: https://www.2daygeek.com/how-to-check-password-complexity-strength-and-score-in-linux/
|
@ -0,0 +1,211 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Virtual filesystems in Linux: Why we need them and how they work)
|
||||
[#]: via: (https://opensource.com/article/19/3/virtual-filesystems-linux)
|
||||
[#]: author: (Alison Chariken )
|
||||
|
||||
Linux 中的虚拟文件系统
|
||||
======
|
||||
|
||||
> 虚拟文件系统是一种神奇的抽象,它使得 “一切皆文件” 哲学在 Linux 中成为了可能。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ)
|
||||
|
||||
什么是文件系统?根据早期的 Linux 贡献者和作家 [Robert Love][1] 所说,“文件系统是一个遵循特定结构的数据的分层存储。” 不过,这种描述也同样适用于 VFAT(虚拟文件分配表)、Git 和[Cassandra][2](一种 [NoSQL 数据库][3])。那么如何区别文件系统呢?
|
||||
|
||||
### 文件系统基础概念
|
||||
|
||||
Linux 内核要求文件系统必须是实体,它还必须在持久对象上实现 `open()`、`read()` 和 `write()` 方法,并且这些实体需要有与之关联的名字。从 [面向对象编程][4] 的角度来看,内核将通用文件系统视为一个抽象接口,这三大函数是“虚拟”的,没有默认定义。因此,内核的默认文件系统实现被称为虚拟文件系统(VFS)。
|
||||
|
||||
![][5]
|
||||
|
||||
如果我们能够 `open()`、`read()` 和 `write()`,它就是一个文件,如这个主控台会话所示。
|
||||
|
||||
VFS 是著名的类 Unix 系统中 “一切皆文件” 的基础。让我们看一下它有多奇怪,上面的小演示体现了字符设备 `/dev/console` 实际的工作。该图显示了一个在虚拟电传打字(tty)上的交互式 Bash 会话。将一个字符串发送到虚拟控制台设备会使其显示在虚拟屏幕上。而 VFS 甚至还有其它更奇怪的属性。例如,它[可以在其中寻址][6]。
|
||||
|
||||
熟悉的文件系统如 ext4、NFS 和 /proc 在名为 [file_operations][7] 的 C 语言数据结构中都提供了三大函数的定义。此外,特定的文件系统会以熟悉的面向对象的方式扩展和覆盖了 VFS 功能。正如 Robert Love 指出的那样,VFS 的抽象使 Linux 用户可以轻松地将文件复制到(复制自)外部操作系统或抽象实体(如管道),而无需担心其内部数据格式。在用户空间,通过系统调用,进程可以使用一个文件系统的 `read()` 方法从文件复制到内核的数据结构中,然后使用另一种文件系统的 `write()` 方法输出数据。
|
||||
|
||||
属于 VFS 基本类型的函数定义本身可以在内核源代码的 [fs/*.c 文件][8] 中找到,而 `fs/` 的子目录中包含了特定的文件系统。内核还包含了类似文件系统的实体,例如 cgroup、`/dev` 和 tmpfs,它们在引导过程的早期需要,因此定义在内核的 `init/` 子目录中。请注意,cgroup、`/dev` 和 tmpfs 不会调用 `file_operations` 的三大函数,而是直接读取和写入内存。
|
||||
|
||||
下图大致说明了用户空间如何访问通常挂载在 Linux 系统上的各种类型的文件系统。未显示的是像管道、dmesg 和 POSIX 时钟这样的结构,它们也实现了 `struct file_operations`,并且因此其访问要通过 VFS 层。
|
||||
|
||||
![How userspace accesses various types of filesystems][9]
|
||||
|
||||
VFS 是系统调用和特定 `file_operations` 的实现(如 ext4 和 procfs)之间的“垫片层”。然后,`file_operations` 函数可以与特定于设备的驱动程序或内存访问器进行通信。tmpfs、devtmpfs 和 cgroup 不使用 `file_operations` 而是直接访问内存。
|
||||
|
||||
VFS 的存在促进了代码重用,因为与文件系统相关的基本方法不需要由每种文件系统类型重新实现。代码重用是一种被广泛接受的软件工程最佳实践!唉,如果重用的代码[引入了严重的错误][10],那么继承常用方法的所有实现都会受到影响。
|
||||
|
||||
### /tmp:一个小提示
|
||||
|
||||
找出系统中存在的 VFS 的简单方法是键入 `mount | grep -v sd | grep -v :/`,在大多数计算机上,它将列出所有未驻留在磁盘上也不是 NFS 的已挂载文件系统。其中一个列出的 VFS 挂载肯定是 `/ tmp`,对吧?
|
||||
|
||||
![Man with shocked expression][11]
|
||||
|
||||
*每个人都知道把 /tmp 放在物理存储设备上简直是疯了!图片:<https://tinyurl.com/ybomxyfo>*
|
||||
|
||||
为什么把 `/tmp` 留在存储设备上是不可取的?因为 `/tmp` 中的文件是临时的(!),并且存储设备比内存慢,所以创建了 tmpfs 这种文件系统。此外,比起内存,物理设备频繁写入更容易磨损。最后,`/tmp` 中的文件可能包含敏感信息,因此在每次重新启动时让它们消失是一项功能。
|
||||
|
||||
不幸的是,默认情况下,某些 Linux 发行版的安装脚本仍会在存储设备上创建 /tmp。如果你的系统出现这种情况,请不要绝望。按照一直优秀的 [Arch Wiki][12] 上的简单说明来解决问题就行,记住分配给 tmpfs 的内存不能用于其他目的。换句话说,带有巨大 tmpfs 并且其中包含大文件的系统可能会耗尽内存并崩溃。另一个提示:编辑 `/etc/fstab` 文件时,请务必以换行符结束,否则系统将无法启动。(猜猜我怎么知道。)
|
||||
|
||||
### /proc 和 /sys
|
||||
|
||||
除了 `/tmp` 之外,大多数 Linux 用户最熟悉的 VFS 是 `/proc` 和 `/sys`。(`/dev` 依赖于共享内存,没有 `file_operations`)。为什么有两种?让我们来看看更多细节。
|
||||
|
||||
procfs 提供了内核的瞬时状态及其为用户空间控制的进程的快照。在 `/proc` 中,内核发布有关其提供的工具的信息,如中断、虚拟内存和调度程序。此外,`/proc/sys` 是存放可以通过 [sysctl 命令][13]配置的设置的地方,可供用户空间访问。单个进程的状态和统计信息在 `/proc/<PID>` 目录中报告。
|
||||
|
||||
![Console][14]
|
||||
|
||||
*/proc/meminfo 是一个空文件,但仍包含有价值的信息。*
|
||||
|
||||
`/proc` 文件的行为说明了 VFS 可以与磁盘上的文件系统不同。一方面,`/proc/meminfo` 包含命令 `free` 提供的信息。另一方面,它还是空的!怎么会这样?这种情况让人联想起康奈尔大学物理学家 N. David Mermin 在 1985 年写的一篇名为“[没有人看见月亮的情况吗?][15]现实和量子理论。”事实是当进程从 `/proc` 请求内存时内核再收集有关内存的统计信息,并且当没有人在查看时,`/proc` 中的文件实际上没有任何内容。正如 [Mermin 所说][16],“这是一个基本的量子学说,一般来说,测量不会揭示被测属性的预先存在的价值。”(关于月球的问题的答案留作练习。)
|
||||
|
||||
![Full moon][17]
|
||||
|
||||
*当没有进程访问它们时,/proc 中的文件为空。([来源][18])*
|
||||
|
||||
procfs 的空文件是有道理的,因为那里可用的信息是动态的。sysfs 的情况不同。让我们比较一下 `/proc` 与 `/sys` 中不为空的文件数量。
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/virtualfilesystems_6-filesize.png)
|
||||
|
||||
procfs 只有一个,即导出的内核配置,这是一个例外,因为每次启动只需要生成一次。另一方面,`/sys` 有许多较大的文件,其中大多数包含一页内存。通常,sysfs 文件只包含一个数字或字符串,与通过读取 `/proc/meminfo` 等文件生成的信息表格形成鲜明对比。
|
||||
|
||||
sysfs 的目的是将内核称为“kobjects”的可读写属性公开给用户空间。kobjects 的唯一目的是引用计数:当删除对 kobject 的最后一个引用时,系统将回收与之关联的资源。然而,`/sys` 构成了内核著名的“[到用户空间的稳定 ABI][19]”,它的大部分内容[在任何情况下都没有人会“破坏”][20]。这并不意味着 sysfs 中的文件是静态,这与易失性对象的引用计数相反。
|
||||
|
||||
内核的稳定 ABI 反而限制了 `/sys` 中可能出现的内容,而不是任何给定时刻实际存在的内容。列出 sysfs 中文件的权限可以了解如何设置或读取设备、模块、文件系统等的可配置、可调参数。Logic 强调 procfs 也是内核稳定 ABI 的一部分的结论,尽管内核的[文档][19]没有明确说明。
|
||||
|
||||
![Console][21]
|
||||
|
||||
*sysfs 中的文件恰好描述了实体的每个属性,并且可以是可读的、可写的或两者兼而有之。文件中的“0”表示 SSD 不可移动的存储设备。*
|
||||
|
||||
### 用 eBPF 和 bcc 工具一窥 VFS 内部
|
||||
|
||||
了解内核如何管理 sysfs 文件的最简单方法是观察它的运行情况,在 ARM64 或 x86_64 上观看的最简单方法是使用 eBPF。eBPF(<ruby>扩展的伯克利数据包过滤器<rt>extended Berkeley Packet Filter</rt></ruby>)由[在内核中运行的虚拟机][22]组成,特权用户可以从命令行进行查询。内核源代码告诉读者内核可以做什么;在一个启动的系统上运行 eBPF 工具会显示内核实际上做了什么。
|
||||
|
||||
令人高兴的是,通过 [bcc][23] 工具入门使用 eBPF 非常容易,这些工具在[主要 Linux 发行版的软件包][24] 中都有,并且已经由 Brendan Gregg [充分地给出了文档说明][25]。bcc 工具是带有小段嵌入式 C 语言片段的 Python 脚本,这意味着任何对这两种语言熟悉的人都可以轻松修改它们。当前统计,[bcc/tools 中有 80 个 Python 脚本][26],使系统管理员或开发人员很有可能能够找到与她/他的需求相关的现有脚本。
|
||||
|
||||
要了解 VFS 在正在运行的系统上的工作情况,请尝试使用简单的 [vfscount][27] 或 [vfsstat][28],这表明每秒都会发生数十次对 `vfs_open()` 及其相关的调用。
|
||||
|
||||
|
||||
![Console - vfsstat.py][29]
|
||||
|
||||
*vfsstat.py 是一个带有嵌入式 C 片段的 Python 脚本,它只是计数 VFS 函数调用。*
|
||||
|
||||
作为一个不太重要的例子,让我们看一下在运行的系统上插入 USB 记忆棒时 sysfs 中会发生什么。
|
||||
|
||||
|
||||
![Console when USB is inserted][30]
|
||||
|
||||
*用 eBPF 观察插入 USB 记忆棒时 /sys 中会发生什么,简单的和复杂的例子。*
|
||||
|
||||
在上面的第一个简单示例中,只要 `sysfs_create_files()` 命令运行,[trace.py][31] bcc 工具脚本就会打印出一条消息。我们看到 `sysfs_create_files()` 由一个 kworker 线程启动,以响应 USB 棒插入事件,但是它创建了什么文件?第二个例子说明了 eBPF 的强大能力。这里,`trace.py` 正在打印内核回溯(`-K` 选项)以及 `sysfs_create_files()` 创建的文件的名称。单引号内的代码段是一些 C 源代码,包括一个易于识别的格式字符串,提供的 Python 脚本[引入 LLVM 即时编译器(JIT)][32] 在内核虚拟机内编译和执行它。必须在第二个命令中重现完整的 `sysfs_create_files()` 函数签名,以便格式字符串可以引用其中一个参数。在此 C 片段中出错会导致可识别的 C 编译器错误。例如,如果省略 `-I` 参数,则结果为“无法编译 BPF 文本”。熟悉 C 或 Python 的开发人员会发现 bcc 工具易于扩展和修改。
|
||||
|
||||
插入 USB 记忆棒后,内核回溯显示 PID 7711 是一个 kworker 线程,它在 sysfs 中创建了一个名为 `events` 的文件。使用 `sysfs_remove_files()` 进行相应的调用表明,删除 USB 记忆棒会导致删除该 `events` 文件,这与引用计数的想法保持一致。在 USB 棒插入期间(未显示)在 eBPF 中观察 `sysfs_create_link()` 表明创建了不少于 48 个符号链接。
|
||||
|
||||
无论如何,`events` 文件的目的是什么?使用 [cscope][33] 查找函数 [`__device_add_disk()`][34] 显示它调用 `disk_add_events()`,并且可以将 “media_change” 或 “eject_request” 写入到该文件。这里,内核的块层通知用户空间 “磁盘” 的出现和消失。考虑一下这种调查 USB 棒插入工作原理的方法与试图仅从源头中找出该过程的速度有多快。
|
||||
|
||||
### 只读根文件系统使得嵌入式设备成为可能
|
||||
|
||||
确实,没有人通过拔出电源插头来关闭服务器或桌面系统。为什么?因为物理存储设备上挂载的文件系统可能有挂起的(未完成的)写入,并且记录其状态的数据结构可能与写入存储器的内容不同步。当发生这种情况时,系统所有者将不得不在下次启动时等待 [fsck 文件系统恢复工具][35] 运行完成,在最坏的情况下,实际上会丢失数据。
|
||||
|
||||
然而,狂热爱好者会听说许多物联网和嵌入式设备,如路由器、恒温器和汽车现在都运行 Linux。许多这些设备几乎完全没有用户界面,并且没有办法干净地“解除启动”它们。想一想使用启动电池耗尽的汽车,其中[运行 Linux 的主机设备][36] 的电源会不断加电断电。当引擎最终开始运行时,系统如何在没有长时间 fsck 的情况下启动呢?答案是嵌入式设备依赖于[只读根文件系统][37](简称 ro-rootfs)。
|
||||
|
||||
|
||||
![Photograph of a console][38]
|
||||
|
||||
*ro-rootfs 是嵌入式系统不经常需要 fsck 的原因。 来源:<https://tinyurl.com/yxoauoub>*
|
||||
|
||||
ro-rootfs 提供了许多优点,虽然这些优点不如耐用性那么显然。一个是,如果没有 Linux 进程可以写入,那么恶意软件无法写入 `/usr` 或 `/lib`。另一个是,基本上不可变的文件系统对于远程设备的现场支持至关重要,因为支持人员拥有名义上与现场相同的本地系统。也许最重要(但也是最微妙)的优势是 ro-rootfs 迫使开发人员在项目的设计阶段就决定哪些系统对象是不可变的。处理 ro-rootfs 可能经常是不方便甚至是痛苦的,[编程语言中的常量变量][39]经常就是这样,但带来的好处很容易偿还额外的开销。
|
||||
|
||||
对于嵌入式开发人员,创建只读根文件系统确实需要做一些额外的工作,而这正是 VFS 的用武之地。Linux 需要 `/var` 中的文件可写,此外,嵌入式系统运行的许多流行应用程序将尝试在 `$HOME` 中创建配置点文件。放在家目录中的配置文件的一种解决方案通常是预生成它们并将它们构建到 rootfs 中。对于 `/var`,一种方法是将其挂载在单独的可写分区上,而 `/` 本身以只读方式挂载。使用绑定或叠加挂载是另一种流行的替代方案。
|
||||
|
||||
### 绑定和叠加挂载以及在容器中的使用
|
||||
|
||||
运行 [man mount][40] 是了解<ruby>绑定挂载<rt>bind mount</rt></ruby>和<ruby>叠加挂载<rt>overlay mount</rt></ruby>的最好办法,这使嵌入式开发人员和系统管理员能够在一个路径位置创建文件系统,然后在另外一个路径将其提供给应用程序。对于嵌入式系统,这代表着可以将文件存储在 `/var` 中的不可写闪存设备上,但是在启动时将 tmpfs 中的路径叠加挂载或绑定挂载到 `/var` 路径上,这样应用程序就可以在那里随意写它们的内容了。下次加电时,`/var` 中的变化将会消失。叠加挂载提供了 tmpfs 和底层文件系统之间的联合,允许对 ro-rootfs 中的现有文件进行直接修改,而绑定挂载可以使新的空 tmpfs 目录在 ro-rootfs 路径中显示为可写。虽然叠加文件系统是一种适当的文件系统类型,但绑定挂载由 [VFS 命名空间工具][41] 实现的。
|
||||
|
||||
根据叠加挂载和绑定挂载的描述,没有人会对 [Linux容器][42] 大量使用它们感到惊讶。让我们通过运行 bcc 的 `mountsnoop` 工具监视当使用 [systemd-nspawn][43] 启动容器时会发生什么:
|
||||
|
||||
![Console - system-nspawn invocation][44]
|
||||
|
||||
*在 mountsnoop.py 运行的同时,system-nspawn 调用启动容器。*
|
||||
|
||||
让我们看看发生了什么:
|
||||
|
||||
![Console - Running mountsnoop][45]
|
||||
|
||||
在容器 “启动” 期间运行 `mountsnoop` 可以看到容器运行时很大程度上依赖于绑定挂载。(仅显示冗长输出的开头)
|
||||
|
||||
这里,`systemd-nspawn` 将主机的 procfs 和 sysfs 中的选定文件按其 rootfs 中的路径提供给容器。除了设置绑定挂载时的 `MS_BIND` 标志之外,`mount` 系统调用的一些其他标志用于确定主机命名空间和容器中的更改之间的关系。例如,绑定挂载可以将 `/proc` 和 `/sys` 中的更改传播到容器,也可以隐藏它们,具体取决于调用。
|
||||
|
||||
### 总结
|
||||
|
||||
理解 Linux 内部结构似乎是一项不可能完成的任务,因为除了 Linux 用户空间应用程序和 glibc 这样的 C 库中的系统调用接口,内核本身也包含大量代码。取得进展的一种方法是阅读一个内核子系统的源代码,重点是理解面向用户空间的系统调用和头文件以及主要的内核内部接口,这里以 `file_operations` 表为例的。`file_operations` 使得“一切都是文件”可以实际工作,因此掌握它们收获特别大。顶级 `fs/` 目录中的内核 C 源文件构成了虚拟文件系统的实现,虚拟文件系统是支持流行的文件系统和存储设备的广泛且相对简单的互操作性的垫片层。通过 Linux 命名空间进行绑定挂载和覆盖挂载是 VFS 魔术,它使容器和只读根文件系统成为可能。结合对源代码的研究,eBPF 内核工具及其 bcc 接口使得探测内核比以往任何时候都更简单。
|
||||
|
||||
非常感谢 [Akkana Peck][46] 和 [Michael Eager][47] 的评论和指正。
|
||||
|
||||
Alison Chaiken 也于 3 月 7 日至 10 日在加利福尼亚州帕萨迪纳举行的第 17 届南加州 Linux 博览会([SCaLE 17x][49])上演讲了[本主题][48]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/virtual-filesystems-linux
|
||||
|
||||
作者:[Alison Chariken][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/chaiken
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.pearson.com/us/higher-education/program/Love-Linux-Kernel-Development-3rd-Edition/PGM202532.html
|
||||
[2]: http://cassandra.apache.org/
|
||||
[3]: https://en.wikipedia.org/wiki/NoSQL
|
||||
[4]: http://lwn.net/Articles/444910/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_1-console.png (Console)
|
||||
[6]: https://lwn.net/Articles/22355/
|
||||
[7]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/fs.h
|
||||
[8]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs
|
||||
[9]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_2-shim-layer.png (How userspace accesses various types of filesystems)
|
||||
[10]: https://lwn.net/Articles/774114/
|
||||
[11]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_3-crazy.jpg (Man with shocked expression)
|
||||
[12]: https://wiki.archlinux.org/index.php/Tmpfs
|
||||
[13]: http://man7.org/linux/man-pages/man8/sysctl.8.html
|
||||
[14]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_4-proc-meminfo.png (Console)
|
||||
[15]: http://www-f1.ijs.si/~ramsak/km1/mermin.moon.pdf
|
||||
[16]: https://en.wikiquote.org/wiki/David_Mermin
|
||||
[17]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_5-moon.jpg (Full moon)
|
||||
[18]: https://commons.wikimedia.org/wiki/Moon#/media/File:Full_Moon_Luc_Viatour.jpg
|
||||
[19]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/ABI/stable
|
||||
[20]: https://lkml.org/lkml/2012/12/23/75
|
||||
[21]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_7-sysfs.png (Console)
|
||||
[22]: https://events.linuxfoundation.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
|
||||
[23]: https://github.com/iovisor/bcc
|
||||
[24]: https://github.com/iovisor/bcc/blob/master/INSTALL.md
|
||||
[25]: http://brendangregg.com/ebpf.html
|
||||
[26]: https://github.com/iovisor/bcc/tree/master/tools
|
||||
[27]: https://github.com/iovisor/bcc/blob/master/tools/vfscount_example.txt
|
||||
[28]: https://github.com/iovisor/bcc/blob/master/tools/vfsstat.py
|
||||
[29]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_8-vfsstat.png (Console - vfsstat.py)
|
||||
[30]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_9-ebpf.png (Console when USB is inserted)
|
||||
[31]: https://github.com/iovisor/bcc/blob/master/tools/trace_example.txt
|
||||
[32]: https://events.static.linuxfound.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf
|
||||
[33]: http://northstar-www.dartmouth.edu/doc/solaris-forte/manuals/c/user_guide/cscope.html
|
||||
[34]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/block/genhd.c#n665
|
||||
[35]: http://www.man7.org/linux/man-pages/man8/fsck.8.html
|
||||
[36]: https://wiki.automotivelinux.org/_media/eg-rhsa/agl_referencehardwarespec_v0.1.0_20171018.pdf
|
||||
[37]: https://elinux.org/images/1/1f/Read-only_rootfs.pdf
|
||||
[38]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_10-code.jpg (Photograph of a console)
|
||||
[39]: https://www.meetup.com/ACCU-Bay-Area/events/drpmvfytlbqb/
|
||||
[40]: http://man7.org/linux/man-pages/man8/mount.8.html
|
||||
[41]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/sharedsubtree.txt
|
||||
[42]: https://coreos.com/os/docs/latest/kernel-modules.html
|
||||
[43]: https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html
|
||||
[44]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_11-system-nspawn.png (Console - system-nspawn invocation)
|
||||
[45]: https://opensource.com/sites/default/files/uploads/virtualfilesystems_12-mountsnoop.png (Console - Running mountsnoop)
|
||||
[46]: http://shallowsky.com/
|
||||
[47]: http://eagercon.com/
|
||||
[48]: https://www.socallinuxexpo.org/scale/17x/presentations/virtual-filesystems-why-we-need-them-and-how-they-work
|
||||
[49]: https://www.socallinuxexpo.org/
|
@ -0,0 +1,133 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Automate backups with restic and systemd)
|
||||
[#]: via: (https://fedoramagazine.org/automate-backups-with-restic-and-systemd/)
|
||||
[#]: author: (Link Dupont https://fedoramagazine.org/author/linkdupont/)
|
||||
|
||||
使用 restic 和 systemd 自动备份
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
及时备份很重要。即使在 [Fedora Magazine][3] 中,[备份软件][2] 也是一个常见的讨论话题。本文演示了如何仅使用 systemd 以及 **restic** 来自动备份。
|
||||
|
||||
|
||||
有关 restic 的介绍,请查看我们的文章[在 Fedora 上使用 restic 进行加密备份][4]。然后继续阅读以了解更多详情。
|
||||
|
||||
为了自动创建快照并清理数据,需要定期运行两个 systemd 服务:第一个服务以固定的频率运行_备份_命令,第二个服务负责数据清理。
|
||||
|
||||
如果你还不熟悉 systemd,那么这是个很好的学习机会。查看 [Magazine 上关于 systemd 的系列文章][5],从这篇单元文件的入门介绍开始:
|
||||
|
||||
> [systemd 单元文件基础][6]
|
||||
|
||||
如果你还没有安装 restic,请注意它在官方的 Fedora 仓库中。要安装它,请[带上 sudo][7] 运行此命令:
|
||||
|
||||
```
|
||||
$ sudo dnf install restic
|
||||
```
|
||||
|
||||
### 备份
|
||||
|
||||
首先,创建 _~/.config/systemd/user/restic-backup.service_,并将下面的文本原样复制粘贴到该文件中。
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Restic backup service
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=restic backup --verbose --one-file-system --tag systemd.timer $BACKUP_EXCLUDES $BACKUP_PATHS
|
||||
ExecStartPost=restic forget --verbose --tag systemd.timer --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
|
||||
EnvironmentFile=%h/.config/restic-backup.conf
|
||||
```
|
||||
|
||||
此服务引用一个环境文件来加载密钥(例如 _RESTIC_PASSWORD_)。创建 _~/.config/restic-backup.conf_,并将以下内容原样复制粘贴进去。此示例使用 BackBlaze B2 存储,请相应地调整其中的 ID、密钥、仓库和密码值。
|
||||
|
||||
```
|
||||
BACKUP_PATHS="/home/rupert"
|
||||
BACKUP_EXCLUDES="--exclude-file /home/rupert/.restic_excludes --exclude-if-present .exclude_from_backup"
|
||||
RETENTION_DAYS=7
|
||||
RETENTION_WEEKS=4
|
||||
RETENTION_MONTHS=6
|
||||
RETENTION_YEARS=3
|
||||
B2_ACCOUNT_ID=XXXXXXXXXXXXXXXXXXXXXXXXX
|
||||
B2_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
|
||||
RESTIC_REPOSITORY=b2:XXXXXXXXXXXXXXXXXX:/
|
||||
RESTIC_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
|
||||
```
|
||||
|
||||
现在已安装该服务,请重新加载 systemd:_systemctl --user daemon-reload_。尝试手动运行该服务以创建备份:_systemctl --user start restic-backup_。
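也就是依次执行下面两条命令(与正文一致,仅以代码块形式列出,方便复制):

```
$ systemctl --user daemon-reload
$ systemctl --user start restic-backup
```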
|
||||
|
||||
因为该服务类型是 _oneshot_,它将运行一次并退出。验证服务运行并根据需要创建快照后,设置计时器以定期运行此服务。例如,要每天运行 _restic-backup.service_,请按如下所示创建 _~/.config/systemd/user/restic-backup.timer_。再次复制并粘贴此文本:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Backup with restic daily
|
||||
[Timer]
|
||||
OnCalendar=daily
|
||||
Persistent=true
|
||||
[Install]
|
||||
WantedBy=timers.target
|
||||
```
|
||||
|
||||
运行以下命令启用:
|
||||
|
||||
```
|
||||
$ systemctl --user enable --now restic-backup.timer
|
||||
```
|
||||
|
||||
### 清理
|
||||
|
||||
虽然主服务运行 _forget_ 命令,只保留保留策略允许的快照,但数据实际上并未从 restic 仓库中删除。_prune_ 命令会检查仓库和当前快照,并删除与任何快照都不关联的数据。由于 _prune_ 可能是一个耗时的过程,因此无需在每次运行备份时都执行,这正是第二个服务和计时器的用武之地。首先,通过复制和粘贴此文本来创建文件 _~/.config/systemd/user/restic-prune.service_:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Restic backup service (data pruning)
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=restic prune
|
||||
EnvironmentFile=%h/.config/restic-backup.conf
|
||||
```
|
||||
|
||||
与主 _restic-backup.service_ 服务类似,_restic-prune_ 也是 _oneshot_ 类型的服务,可以手动运行。设置完服务后,创建 _~/.config/systemd/user/restic-prune.timer_ 并启用相应的计时器:
|
||||
|
||||
```
|
||||
[Unit]
|
||||
Description=Prune data from the restic repository monthly
|
||||
[Timer]
|
||||
OnCalendar=monthly
|
||||
Persistent=true
|
||||
[Install]
|
||||
WantedBy=timers.target
|
||||
```
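与前面的备份计时器一样,可以用类似的命令启用这个清理计时器(此命令是按前文的惯例补充的示例,原文未单独列出):

```
$ systemctl --user enable --now restic-prune.timer
```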
|
||||
|
||||
就是这些了!restic 将会每日运行并按月清理数据。
|
||||
|
||||
* * *
|
||||
|
||||
图片由 _[Samuel Zeller][8]_ 拍摄,来自 _[Unsplash][9]_。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/automate-backups-with-restic-and-systemd/
|
||||
|
||||
作者:[Link Dupont][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/linkdupont/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/restic-systemd-816x345.jpg
|
||||
[2]: https://restic.net/
|
||||
[3]: https://fedoramagazine.org/?s=backup
|
||||
[4]: https://fedoramagazine.org/use-restic-encrypted-backups/
|
||||
[5]: https://fedoramagazine.org/series/systemd-series/
|
||||
[6]: https://fedoramagazine.org/systemd-getting-a-grip-on-units/
|
||||
[7]: https://fedoramagazine.org/howto-use-sudo/
|
||||
[8]: https://unsplash.com/photos/JuFcQxgCXwA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
||||
[9]: https://unsplash.com/search/photos/archive?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (warmfrog)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
@ -7,30 +7,30 @@
|
||||
[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
======
|
||||
用 Linux Shell 脚本来监控磁盘使用情况和发送邮件
|
||||
============================================
|
||||
|
||||
There are numerous monitoring tools are available in market to monitor Linux systems and it will send an email when the system reaches the threshold limit.
|
||||
市场上有很多用来监控 Linux 系统的监控工具,当系统到达阈值后,它们会发送一封邮件。
|
||||
|
||||
It monitors everything such as CPU utilization, Memory utilization, swap utilization, disk space utilization and much more.
|
||||
它们会监控所有的东西,例如 CPU 利用率、内存利用率、交换空间利用率、磁盘空间利用率等等。
|
||||
|
||||
However, it’s suitable for small and big environment.
|
||||
不过,它们对小型环境和大型环境都适用。
|
||||
|
||||
Think about if you have only few systems then what will be the best approach on this.
|
||||
想一想,如果你只有少量系统,那么应对这种情况的最好方式是什么呢?
|
||||
|
||||
Yup, we want to write a **[shell script][1]** to achieve this.
|
||||
是的,我们想要写一个 **[shell 脚本][1]** 来实现。
|
||||
|
||||
In this tutorial we are going to write a shell script to monitor disk space usage on system.
|
||||
在这篇指南中我们打算写一个 shell 脚本来监控系统的磁盘空间使用率。
|
||||
|
||||
When the system reaches the given threshold then it will trigger a mail to corresponding email id.
|
||||
当系统到达给定的阈值时,它将发送一封邮件到对应的邮箱地址。
|
||||
|
||||
We have added totally four shell scripts in this article and each has been used for different purpose.
|
||||
在这篇文章中我们总共添加了四个 shell 脚本,每个用于不同的目的。
|
||||
|
||||
Later, we will come up with other shell scripts to monitor CPU, Memory and Swap utilization.
|
||||
之后,我们会想出其他 shell 脚本来监控 CPU,内存和交换空间利用率。
|
||||
|
||||
Before step into that, i would like to clarify one thing which i noticed regarding the disk space usage shell script.
|
||||
在此之前,我想先澄清一件我在磁盘空间使用率 shell 脚本方面注意到的事情。
|
||||
|
||||
Most of the users were commented in multiple blogs saying they were getting the following error message when they are running the disk space usage script.
|
||||
很多用户在多篇博客中评论说,他们运行磁盘空间使用率脚本时遇到了以下错误信息。
|
||||
|
||||
```
|
||||
# sh /opt/script/disk-usage-alert-old.sh
|
||||
@ -40,11 +40,11 @@ test-script.sh: line 7: [: /dev/mapper/vg_2g-lv_root: integer expression expecte
|
||||
/ 9.8G
|
||||
```
|
||||
|
||||
Yes that’s right. Even, i had faced the same issue when i ran the script first time. Later, i had found the root causes.
|
||||
是的,确实如此。甚至我第一次运行这个脚本的时候也遇到了相同的问题。后来,我找到了根本原因。
|
||||
|
||||
When you use “df -h” or “df -H” in shell script for disk space alert on RHEL 5 & RHEL 6 based system, you will be end up with the above error message because the output is not in the proper format, see the below output.
|
||||
当你在基于 RHEL 5 和 RHEL 6 的系统上运行使用 “df -h” 或 “df -H” 做磁盘空间警告的 shell 脚本时,就会遇到上述错误信息,因为输出格式不对,见下列输出。
|
||||
|
||||
To overcome this issue, we need to use “df -Ph” (POSIX output format) but by default “df -h” is working fine on RHEL 7 based systems.
|
||||
为了解决这个问题,我们需要使用 “df -Ph”(POSIX 输出格式);不过在基于 RHEL 7 的系统上,默认的 “df -h” 就能正常工作。
|
||||
|
||||
```
|
||||
# df -h
|
||||
@ -60,15 +60,15 @@ tmpfs 7.8G 0 7.8G 0% /dev/shm
|
||||
4.8G 14M 4.6G 1% /tmp
|
||||
```
|
||||
|
||||
### Method-1 : Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
### 方法一:Linux Shell 脚本来监控磁盘空间使用率和发送邮件
|
||||
|
||||
You can use the following shell script to monitor disk space usage on Linux system.
|
||||
你可以使用下列 shell 脚本在 Linux 系统中来监控磁盘空间使用率。
|
||||
|
||||
It will send an email when the system reaches the given threshold limit. In this example, we set threshold limit at 60% for testing purpose and you can change this limit as per your requirements.
|
||||
当系统到达给定的阈值限制时,它将发送一封邮件。在这个例子中,为了测试,我们将阈值设置为 60%,你可以按你的需求更改这个限制。
|
||||
|
||||
It will send multiple mails if more than one file systems get reached the given threshold limit because the script is using loop.
|
||||
由于这个脚本使用了循环,如果有多个文件系统到达给定的阈值,它将发送多封邮件。
|
||||
|
||||
Also, replace your email id instead of us to get this alert.
|
||||
同样,请把其中的邮箱地址替换为你自己的,以便收到警告。
|
||||
|
||||
```
|
||||
# vi /opt/script/disk-usage-alert.sh
|
||||
@ -85,7 +85,7 @@ do
|
||||
done
|
||||
```
|
||||
|
||||
**Output:** I got the following two email alerts.
|
||||
**输出:**我获得了下列两封邮件警告。
|
||||
|
||||
```
|
||||
The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
|
||||
@ -100,9 +100,9 @@ Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
|
||||
*/10 * * * * /bin/bash /opt/script/disk-usage-alert.sh
|
||||
```
|
||||
|
||||
### Method-2 : Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
### 方法二:Linux Shell 脚本来监控磁盘空间使用率和发送邮件
|
||||
|
||||
Alternatively, you can use the following shell script. We have made few changes in this compared with above script.
|
||||
作为替代,你也可以使用下列 shell 脚本。与上面的脚本相比,我们做了少量改动。
|
||||
|
||||
```
|
||||
# vi /opt/script/disk-usage-alert-1.sh
|
||||
@ -120,7 +120,8 @@ do
|
||||
done
|
||||
```
|
||||
|
||||
**Output:** I got the following two email alerts.
|
||||
**输出:**我获得了下列两封邮件警告。
|
||||
|
||||
|
||||
```
|
||||
The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
|
||||
@ -128,24 +129,24 @@ The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr
|
||||
The partition "/dev/mapper/vg_2g-lv_root" on 2g.CentOS7 has used 67% at Mon Apr 29 06:16:14 IST 2019
|
||||
```
|
||||
|
||||
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
|
||||
最后,添加一个 **[cronjob][2]** 来自动执行。它会每 10 分钟运行一次。
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
*/10 * * * * /bin/bash /opt/script/disk-usage-alert-1.sh
|
||||
```
|
||||
|
||||
### Method-3 : Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
### 方法三:Linux Shell 脚本来监控磁盘空间使用率和发送邮件
|
||||
|
||||
I would like to go with this method. Since, it work like a charm and you will be getting single email for everything.
|
||||
我更喜欢这种方法,因为它非常好用,而且你只会收到一封包含所有警告的邮件。
|
||||
|
||||
This is very simple and straightforward.
|
||||
这相当简单和直接。
|
||||
|
||||
```
|
||||
*/10 * * * * df -Ph | sed s/%//g | awk '{ if($5 > 60) print $0;}' | mail -s "Disk Space Alert On $(hostname)" [email protected]
|
||||
```
|
||||
|
||||
**Output:** I got a single mail for all alerts.
|
||||
**输出:** 我获得了一封关于所有警告的邮件。
|
||||
|
||||
```
|
||||
Filesystem Size Used Avail Use Mounted on
|
||||
@ -153,9 +154,7 @@ Filesystem Size Used Avail Use Mounted on
|
||||
/dev/mapper/vg_2g-lv_home 5.0G 4.3G 784M 85 /home
|
||||
```
|
||||
|
||||
### Method-4 : Linux Shell Script To Monitor Disk Space Usage Of Particular Partition And Send Email
|
||||
|
||||
If anybody wants to monitor the particular partition then you can use the following shell script. Simply replace your filesystem name instead of us.
|
||||
### 方法四:Linux Shell 脚本来监控某个分区的磁盘空间使用情况和发送邮件
|
||||
|
||||
```
|
||||
# vi /opt/script/disk-usage-alert-2.sh
|
||||
@ -168,22 +167,22 @@ echo "The Mount Point "/DB" on $(hostname) has used $used at $(date)" | mail -s
|
||||
fi
|
||||
```
|
||||
|
||||
**Output:** I got the following email alerts.
|
||||
**输出:** 我得到了下面的邮件警告。
|
||||
|
||||
```
|
||||
The partition /dev/mapper/vg_2g-lv_dbs on 2g.CentOS6 has used 82% at Mon Apr 29 06:16:14 IST 2019
|
||||
```
|
||||
|
||||
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
|
||||
最后,添加一个 **[cronjob][2]** 来自动完成这些工作。它将每 10 分钟运行一次。
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
*/10 * * * * /bin/bash /opt/script/disk-usage-alert-2.sh
|
||||
```
|
||||
|
||||
**Note:** You will be getting an email alert 10 mins later since the script has scheduled to run every 10 minutes (But it’s not exactly 10 mins and it depends the timing).
|
||||
**注意:** 你将在 10 分钟后收到一封邮件警告,因为这个脚本被计划为每 10 分钟运行一次(但也不是精确的 10 分钟,取决于时间)。
|
||||
|
||||
Say for example. If your system reaches the limit at 8.25 then you will get an email alert in another 5 mins. Hope it’s clear now.
|
||||
举个例子,如果你的系统在 8:25 到达了限制,那么你将在 5 分钟后(8:30 那次运行时)收到邮件警告。希望现在讲清楚了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -191,7 +190,7 @@ via: https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[warmfrog](https://github.com/warmfrog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
@ -200,3 +199,10 @@ via: https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/shell-script/
|
||||
[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -0,0 +1,128 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (warmfrog)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Add Application Shortcuts on Ubuntu Desktop)
|
||||
[#]: via: (https://itsfoss.com/ubuntu-desktop-shortcut/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
在 Ubuntu 桌面如何添加应用快捷方式
|
||||
===============================
|
||||
|
||||
_**在这篇快速指南中,你将学到如何在 Ubuntu 桌面和其他使用 GNOME 桌面的发行版中添加应用图标。**_
|
||||
|
||||
经典的桌面操作系统总是在‘桌面’上放置图标,这些桌面图标包括文件管理器、回收站和应用图标。
|
||||
|
||||
当在 Windows 中安装应用时,一些程序会询问你是否在桌面创建一个快捷方式。但在 Linux 系统中不是这样。
|
||||
|
||||
但是如果你喜欢这个功能,让我来给你展示如何在 Ubuntu 桌面以及其他使用 GNOME 桌面的发行版中创建应用快捷方式。
|
||||
|
||||
![Application Shortcuts on Desktop in Ubuntu with GNOME desktop][2]
|
||||
|
||||
如果你想知道我的桌面外观,我正在使用 Ant 主题和 Tela 图标集。你可以获取一些 [GTK 主题][3] 和 [为 Ubuntu 准备的图标集][4]并换成你喜欢的。
|
||||
|
||||
### 在 Ubuntu 中添加桌面快捷方式
|
||||
|
||||
![][5]
|
||||
|
||||
就个人而言,我更喜欢把应用图标放到 Ubuntu 的启动器中。如果我经常使用某个程序,就会把它添加到启动器。但我知道不是每个人都有相同的偏好,有些人更喜欢桌面快捷方式。
|
||||
|
||||
让我们看看在桌面上创建应用快捷方式的最简单方式。
|
||||
|
||||
免责声明
|
||||
|
||||
这篇指南已经在 Ubuntu 18.04 LTS 的 GNOME 桌面上测试过了。它可能在其他发行版和桌面环境上也有效,但你需要自己尝试。一些 GNOME 特有的步骤可能会有所不同,所以在[其他桌面环境][7]上尝试时请注意。
|
||||
|
||||
|
||||
|
||||
#### 准备
|
||||
|
||||
首先,最重要的是确保你的 GNOME 桌面允许显示图标。
|
||||
|
||||
如果你看过 Ubuntu 18.04 的定制技巧,应该知道如何安装 GNOME Tweaks 工具。在这个工具中,确保把 ‘Show Icons’ 选项设置为开启。
|
||||
|
||||
![Allow icons on desktop in GNOME][9]
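如果你还没有安装 GNOME Tweaks,在 Ubuntu 上通常可以用下面的命令安装(此命令为补充示例,原文是通过另一篇文章介绍安装方法的):

```
$ sudo apt install gnome-tweaks
```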
|
||||
|
||||
确认该选项已经开启后,就可以在桌面添加应用快捷方式了。
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
#### 第一步:定位应用的 .desktop 文件
|
||||
|
||||
打开文件管理器(Files),进入 Other Locations -> Computer。
|
||||
|
||||
![Go to Other Locations -> Computer][11]
|
||||
|
||||
从这里进入 usr -> share -> applications 目录。你会在这里看到一些你已经安装的 [Ubuntu 应用][12]。即使没有看到图标,你也应该能看到以 ‘应用名.desktop’ 形式命名的文件。
|
||||
|
||||
![Application Shortcuts][13]
|
||||
|
||||
#### 第二步:拷贝 .desktop 文件到桌面
|
||||
|
||||
现在你要做的只是找到应用的图标(或者它的 .desktop 文件)。找到后,把文件拖到桌面,或者复制文件(使用 Ctrl+C 快捷键)然后粘贴到桌面(使用 Ctrl+V 快捷键)。
|
||||
|
||||
![Add .desktop file to the desktop][14]
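如果你更习惯命令行,也可以直接复制 .desktop 文件(下面以 Firefox 为例,文件名和桌面目录名仅为示例,中文系统的桌面目录可能是 ~/桌面):

```
$ cp /usr/share/applications/firefox.desktop ~/Desktop/
```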
|
||||
|
||||
#### 第三步:运行 desktop 文件
|
||||
|
||||
这么做之后,你在桌面上看到的应该是一个类似文本文件的图标,而不是应用的 logo。别担心,稍后就不一样了。
|
||||
|
||||
你要做的就是双击桌面上的那个文件。它会警告你这是一个‘未信任的应用启动器’,点击‘信任并启动’即可。
|
||||
|
||||
![Launch Desktop Shortcut][15]
|
||||
|
||||
这个应用会像往常一样启动,而且你会发现 .desktop 文件现在已经变成应用图标了。我相信你更喜欢这样的应用图标,不是吗?
|
||||
|
||||
![Application shortcut on the desktop][16]
|
||||
|
||||
#### Ubuntu 19.04 或者 GNOME 3.32 用户的疑难杂症
|
||||
|
||||
如果你使用 Ubuntu 19.04 或者 GNOME 3.32,你的 .desktop 文件可能根本不会启动。你应该右击 .desktop 文件并选择 “Allow Launching”。
|
||||
|
||||
在这之后,你应该能够启动应用并且桌面上的应用快捷方式能够正常显示了。
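除了右键菜单之外,据我所知也可以在终端里给该文件加上可执行与“可信”标记来达到同样效果(以下命令和属性名是补充说明,原文未提及,请以你的实际环境为准):

```
$ chmod +x ~/Desktop/firefox.desktop
$ gio set ~/Desktop/firefox.desktop metadata::trusted true
```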
|
||||
|
||||
**总结**
|
||||
|
||||
如果你不再想要桌面上的某个应用启动器,直接删除它就行。这只会删除应用快捷方式,应用本身仍安全地保留在你的系统中。
|
||||
|
||||
希望这篇快速指南对你有帮助,也希望你喜欢 Ubuntu 桌面上的应用快捷方式。
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
如果你有问题或建议,请在下方评论让我知道。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/ubuntu-desktop-shortcut/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[warmfrog](https://github.com/warmfrog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ubuntu.com/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/app-shortcut-on-ubuntu-desktop.jpeg?resize=800%2C450&ssl=1
|
||||
[3]: https://itsfoss.com/best-gtk-themes/
|
||||
[4]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
|
||||
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/add-ubuntu-desktop-shortcut.jpeg?resize=800%2C450&ssl=1
|
||||
[6]: https://www.gnome.org/
|
||||
[7]: https://itsfoss.com/best-linux-desktop-environments/
|
||||
[8]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/allow-icons-on-desktop-gnome.jpg?ssl=1
|
||||
[10]: https://itsfoss.com/replace-linux-from-dual-boot/
|
||||
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Adding-desktop-shortcut-Ubuntu-gnome-1.png?resize=800%2C436&ssl=1
|
||||
[12]: https://itsfoss.com/best-ubuntu-apps/
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/application-shortcuts-in-ubuntu.png?resize=800%2C422&ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/add-desktop-file-to-desktop.jpeg?resize=800%2C458&ssl=1
|
||||
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/launch-desktop-shortcut-.jpeg?resize=800%2C349&ssl=1
|
||||
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/app-shortcut-on-desktop-ubuntu-gnome.jpeg?resize=800%2C375&ssl=1
|
||||
[17]: https://itsfoss.com/install-nemo-file-manager-ubuntu/
|
@ -0,0 +1,140 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (warmfrog)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to use advanced rsync for large Linux backups)
|
||||
[#]: via: (https://opensource.com/article/19/5/advanced-rsync)
|
||||
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/marcobravo)
|
||||
|
||||
如何使用 rsync 的高级用法进行大型 Linux 备份
|
||||
=====================================
|
||||
基础的 rsync 命令通常足以管理你的 Linux 备份,但是额外的选项可以为大型数据集的备份带来更快的速度和更强的功能。
|
||||
![Filing papers and documents][1]
|
||||
|
||||
很明显,备份一直是 Linux 世界的热门话题。早在 2017 年,David Both 就在“[使用 rsync 备份你的 Linux 系统][3]”方面给 [Opensource.com][2] 的读者提了一些建议;今年早些时候,他又发起了一项问卷调查:“[在 Linux 中,你的 /home 目录的主要备份策略是什么][4]”;在今年的另一个问卷调查中,Don Watkins 问道:“[你使用哪种开源备份解决方案][5]”。
|
||||
|
||||
我的回答是 [rsync][6]。市场上有大量庞大而复杂的工具,对于管理磁带机或存储库设备来说,它们可能是必要的,但你需要的可能只是一个简单的开源命令行工具。
|
||||
|
||||
### rsync 基础
|
||||
|
||||
我为一个全球性组织管理着二进制仓库,该组织大约有 35,000 名开发者和几十 TB 的文件。我经常一次移动或归档上百 GB 的数据,用的就是 rsync。这些经历让我对这个简单的工具充满信心。(所以,是的,我在家里也用它来备份我的 Linux 系统。)
|
||||
|
||||
基础的 rsync 命令很简单。
|
||||
|
||||
|
||||
```
|
||||
rsync -av 源目录 目的地目录
|
||||
```
|
||||
|
||||
实际上,各种指南中所教的 rsync 命令在大多数普通场景下都运行得很好。然而,假设我们需要备份大量的数据,例如一个包含 2,000 个子目录的目录,每个子目录中有 50GB 到 700GB 的数据。在这个目录上运行 rsync 可能需要大量时间,尤其是当你使用校验和(checksum)选项时(我倾向于使用它)。
|
||||
|
||||
当我们试图同步大量数据,或者通过较慢的网络连接同步时,可能会遇到性能问题。下面让我展示一些我使用的方法,来确保良好的性能和可靠性。
|
||||
|
||||
### 高级 rsync
|
||||
|
||||
当 rsync 运行时出现的第一行是:“正在发送增量文件列表。” 如果你搜索这一行,你将看到很多类似的问题:为什么它一直运行,或者为什么它似乎挂起了。
|
||||
|
||||
这里是一个基于这个场景的例子。假设我们有一个 **/storage** 的目录,我们想要备份到一个外部 USB 磁盘,我们可以使用下面的命令:
|
||||
|
||||
|
||||
```
|
||||
rsync -cav /storage /media/WDPassport
|
||||
```
|
||||
|
||||
**c** 选项告诉 rsync 使用文件校验和而不是时间戳来判断哪些文件发生了改变,这通常会消耗更长的时间。为了分解 **/storage** 目录,我改为按子目录同步,这要用到 **find** 命令。这是一个例子:
|
||||
|
||||
|
||||
```
|
||||
find /storage -type d -exec rsync -cav {} /media/WDPassport \;
|
||||
```
|
||||
|
||||
这看起来可行,但是如果 **/storage** 目录中有任何文件,它们将被跳过。因此,我们该如何同步 **/storage** 目录中的文件呢?还有一个细微的问题:某些选项会导致 rsync 把 **.** 目录(也就是源目录本身)也同步进去,这意味着它会把子目录同步两次,这并不是我们想要的。
|
||||
|
||||
长话短说,我的解决方案是一个“双重增量”脚本。这让我可以把一个目录进一步拆分,例如,当每个用户的家目录里都有音乐、家庭照片等多个大目录时,可以把 **/home** 目录拆分为各个用户的家目录来分别同步。
|
||||
|
||||
这是我的脚本的一个例子:
|
||||
|
||||
|
||||
```
|
||||
HOMES="alan"
|
||||
DRIVE="/media/WDPassport"
|
||||
|
||||
for HOME in $HOMES; do
|
||||
cd /home/$HOME
|
||||
rsync -cdlptgov --delete . /$DRIVE/$HOME
|
||||
find . -maxdepth 1 -type d -not -name "." -exec rsync -crlptgov --delete {} /$DRIVE/$HOME \;
|
||||
done
|
||||
```
|
||||
|
||||
第一个 rsync 命令会拷贝它在源目录中发现的文件和目录,但它会把目录留空,以便我们稍后用 **find** 命令来迭代它们。这是通过传递 **d** 参数实现的,它告诉 rsync 不要递归进入目录。
|
||||
|
||||
|
||||
```
|
||||
-d, --dirs 只传输目录本身而不递归其内容
|
||||
```
|
||||
|
||||
然后,**find** 命令把每个目录逐一传给 rsync 单独运行,这次 rsync 才拷贝目录的内容。这是通过传递 **r** 参数实现的,它告诉 rsync 要递归进入目录。
|
||||
|
||||
|
||||
```
|
||||
-r, --recursive 递归进入目录
|
||||
```
|
||||
|
||||
这使得 rsync 使用的增量文件列表保持在一个可管理的大小。
|
||||
|
||||
大多数 rsync 指南为了简便使用 **a** (或者 **archive**) 参数。这实际是一个复合参数。
|
||||
|
||||
|
||||
```
|
||||
-a, --archive 归档模式;等同于 -rlptgoD(不含 -H、-A、-X)
|
||||
```
|
||||
|
||||
我传递的其他参数都包含在 **a** 之中,即 **l**、**p**、**t**、**g** 和 **o**。
|
||||
|
||||
|
||||
```
|
||||
-l, --links 复制符号链接作为符号链接
|
||||
-p, --perms 保留权限
|
||||
-t, --times 保留修改时间
|
||||
-g, --group 保留组
|
||||
-o, --owner 保留拥有者(只适用于超级管理员)
|
||||
```
|
||||
|
||||
**\--delete** 选项告诉 rsync 删除目的目录中所有在源目录中不存在的文件。这样,得到的结果就是源目录的精确副本。你同样可以排除 **.Trash** 目录或者 MacOS 创建的 **.DS_Store** 文件。
|
||||
|
||||
|
||||
```
|
||||
-not -name ".Trash*" -not -name ".DS_Store"
|
||||
```
|
||||
|
||||
### 注意
|
||||
|
||||
最后一条建议:rsync 可能是一个具有破坏性的命令。幸运的是,它睿智的创造者提供了“空运行”(dry run)的能力。如果我们加入 **n** 选项,rsync 会显示预期的输出,但不会写入任何数据。
|
||||
|
||||
|
||||
```
|
||||
rsync -cdlptgovn --delete . /$DRIVE/$HOME
|
||||
```
|
||||
|
||||
这个脚本适用于非常大的存储规模,以及高延迟或低速链路的场景。一如既往,我确信仍有改进的空间。如果你有任何建议,请在下方评论中分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/advanced-rsync
|
||||
|
||||
作者:[Alan Formy-Duval][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[warmfrog](https://github.com/warmfrog)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/alanfdoss/users/marcobravo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
|
||||
[2]: http://Opensource.com
|
||||
[3]: https://opensource.com/article/17/1/rsync-backup-linux
|
||||
[4]: https://opensource.com/poll/19/4/backup-strategy-home-directory-linux
|
||||
[5]: https://opensource.com/article/19/2/linux-backup-solutions
|
||||
[6]: https://en.wikipedia.org/wiki/Rsync
|