Mirror of https://github.com/LCTT/TranslateProject.git
Synced 2025-02-03 23:40:14 +08:00
Commit 0cfeecc593
[#]: collector: "lujun9972"
[#]: translator: "zero-MK"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-10766-1.html"
[#]: subject: "How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?"
[#]: via: "https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
How to check if a port is open on multiple remote Linux systems?
======

We recently wrote an article about how to check whether a port is open on a remote Linux server. It helps you check a single server.

If you want to check five servers, you can use any of the following commands, such as `nc` (netcat), `nmap`, or `telnet`. But if you want to check 50+ servers, what is your solution?

Checking all of those servers is not easy, and doing it one by one is completely unnecessary, because you would waste a lot of time. To solve this, I wrote a small shell script using the `nc` command that lets us scan any number of servers for a given port.

If you are looking for a single-server scan, you have several options; just read [How to check whether a port is open on a remote Linux system?][1] for more information.

This tutorial provides two scripts, and both are useful. They serve different purposes, which you can easily understand from their titles.

Before you read this article, I will ask you a few questions; if you don't know the answers, you can find them by reading this article.

How do you check whether a port is open on a remote Linux server?

How do you check whether multiple ports are open on multiple remote Linux servers?

### What is the nc (netcat) command?

`nc` stands for netcat. It is a simple Unix utility that reads and writes data across network connections using the TCP or UDP protocol.

It is designed to be a reliable "back-end" tool that can be used directly or easily driven by other programs and scripts.

At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you would need and has several interesting built-in capabilities.

netcat has three main modes: connect mode, listen mode, and tunnel mode.

General syntax of `nc` (netcat):

```
$ nc [-options] [HostName or IP] [PortNumber]
```

If you want to check whether a given port is open on multiple remote Linux servers, use the following shell script.

In my case, we will check whether port 22 is open on the following remote servers; make sure you update the server list in the file instead of using mine.

Make sure you have updated the server list in the `server-list.txt` file. Each server (IP) should be on a separate line.

```
# cat server-list.txt
```

```
#!/bin/sh
for server in `more server-list.txt`
do
  #echo $i
  nc -zvw3 $server 22
done
```
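As a side note not found in the original article: the backtick-`more` loop above splits on any whitespace, so a slightly more robust sketch of the same scan uses `while read`. The sample IPs are placeholders, and the actual `nc` probe is left commented out so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Sketch only: generate a sample server list; in practice you would
# point the loop at your existing server-list.txt instead.
cat > server-list.txt <<'EOF'
192.168.1.2
192.168.1.3
EOF

# Read line by line; IFS= and -r keep each line intact.
while IFS= read -r server; do
  [ -z "$server" ] && continue      # skip blank lines
  echo "checking $server port 22"
  # nc -zvw3 "$server" 22           # uncomment to actually probe
done < server-list.txt
```

The `while read` form also copes with a list file whose last line lacks whitespace-separated fields, and it never word-splits an entry.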

Set executable permission on the `port_scan.sh` file:

```
$ chmod +x port_scan.sh
```

If you want to check multiple ports on multiple servers, use the script below.

In my case, we will check whether ports 22 and 80 are open on the given servers. Make sure you replace the ports and server names with your own.

Make sure you have listed the ports to check in the `port-list.txt` file. Each port should be on a separate line.

```
# cat port-list.txt
22
80
```

Make sure you have listed the servers (IP addresses) to check in the `server-list.txt` file. Each server (IP) should be on a separate line.

```
# cat server-list.txt
```

```
#!/bin/sh
for server in `more server-list.txt`
do
  for port in `more port-list.txt`
  do
    #echo $server
    nc -zvw3 $server $port
    echo ""
  done
done
```
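One more aside of my own, not from the original article: `nc` exits with status 0 when the port is open, so the scripts above can be extended to print an explicit verdict; and on systems without `nc`, Bash's built-in `/dev/tcp` pseudo-device can stand in. A hedged sketch (the host/port values are just examples; note `/dev/tcp` has no connect timeout, so use it against hosts that refuse quickly):

```shell
#!/bin/bash
# check_port HOST PORT -> prints "HOST:PORT open" or "HOST:PORT closed".
# Uses bash's /dev/tcp pseudo-device, so nc is not required.
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    # the subshell closes fd 3 again on exit
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

# Port 1 on localhost is almost certainly closed, so this
# demonstrates the "closed" branch.
check_port 127.0.0.1 1
```

The open/closed decision is just the exit status of the connect attempt, so the same `if`/`else` wrapper works around the `nc -zvw3` calls in the scripts above.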

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/

Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translated by: [zero-MK](https://github.com/zero-mk)
Proofread by: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10675-1.html
[#]: collector: (lujun9972)
[#]: translator: (Modrisco)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10752-1.html)
[#]: subject: (Getting started with Vim: The basics)
[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
[#]: author: (Bryant Son https://opensource.com/users/brson)

Getting started with Vim: The basics
======

> Learn enough Vim for your job or a new project.

![Person standing in front of a giant computer screen with numbers, data][1]

I still clearly remember the first time I encountered Vim. I was a college student then, and the computer-science lab machines all ran Ubuntu. Although I had been exposed to different Linux distributions before college (such as RHEL — Red Hat sold its CDs at Best Buy), this was the first time I had to use Linux regularly, because my classes required it. Once I started using Linux, like my seniors before me and those who came after, I felt like a "real programmer."

![Real Programmers comic][2]

*Real Programmers, from [xkcd][3]*

Students could use a graphical text editor like [Kate][4], which was installed on the lab machines. Students who could use the shell but were not used to console editors mostly chose [Nano][5], which offers good interactive menus and an experience similar to a graphical Windows text editor.

I used Nano sometimes, but when I heard that [Vi/Vim][6] and [Emacs][7] could do some amazing things, I decided to give them a try (mainly because they looked cool, and I was also curious about what made them special). Using Vim for the first time scared me — I did not want to mess anything up! But once I got the hang of it, things became much easier and I could appreciate the editor's power. As for Emacs, well, I sort of gave up on it, but I'm glad I stuck with Vim.

In this article, I will walk through Vim (based on my personal experience) so you can use it as an editor on a Linux system. This article won't make you a Vim expert, or even scratch the surface of many of Vim's powerful capabilities. But the starting point always matters, and I want to make the beginning experience as easy as possible; the rest is yours to explore.

Remember how I said at the start that I was afraid to use Vim? I was worried, "What if I change an existing file and mess things up?" After all, several computer-science assignments required me to modify existing files. I wanted to know: _how do I open and close a file without saving my changes?_

The good news is you can use the same command to create or open a file in Vim: `vim <FILE_NAME>`, where `<FILE_NAME>` is the name of the target file to create or modify. Let's create a file named `HelloWorld.java` by typing `vim HelloWorld.java`.

Hello, Vim! Now, here is a very important concept in Vim, perhaps the most important one to remember: Vim has multiple modes. Here are the three you need to know for Vim basics:

Mode | Description
---|---
Normal mode | The default mode, for navigation and simple editing
Insert mode | For directly inserting and modifying text
Command-line mode | For running commands such as save and quit

Vim has other modes as well, such as visual mode, select mode, and command mode. The three modes above are enough for our purposes.

You are now in normal mode. If there is text, you can move around with the arrow keys or other navigation keys (which you will see shortly). To make sure you are in normal mode, simply press the `Esc` (Escape) key.

> **Tip:** `Esc` switches to normal mode. Even if you are already in normal mode, hit `Esc` just for practice.

Now for something interesting: type `:` (the colon key) followed by `q!` (the full command: `:q!`). Your screen will look like this:

![Editing Vim][9]

Open the file again by typing `vim HelloWorld.java` and pressing Enter. You can modify the file in insert mode. First, make sure you are in normal mode by pressing `Esc`, then type `i` to enter insert mode (yes, the letter `i`).

In the bottom-left corner, you will see `-- INSERT --`, which means you are in insert mode.

![Vim insert mode][10]

```
public class HelloWorld {
  public static void main(String[] args) {
    System.out.println("Hello, Opensource");
  }
}
```

Very nice! Notice how the text is highlighted with Java syntax colors. Because this is a Java file, Vim automatically detects the syntax and highlights it.

Save the file: press `Esc` to leave insert mode and enter command-line mode. Type `:` followed by `x!` (the full command: `:x!`) and press Enter to save the file. You can also type `wq` to do the same thing.

Now you know how to enter text in insert mode and save the file with `:x!` or `:wq`.

![Showing Line Numbers][12]

Okay, you may say, "That's cool, but how do I jump to a specific line?" Once again, make sure you are in normal mode, then type `:<LINE_NUMBER>`, where `<LINE_NUMBER>` is the number of the line you want to go to. Press Enter to try moving to line 2.

```
:2
```

You are now at the last byte of the line. In this example, the left brace is highlighted to show where the cursor moved, and the right brace is highlighted because it is the matching character of the highlighted left brace.

That's basic navigation in Vim. Wait, don't exit the file yet — let's move on to basic editing in Vim. Feel free to take a quick coffee or tea break along the way, though.
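As a small extra beyond the article: settings such as line numbering don't have to be retyped every session; they can live in `~/.vimrc`. A minimal sketch (these are standard Vim options, but which ones you want is personal taste):

```vim
" ~/.vimrc — a minimal starting point
set number      " show line numbers, like :set number does
syntax on       " enable syntax highlighting
set hlsearch    " highlight matches found with /
```

Vim reads this file at startup, so every file you open afterwards gets these settings automatically.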
### Step 4: Basic editing in Vim

Now that you know how to move around a file by jumping to the line you want, you can use that skill to do some basic editing in Vim. Switch to insert mode. (Remember how? Type `i`.) Sure, you can edit by deleting or inserting characters one by one with the keyboard, but Vim offers much faster ways to edit files.

Go to line 3, where the code is `public static void main(String[] args) {`. Quickly press the `d` key twice — yes, `dd`. If you did it right, you will see line 3 disappear and all the remaining lines move up one line (for example, line 4 becomes line 3).

![Deleting A Line][15]

That's the _delete_ command. Don't worry: type `u` and you will see the line come back. Whew — that's the _undo_ command.

![Undoing a change in Vim][16]

![Highlighting text in Vim][17]

Go to line 4, where the code is `System.out.println("Hello, Opensource");`. Highlight the entire line. Done? When line 4 is highlighted, press `y`. This is called _yank_ mode, and it copies the text to the clipboard. Next, type `o` to create a new line. Note that this puts you in insert mode. Leave insert mode by pressing `Esc`, then press `p`, which stands for _paste_. This pastes the copied text from line 3 onto line 4.

![Pasting in Vim][18]

Suppose your team lead wants you to change a text string in a project. How do you get it done quickly? You might want to search for the line using a certain keyword.

Vim's search feature is very useful. Press `Esc` to enter command mode, then type the colon `:`. You can search for a keyword by typing `/<SEARCH_KEYWORD>`, where `<SEARCH_KEYWORD>` is the string you want to find. Here we search for the keyword string `Hello`. The colon is not shown in the illustration below, but it is required.

![Searching in Vim][19]

However, a keyword can appear more than once, and this may not be the one you want. So how do you find the next match? Just press the `n` key, which stands for _next_. Make sure you are not in insert mode when you do this!

### Bonus step: Split mode in Vim

That pretty much covers all the Vim basics. But, as a bonus, I want to show you a cool Vim feature called _split_ mode.

Exit `HelloWorld.java` and create a new file. In a terminal window, type `vim GoodBye.java` and press Enter to create a new file named `GoodBye.java`.

Enter any text you want; I chose to type `Goodbye`. Save the file (remember, you can use `:x!` or `:wq` in command-line mode).

In command-line mode, type `:split HelloWorld.java` and see what happens.

![Split mode in Vim][20]

Wow! Look at that! The `split` command splits the console window horizontally into two parts, with `HelloWorld.java` on top and `GoodBye.java` below. How do you switch between the windows? Hold `Control` (on a Mac) or `Ctrl` (on a PC), then press `ww` (that is, the `w` key twice).

As a final exercise, try to edit `GoodBye.java` to match the screen below by copying and pasting from `HelloWorld.java`.

![Modify GoodBye.java file in Split Mode][21]

Save both files, and you are done!

> **Tip 1:** If you want to split the two file windows vertically, use the `:vsplit <FILE_NAME>` command (instead of `:split <FILE_NAME>`, where `<FILE_NAME>` is the name of the file you want to open in split mode).
>
> **Tip 2:** You can open more than two files by running as many `split` or `vsplit` commands as you want. Try it and see how it looks.

### Vim cheat sheet

In this article, you learned how to use Vim for work or a project, but this is just the beginning of your journey into Vim's powerful capabilities; be sure to check out other great tutorials and tips.

To make things a bit easier, I have summarized everything you learned into [a handy cheat sheet][22].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/getting-started-vim

Author: [Bryant Son][a]
Selected by: [lujun9972][b]
Translated by: [Modrisco](https://github.com/Modrisco)
Proofread by: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
104 published/20190328 How to run PostgreSQL on Kubernetes.md (new file)

[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10762-1.html)
[#]: subject: (How to run PostgreSQL on Kubernetes)
[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)

How to run PostgreSQL on Kubernetes
======

> Create uniformly managed, cloud-native production deployments with the flexibility to deploy a personalized database-as-a-service (DBaaS).

![cubes coming together to create a larger cube][1]

By running a [PostgreSQL][2] database on [Kubernetes][3], you can create uniformly managed, cloud-native production deployments with the flexibility to deploy a personalized database-as-a-service tailored to your specific needs.

Using an Operator allows you to provide additional context to Kubernetes to [manage a stateful application][4]. An Operator is also helpful when using an open source database like PostgreSQL for operations including provisioning, scaling, high availability, and user management.

Let's explore how to get PostgreSQL up and running on Kubernetes.

### Install the PostgreSQL Operator

The first step in using PostgreSQL with Kubernetes is installing an Operator. With the help of Crunchy's [quickstart script][6] for Linux, you can get the open source [Crunchy PostgreSQL Operator][5] up and running on any Kubernetes-based environment.

The quickstart script has a few prerequisites:

  * The [Wget][7] utility installed.
  * [kubectl][8] installed.
  * A [StorageClass][9] defined in your Kubernetes cluster.
  * Access to a Kubernetes user account with cluster privileges, to install the Operator's [RBAC][10] rules.
  * A [namespace][11] for the PostgreSQL Operator.
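Not part of the original article, but the tool prerequisites in the list above can be sanity-checked before running the quickstart script; a small sketch using `command -v` (the tool names come straight from the list):

```shell
#!/bin/sh
# Check that the quickstart tool prerequisites are on PATH.
for tool in wget kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

`command -v` is POSIX, so the check works in any shell; the StorageClass, RBAC, and namespace prerequisites still have to be verified against the cluster itself.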
Executing the script gives you a default PostgreSQL Operator deployment, which assumes [dynamic storage][12] and a StorageClass named `standard`. The script allows you to override these defaults with custom values.

You can download the quickstart script and make it executable with the following commands:

```
wget <https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/examples/quickstart.sh>
chmod +x ./quickstart.sh
```

Then run the quickstart script:

```
./examples/quickstart.sh
```

After the script prompts you for some basic information about your Kubernetes cluster, it performs the following operations:

  * Downloads the Operator configuration files
  * Sets the `$HOME/.pgouser` file to its default settings
  * Deploys the Operator as a Kubernetes [Deployment][13]
  * Sets your `.bashrc` file to include the Operator environment variables
  * Sets your `$HOME/.bash_completion` file to the `pgo bash_completion` file

During the execution of the quickstart script, you will be prompted to set up the RBAC rules for your Kubernetes cluster. In a separate terminal, execute the commands the quickstart script tells you to run.

Once the script completes, you will be prompted to set up a port forward to the PostgreSQL Operator pod. In a separate terminal, execute the port forward; this will let you start running commands against the PostgreSQL Operator! Try creating a cluster by entering:

```
pgo create cluster mynewcluster
```

You can test your cluster's status with:

```
pgo test mynewcluster
```

Now you can manage your PostgreSQL databases in a Kubernetes environment! You can find a very comprehensive list of commands, including those for scaling, high availability, backups, and more, in the [official documentation][14].

This article is based in part on [Get started running PostgreSQL on Kubernetes][15], which the author wrote for the Crunchy blog.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes

Author: [Jonathan S. Katz][a]
Selected by: [lujun9972][b]
Translated by: [arrowfeng](https://github.com/arrowfeng)
Proofread by: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/jkatz05
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://www.postgresql.org/
[3]: https://kubernetes.io/
[4]: https://opensource.com/article/19/2/scaling-postgresql-kubernetes-operators
[5]: https://github.com/CrunchyData/postgres-operator
[6]: https://crunchydata.github.io/postgres-operator/stable/installation/#quickstart-script
[7]: https://www.gnu.org/software/wget/
[8]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[9]: https://kubernetes.io/docs/concepts/storage/storage-classes/
[10]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[11]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[12]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
[13]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[14]: https://crunchydata.github.io/postgres-operator/stable/#documentation
[15]: https://info.crunchydata.com/blog/get-started-runnning-postgresql-on-kubernetes
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10761-1.html)
[#]: subject: (Using Square Brackets in Bash: Part 2)
[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)

Using Square Brackets in Bash: Part 2
======

> Let's continue looking at square brackets, which can even be used as a command in Bash.

[Creative Commons Zero][2]

Welcome back to our square-brackets special. In the [previous article][3], we looked at how square brackets can be used for globbing on the command line. If you have read that one, you can pick up from here.

Square brackets can also be used as a command, like this:

```
[ "a" = "a" ]
```

The `[ ... ]` form above can be seen as an executable command. Note that the contents inside the brackets, `"a" = "a"`, are separated from the brackets `[` and `]` by spaces. Because the brackets here are treated as a command, spaces are needed to separate the command from its parameters.

The command above means "check whether the string `"a"` equals the string `"a"`". If the check is true, `[ ... ]` exits with <ruby>status code<rt>status code</rt></ruby> 0; otherwise it exits with status code 1. We covered status codes in a [previous article][4]; you can read the status code of the most recent command from the `$?` variable.

Run the following separately:

```
[ "a" = "a" ]
echo $?
```

and

```
[ "a" = "b" ]
echo $?
```

Of these two snippets, the first prints 0 (the check was true) and the second prints 1 (the check was false). In Bash, a status code of 0 means the command completed and exited normally with no errors, corresponding to the boolean `true`; if an error occurs during execution, the command returns a non-zero status code, corresponding to `false`. `[ ... ]` follows the same rules.

This makes `[ ... ]` a good fit for constructs like `if ... then`, `while`, or `until`, which need to evaluate a condition before finishing a block of code.
The corresponding logical operators are fairly intuitive:

```
[ STRING1 = STRING2 ] => checks to see if the strings are equal
[ STRING1 != STRING2 ] => checks to see if the strings are not equal
[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
etc...
```
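To see a couple of these operators inside an actual conditional, here is a minimal example of my own (not from the article):

```shell
#!/bin/bash
a="a"
# String comparison with =
if [ "$a" = "a" ]; then
  echo "strings equal"
fi

n=3
# Integer comparison with -lt
if [ "$n" -lt 5 ]; then
  echo "$n is less than 5"
fi
```

Running it prints `strings equal` followed by `3 is less than 5`; each `if` simply reacts to the exit status of the `[ ... ]` command.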

Square brackets can also be used in a very shell-ish way; for example, with the `-f` option you can check whether a file exists:

In the next article, we will start looking at how parentheses `()` are used on the Linux command line. Stay tuned!

### More reading

- [Linux tools: the meaning of dot][6]
- [Understanding angle brackets in Bash][7]
- [More about angle brackets in Bash][8]
- [& in Linux][9]
- [Ampersands and file descriptors in Bash][10]
- [Logical & (and) in Bash][4]
- [All about {curly braces} in Bash][11]
- [Using [square brackets] in Bash: Part 1][3]

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2

Author: [Paul Brown][a]
Selected by: [lujun9972][b]
Translated by: [HankChow](https://github.com/HankChow)
Proofread by: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy "square brackets"
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
[3]: https://linux.cn/article-10717-1.html
[4]: https://linux.cn/article-10596-1.html
[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
[6]: https://linux.cn/article-10465-1.html
[7]: https://linux.cn/article-10502-1.html
[8]: https://linux.cn/article-10529-1.html
[9]: https://linux.cn/article-10587-1.html
[10]: https://linux.cn/article-10591-1.html
[11]: https://linux.cn/article-10624-1.html
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10757-1.html)
[#]: subject: (5 useful open source log analysis tools)
[#]: via: (https://opensource.com/article/19/4/log-analysis-tools)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)

5 useful open source log analysis tools
======

> Monitoring network activity is as important as it is tedious; these tools make it easier.

![People work on a computer server][1]

Monitoring network activity is tedious work, but there are good reasons to do it. For example, it lets you find and investigate suspicious logins on workstations, network-connected devices, and servers, while identifying what administrators have abused. You can also trace software installations and data transfers to identify potential problems in real time, rather than after the damage is done.

These logs also help your company comply with the [General Data Protection Regulation][2] (GDPR), which applies to any entity operating within the European Union. If your website is viewable in the EU, you qualify, and have to comply.

Logging, including tracking and analysis, should be a fundamental process in any monitoring infrastructure. Transaction log files are needed to recover a SQL Server database from disaster. Moreover, by tracking log files, DevOps teams and database administrators (DBAs) can maintain optimal database performance or, in the event of a network attack, find evidence of unauthorized activity. For this reason, it is important to regularly monitor and analyze system logs. It is a reliable way to recreate the chain of events that led up to whatever problem has arisen.

There are many open source log trackers and analysis tools available today, making choosing the right resources for activity logs easier than you might think. The free and open source software community offers log designs that work with all sorts of sites and operating systems. Here are five of the best I have used, in no particular order.

### Graylog

[Graylog][3] started in Germany in 2011 and is now offered as either an open source tool or a commercial solution. It is designed to be a centralized log-management system that accepts data streams from various servers or endpoints and allows you to browse or analyze that information quickly.

![Graylog screenshot][4]

Graylog has built a good reputation among system administrators because it is easy to scale. Most web projects start small but can grow exponentially. Graylog can balance loads across a network of backend servers and handle several terabytes of log data each day.

IT administrators will find Graylog's frontend interface easy to use and robust. Graylog is built around the concept of dashboards, which let you choose the metrics or data sources you find most valuable and quickly see trends over time.

When a security or performance incident occurs, IT administrators want to be able to trace the symptoms to a root cause as quickly as possible. Graylog's search feature makes this easy. It has built-in fault tolerance and can run multithreaded searches, so you can analyze several potential threats at once.

### Nagios

[Nagios][5] started with a single developer back in 1999 and has since evolved into one of the most reliable open source tools for managing log data. The current version of Nagios integrates with servers running Microsoft Windows, Linux, or Unix.

![Nagios Core][6]

Its primary product is a log server, which aims to simplify data collection and make information more accessible to system administrators. The Nagios log server engine captures data in real time and feeds it into a powerful search tool. Integrating with a new endpoint or application is easy thanks to the built-in setup wizard.

Nagios is most often used in organizations that need to monitor the security of their local network. It can audit a range of network-related events and help automate the distribution of alerts. Nagios can even be configured to run predefined scripts if certain conditions are met, allowing you to resolve issues before a human gets involved.

As part of network auditing, Nagios will filter log data based on the geographic location where it originates. That means you can build comprehensive dashboards with mapping technology to understand how your web traffic is flowing.

### Elastic Stack (ELK Stack)

[Elastic Stack][7], often called the ELK Stack, is one of the most popular open source tools among organizations that need to sift through large sets of data and make sense of their system logs (and it's a personal favorite, too).

![ELK Stack][8]

Its primary offering is made up of three separate products: Elasticsearch, Kibana, and Logstash:

  * As its name suggests, Elasticsearch is designed to help users find matches within datasets using a wide range of query languages and types. Speed is its biggest advantage. It can be expanded into clusters of hundreds of server nodes that handle petabytes of data with ease.
  * Kibana is a visualization tool that works alongside Elasticsearch to let users analyze their data and build powerful reports. When you first install the Kibana engine on your server cluster, you will see an interface showing statistics, graphs, and even animations.
  * The final piece of the ELK Stack is Logstash, which acts as a purely server-side pipeline into the Elasticsearch database. You can integrate Logstash with a variety of programming languages and APIs so that information from your websites and mobile applications is fed directly into the powerful Elastic Stack search engine.
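To make the pipeline idea concrete, here is a minimal sketch of a Logstash configuration of my own (the file path and host are placeholders, not from the article): it tails a log file as input and ships each event to an Elasticsearch node.

```
input {
  file {
    path => "/var/log/syslog"    # placeholder: any log file to tail
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]  # placeholder: your Elasticsearch node
  }
}
```

Filters (parsing, field extraction) would slot into a `filter { ... }` block between the two; this sketch shows only the plumbing from file to search engine.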
A unique feature of the ELK Stack is that it allows you to monitor applications built on open source WordPress installations. In contrast to most out-of-the-box security audit log tools that [track admin and PHP logs][9] but little else, the ELK Stack can sift through web server and database logs.

Poor log tracking and database management are among the most common causes of poor website performance. Failure to regularly check, optimize, and empty database logs can not only slow down a site but could lead to a complete crash as well. Thus, the ELK Stack is an excellent tool for every WordPress developer's toolkit.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/log-analysis-tools

Author: [Sam Bocetta][a]
Selected by: [lujun9972][b]
Translated by: [MjSeven](https://github.com/MjSeven)
Proofread by: [wxy](https://github.com/wxy)

This article is an original compilation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
186
published/20190407 Fixing Ubuntu Freezing at Boot Time.md
Normal file
186
published/20190407 Fixing Ubuntu Freezing at Boot Time.md
Normal file
@ -0,0 +1,186 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Raverstern)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10756-1.html)
|
||||
[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
|
||||
[#]: via: (https://itsfoss.com/fix-ubuntu-freezing/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
解决 Ubuntu 在启动时冻结的问题
|
||||
======
|
||||
|
||||
> 本文将向你一步步展示如何通过安装 NVIDIA 专有驱动来处理 Ubuntu 在启动过程中冻结的问题。本教程仅在一个新安装的 Ubuntu 系统上操作验证过,不过在其它情况下也理应可用。
|
||||
|
||||
不久前我买了台[宏碁掠夺者][1]笔记本电脑来测试各种 Linux 发行版。这台庞大且笨重的机器与我喜欢的,类似[戴尔 XPS][3]那般小巧轻便的笔记本电脑大相径庭。
|
||||
|
||||
我即便不打游戏也选择这台电竞笔记本电脑的原因,就是为了 [NVIDIA 的显卡][4]。宏碁掠夺者 Helios 300 上搭载了一块 [NVIDIA Geforce][5] GTX 1050Ti 显卡。
|
||||
|
||||
NVIDIA 那糟糕的 Linux 兼容性为人们所熟知。过去很多 It’s FOSS 的读者都向我求助过关于 NVIDIA 笔记本电脑的问题,而我当时无能为力,因为我手头上没有使用 NVIDIA 显卡的系统。
|
||||
|
||||
所以当我决定搞一台专门的设备来测试 Linux 发行版时,我选择了带有 NVIDIA 显卡的笔记本电脑。
|
||||
|
||||
这台笔记本原装的 Windows 10 系统安装在 120 GB 的固态硬盘上,并另外配有 1 TB 的机械硬盘来存储数据。在此之上我配置好了 [Windows 10 和 Ubuntu 18.04 双系统][6]。整个的安装过程舒适、方便、快捷。
|
||||
|
||||
随后我启动了 [Ubuntu][7]。那熟悉的紫色界面展现了出来,然后我就发现它卡在那儿了。鼠标一动不动,我也输入不了任何东西,然后除了长按电源键强制关机以外我啥事儿都做不了。
|
||||
|
||||
然后再次尝试启动,结果一模一样。整个系统就一直卡在那个紫色界面,随后的登录界面也出不来。
|
||||
|
||||
这听起来很耳熟吧?下面就让我来告诉你如何解决这个 Ubuntu 在启动过程中冻结的问题。
|
||||
|
||||
> 如果你用的不是 Ubuntu
|
||||
|
||||
> 请注意,尽管是在 Ubuntu 18.04 上操作的,本教程应该也能用于其他基于 Ubuntu 的发行版,例如 Linux Mint、elementary OS 等等。关于这点我已经在 Zorin OS 上确认过。
|
||||
|
||||
### 解决 Ubuntu 启动中由 NVIDIA 驱动引起的冻结问题
|
||||
|
||||
![][8]
|
||||
|
||||
我介绍的解决方案适用于配有 NVIDIA 显卡的系统,因为你所面临的系统冻结问题是由开源的 [NVIDIA Nouveau 驱动][9]所导致的。
|
||||
|
||||
事不宜迟,让我们马上来看看如何解决这个问题。
|
||||
|
||||
#### 步骤 1:编辑 Grub
|
||||
|
||||
在启动系统的过程中,请你在如下图所示的 Grub 界面上停下。如果你没看到这个界面,在启动电脑时请按住 `Shift` 键。
|
||||
|
||||
在这个界面上,按 `E` 键进入编辑模式。
|
||||
|
||||
![按“E”按键][10]
|
||||
|
||||
你应该看到一些如下图所示的代码。此刻你应关注于以 “linux” 开头的那一行。
|
||||
|
||||
![前往 Linux 开头的那一行][11]
|
||||
|
||||
#### 步骤 2:在 Grub 中临时修改 Linux 内核参数
|
||||
|
||||
回忆一下,我们的问题出在 NVIDIA 显卡驱动上,是开源版 NVIDIA 驱动的不适配导致了我们的问题。所以此处我们能做的就是禁用这些驱动。
|
||||
|
||||
此刻,你有多种方式可以禁用这些驱动。我最喜欢的方式是通过 `nomodeset` 来禁用所有显卡的驱动。
|
||||
|
||||
请把下列文本添加到以 “linux” 开头的那一行的末尾。此处你应该可以正常输入。请确保你把这段文本加到了行末。
|
||||
|
||||
```
|
||||
nomodeset
|
||||
```
|
||||
|
||||
现在你屏幕上的显示应如下图所示:
|
||||
|
||||
![通过向内核添加 nomodeset 来禁用显卡驱动][12]
|
||||
|
||||
按 `Ctrl+X` 或 `F10` 保存并退出。这次你将以修改后的内核参数启动。
|
||||
|
||||
> 对以上操作的解释
|
||||
|
||||
> 所以我们究竟做了些啥?那个 `nomodeset` 又是个什么玩意儿?让我来向你简单地解释一下。
|
||||
|
||||
> 通常来说,显卡是在 X 或者是其他显示服务器开始执行后才被启用的,也就是在你登录系统并看到图形界面以后。
|
||||
|
||||
> 但近来,视频模式的设置被移进了内核。这么做的众多优点之一就是能让你看到一个漂亮且高清的启动画面。
|
||||
|
||||
> 若你往内核中加入 `nomodeset` 参数,它就会指示内核在显示服务启动后才加载显卡驱动。
|
||||
|
||||
> 换句话说,你在此时禁止视频驱动的加载,由此产生的冲突也会随之消失。你在登录进系统以后,还是能看到一切如旧,那是因为显卡驱动在随后的过程中被加载了。
|
||||
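顺带一提,如果你希望在安装好专有驱动之前让 `nomodeset` 一直生效,而不必每次开机都手动编辑 Grub,可以把它写入 `/etc/default/grub` 的 `GRUB_CMDLINE_LINUX_DEFAULT` 中,再运行 `sudo update-grub`。下面是一个示意(为安全起见,在一个本地样例文件上演示;真实系统中编辑的是 `/etc/default/grub` 本身):

```shell
# 在本地样例文件上演示向 GRUB_CMDLINE_LINux_DEFAULT 追加 nomodeset
cat > grub.sample <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_TIMEOUT=10
EOF

# 把 nomodeset 追加到引号内已有参数的末尾
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"$/GRUB_CMDLINE_LINUX_DEFAULT="\1 nomodeset"/' grub.sample

grep GRUB_CMDLINE_LINUX_DEFAULT grub.sample
# 真实系统中,修改 /etc/default/grub 后还需执行:sudo update-grub
```

装好专有驱动之后,通常就可以把这个参数再移除了。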
|
||||
#### 步骤 3:更新你的系统并安装 NVIDIA 专有驱动
|
||||
|
||||
别因为现在可以登录系统了就过早地高兴起来。你之前所做的只是临时措施,在下次启动的时候,你的系统依旧会尝试加载 Nouveau 驱动而因此冻结。
|
||||
|
||||
这是否意味着你将不得不在 Grub 界面上不断地编辑内核?可喜可贺,答案是否定的。
|
||||
|
||||
你可以在 Ubuntu 上为 NVIDIA 显卡[安装额外的驱动][13]。在使用专有驱动后,Ubuntu 将不会在启动过程中冻结。
|
||||
|
||||
我假设这是你第一次登录到一个新安装的系统。这意味着在做其他事情之前你必须先[更新 Ubuntu][14]。通过 Ubuntu 的 `Ctrl+Alt+T` [系统快捷键][15]打开一个终端,并输入以下命令:
|
||||
|
||||
```
|
||||
sudo apt update && sudo apt upgrade -y
|
||||
```
|
||||
|
||||
在上述命令执行完以后,你可以尝试安装额外的驱动。不过根据我的经验,在安装新驱动之前你需要先重启一下你的系统。在你重启时,你还是需要按我们之前做的那样修改内核参数。
|
||||
|
||||
当你的系统已经更新和重启完毕,按下 `Windows` 键打开一个菜单栏,并搜索“<ruby>软件与更新<rt>Software & Updates</rt></ruby>”。
|
||||
|
||||
![点击“软件与更新”(Software & Updates)][16]
|
||||
|
||||
然后切换到“<ruby>额外驱动<rt>Additional Drivers</rt></ruby>”标签页,并等待数秒。然后你就能看到可供系统使用的专有驱动了。在这个列表上你应该可以找到 NVIDIA。
|
||||
|
||||
选择专有驱动并点击“<ruby>应用更改<rt>Apply Changes</rt></ruby>”。
|
||||
|
||||
![NVIDIA 驱动安装中][17]
|
||||
|
||||
新驱动的安装会费点时间。若你的系统启用了 UEFI 安全启动,你将被要求设置一个密码。*你可以将其设置为任何容易记住的密码*。它的用处我将在步骤 4 中说明。
|
||||
|
||||
![你可能需要设置一个安全启动密码][18]
|
||||
|
||||
安装完成后,你会被要求重启系统以令之前的更改生效。
|
||||
|
||||
![在新驱动安装好后重启你的系统][19]
|
||||
|
||||
#### 步骤 4:处理 MOK(仅针对启用了 UEFI 安全启动的设备)
|
||||
|
||||
如果你之前被要求设置安全启动密码,此刻你会看到一个蓝色界面,上面写着 “MOK management”。这是个复杂的概念,我试着长话短说。
|
||||
|
||||
> 对 MOK([设备所有者密钥][20])的要求是因为安全启动的功能要求所有内核模块都必须被签名。Ubuntu 中所有随 ISO 镜像发行的内核模块都已经签了名。由于你安装了一个新模块(也就是那个额外的驱动),或者你对内核模块做了修改,你的安全系统可能视之为一个未经验证的外部修改,从而拒绝启动。
|
||||
|
||||
因此,你可以自己对系统模块进行签名(以告诉 UEFI 系统莫要大惊小怪,这些修改是你做的),或者你也可以简单粗暴地[禁用安全启动][21]。
|
||||
|
||||
现在你对[安全启动和 MOK][22] 有了一定了解,那咱们就来看看在遇到这个蓝色界面后该做些什么。
|
||||
|
||||
如果你选择“继续启动”,你的系统将有很大概率如往常一样启动,并且你啥事儿也不用做。不过在这种情况下,新驱动的有些功能有可能工作不正常。
|
||||
|
||||
这就是为什么你应该选择“注册 MOK”。
|
||||
|
||||
![][23]
|
||||
|
||||
它会在下一个页面让你点击“继续”,然后要你输入一串密码。请输入上一步安装额外驱动时设置的密码。
|
||||
|
||||
> 别担心!
|
||||
|
||||
> 如果你错过了这个关于 MOK 的蓝色界面,或不小心点了“继续启动”而不是“注册 MOK”,不必惊慌。你的主要目的是能够成功启动系统,而通过禁用 Nouveau 显卡驱动,你已经成功地实现了这一点。
|
||||
|
||||
> 最坏的情况也不过就是你的系统切换到 Intel 集成显卡而不再使用 NVIDIA 显卡。你可以在之后的任何时间安装 NVIDIA 显卡驱动。你的首要任务是启动系统。
|
||||
|
||||
#### 步骤 5:享受安装了专有 NVIDIA 驱动的 Linux 系统
|
||||
|
||||
当新驱动被安装好后,你需要再次重启系统。别担心!目前的情况应该已经好起来了,并且你不必再去修改内核参数,而是能够直接启动 Ubuntu 系统了。
|
||||
|
||||
我希望本教程帮助你解决了 Ubuntu 系统在启动中冻结的问题,并让你能够成功启动 Ubuntu 系统。
|
||||
|
||||
如果你有任何问题或建议,请在下方评论区给我留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/fix-ubuntu-freezing/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Raverstern](https://github.com/Raverstern)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://amzn.to/2YVV6rt
|
||||
[2]: https://itsfoss.com/affiliate-policy/
|
||||
[3]: https://itsfoss.com/dell-xps-13-ubuntu-review/
|
||||
[4]: https://www.nvidia.com/en-us/
|
||||
[5]: https://www.nvidia.com/en-us/geforce/
|
||||
[6]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
|
||||
[7]: https://www.ubuntu.com/
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/fixing-frozen-ubuntu.png?resize=800%2C450&ssl=1
|
||||
[9]: https://nouveau.freedesktop.org/wiki/
|
||||
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/edit-grub-menu.jpg?resize=800%2C393&ssl=1
|
||||
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue.jpg?resize=800%2C343&ssl=1
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue-2.jpg?resize=800%2C320&ssl=1
|
||||
[13]: https://itsfoss.com/install-additional-drivers-ubuntu/
|
||||
[14]: https://itsfoss.com/update-ubuntu/
|
||||
[15]: https://itsfoss.com/ubuntu-shortcuts/
|
||||
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/activities_software_updates_search-e1551416201782-800x228.png?resize=800%2C228&ssl=1
|
||||
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-nvidia-driver-ubuntu.jpg?resize=800%2C520&ssl=1
|
||||
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/secure-boot-nvidia.jpg?ssl=1
|
||||
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/nvidia-drivers-installed-Ubuntu.jpg?resize=800%2C510&ssl=1
|
||||
[20]: https://firmware.intel.com/blog/using-mok-and-uefi-secure-boot-suse-linux
|
||||
[21]: https://itsfoss.com/disable-secure-boot-in-acer/
|
||||
[22]: https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
|
||||
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/MOK-Secure-boot.jpg?resize=800%2C350&ssl=1
|
@ -1,46 +1,38 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( NeverKnowsTomorrow )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: translator: (NeverKnowsTomorrow)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10768-1.html)
|
||||
[#]: subject: (Four Methods To Add A User To Group In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
在 Linux 中添加用户到组的四个方法
|
||||
在 Linux 中把用户添加到组的四个方法
|
||||
======
|
||||
|
||||
Linux 组是用于管理 Linux 中用户帐户的组织单位。
|
||||
Linux 组是用于管理 Linux 中用户帐户的组织单位。Linux 系统中的每一个用户和组都有唯一的数字标识号,分别称为用户 ID(UID)和组 ID(GID)。组的主要目的是为组的成员定义一组特权:成员可以执行特定的操作,但不能执行其他操作。
|
||||
|
||||
对于 Linux 系统中的每一个用户和组,它都有惟一的数字标识号。
|
||||
Linux 中有两种类型的默认组。每个用户应该只有一个 <ruby>主要组<rt>primary group</rt></ruby> 和任意数量的 <ruby>次要组<rt>secondary group</rt></ruby>。
|
||||
|
||||
它被称为 userid (UID) 和 groupid (GID)。组的主要目的是为组的成员定义一组特权。
|
||||
|
||||
它们都可以执行特定的操作,但不能执行其他操作。
|
||||
|
||||
Linux 中有两种类型的默认组可用。每个用户应该只有一个 <ruby>主要组<rt>primary group</rt></ruby> 和任意数量的 <ruby>次要组<rt>secondary group</rt></ruby>。
|
||||
|
||||
* **主要组:** 创建用户帐户时,已将主组添加到用户。它通常是用户的名称。在执行诸如创建新文件(或目录)、修改文件或执行命令等任何操作时,主组将应用于用户。用户主要组信息存储在 `/etc/passwd` 文件中。
|
||||
* **次要组:** 它被称为次要组。它允许用户组在同一组成员文件中执行特定操作。
|
||||
|
||||
例如,如果你希望允许少数用户运行 apache(httpd)服务命令,那么它将非常适合。
|
||||
* **主要组:** 创建用户帐户时,主要组会随之创建并添加给该用户,其名称通常与用户名相同。在执行诸如创建新文件(或目录)、修改文件或执行命令等任何操作时,主要组将应用于用户。用户的主要组信息存储在 `/etc/passwd` 文件中。
|
||||
* **次要组:** 次要组允许一组用户共同执行特定的操作。例如,如果你希望允许少数用户运行 Apache(httpd)服务命令,那么它将非常适合。
|
||||
|
||||
你可能对以下与用户管理相关的文章感兴趣。
|
||||
|
||||
* 在 Linux 中创建用户帐户的三种方法?
|
||||
* 如何在 Linux 中创建批量用户?
|
||||
* 如何在 Linux 中使用不同的方法更新/更改用户密码?
|
||||
* [在 Linux 中创建用户帐户的三种方法?][1]
|
||||
* [如何在 Linux 中创建批量用户?][2]
|
||||
* [如何在 Linux 中使用不同的方法更新/更改用户密码?][3]
|
||||
|
||||
可以使用以下四种方法实现。
|
||||
|
||||
* **`usermod:`** usermod 命令修改系统帐户文件,以反映在命令行中指定的更改。
|
||||
* **`gpasswd:`** gpasswd 命令用于管理 /etc/group 和 /etc/gshadow。每个组都可以有管理员、成员和密码。
|
||||
* **`Shell Script:`** shell 脚本允许管理员自动执行所需的任务。
|
||||
* **`Manual Method:`** 我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
|
||||
* `usermod`:修改系统帐户文件,以反映在命令行中指定的更改。
|
||||
* `gpasswd`:用于管理 `/etc/group` 和 `/etc/gshadow`。每个组都可以有管理员、成员和密码。
|
||||
* Shell 脚本:可以让管理员自动执行所需的任务。
|
||||
* 手动方式:我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
|
||||
|
||||
我假设你已经拥有此活动所需的组和用户。在本例中,我们将使用以下用户和组:`user1`、`user2`、`user3`,group 是 `mygroup` 和 `mygroup1`。
|
||||
我假设你已经拥有此操作所需的组和用户。在本例中,我们将使用以下用户和组:`user1`、`user2`、`user3`,另外的组是 `mygroup` 和 `mygroup1`。
|
||||
|
||||
在进行更改之前,我想检查用户和组信息。详见下文。
|
||||
在进行更改之前,我希望检查一下用户和组信息。详见下文。
|
||||
|
||||
我可以看到下面的用户与他们自己的组关联,而不是与其他组关联。
|
||||
|
||||
@ -65,15 +57,15 @@ mygroup:x:1012:
|
||||
mygroup1:x:1013:
|
||||
```
|
||||
|
||||
### 方法 1:什么是 usermod 命令?
|
||||
### 方法 1:使用 usermod 命令
|
||||
|
||||
usermod 命令修改系统帐户文件,以反映命令行上指定的更改。
|
||||
`usermod` 命令修改系统帐户文件,以反映命令行上指定的更改。
|
||||
|
||||
### 如何使用 usermod 命令将现有的用户添加到次要组或附加组?
|
||||
#### 如何使用 usermod 命令将现有的用户添加到次要组或附加组?
|
||||
|
||||
要将现有用户添加到辅助组,请使用带有 `-g` 选项和组名称的 usermod 命令。
|
||||
要将现有用户添加到次要组,请使用带有 `-a -G` 选项和组名称的 `usermod` 命令。
|
||||
|
||||
语法
|
||||
语法:
|
||||
|
||||
```
|
||||
# usermod [-G] [GroupName] [UserName]
|
||||
@ -85,18 +77,18 @@ usermod 命令修改系统帐户文件,以反映命令行上指定的更改。
|
||||
# usermod -a -G mygroup user1
|
||||
```
|
||||
|
||||
让我使用 id 命令查看输出。是的,添加成功。
|
||||
让我使用 `id` 命令查看输出。是的,添加成功。
|
||||
|
||||
```
|
||||
# id user1
|
||||
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
|
||||
```
|
||||
|
||||
### 如何使用 usermod 命令将现有的用户添加到多个次要组或附加组?
|
||||
#### 如何使用 usermod 命令将现有的用户添加到多个次要组或附加组?
|
||||
|
||||
要将现有用户添加到多个次要组中,请使用带有 `-G` 选项的 usermod 命令和带有逗号分隔的组名称。
|
||||
要将现有用户添加到多个次要组中,请使用带有 `-G` 选项的 `usermod` 命令和带有逗号分隔的组名称。
|
||||
|
||||
语法
|
||||
语法:
|
||||
|
||||
```
|
||||
# usermod [-G] [GroupName1,GroupName2] [UserName]
|
||||
@ -115,11 +107,11 @@ uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
|
||||
uid=1009(user2) gid=1009(user2) groups=1009(user2),1012(mygroup),1013(mygroup1)
|
||||
```
|
||||
|
||||
### 如何改变用户的主要组?
|
||||
#### 如何改变用户的主要组?
|
||||
|
||||
要更改用户的主要组,请使用带有 `-g` 选项和组名称的 usermod 命令。
|
||||
要更改用户的主要组,请使用带有 `-g` 选项和组名称的 `usermod` 命令。
|
||||
|
||||
语法
|
||||
语法:
|
||||
|
||||
```
|
||||
# usermod [-g] [GroupName] [UserName]
|
||||
@ -131,22 +123,22 @@ uid=1009(user2) gid=1009(user2) groups=1009(user2),1012(mygroup),1013(mygroup1)
|
||||
# usermod -g mygroup user3
|
||||
```
|
||||
|
||||
让我们看看输出。是的,已成功更改。现在,它将 mygroup 显示为 user3 主要组而不是 user3。
|
||||
让我们看看输出。是的,已成功更改。现在,`user3` 的主要组显示为 `mygroup` 而不是 `user3`。
|
||||
|
||||
```
|
||||
# id user3
|
||||
uid=1010(user3) gid=1012(mygroup) groups=1012(mygroup)
|
||||
```
|
||||
|
||||
### 方法 2:什么是 gpasswd 命令?
|
||||
### 方法 2:使用 gpasswd 命令
|
||||
|
||||
`gpasswd` 命令用于管理 `/etc/group` 和 `/etc/gshadow`。每个组都可以有管理员、成员和密码。
|
||||
|
||||
### 如何使用 gpasswd 命令将现有用户添加到次要组或者附加组?
|
||||
#### 如何使用 gpasswd 命令将现有用户添加到次要组或者附加组?
|
||||
|
||||
要将现有用户添加到次要组,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
|
||||
要将现有用户添加到次要组,请使用带有 `-M` 选项和组名称的 `gpasswd` 命令。
|
||||
|
||||
语法
|
||||
语法:
|
||||
|
||||
```
|
||||
# gpasswd [-M] [UserName] [GroupName]
|
||||
@ -158,18 +150,18 @@ uid=1010(user3) gid=1012(mygroup) groups=1012(mygroup)
|
||||
# gpasswd -M user1 mygroup
|
||||
```
|
||||
|
||||
让我使用 id 命令查看输出。是的,`user1` 已成功添加到 `mygroup` 中。
|
||||
让我使用 `id` 命令查看输出。是的,`user1` 已成功添加到 `mygroup` 中。
|
||||
|
||||
```
|
||||
# id user1
|
||||
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
|
||||
```
|
||||
|
||||
### 如何使用 gpasswd 命令添加多个用户到次要组或附加组中?
|
||||
#### 如何使用 gpasswd 命令添加多个用户到次要组或附加组中?
|
||||
|
||||
要将多个用户添加到辅助组中,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
|
||||
要将多个用户添加到次要组中,请使用带有 `-M` 选项和组名称的 `gpasswd` 命令。
|
||||
|
||||
语法
|
||||
语法:
|
||||
|
||||
```
|
||||
# gpasswd [-M] [UserName1,UserName2] [GroupName]
|
||||
@ -181,18 +173,18 @@ uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
|
||||
# gpasswd -M user2,user3 mygroup1
|
||||
```
|
||||
|
||||
让我使用 getent 命令查看输出。是的,`user2` 和 `user3` 已成功添加到 `myGroup1` 中。
|
||||
让我使用 `getent` 命令查看输出。是的,`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup1
|
||||
mygroup1:x:1013:user2,user3
|
||||
```
|
||||
|
||||
### 如何使用 gpasswd 命令从组中删除一个用户?
|
||||
#### 如何使用 gpasswd 命令从组中删除一个用户?
|
||||
|
||||
要从组中删除用户,请使用带有 `-d` 选项的 gpasswd 命令以及用户和组的名称。
|
||||
要从组中删除用户,请使用带有 `-d` 选项的 `gpasswd` 命令以及用户和组的名称。
|
||||
|
||||
语法
|
||||
语法:
|
||||
|
||||
```
|
||||
# gpasswd [-d] [UserName] [GroupName]
|
||||
@ -205,11 +197,9 @@ mygroup1:x:1013:user2,user3
|
||||
Removing user user1 from group mygroup
|
||||
```
|
||||
|
||||
### 方法 3:使用 Shell 脚本?
|
||||
### 方法 3:使用 Shell 脚本
|
||||
|
||||
基于上面的例子,我知道 `usermod` 命令没有能力将多个用户添加到组中,但是可以通过 `gpasswd` 命令完成。
|
||||
|
||||
但是,它将覆盖当前与组关联的现有用户。
|
||||
基于上面的例子,我知道 `usermod` 命令没有能力一次将多个用户添加到组中,而 `gpasswd` 命令可以做到。但是,它将覆盖当前与组关联的现有用户。
|
||||
|
||||
例如,`user1` 已经与 `mygroup` 关联。如果要使用 `gpasswd` 命令将 `user2` 和 `user3` 添加到 `mygroup` 中,它将不会按预期生效,而是对组进行修改。
|
||||
|
||||
@ -219,9 +209,9 @@ Removing user user1 from group mygroup
|
||||
|
||||
因此,我们需要编写一个小的 shell 脚本来实现这一点。
|
||||
|
||||
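这类脚本的核心思路就是逐行读取用户列表,并对每个用户执行一次 `usermod`。下面是一个最小示意(列表文件名 `user-lists.txt` 为假设;为了安全演示,这里用 `echo` 打印将要执行的命令,真实使用时去掉 `echo` 并以 root 身份运行):

```shell
# 生成示例用户列表,每个用户占一行
cat > user-lists.txt <<'EOF'
user1
user2
user3
EOF

# 逐行读取,并为每个用户构造 usermod 命令(echo 仅作演示)
while read -r user; do
  echo usermod -a -G mygroup "$user"
done < user-lists.txt
```

这样即使列表里有几十个用户,也只需维护这一个文件。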
### 如何使用 gpasswd 命令将多个用户添加到次要组或附加组?
|
||||
#### 如何使用 gpasswd 命令将多个用户添加到次要组或附加组?
|
||||
|
||||
如果要使用 gpasswd 命令将多个用户添加到次要组或附加组,请创建以下小的 shell 脚本。
|
||||
如果要使用 `gpasswd` 命令将多个用户添加到次要组或附加组,请创建以下 shell 脚本。
|
||||
|
||||
创建用户列表。每个用户应该在单独的行中。
|
||||
|
||||
@ -256,16 +246,16 @@ done
|
||||
# sh group-update.sh
|
||||
```
|
||||
|
||||
让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
|
||||
让我看看使用 `getent` 命令的输出。是的,`user1`、`user2` 和 `user3` 已成功添加到 `mygroup` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup
|
||||
mygroup:x:1012:user1,user2,user3
|
||||
```
|
||||
|
||||
### 如何使用 gpasswd 命令将多个用户添加到多个次要组或附加组?
|
||||
#### 如何使用 gpasswd 命令将多个用户添加到多个次要组或附加组?
|
||||
|
||||
如果要使用 gpasswd 命令将多个用户添加到多个次要组或附加组中,请创建以下小的 shell 脚本。
|
||||
如果要使用 `gpasswd` 命令将多个用户添加到多个次要组或附加组中,请创建以下 shell 脚本。
|
||||
|
||||
创建用户列表。每个用户应该在单独的行中。
|
||||
|
||||
@ -308,21 +298,21 @@ done
|
||||
# sh group-update-1.sh
|
||||
```
|
||||
|
||||
让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
|
||||
让我看看使用 `getent` 命令的输出。是的,`user1`、`user2` 和 `user3` 已成功添加到 `mygroup` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup
|
||||
mygroup:x:1012:user1,user2,user3
|
||||
```
|
||||
|
||||
此外,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
|
||||
此外,`user1`、`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup1
|
||||
mygroup1:x:1013:user1,user2,user3
|
||||
```
|
||||
|
||||
### 方法 4:在 Linux 中将用户添加到组中的手动方法?
|
||||
### 方法 4:在 Linux 中将用户添加到组中的手动方法
|
||||
|
||||
我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
|
||||
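手动方式的本质,是在 `/etc/group` 中对应组的那一行末尾追加用户名(多个用户之间用逗号分隔)。下面在一个本地样例文件上演示其效果(文件名与 GID 均为示例;真实系统中建议用 `vigr` 这类带锁的方式编辑,以免文件被并发修改损坏):

```shell
# 样例 /etc/group 片段
cat > group.sample <<'EOF'
mygroup:x:1012:
mygroup1:x:1013:user2
EOF

# 向空成员列表的 mygroup 行末追加 user1
sed -i 's/^mygroup:x:1012:$/mygroup:x:1012:user1/' group.sample
# 向已有成员的 mygroup1 行末以逗号分隔追加 user3
sed -i 's/^mygroup1:x:1013:user2$/mygroup1:x:1013:user2,user3/' group.sample

cat group.sample
```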
|
||||
@ -339,7 +329,7 @@ via: https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-us
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[NeverKnowsTomorrow](https://github.com/NeverKnowsTomorrow)
|
||||
校对:[校对者 ID](https://github.com/校对者 ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,38 +1,37 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zgj1024)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10765-1.html)
|
||||
[#]: subject: (HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative)
|
||||
[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
HTTPie – 替代 Curl 和 Wget 的现代 HTTP 命令行客户端
|
||||
HTTPie:替代 Curl 和 Wget 的现代 HTTP 命令行客户端
|
||||
======
|
||||
|
||||
大多数时间我们会使用 Curl 命令或是 Wget 命令下载文件或者做其他事
|
||||
大多数时间我们会使用 `curl` 命令或是 `wget` 命令下载文件或者做其他事。
|
||||
|
||||
我们以前曾写过 **[最佳命令行下载管理器][1]** 的文章。你可以点击相应的 URL 连接来浏览这些文章。
|
||||
我们以前曾写过 [最佳命令行下载管理器][1] 的文章。你可以点击相应的 URL 连接来浏览这些文章。
|
||||
|
||||
* **[aria2 – Linux 下的多协议命令行下载工具][2]**
|
||||
* **[Axel – Linux 下的轻量级命令行下载加速器][3]**
|
||||
* **[Wget – Linux 下的标准命令行下载工具][4]**
|
||||
* **[curl – Linux 下的实用的命令行下载工具][5]**
|
||||
* [aria2 – Linux 下的多协议命令行下载工具][2]
|
||||
* [Axel – Linux 下的轻量级命令行下载加速器][3]
|
||||
* [Wget – Linux 下的标准命令行下载工具][4]
|
||||
* [curl – Linux 下的实用的命令行下载工具][5]
|
||||
|
||||
今天我们将讨论同样的话题。这个实用程序名为 HTTPie。
|
||||
|
||||
今天我们将讨论同样的话题。这个实用程序名为 HTTPie。
|
||||
|
||||
它是现代命令行 http 客户端,也是curl和wget命令的最佳替代品。
|
||||
它是现代命令行 http 客户端,也是 `curl` 和 `wget` 命令的最佳替代品。
|
||||
|
||||
### 什么是 HTTPie?
|
||||
|
||||
HTTPie (发音是 aitch-tee-tee-pie) 是一个 Http 命令行客户端。
|
||||
HTTPie(发音是 aitch-tee-tee-pie)是一个 HTTP 命令行客户端。
|
||||
|
||||
httpie 工具是现代命令的 HTTP 客户端,它能让命令行界面与 Web 服务进行交互。
|
||||
HTTPie 工具是现代的 HTTP 命令行客户端,它能通过命令行界面与 Web 服务进行交互。
|
||||
|
||||
他提供一个简单 Http 命令,运行使用简单而自然的语法发送任意的 HTTP 请求,并会显示彩色的输出。
|
||||
它提供一个简单的 `http` 命令,允许使用简单而自然的语法发送任意的 HTTP 请求,并会显示彩色的输出。
|
||||
|
||||
HTTPie 能用于测试、debugging及与 HTTP 服务器交互。
|
||||
HTTPie 能用于测试、调试及与 HTTP 服务器交互。
|
||||
|
||||
### 主要特点
|
||||
|
||||
@ -40,50 +39,52 @@ HTTPie 能用于测试、debugging及与 HTTP 服务器交互。
|
||||
* 格式化的及彩色化的终端输出
|
||||
* 内置 JSON 支持
|
||||
* 表单和文件上传
|
||||
* HTTPS, 代理, 和认证
|
||||
* HTTPS、代理和认证
|
||||
* 任意请求数据
|
||||
* 自定义头部
|
||||
* 持久化会话(sessions)
|
||||
* 类似 wget 的下载
|
||||
* 持久化会话
|
||||
* 类似 `wget` 的下载
|
||||
* 支持 Python 2.7 和 3.x
|
||||
|
||||
### 在 Linux 下如何安装 HTTPie
|
||||
|
||||
大部分 Linux 发行版都提供了系统包管理器,可以用它来安装。
|
||||
|
||||
**`Fedora`** 系统,使用 **[DNF 命令][6]** 来安装 httpie
|
||||
Fedora 系统,使用 [DNF 命令][6] 来安装 httpie:
|
||||
|
||||
```
|
||||
$ sudo dnf install httpie
|
||||
```
|
||||
|
||||
**`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][7]** 或 **[APT 命令][8]** 来安装 httpie。
|
||||
Debian/Ubuntu 系统,使用 [APT-GET 命令][7] 或 [APT 命令][8] 来安装 HTTPie。
|
||||
|
||||
```
|
||||
$ sudo apt install httpie
|
||||
```
|
||||
|
||||
基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][9]** 来安装 httpie。
|
||||
基于 Arch Linux 的系统,使用 [Pacman 命令][9] 来安装 HTTPie。
|
||||
|
||||
```
|
||||
$ sudo pacman -S httpie
|
||||
```
|
||||
|
||||
**`RHEL/CentOS`** 的系统, 使用 **[YUM 命令][10]** 来安装 httpie。
|
||||
RHEL/CentOS 的系统,使用 [YUM 命令][10] 来安装 HTTPie。
|
||||
|
||||
```
|
||||
$ sudo yum install httpie
|
||||
```
|
||||
|
||||
**`openSUSE Leap`** 系统, 使用 **[Zypper 命令][11]** 来安装 httpie。
|
||||
openSUSE Leap 系统,使用 [Zypper 命令][11] 来安装 HTTPie。
|
||||
|
||||
```
|
||||
$ sudo zypper install httpie
|
||||
```
|
||||
|
||||
### 1) 如何使用 HTTPie 请求URL?
|
||||
### 用法
|
||||
|
||||
httpie 的基本用法是将网站的 URL 作为参数。
|
||||
#### 如何使用 HTTPie 请求 URL?
|
||||
|
||||
HTTPie 的基本用法是将网站的 URL 作为参数。
|
||||
|
||||
```
|
||||
# http 2daygeek.com
|
||||
@ -99,9 +100,9 @@ Transfer-Encoding: chunked
|
||||
Vary: Accept-Encoding
|
||||
```
|
||||
|
||||
### 2) 如何使用 HTTPie 下载文件
|
||||
#### 如何使用 HTTPie 下载文件
|
||||
|
||||
你可以使用带 `--download` 参数的 HTTPie 命令下载文件。类似于 wget 命令。
|
||||
你可以使用带 `--download` 参数的 HTTPie 命令下载文件。类似于 `wget` 命令。
|
||||
|
||||
```
|
||||
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
|
||||
@ -148,10 +149,11 @@ Vary: Accept-Encoding
|
||||
Downloading 31.31 kB to "Anbox-1.png"
|
||||
Done. 31.31 kB in 0.01551s (1.97 MB/s)
|
||||
```
|
||||
如何使用HTTPie恢复部分下载?
|
||||
### 3) 如何使用 HTTPie 恢复部分下载?
|
||||
|
||||
#### 如何使用 HTTPie 恢复部分下载?
|
||||
|
||||
你可以使用带 `-c` 参数的 HTTPie 继续下载。
|
||||
|
||||
```
|
||||
# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
|
||||
HTTP/1.1 206 Partial Content
|
||||
@ -169,24 +171,24 @@ Downloading 100.00 MB to "100MB.bin"
|
||||
| 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
|
||||
```
|
||||
|
||||
你根据下面的输出验证是否同一个文件
|
||||
你可以根据下面的输出验证是否是同一个文件:
|
||||
|
||||
```
|
||||
[email protected]:/var/log# ls -lhtr 100MB.bin
|
||||
-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
|
||||
```
|
||||
|
||||
### 5) 如何使用 HTTPie 上传文件?
|
||||
#### 如何使用 HTTPie 上传文件?
|
||||
|
||||
你可以通过使用带有 `小于号 "<"` 的 HTTPie 命令上传文件
|
||||
You can upload a file using HTTPie with the `less-than symbol "<"` symbol.
|
||||
你可以通过使用带有小于号 `<` 的 HTTPie 命令上传文件。
|
||||
|
||||
```
|
||||
$ http https://transfer.sh < Anbox-1.png
|
||||
```
|
||||
|
||||
### 6) 如何使用带有重定向符号">" 的 HTTPie 下载文件?
|
||||
#### 如何使用带有重定向符号 > 下载文件?
|
||||
|
||||
你可以使用带有 `重定向 ">"` 符号的 HTTPie 命令下载文件。
|
||||
你可以使用带有重定向 `>` 符号的 HTTPie 命令下载文件。
|
||||
|
||||
```
|
||||
# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png
|
||||
@ -195,7 +197,7 @@ $ http https://transfer.sh < Anbox-1.png
|
||||
-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
|
||||
```
|
||||
|
||||
### 7) 发送一个 HTTP GET 请求?
|
||||
#### 发送一个 HTTP GET 请求?
|
||||
|
||||
您可以在请求中发送 HTTP GET 方法。GET 方法会使用给定的 URI,从给定服务器检索信息。
|
||||
|
||||
@ -214,7 +216,7 @@ Transfer-Encoding: chunked
|
||||
Vary: Accept-Encoding
|
||||
```
|
||||
|
||||
### 8) 提交表单?
|
||||
#### 提交表单?
|
||||
|
||||
使用以下格式提交表单。POST 请求用于向服务器发送数据,例如客户信息、文件上传等,就像使用 HTML 表单一样。
|
||||
|
||||
@ -261,24 +263,24 @@ Server: Apache/2.4.29 (Ubuntu)
|
||||
Vary: Accept-Encoding
|
||||
```
|
||||
|
||||
### 9) HTTP 认证?
|
||||
#### HTTP 认证?
|
||||
|
||||
当前支持的身份验证认证方案是基本认证(Basic)和摘要验证(Digest)
|
||||
The currently supported authentication schemes are Basic and Digest
|
||||
当前支持的身份验证认证方案是基本认证(Basic)和摘要验证(Digest)。
|
||||
|
||||
基本认证
|
||||
基本认证:
|
||||
|
||||
```
|
||||
$ http -a username:password example.org
|
||||
```
|
||||
|
||||
摘要验证
|
||||
摘要验证:
|
||||
|
||||
```
|
||||
$ http -A digest -a username:password example.org
|
||||
```
|
||||
|
||||
提示输入密码
|
||||
提示输入密码:
|
||||
|
||||
```
|
||||
$ http -a username example.org
|
||||
```
|
||||
@ -289,8 +291,8 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/zgj1024)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[zgj1024](https://github.com/zgj1024)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -306,4 +308,4 @@ via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/
|
||||
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
||||
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -0,0 +1,130 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco Talos details exceptionally dangerous DNS hijacking attack)
|
||||
[#]: via: (https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco Talos details exceptionally dangerous DNS hijacking attack
|
||||
======
|
||||
Cisco Talos says state-sponsored attackers are battering DNS to gain access to sensitive networks and systems
|
||||
![Peshkova / Getty][1]
|
||||
|
||||
Security experts at Cisco Talos have released a [report detailing][2] what it calls the “first known case of a domain name registry organization that was compromised for cyber espionage operations.”
|
||||
|
||||
Talos calls ongoing cyber threat campaign “Sea Turtle” and said that state-sponsored attackers are abusing DNS to harvest credentials to gain access to sensitive networks and systems in a way that victims are unable to detect, which displays unique knowledge on how to manipulate DNS, Talos stated.
|
||||
|
||||
**More about DNS:**
|
||||
|
||||
* [DNS in the cloud: Why and why not][3]
|
||||
* [DNS over HTTPS seeks to make internet use more private][4]
|
||||
* [How to protect your infrastructure from DNS cache poisoning][5]
|
||||
* [ICANN housecleaning revokes old DNS security key][6]
|
||||
|
||||
|
||||
|
||||
By obtaining control of victims’ DNS, the attackers can change or falsify any data on the Internet, illicitly modify DNS name records to point users to actor-controlled servers; users visiting those sites would never know, Talos reported.
|
||||
|
||||
DNS, routinely known as the Internet’s phonebook, is part of the global internet infrastructure that translates between familiar names and the numbers computers need to access a website or send an email.
|
||||
|
||||
### Threat to DNS could spread
|
||||
|
||||
At this point Talos says Sea Turtle isn't compromising organizations in the U.S.
|
||||
|
||||
“While this incident is limited to targeting primarily national security organizations in the Middle East and North Africa, and we do not want to overstate the consequences of this specific campaign, we are concerned that the success of this operation will lead to actors more broadly attacking the global DNS system,” Talos stated.
|
||||
|
||||
Talos reports that the ongoing operation likely began as early as January 2017 and has continued through the first quarter of 2019. “Our investigation revealed that approximately 40 different organizations across 13 different countries were compromised during this campaign,” Talos stated. “We assess with high confidence that this activity is being carried out by an advanced, state-sponsored actor that seeks to obtain persistent access to sensitive networks and systems.”
|
||||
|
||||
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][7] ]**
|
||||
|
||||
Talos says the attackers directing the Sea Turtle campaign show signs of being highly sophisticated and have continued their attacks despite public reports of their activities. In most cases, threat actors typically stop or slow down their activities once their campaigns are publicly revealed, suggesting the Sea Turtle actors are unusually brazen and may be difficult to deter going forward, Talos stated.
|
||||
|
||||
In January the Department of Homeland Security (DHS) [issued an alert][8] about this activity, warning that an attacker could redirect user traffic and obtain valid encryption certificates for an organization’s domain names.
|
||||
|
||||
At that time the DHS’s [Cybersecurity and Infrastructure Security Agency][9] said in its [Emergency Directive][9] that it was tracking a series of incidents targeting DNS infrastructure. CISA wrote that it “is aware of multiple executive branch agency domains that were impacted by the tampering campaign and has notified the agencies that maintain them.”
|
||||
|
||||
### DNS hijacking
|
||||
|
||||
CISA said that attackers have managed to intercept and redirect web and mail traffic and could target other networked services. The agency said the attacks start with compromising user credentials of an account that can make changes to DNS records. Then the attacker alters DNS records, like Address, Mail Exchanger, or Name Server records, replacing the legitimate address of the services with an address the attacker controls.
|
||||
|
||||
To achieve their nefarious goals, Talos stated the Sea Turtle accomplices:
|
||||
|
||||
* Use DNS hijacking through the use of actor-controlled name servers.
|
||||
* Are aggressive in their pursuit targeting DNS registries and a number of registrars, including those that manage country-code top-level domains (ccTLD).
|
||||
|
||||
|
||||
  * Use Let’s Encrypt, Comodo, Sectigo, and self-signed certificates in their man-in-the-middle (MitM) servers to gain the initial round of credentials.
|
||||
|
||||
|
||||
* Steal victim organization’s legitimate SSL certificate and use it on actor-controlled servers.
|
||||
|
||||
|
||||
|
||||
Such actions also distinguish Sea Turtle from an earlier DNS exploit known as DNSpionage, which [Talos reported][10] on in November 2018.
|
||||
|
||||
Talos noted “with high confidence” that these operations are distinctly different and independent from the operations performed by [DNSpionage.][11]
|
||||
|
||||
In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers.
|
||||
|
||||
In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains. These certificates provide X.509 certificates for [Transport Layer Security (TLS)][12] free of charge to the user, Talos said.
|
||||
|
||||
The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization. Talos research further shows the following known exploits of Sea Turtle include:
|
||||
|
||||
* CVE-2009-1151: PHP code injection vulnerability affecting phpMyAdmin
|
||||
* CVE-2014-6271: RCE affecting GNU bash system, specifically the SMTP (this was part of the Shellshock CVEs)
|
||||
* CVE-2017-3881: RCE by unauthenticated user with elevated privileges Cisco switches
|
||||
* CVE-2017-6736: Remote Code Exploit (RCE) for Cisco integrated Service Router 2811
|
||||
* CVE-2017-12617: RCE affecting Apache web servers running Tomcat
|
||||
* CVE-2018-0296: Directory traversal allowing unauthorized access to Cisco Adaptive Security Appliances (ASAs) and firewalls
|
||||
* CVE-2018-7600: RCE for Website built with Drupal, aka “Drupalgeddon”
|
||||
|
||||
|
||||
|
||||
“As with any initial access involving a sophisticated actor, we believe this list of CVEs to be incomplete,” Talos stated. “The actor in question can leverage known vulnerabilities as they encounter a new threat surface. This list only represents the observed behavior of the actor, not their complete capabilities.”
|
||||
|
||||
Talos says that the Sea Turtle campaign continues to be highly successful for several reasons. “First, the actors employ a unique approach to gain access to the targeted networks. Most traditional security products such as IDS and IPS systems are not designed to monitor and log DNS requests,” Talos stated. “The threat actors were able to achieve this level of success because the DNS domain space system added security into the equation as an afterthought. Had more ccTLDs implemented security features such as registrar locks, attackers would be unable to redirect the targeted domains.”
|
||||
|
||||
Talos said the attackers also used previously undisclosed techniques such as certificate impersonation. “This technique was successful in part because the SSL certificates were created to provide confidentiality, not integrity. The attackers stole organizations’ SSL certificates associated with security appliances such as [Cisco's Adaptive Security Appliance] to obtain VPN credentials, allowing the actors to gain access to the targeted network, and have long-term persistent access,” Talos stated.
|
||||
|
||||
### Cisco Talos DNS attack mitigation strategy
|
||||
|
||||
To protect against Sea Turtle, Cisco recommends:
|
||||
|
||||
* Use a registry lock service, which will require an out-of-band message before any changes can occur to an organization's DNS record.
|
||||
* If your registrar does not offer a registry-lock service, Talos recommends implementing multi-factor authentication, such as DUO, to access your organization's DNS records.
|
||||
* If you suspect you were targeted by this type of intrusion, Talos recommends instituting a network-wide password reset, preferably from a computer on a trusted network.
|
||||
* Apply patches, especially on internet-facing machines. Network administrators can monitor passive DNS records on their domains to check for abnormalities.
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html#tk.rss_all

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/man-in-boat-surrounded-by-sharks_risk_fear_decision_attack_threat_by-peshkova-getty-100786972-large.jpg
[2]: https://blog.talosintelligence.com/2019/04/seaturtle.html
[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[8]: https://www.networkworld.com/article/3336201/batten-down-the-dns-hatches-as-attackers-strike-feds.html
[9]: https://cyber.dhs.gov/ed/19-01/
[10]: https://blog.talosintelligence.com/2018/11/dnspionage-campaign-targets-middle-east.html
[11]: https://krebsonsecurity.com/tag/dnspionage/
[12]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
[13]: https://www.facebook.com/NetworkWorld/
[14]: https://www.linkedin.com/company/network-world
@ -0,0 +1,69 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Clearing up confusion between edge and cloud)
[#]: via: (https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Clearing up confusion between edge and cloud
======
The benefits of edge computing are not just hype; however, that doesn’t mean you should throw cloud computing initiatives to the wind.
![iStock][1]

Edge computing and cloud computing are sometimes discussed as if they’re mutually exclusive approaches to network infrastructure. While they may function in different ways, utilizing one does not preclude the use of the other.

Indeed, [Futurum Research][2] found that, among companies that have deployed edge projects, only 15% intend to separate these efforts from their cloud computing initiatives — largely for security or compartmentalization reasons.

So then, what’s the difference, and how do edge and cloud work together?

**Location, location, location**

Moving data and processing to the cloud, as opposed to on-premises data centers, has enabled the business to move faster, more efficiently, less expensively — and in many cases, more securely.

Yet cloud computing is not without challenges, particularly:

  * Users will abandon a graphics-heavy website if it doesn’t load quickly. So, imagine the lag for compute-heavy processing associated with artificial intelligence or machine learning functions.

  * The strength of network connectivity is crucial for large data sets. As enterprises increasingly generate data, particularly with the adoption of Internet of Things (IoT), traditional cloud connections will be insufficient.

To make up for the lack of speed and connectivity with cloud, processing for mission-critical applications will need to occur closer to the data source. Maybe that’s a robot on the factory floor, digital signage at a retail store, or an MRI machine in a hospital. That’s edge computing, which reduces the distance the data must travel and thereby boosts the performance and reliability of applications and services.

**One doesn’t supersede the other**

That said, the benefits gained by edge computing don’t negate the need for cloud. In many cases, IT will now become a decision-maker in terms of best usage for each. For example, edge might make sense for devices running processing-power-hungry apps such as IoT, artificial intelligence, and machine learning. And cloud will work for apps where time isn’t necessarily of the essence, like inventory or big-data projects.
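
As a toy illustration of that triage (entirely my own, including the made-up 20 ms threshold), workload placement could be decided on latency tolerance alone:

```python
# Hypothetical triage (not from the article): send latency-critical
# workloads to the edge and everything else to the cloud.
def place_workload(name, max_latency_ms, edge_threshold_ms=20):
    """Return (workload, target) based on how much latency it can tolerate."""
    target = "edge" if max_latency_ms <= edge_threshold_ms else "cloud"
    return (name, target)

print(place_workload("factory-robot-control", 5))      # ('factory-robot-control', 'edge')
print(place_workload("inventory-batch-report", 5000))  # ('inventory-batch-report', 'cloud')
```

Real placement decisions would also weigh bandwidth, data gravity, and security, as the article notes.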

> “By being able to triage the types of data processing on the edge versus that heading to the cloud, we can keep both systems running smoothly – keeping our customers and employees safe and happy,” [writes Daniel Newman][3], principal analyst for Futurum Research.

And in reality, edge will require cloud. “To enable digital transformation, you have to build out the edge computing side and connect it with the cloud,” [Tony Antoun][4], senior vice president of edge and digital at GE Digital, told _Automation World_. “It’s a journey from the edge to the cloud and back, and the cycle keeps continuing. You need both to enrich and boost the business and take advantage of different points within this virtual lifecycle.”

**Ensuring resiliency of cloud and edge**

Both edge and cloud computing require careful consideration of the underlying processing power. Connectivity and availability, no matter the application, are always critical measures.

But especially for the edge, it will be important to have a resilient architecture. Companies should focus on ensuring security, redundancy, connectivity, and remote management capabilities.

Discover how your edge and cloud computing environments can coexist at [APC.com][5].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389364/clearing-up-confusion-between-edge-and-cloud.html#tk.rss_all

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-612507606-100793995-large.jpg
[2]: https://futurumresearch.com/edge-computing-from-edge-to-enterprise/
[3]: https://futurumresearch.com/edge-computing-data-centers/
[4]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
@ -0,0 +1,58 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Startup MemVerge combines DRAM and Optane into massive memory pool)
[#]: via: (https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Startup MemVerge combines DRAM and Optane into massive memory pool
======
MemVerge bridges two technologies that are already a bridge.
![monsitj / Getty Images][1]

A startup called MemVerge has announced software to combine regular DRAM with Intel’s Optane DIMM persistent memory into a single clustered storage pool, without requiring any changes to applications.

MemVerge has been working with Intel in developing this new hardware platform for close to two years. It offers what it calls a Memory-Converged Infrastructure (MCI) to allow existing apps to use Optane DC persistent memory. It's architected to integrate seamlessly with existing applications.

**[ Read also: [Mass data fragmentation requires a storage rethink][2] ]**

Optane memory is designed to sit between high-speed memory and [solid-state drives][3] (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but SSD persistence. With Intel’s new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor.

Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as the SSD cache but apps don’t natively support it. They need to be tweaked to function properly in Optane memory.

As it was explained to me, apps aren’t designed for persistent storage because the data is already in memory on powerup rather than having to load it from storage. So, the app has to know memory doesn’t go away and that it does not need to shuffle data back and forth between storage and memory. Therefore, apps natively don’t work in persistent memory.

### Why didn't Intel think of this?

All of which really begs a question I can’t get answered, at least not immediately: Why didn’t Intel think of this when it created Optane in the first place?

MemVerge has what it calls Distributed Memory Objects (DMO) hypervisor technology to provide a logical convergence layer to run data-intensive workloads at memory speed with guaranteed data consistency across multiple systems. This allows Optane memory to process and derive insights from the enormous amounts of data in real time.

That’s because MemVerge’s technology makes random access as fast as sequential access. Normally, random access is slower than sequential because of all the jumping around with random access vs. reading one sequential file. But MemVerge can handle many small files as fast as it handles one large file.

MemVerge itself is actually software, with a single API for both DRAM and Optane. It’s also available via a hyperconverged server appliance that comes with 2 Cascade Lake processors, up to 512 GB DRAM, 6TB of Optane memory, and 360TB of NVMe physical storage capacity.
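
MemVerge hasn’t published that API, so here is a purely hypothetical sketch of what a single allocation call spanning both tiers could look like; every name and behavior below is invented for illustration, using the appliance’s tier sizes from the article:

```python
# Purely hypothetical sketch (MemVerge's API is not public): one allocate()
# call that transparently picks a DRAM or Optane tier, mimicking the idea
# of a single API for both memory types.
class MemoryPool:
    def __init__(self, dram_gb, optane_gb):
        self.free = {"dram": dram_gb, "optane": optane_gb}

    def allocate(self, size_gb):
        # Prefer fast DRAM; spill to the larger Optane tier when DRAM is full.
        tier = "dram" if self.free["dram"] >= size_gb else "optane"
        if self.free[tier] < size_gb:
            raise MemoryError("pool exhausted")
        self.free[tier] -= size_gb
        return tier

pool = MemoryPool(dram_gb=512, optane_gb=6144)  # appliance sizes from the article
print(pool.allocate(256))   # prints dram
print(pool.allocate(300))   # prints optane (only 256 GB of DRAM remains)
```

A real implementation would of course also migrate hot data between tiers rather than just placing it once.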

However, all of this is still vapor. MemVerge doesn’t expect to ship a beta product until at least June.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389358/startup-memverge-combines-dram-and-optane-into-massive-memory-pool.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/big_data_center_server_racks_storage_binary_analytics_by_monsitj_gettyimages-951389152_3x2-100787358-large.jpg
[2]: https://www.networkworld.com/article/3323580/mass-data-fragmentation-requires-a-storage-rethink.html
[3]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -0,0 +1,119 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Want to the know future of IoT? Ask the developers!)
[#]: via: (https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

Want to the know future of IoT? Ask the developers!
======
A new survey of IoT developers reveals that connectivity, performance, and standards are growing areas of concern as IoT projects hit production.
![Avgust01 / Getty Images][1]

It may be a cliché that software developers rule the world, but if you want to know the future of an important technology, it pays to look at what the developers are doing. With that in mind, there are some real, on-the-ground insights for the entire internet of things (IoT) community to be gained in a new [survey of more than 1,700 IoT developers][2] (pdf) conducted by the [Eclipse Foundation][3].

### IoT connectivity concerns

Perhaps not surprisingly, security topped the list of concerns, easily outpacing other IoT worries. But that's where things begin to get interesting. More than a fifth (21%) of IoT developers cited connectivity as a challenge, followed by data collection and analysis (19%), performance (18%), privacy (18%), and standards (16%).

Connectivity rose to second place after being the number three IoT concern for developers last year. Worries over security and data collection and analysis, meanwhile, actually declined slightly year over year. (Concerns over performance, privacy, and standards also increased significantly from last year.)

**[ Learn more: [Download a PDF bundle of five essential articles about IoT in the enterprise][4] ]**

“If you look at the list of developers’ top concerns with IoT in the survey,” said [Mike Milinkovich][5], executive director of the Eclipse Foundation via email, “I think connectivity, performance, and standards stand out — those are speaking to the fact that the IoT projects are getting real, that they’re getting out of sandboxes and into production.”

“With connectivity in IoT,” Milinkovich continued, “everything seems straightforward until you have a sensor in a corner somewhere — narrowband or broadband — and physical constraints make it hard to connect.”

He also cited a proliferation of incompatible technologies that is driving developer concerns over connectivity.

![][6]

### IoT standards and interoperability

Milinkovich also addressed one of [my personal IoT bugaboos: interoperability][7]. “Standards is a proxy for interoperability” among products from different vendors, he explained, which is an “elusive goal” in industrial IoT (IIoT).

**[ [Learn Java from beginning concepts to advanced design patterns in this comprehensive 12-part course!][8] ]**

“IIoT is about breaking down the proprietary silos and re-tooling the infrastructure that’s been in our factories and logistics for many years using OSS standards and implementations — standard sets of protocols as opposed to vendor-specific protocols,” he said.

That becomes a big issue when you’re deploying applications in the field and different manufacturers are using different protocols or non-standard extensions to existing protocols and the machines can’t talk to each other.

**[ Also read: [Interoperability is the key to IoT success][7] ]**

“This ties back to the requirement of not just having open standards, but more robust implementations of those standards in open source stacks,” Milinkovich said. “To keep maturing, the market needs not just standards, but out-of-the-box interoperability between devices.”

“Performance is another production-grade concern,” he said. “When you’re in development, you think you know the bottlenecks, but then you discover the real-world issues when you push to production.”

### Cloudy developments for IoT

The survey also revealed that in some ways, IoT is very much aligned with the larger technology community. For example, IoT use of public and hybrid cloud architectures continues to grow. Amazon Web Services (AWS) (34%), Microsoft Azure (23%), and Google Cloud Platform (20%) are the leading IoT cloud providers, just as they are throughout the industry. If anything, AWS’ lead may be smaller in the IoT space than it is in other areas, though reliable cloud-provider market share figures are notoriously hard to come by.

But Milinkovich sees industrial IoT as “a massive opportunity for hybrid cloud” because many industrial IoT users are very concerned about minimizing latency with their factory data, what he calls “their gold.” He sees factories moving towards hybrid cloud environments, leveraging “modern infrastructure technology like Kubernetes, and building around open protocols like HTTP and MQTT while getting rid of the older proprietary protocols.”

### How IoT development is different

In some ways, the IoT development world doesn’t seem much different than wider software development. For example, the top IoT programming languages mirror [the popularity of those languages][9] over all, with C and Java ruling the roost. (C led the way on constrained devices, while Java was the top choice for gateway and edge nodes, as well as the IoT cloud.)

![][10]

But Milinkovich noted that when developing for embedded or constrained devices, the programmer’s interface to a device could be through any number of esoteric hardware connectors.

“You’re doing development using emulators and simulators, and it’s an inherently different and more complex interaction between your dev environment and the target for your application,” he said. “Sometimes hardware and software are developed in tandem, which makes it even more complicated.”

For example, he explained, building an IoT solution may bring in web developers working on front ends using JavaScript and Angular, while backend cloud developers control cloud infrastructure and embedded developers focus on building software to run on constrained devices.

No wonder IoT developers have so many things to worry about.

**More about IoT:**

  * [What is the IoT? How the internet of things works][11]
  * [What is edge computing and how it’s changing the network][12]
  * [Most powerful Internet of Things companies][13]
  * [10 Hot IoT startups to watch][14]
  * [The 6 ways to make money in IoT][15]
  * [What is digital twin technology? [and why it matters]][16]
  * [Blockchain, service-centric networking key to IoT success][17]
  * [Getting grounded in IoT networking and security][4]
  * [Building IoT-ready networks must become a priority][18]
  * [What is the Industrial IoT? [And why the stakes are so high]][19]

Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389877/want-to-the-know-future-of-iot-ask-the-developers.html#tk.rss_all

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/iot_internet_of_things_mobile_connections_by_avgust01_gettyimages-1055659210_2400x1600-100788447-large.jpg
[2]: https://drive.google.com/file/d/17WEobD5Etfw5JnoKC1g4IME_XCtPNGGc/view
[3]: https://www.eclipse.org/
[4]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
[5]: https://blogs.eclipse.org/post/mike-milinkovich/measuring-industrial-iot%E2%80%99s-evolution
[6]: https://images.idgesg.net/images/article/2019/04/top-developer-concerns-2019-eclipse-foundation-100793974-large.jpg
[7]: https://www.networkworld.com/article/3204529/interoperability-is-the-key-to-iot-success.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fjava
[9]: https://blog.newrelic.com/technology/popular-programming-languages-2018/
[10]: https://images.idgesg.net/images/article/2019/04/top-iot-programming-languages-eclipse-foundation-100793973-large.jpg
[11]: https://www.networkworld.com/article/3207535/internet-of-things/what-is-the-iot-how-the-internet-of-things-works.html
[12]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html
[13]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
[14]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
[15]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
[16]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
[17]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
[18]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
[19]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world
@ -0,0 +1,64 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: ('Fiber-in-air' 5G network research gets funding)
[#]: via: (https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

'Fiber-in-air' 5G network research gets funding
======
A consortium of tech companies and universities plan to aggressively investigate the exploitation of D-Band to develop a new variant of 5G infrastructure.
![Peshkova / Getty Images][1]

Wireless transmission at data rates of around 45 Gbps could one day be commonplace, some engineers say. “Fiber-in-air” is how the latest variant of 5G infrastructure is being described. To get there, a Britain-funded consortium of chip makers, universities, and others intends to aggressively investigate the exploitation of D-Band. That part of the radio spectrum is at 151-174.8 GHz in millimeter wavelengths (mm-wave) and hasn’t been used before.

The researchers intend to do it by riffing on a now roughly 70-year-old gun-like electron-sending device that can trace its roots back through the annals of radio history: The Traveling Wave Tube, or TWT, an electron gun-magnet combo that was used in the development of television and still brings space images back to Earth.

**[ Also read: [The time of 5G is almost here][2] ]**

D-Band, the spectrum the researchers want to use, has the advantage that it’s wide, so theoretically it should be good for fast, copious data rates. The problem with it though, and the reason it hasn’t thus far been used, is that it’s subject to monkey-wrenching from atmospheric conditions such as rain, explains IQE, a semiconductor wafer and materials producer involved in the project, in a [press release][3]. The team says attenuation is fixable, though. Their solution is the now-aging TWTs.
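
A quick back-of-the-envelope check (my own arithmetic, not from the article) shows why such a wide band makes data rates around 45 Gbps plausible: Shannon’s capacity formula C = B·log2(1 + SNR) reaches that rate over 20 Gz of spectrum at quite a modest signal-to-noise ratio.

```python
import math

# Shannon capacity sketch: C = B * log2(1 + SNR).
# Assumes 20 GHz of usable D-Band spectrum (two 10 GHz bands) and a
# modest 6 dB SNR; both numbers are illustrative, not from the project.
def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth = 20e9               # Hz
snr = 10 ** (6.0 / 10)         # 6 dB -> ~3.98 linear
gbps = shannon_capacity_bps(bandwidth, snr) / 1e9
print(f"{gbps:.1f} Gbps")      # prints 46.3 Gbps
```

Rain attenuation eats into the usable SNR, which is where the high-power tubes described below come in.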

The group, which includes BT, Filtronic, Glasgow University, Intel, Nokia Bell Labs, Optocap, and Teledyne e2v, has secured funding of the equivalent of $1.12 million USD from the U.K.’s [Engineering and Physical Sciences Research Council (EPSRC)][4]. That’s the principal public funding body for engineering science research there.

### Tapping the power of TWTs

The DLINK system, as the team calls it, will use a high-power vacuum TWT with a special, newly developed tunneling diode and a modulator. Two bands of 10 GHz each will deliver the throughput, [explains Lancaster University on its website][5]. The tubes are, in fact, special amplifiers that produce 10 Watts. That’s 10 times what an equivalent solid-state solution would likely produce at the same spot in the band, they say. Energy is basically sent from the electron beam to an electric field generated by the input signal.

Despite TWTs being around for eons, “no D-band TWTs are available in the market.” The development of one is key to these fiber-in-air speeds, the researchers say.

They will include “unprecedented data rate and transmission distance,” IQE writes.

The TWT device, although used extensively in space wireless communications since its invention in the 1950s, is overlooked as a significant contributor to global communications systems, say a group of French researchers working separately from this project, who recently argued that TWTs should be given more recognition.

TWTs are “the unsung heroes of space exploration,” the Aix-Marseille Université researchers say in [an article on publisher Springer’s website][6]. Springer is promoting the group's 2019-published [paper][7] in the European Physical Journal H in which they delve into the history of the simple electron gun and magnet device.

“Its role in the history of wireless communications and in the space conquest is significant, but largely ignored,” they write in their paper.

They will be pleased to hear it maybe isn’t going away anytime soon.

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389881/extreme-5g-network-research-gets-funding.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/abstract_data_coding_matrix_structure_network_connections_by_peshkova_gettyimages-897683944_2400x1600-100788487-large.jpg
[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html
[3]: https://www.iqep.com/media/2019/03/iqe-partners-in-key-wireless-communications-project-for-5g-infrastructure-(1)/
[4]: https://epsrc.ukri.org/
[5]: http://wp.lancs.ac.uk/dlink/
[6]: https://www.springer.com/gp/about-springer/media/research-news/all-english-research-news/traveling-wave-tubes--the-unsung-heroes-of-space-exploration/16578434
[7]: https://link.springer.com/article/10.1140%2Fepjh%2Fe2018-90023-1
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -0,0 +1,76 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes)
[#]: via: (https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco warns WLAN controller, 9000 series router and IOS/XE users to patch urgent security holes
======
Cisco says unpatched vulnerabilities could lead to DoS attacks, arbitrary code execution, and take-over of devices.
![Woolzian / Getty Images][1]

Cisco this week issued 31 security advisories but directed customer attention to “critical” patches for its IOS and IOS XE Software Cluster Management and IOS software for Cisco ASR 9000 Series routers. A number of other vulnerabilities also need attention if customers are running Cisco Wireless LAN Controllers.

The [first critical patch][2] has to do with a vulnerability in the Cisco Cluster Management Protocol (CMP) processing code in Cisco IOS and Cisco IOS XE Software that could allow an unauthenticated, remote attacker to send malformed CMP-specific Telnet options while establishing a Telnet session with an affected Cisco device configured to accept Telnet connections. An exploit could allow an attacker to execute arbitrary code and obtain full control of the device or cause a reload of the affected device, Cisco said.

**[ Also see [What to consider when deploying a next generation firewall][3]. | Get regularly scheduled insights by [signing up for Network World newsletters][4]. ]**

The problem has a Common Vulnerability Scoring System number of 9.8 out of 10.

According to Cisco, the Cluster Management Protocol utilizes Telnet internally as a signaling and command protocol between cluster members. The vulnerability is due to the combination of two factors:

  * The failure to restrict the use of CMP-specific Telnet options only to internal, local communications between cluster members, instead accepting and processing such options over any Telnet connection to an affected device
  * The incorrect processing of malformed CMP-specific Telnet options

Cisco says the vulnerability can be exploited during Telnet session negotiation over either IPv4 or IPv6. This vulnerability can only be exploited through a Telnet session established _to_ the device; sending the malformed options on Telnet sessions _through_ the device will not trigger the vulnerability.

The company says there are no workarounds for this problem, but disabling Telnet as an allowed protocol for incoming connections would eliminate the exploit vector. Cisco recommends disabling Telnet and using SSH instead. Information on how to do both can be found on the [Cisco Guide to Harden Cisco IOS Devices][5]. For patch information [go here][6].
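
For reference, restricting the vty lines to SSH on an IOS device usually looks roughly like the fragment below; this is a generic sketch, so verify the exact commands against the hardening guide for your platform and software release:

```
! Generic IOS sketch (check the Cisco hardening guide for your release):
! require SSH version 2 and refuse inbound Telnet on the vty lines.
ip ssh version 2
!
line vty 0 4
 transport input ssh
 login local
```

`login local` assumes locally configured user accounts; sites using AAA would authenticate against their AAA configuration instead.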

The second critical patch involves a vulnerability in the sysadmin virtual machine (VM) on Cisco’s ASR 9000 carrier class routers running Cisco IOS XR 64-bit Software that could let an unauthenticated, remote attacker access internal applications running on the sysadmin VM, Cisco said in the [advisory][7]. This CVSS also has a 9.8 rating.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**

Cisco said the vulnerability is due to incorrect isolation of the secondary management interface from internal sysadmin applications. An attacker could exploit this vulnerability by connecting to one of the listening internal applications. A successful exploit could result in unstable conditions, including both denial of service (DoS) and remote unauthenticated access to the device, Cisco stated.

Cisco has released [free software updates][6] that address the vulnerability described in this advisory.

Lastly, Cisco wrote that [multiple vulnerabilities][9] in the administrative GUI configuration feature of Cisco Wireless LAN Controller (WLC) Software could let an authenticated, remote attacker cause the device to reload unexpectedly during device configuration when the administrator is using this GUI, causing a DoS condition on an affected device. The attacker would need to have valid administrator credentials on the device for this exploit to work, Cisco stated.

“These vulnerabilities are due to incomplete input validation for unexpected configuration options that the attacker could submit while accessing the GUI configuration menus. An attacker could exploit these vulnerabilities by authenticating to the device and submitting crafted user input when using the administrative GUI configuration feature,” Cisco stated.

“These vulnerabilities have a Security Impact Rating (SIR) of High because they could be exploited when the software fix for the Cisco Wireless LAN Controller Cross-Site Request Forgery Vulnerability is not in place,” Cisco stated. “In that case, an unauthenticated attacker who first exploits the cross-site request forgery vulnerability could perform arbitrary commands with the privileges of the administrator user by exploiting the vulnerabilities described in this advisory.”

Cisco has released [software updates][10] that address these vulnerabilities and said that there are no workarounds.

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390159/cisco-warns-wlan-controller-9000-series-router-and-iosxe-users-to-patch-urgent-security-holes.html#tk.rss_all
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/compromised_data_security_breach_vulnerability_by_woolzian_gettyimages-475563052_2400x1600-100788413-large.jpg
|
||||
[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20170317-cmp
|
||||
[3]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
|
||||
[4]: https://www.networkworld.com/newsletters/signup.html
|
||||
[5]: http://www.cisco.com/c/en/us/support/docs/ip/access-lists/13608-21.html
|
||||
[6]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
|
||||
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-asr9k-exr
|
||||
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
|
||||
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190417-wlc-iapp
|
||||
[10]: https://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html
|
||||
[11]: https://www.facebook.com/NetworkWorld/
|
||||
[12]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,58 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fujitsu completes design of exascale supercomputer, promises to productize it)
[#]: via: (https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Fujitsu completes design of exascale supercomputer, promises to productize it
======
Fujitsu hopes to be the first to offer exascale supercomputing using Arm processors.
![Riken Advanced Institute for Computational Science][1]

Fujitsu and Japanese research institute Riken announced that the design for the post-K supercomputer, to be launched in 2021, is complete and that they will productize the design for sale later this year.

The K supercomputer was a massive system, built by Fujitsu and housed at the Riken Advanced Institute for Computational Science campus in Kobe, Japan, with more than 80,000 nodes and using Sparc64 VIIIfx processors, a derivative of the Sun Microsystems Sparc processor developed under a license agreement that pre-dated Oracle buying out Sun in 2010.

**[ Also read:[10 of the world's fastest supercomputers][2] ]**

It was ranked as the top supercomputer when it was launched in June 2011 with a computation speed of over 8 petaflops. And in November 2011, K became the first computer to top 10 petaflops. It was eventually surpassed as the world's fastest supercomputer by IBM’s Sequoia, but even now, eight years later, it’s still in the top 20 of supercomputers in the world.

### What's in the Post-K supercomputer?

The new system, dubbed “Post-K,” will feature an Arm-based processor called A64FX, a high-performance CPU developed by Fujitsu and designed for exascale systems. The chip is based on the Armv8 design, which is popular in smartphones, with 48 cores plus four “assistant” cores and the ability to access up to 32GB of memory per chip.

A64FX is the first CPU to adopt the Scalable Vector Extension (SVE), an instruction set specifically designed for Arm-based supercomputers. Fujitsu claims the A64FX will offer peak double-precision (64-bit) floating point performance of over 2.7 teraflops per chip. The system will have one CPU per node and 384 nodes per rack. That comes out to one petaflop per rack.
Contrast that with Summit, the top supercomputer in the world, built by IBM for the Oak Ridge National Laboratory using IBM Power9 processors and Nvidia GPUs. A Summit rack has a peak compute of 864 teraflops.

Let me put it another way: IBM’s Power processor and Nvidia’s Tesla are about to get pwned by a derivative of the chip in your iPhone.

**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][3] ]**

Fujitsu will productize the Post-K design and sell it as the successor to the Fujitsu Supercomputer PrimeHPC FX100. The company said it is also considering measures such as developing an entry-level model that will be easy to deploy, or supplying these technologies to other vendors.

Post-K will be installed in the Riken Center for Computational Science (R-CCS), where the K computer is currently located. The system will be one of the first exascale supercomputers in the world, although the U.S. and China are certainly gunning to be first, if only for bragging rights.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389748/fujitsu-completes-design-of-exascale-supercomputer-promises-to-productize-it.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/riken_advanced_institute_for_computational_science_k-computer_supercomputer_1200x800-100762135-large.jpg
[2]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Most data center workers happy with their jobs -- despite the heavy demands)
[#]: via: (https://www.networkworld.com/article/3389359/most-data-center-workers-happy-with-their-jobs-despite-the-heavy-demands.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Most data center workers happy with their jobs -- despite the heavy demands
======
An Informa Engage and Data Center Knowledge survey finds data center workers are content with their jobs, so much so they would encourage their children to go into that line of work.
![Thinkstock][1]

A [survey conducted by Informa Engage and Data Center Knowledge][2] finds data center workers overall are content with their jobs, so much so that they would encourage their children to go into that line of work despite the heavy demands on their time and brains.

Overall satisfaction is pretty good, with 72% of respondents generally agreeing with the statement “I love my current job,” while a third strongly agreed. And 75% agreed with the statement, “If my child, niece or nephew asked, I’d recommend getting into IT.”

**[ Also read:[20 hot jobs ambitious IT pros should shoot for][3] ]**

And there is a feeling of significance among data center workers, with 88% saying they feel they are very important to the success of their employer.

That’s despite some challenges, not the least of which is a skills and certification shortage. Survey respondents cite a lack of skills as the biggest area of concern. Only 56% felt they had the training necessary to do their job, and 74% said they had been in the IT industry for more than a decade.

The industry offers certification programs (every major IT hardware provider has them), but 61% said they have not completed or renewed certificates in the past 12 months. There are several reasons why.

A third (34%) said it was due to a lack of a training budget at their organization, while 24% cited a lack of time, 16% said management doesn’t see a need for training, and 16% cited no training plans within their workplace.

That doesn’t surprise me, since tech is one of the most open industries in the world, where you can find training or educational materials and teach yourself. It’s already established that [many coders are self-taught][4], including industry giants Bill Gates, Steve Wozniak, John Carmack, and Jack Dorsey.

**[[Looking to upgrade your career in tech? This comprehensive online course teaches you how.][5] ]**

### Data center workers' salaries

Data center workers can’t complain about the pay. Well, most can’t, as 50% make $100,000 per year or more, but 11% make less than $40,000. Two-thirds of those surveyed are in the U.S., so those on the low end might be outside the country.

There was one notable discrepancy. Steve Brown, managing director of London-based Datacenter People, noted that software engineers get paid a lot better than the hardware people.

“The software engineering side of the data center is comparable to the highest-earning professions,” Brown said in the report. “On the physical infrastructure — the mechanical/electrical side — it’s not quite the case. It’s more equivalent to mid-level management.”

### Data center professionals still predominantly male

The least surprising finding? Nine out of 10 survey respondents were male. The industry is bending over backwards to fix the gender imbalance, but so far nothing has changed.

The conclusion of the report is a bit ominous, but I also think it is wrong:

> “As data center infrastructure completes its transition to a cloud computing model, and software moves into containers and microservices, the remaining, treasured leaders of the data center workforce — people who acquired their skills in the 20th century — may find themselves with nothing recognizable they can manage and no-one to lead. We may be shocked when the crisis finally hits, but we won’t be able to say we weren’t warned.”

How many times do I have to say it: [the data center is not going away][6].

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389359/most-data-center-workers-happy-with-their-jobs-despite-the-heavy-demands.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/data_center_thinkstock_879720438-100749725-large.jpg
[2]: https://informa.tradepub.com/c/pubRD.mpl?sr=oc&_t=oc:&qf=w_dats04&ch=datacenterkids
[3]: https://www.networkworld.com/article/3276025/20-hot-jobs-ambitious-it-pros-should-shoot-for.html
[4]: https://www.networkworld.com/article/3046178/survey-finds-most-coders-are-self-taught.html
[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fupgrading-your-technology-career
[6]: https://www.networkworld.com/article/3289509/two-studies-show-the-data-center-is-thriving-instead-of-dying.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel follows AMD’s lead (again) into single-socket Xeon servers)
[#]: via: (https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Intel follows AMD’s lead (again) into single-socket Xeon servers
======
Intel's new U series of processors is aimed at the low-end market, where one processor is good enough.
![Intel][1]

I’m really starting to wonder who the leader in x86 really is these days, because it seems Intel is borrowing another page out of AMD’s playbook.

Intel launched a whole lot of new Xeon Scalable processors earlier this month, but it neglected to mention a unique line: the U series of single-socket processors. The folks over at Serve The Home sniffed it out first, and Intel has confirmed the existence of the line, saying only that it “didn’t broadly promote them.”

**[ Read also:[Intel makes a play for high-speed fiber networking for data centers][2] ]**

To backtrack a bit, AMD made a major push for [single-socket servers][3] when it launched the Epyc line of server chips. Epyc comes with up to 32 cores and multithreading, and Intel (and Dell) argued that one 32-core/64-thread processor was enough to handle many loads and a lot cheaper than a two-socket system.

The new U series isn’t available in the regular Intel [ARK database][4] listing of Xeon Scalable processors, but the chips do show up if you search. Intel says it is looking into that. There are three processors for now: one with 24 cores and two with 20 cores.

The 24-core Intel [Xeon Gold 6212U][5] will be a counterpart to the Intel Xeon Platinum 8260, with a 2.4GHz base clock speed, a 3.9GHz turbo clock, and the ability to access up to 1TB of memory. The Xeon Gold 6212U will have the same 165W TDP as the 8260 line, but with a single socket that’s 165 fewer watts of power.

Also, Intel is suggesting a price of about $2,000 for the Intel Xeon Gold 6212U, a big discount over the Xeon Platinum 8260’s $4,702 list price. So, that will translate into much cheaper servers.

The [Intel Xeon Gold 6210U][6] with 20 cores carries a suggested price of $1,500, has a base clock rate of 2.50GHz with turbo boost to 3.9GHz, and a 150 watt TDP. Finally, there is the 20-core Intel [Xeon Gold 6209U][7] with a price of around $1,000 that is identical to the 6210U except its base clock speed is 2.1GHz with a turbo boost of 3.9GHz and a TDP of 125 watts due to its lower clock speed.

**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**

All of the processors support up to 1TB of DDR4-2933 memory and Intel’s Optane persistent memory.

In terms of speeds and feeds, AMD has a slight advantage over Intel in the single-socket race, and Epyc 2 is rumored to be approaching completion, which will only further advance AMD’s lead.

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/intel_generic_cpu_background-100760187-large.jpg
[2]: https://www.networkworld.com/article/3307852/intel-makes-a-play-for-high-speed-fiber-networking-for-data-centers.html
[3]: https://www.networkworld.com/article/3253626/amd-lands-dell-as-its-latest-epyc-server-processor-customer.html
[4]: https://ark.intel.com/content/www/us/en/ark/products/series/192283/2nd-generation-intel-xeon-scalable-processors.html
[5]: https://ark.intel.com/content/www/us/en/ark/products/192453/intel-xeon-gold-6212u-processor-35-75m-cache-2-40-ghz.html
[6]: https://ark.intel.com/content/www/us/en/ark/products/192452/intel-xeon-gold-6210u-processor-27-5m-cache-2-50-ghz.html
[7]: https://ark.intel.com/content/www/us/en/ark/products/193971/intel-xeon-gold-6209u-processor-27-5m-cache-2-10-ghz.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (5 projects for Raspberry Pi at home)
[#]: via: (https://opensource.com/article/17/4/5-projects-raspberry-pi-home)
[#]: author: (Ben Nuttall (Community Moderator) )
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)

5 projects for Raspberry Pi at home
======
@ -100,14 +100,14 @@ Let us know in the comments.

via: https://opensource.com/article/17/4/5-projects-raspberry-pi-home

作者:[Ben Nuttall (Community Moderator)][a]
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_home_automation.png?itok=2TnmJpD8 (5 projects for Raspberry Pi at home)
[2]: https://www.raspberrypi.org/
@ -1,104 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The shell scripting trap)
[#]: via: (https://arp242.net/weblog/shell-scripting-trap.html)
[#]: author: (Martin Tournoij https://arp242.net/)

The shell scripting trap
======

Shell scripting is great. It is amazingly simple to create something very useful. Even a simple no-brainer command such as:

```
# Official way of naming Go-related things:
$ grep -i ^go /usr/share/dict/american-english /usr/share/dict/british /usr/share/dict/british-english /usr/share/dict/catala /usr/share/dict/catalan /usr/share/dict/cracklib-small /usr/share/dict/finnish /usr/share/dict/french /usr/share/dict/german /usr/share/dict/italian /usr/share/dict/ngerman /usr/share/dict/ogerman /usr/share/dict/spanish /usr/share/dict/usa /usr/share/dict/words | cut -d: -f2 | sort -R | head -n1
goldfish
```

takes several lines of code and a lot more brainpower in many programming languages. For example in Ruby:

```
puts(Dir['/usr/share/dict/*-english'].map do |f|
  File.open(f)
    .readlines
    .select { |l| l[0..1].downcase == 'go' }
end.flatten.sample.chomp)
```

The Ruby version isn’t that long, or even especially complicated. But the shell script version was so simple that I didn’t even need to actually test it to make sure it is correct, whereas I did have to test the Ruby version to ensure I didn’t make a mistake. It’s also twice as long and looks a lot more dense.

This is why people use shell scripts: it’s so easy to make something useful. Here’s another example:

```
curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
    grep '^<li><a href=' |
    sed -r 's|<li><a href="/wiki/.+" title=".+">(.+)</a>.*</li>|\1|' |
    grep -Ev '(^Tabel van|^Lijst van|Nederland)'
```

This gets a list of all Dutch municipalities. I actually wrote this as a quick one-shot script to populate a database years ago, but it still works fine today, and it took me a minimum of effort to make it. Doing this in e.g. Ruby would take a lot more effort.

But there’s a downside: as your script grows it becomes increasingly harder to maintain, yet you also don’t really want to rewrite it in something else, as you’ve already spent so much time on the shell script version.

This is what I call ‘the shell script trap’, which is a special case of the [sunk cost fallacy][1].

And many scripts do grow beyond their originally intended size, and often you will spend a lot more time than you should on “fixing that one bug”, or “adding just one small feature”. Rinse, repeat.

If you had written it in Python or Ruby or another similar language from the start, you would have spent some more time writing the original version, but would have spent much less time maintaining it, while almost certainly having fewer bugs.

Take my [packman.vim][2] script for example. It started out as a simple `for` loop over all directories and a `git pull` and has grown from there. At about 200 lines it’s hardly the most complex script, but had I written it in Go as I originally planned then it would have been much easier to add support for printing out the status or cloning new repos from a config file. It would also be almost trivial to add support for parallel clones, which is hard (though not impossible) to do correctly in a shell script. In hindsight, I would have saved time, and gotten a better result to boot.
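For what it's worth, the parallel-update idea can be approximated in shell with `xargs -P` (a sketch; the `~/src` layout is hypothetical), but it also illustrates the pain: output from concurrent pulls interleaves, and per-repository error handling is effectively lost:

```shell
#!/bin/sh
# Run "git pull" in up to four repositories at once.
# Assumes repos live one level under ~/src (hypothetical layout).
find ~/src -maxdepth 2 -type d -name .git |
    sed 's|/\.git$||' |                      # strip the trailing /.git
    xargs -I{} -P 4 git -C {} pull --quiet   # -P 4: four parallel pulls
```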
I regret writing most shell scripts I’ve written for similar reasons, and my 2018 new year’s pledge will be to not write any more.

#### Appendix: the problems

And to be clear, shell scripting does come with some real limitations. Some examples:

  * Dealing with filenames that contain spaces or other ‘special’ characters requires careful attention to detail. The vast majority of scripts get this wrong, even when written by experienced authors who care about such things (e.g. me), because it’s so easy to do it wrong. [Adding quotes is not enough][3].

  * There are many “right” and “wrong” ways to do things. Should you use `which` or `command`? Should you use `$@` or `$*`, and should that be quoted? Should you use `cmd $arg` or `cmd "$arg"`? etc. etc.

  * You cannot store any NULL bytes (0x00) in variables; it is very hard to make shell scripts deal with binary data.

  * While you can make something very useful very quickly, implementing more complex algorithms can be very painful – if not nigh-impossible – even when using the ksh/zsh/bash extensions. My ad-hoc HTML parsing in the example above was okay for a quick one-off script, but you really don’t want to do things like that in a production script.

  * It can be hard to write shell scripts that work well on all platforms. `/bin/sh` could be `dash` or `bash`, and will behave differently. External tools such as `grep`, `sed`, etc. may or may not support certain flags. Are you sure that your script works on all versions (past, present, and future) of Linux, macOS, and Windows equally well?

  * Debugging shell scripts can be hard, especially as the syntax can get fairly obscure quite fast, and not everyone is equally well versed in shell scripting.

  * Error handling can be tricky (check `$?` or `set -e`), and doing something more advanced beyond “an error occurred” is practically impossible.

  * Undefined variables are not an error unless you use `set -u`, leading to “fun stuff” like `rm -r ~/$undefined` deleting the user’s home dir ([not a theoretical problem][4]).

  * Everything is a string. Some shells add arrays, which work, but the syntax is obscure and ugly. Numeric computations with fractions remain tricky and rely on external tools such as `bc` or `dc` (`$(( .. ))` expansion only works for integers).
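A couple of these pitfalls are easy to demonstrate in a few lines (a sketch; `prefix` is a made-up variable name):

```shell
#!/bin/sh
# An unset variable silently expands to an empty string by default,
# so this targets "$HOME/" instead of failing:
unset prefix
echo "would remove: $HOME/$prefix"
# Under 'set -u' the same expansion would abort with "unbound variable".

# "$@" keeps arguments as separate words; "$*" joins them into one:
set -- "a b" c
printf '%s\n' "$@" | wc -l   # two lines: arguments preserved
printf '%s\n' "$*" | wc -l   # one line: collapsed into a single word
```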
**Feedback**

You can mail me at [martin@arp242.net][5] or [create a GitHub issue][6] for feedback, questions, etc.

--------------------------------------------------------------------------------

via: https://arp242.net/weblog/shell-scripting-trap.html

作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://youarenotsosmart.com/2011/03/25/the-sunk-cost-fallacy/
[2]: https://github.com/Carpetsmoker/packman.vim
[3]: https://dwheeler.com/essays/filenames-in-shell.html
[4]: https://github.com/ValveSoftware/steam-for-linux/issues/3671
[5]: mailto:martin@arp242.net
[6]: https://github.com/Carpetsmoker/arp242.net/issues/new
@ -5,7 +5,7 @@
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to create portable documents with CBZ and DjVu)
|
||||
[#]: via: (https://opensource.com/article/19/3/comic-book-archive-djvu)
|
||||
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
How to create portable documents with CBZ and DjVu
|
||||
======
|
||||
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (sanfusu)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -5,7 +5,7 @@
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to set up a homelab from hardware to firewall)
|
||||
[#]: via: (https://opensource.com/article/19/3/home-lab)
|
||||
[#]: author: (Michael Zamot (Red Hat) https://opensource.com/users/mzamot)
|
||||
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
|
||||
|
||||
How to set up a homelab from hardware to firewall
|
||||
======
|
||||
|
@ -5,7 +5,7 @@
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Choosing an open messenger client: Alternatives to WhatsApp)
|
||||
[#]: via: (https://opensource.com/article/19/3/open-messenger-client)
|
||||
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen)
|
||||
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
|
||||
|
||||
Choosing an open messenger client: Alternatives to WhatsApp
|
||||
======
|
||||
|
@ -5,7 +5,7 @@
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Jaeger to build an Istio service mesh)
|
||||
[#]: via: (https://opensource.com/article/19/3/getting-started-jaeger)
|
||||
[#]: author: (Daniel Oh (Red Hat) https://opensource.com/users/daniel-oh)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
Getting started with Jaeger to build an Istio service mesh
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to use Spark SQL: A hands-on tutorial)
[#]: via: (https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial)
[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar)

How to use Spark SQL: A hands-on tutorial
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (12 open source tools for natural language processing)
[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools)
[#]: author: (Dan Barker (Community Moderator) https://opensource.com/users/barkerd427)
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)

12 open source tools for natural language processing
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to use NetBSD on a Raspberry Pi)
[#]: via: (https://opensource.com/article/19/3/netbsd-raspberry-pi)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

How to use NetBSD on a Raspberry Pi
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to submit a bug report with Bugzilla)
[#]: via: (https://opensource.com/article/19/3/bug-reporting)
[#]: author: (David Both (Community Moderator) https://opensource.com/users/dboth)
[#]: author: (David Both https://opensource.com/users/dboth)

How to submit a bug report with Bugzilla
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Build and host a website with Git)
[#]: via: (https://opensource.com/article/19/4/building-hosting-website-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Build and host a website with Git
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Manage your daily schedule with Git)
[#]: via: (https://opensource.com/article/19/4/calendar-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Manage your daily schedule with Git
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Use Git as the backend for chat)
[#]: via: (https://opensource.com/article/19/4/git-based-chat)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Use Git as the backend for chat
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (File sharing with Git)
[#]: via: (https://opensource.com/article/19/4/file-sharing-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

File sharing with Git
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Run a server with Git)
[#]: via: (https://opensource.com/article/19/4/server-administration-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth/users/seth)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/seth)

Run a server with Git
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Manage multimedia files with Git)
[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git)
[#]: author: (Seth Kenlon (Red Hat, Community Moderator) https://opensource.com/users/seth)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Manage multimedia files with Git
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (A beginner's guide to building DevOps pipelines with open source tools)
[#]: via: (https://opensource.com/article/19/4/devops-pipeline)
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter)
[#]: author: (Bryant Son https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter)

A beginner's guide to building DevOps pipelines with open source tools
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Getting started with Python's cryptography library)
[#]: via: (https://opensource.com/article/19/4/cryptography-python)
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Getting started with Python's cryptography library
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (5 Linux rookie mistakes)
[#]: via: (https://opensource.com/article/19/4/linux-rookie-mistakes)
[#]: author: (Jen Wike Huger (Red Hat) https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)
[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/bcotton/users/petercheer/users/greg-p/users/greg-p)

5 Linux rookie mistakes
======
@ -1,11 +1,11 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (fuzheng1998 )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source mobile apps)
[#]: via: (https://opensource.com/article/19/4/mobile-apps)
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/bcotton/users/clhermansen/users/bcotton/users/clhermansen)

5 open source mobile apps
======
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How to enable serverless computing in Kubernetes)
[#]: via: (https://opensource.com/article/19/4/enabling-serverless-kubernetes)
[#]: author: (Daniel Oh (Red Hat, Community Moderator) https://opensource.com/users/daniel-oh/users/daniel-oh)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh/users/daniel-oh)

How to enable serverless computing in Kubernetes
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Be your own certificate authority)
[#]: via: (https://opensource.com/article/19/4/certificate-authority)
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez/users/elenajon123)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/elenajon123)

Be your own certificate authority
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How do you contribute to open source without code?)
[#]: via: (https://opensource.com/article/19/4/contribute-without-code)
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer)

How do you contribute to open source without code?
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Managed, enabled, empowered: 3 dimensions of leadership in an open organization)
[#]: via: (https://opensource.com/open-organization/19/4/managed-enabled-empowered)
[#]: author: (Heidi Hess von Ludewig (Red Hat) https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack)
[#]: author: (Heidi Hess von Ludewig https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack)

Managed, enabled, empowered: 3 dimensions of leadership in an open organization
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Testing Small Scale Scrum in the real world)
[#]: via: (https://opensource.com/article/19/4/next-steps-small-scale-scrum)
[#]: author: (Agnieszka Gancarczyk (Red Hat)Leigh Griffin (Red Hat) https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin)
[#]: author: (Agnieszka Gancarczyk Leigh Griffin https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin)

Testing Small Scale Scrum in the real world
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (How libraries are adopting open source)
[#]: via: (https://opensource.com/article/19/4/software-libraries)
[#]: author: (Don Watkins (Community Moderator) https://opensource.com/users/don-watkins)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

How libraries are adopting open source
======
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Blender short film, new license for Chef, ethics in open source, and more news)
[#]: via: (https://opensource.com/article/15/4/news-april-15)
[#]: author: (Joshua Allen Holm (Community Moderator) https://opensource.com/users/holmja)
[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)

Blender short film, new license for Chef, ethics in open source, and more news
======
@ -1,39 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Troubleshooting slow WiFi on Linux)
[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)

Troubleshooting slow WiFi on Linux
======

I'm no stranger to diagnosing hardware problems on [Linux systems][1]. Even though most of my professional work over the past few years has involved virtualization, I still enjoy crouching under desks and fumbling around with devices and memory modules. Well, except for the "crouching under desks" part. But none of that means that persistent and mysterious bugs aren't frustrating. I recently faced off against one of those bugs on my Ubuntu 18.04 workstation, which remained unsolved for months.

Here, I'll share my problem and my many attempts to resolve it. Even though you'll probably never encounter my specific issue, the troubleshooting process might be helpful. And besides, you'll get to enjoy feeling smug at how much time and effort I wasted following useless leads.

Read more at: [OpenSource.com][2]
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux

作者:[OpenSource.com][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/USERS/OPENSOURCECOM
[b]: https://github.com/lujun9972
[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
@ -5,7 +5,7 @@
[#]: url: ( )
[#]: subject: (Detecting malaria with deep learning)
[#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning)
[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar)

Detecting malaria with deep learning
======
@ -1,238 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install MySQL in Ubuntu Linux)
[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)

How to Install MySQL in Ubuntu Linux
======

_**Brief: This tutorial teaches you to install MySQL in Ubuntu based Linux distributions. You’ll also learn how to verify your install and how to connect to MySQL for the first time.**_

**[MySQL][1]** is the quintessential database management system. It is used in many tech stacks, including the popular **[LAMP][2]** (Linux, Apache, MySQL, PHP) stack. It has proven its stability. Another thing that makes **MySQL** so great is that it is **open source**.

**MySQL** uses **relational databases** (basically **tabular data**). It is really easy to store, organize and access data this way. For managing data, **SQL** (**Structured Query Language**) is used.

In this article I’ll show you how to **install** and **use** MySQL 8.0 in Ubuntu 18.04. Let’s get to it!

### Installing MySQL in Ubuntu

![][3]

I’ll be covering two ways you can install **MySQL** in Ubuntu 18.04:

  1. Install MySQL from the Ubuntu repositories. Very basic, but not the latest version (5.7).
  2. Install MySQL using the official repository. There is an extra step you’ll have to add to the process, but nothing to worry about. Also, you’ll have the latest version (8.0).

When needed, I’ll provide screenshots to guide you. For most of this guide, I’ll be entering commands in the **terminal** (**default hotkey**: CTRL+ALT+T). Don’t be scared of it!

#### Method 1. Installing MySQL from the Ubuntu repositories

First of all, make sure your repositories are updated by entering:

```
sudo apt update
```

Now, to install **MySQL 5.7**, simply type:

```
sudo apt install mysql-server -y
```

That’s it! Simple and efficient.

#### Method 2. Installing MySQL using the official repository

Although this method has a few more steps, I’ll go through them one by one and I’ll try writing down clear notes.

The first step is browsing to the [download page][4] of the official MySQL website.

![][5]

Here, go down to the **download link** for the **DEB Package**.

![][6]

Scroll down past the info about Oracle Web and right-click on **No thanks, just start my download.** Select **Copy link location**.

Now go back to the terminal. We’ll [use][7] the **[curl][7]** [command][7] to download the package:

```
curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
```

**<https://dev.mysql.com/get/mysql-apt-config\_0.8.12-1\_all.deb>** is the link I copied from the website. It might be different based on the current version of MySQL. Let’s use **dpkg** to start installing MySQL:

```
sudo dpkg -i mysql-apt-config*
```

Update your repositories:

```
sudo apt update
```

To actually install MySQL, we’ll use the same command as in the first method:

```
sudo apt install mysql-server -y
```

Doing so will open a prompt in your terminal for **package configuration**. Use the **down arrow** to select the **Ok** option.

![][8]

Press **Enter**. This should prompt you to enter a **password**: you are basically setting the root password for MySQL. Don’t confuse it with the [root password of the Ubuntu][9] system.

![][10]

Type in a password and press **Tab** to select **< Ok>**. Press **Enter**. You’ll now have to **re-enter** the **password**. After doing so, press **Tab** again to select **< Ok>**. Press **Enter**.

![][11]

Some **information** on configuring MySQL Server will be presented. Press **Tab** to select **< Ok>** and **Enter** again:

![][12]

Here you need to choose a **default authentication plugin**. Make sure **Use Strong Password Encryption** is selected. Press **Tab** and then **Enter**.

That’s it! You have successfully installed MySQL.

#### Verify your MySQL installation

To **verify** that MySQL installed correctly, use:

```
sudo systemctl status mysql.service
```

This will show some information about the service:

![][13]

You should see **Active: active (running)** in there somewhere. If you don’t, use the following command to start the **service**:

```
sudo systemctl start mysql.service
```

#### Configuring/Securing MySQL

For a new install, you should run the provided command for security-related updates. That is:

```
sudo mysql_secure_installation
```

Doing so will first of all ask you if you want to use the **VALIDATE PASSWORD COMPONENT**. If you want to use it, you’ll have to select a minimum password strength (**0 – Low, 1 – Medium, 2 – High**). You won’t be able to input any password that doesn’t respect the selected rules. If you don’t have the habit of using strong passwords (you should!), this could come in handy. If you think it might help, type in **y** or **Y** and press **Enter**, then choose a **strength level** for your password and input the one you want to use. If successful, you’ll continue the **securing** process; otherwise you’ll have to re-enter a password.

If, however, you do not want this feature (I won’t), just press **Enter** or **any other key** to skip using it.

For the other options, I suggest **enabling** them (typing in **y** or **Y** and pressing **Enter** for each of them). They are (in this order): **remove anonymous user, disallow root login remotely, remove test database and access to it, reload privilege tables now**.

#### Connecting to & Disconnecting from the MySQL Server

To be able to run SQL queries, you’ll first have to connect to the server using MySQL and use the MySQL prompt. The command for doing this is:

```
mysql -h host_name -u user -p
```

  * **-h** is used to specify a **host name** (if the server is located on another machine; if it isn’t, just omit it)
  * **-u** mentions the **user**
  * **-p** specifies that you want to input a **password**.

Although not recommended (for safety reasons), you can enter the password directly in the command by typing it in right after **-p**. For example, if the password for **test_user** is **1234** and you are trying to connect on the machine you are using, you could use:

```
mysql -u test_user -p1234
```
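
Alternatively, instead of risking the password on the command line, you can store the client credentials in a per-user option file that the `mysql` client reads automatically. The following is only a sketch, reusing the illustrative **test_user**/**1234** credentials from above:

```shell
# Store MySQL client credentials in a per-user option file
# (test_user/1234 are the illustrative values from this article)
cat > ~/.my.cnf <<'EOF'
[client]
user=test_user
password=1234
EOF

# Restrict the file so only your user can read the password
chmod 600 ~/.my.cnf
```

After this, running plain `mysql` connects as **test_user** without prompting for a password.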

If you successfully inputted the required parameters, you’ll be greeted by the **MySQL shell prompt** (**mysql>**):

![][14]

To **disconnect** from the server and **leave** the mysql prompt, type:

```
QUIT
```

Typing **quit** (MySQL is case insensitive) or **\q** will also work. Press **Enter** to exit.

You can also output info about the **version** with a simple command:

```
sudo mysqladmin -u root version -p
```

If you want to see a **list of options**, use:

```
mysql --help
```

#### Uninstalling MySQL

You may decide that you want to use a newer release, or that you just want to stop using MySQL.

First, disable the service:

```
sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
```

Make sure you backed up your databases, in case you want to use them later on. You can uninstall MySQL by running:

```
sudo apt purge mysql*
```

To clean up dependencies:

```
sudo apt autoremove
```

**Wrapping Up**

In this article, I’ve covered **installing MySQL** in Ubuntu Linux. I’d be glad if this guide helps struggling users and beginners.

Tell us in the comments if you found this post to be a useful resource. What do you use MySQL for? We’re eager to receive any feedback, impressions or suggestions. Thanks for reading, and don’t hesitate to experiment with this incredible tool!

--------------------------------------------------------------------------------

via: https://itsfoss.com/install-mysql-ubuntu/

作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://www.mysql.com/
[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
[4]: https://dev.mysql.com/downloads/repo/apt/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
[7]: https://linuxhandbook.com/curl-command-examples/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
[9]: https://itsfoss.com/change-password-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Move Data to the Cloud with Azure Data Migration)
[#]: via: (https://www.linux.com/blog/move-data-cloud-azure-data-migration)
[#]: author: (InfoWorld https://www.linux.com/users/infoworld)

Move Data to the Cloud with Azure Data Migration
======

Despite more than a decade of cloud migration, there’s still a vast amount of data running on-premises. That’s not surprising since data migrations, even between similar systems, are complex, slow, and add risk to your day-to-day operations. Moving to the cloud adds additional management overhead, raising questions of network connectivity and bandwidth, as well as the variable costs associated with running cloud databases.

Read more at: [InfoWorld][1]
--------------------------------------------------------------------------------

via: https://www.linux.com/blog/move-data-cloud-azure-data-migration

作者:[InfoWorld][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/infoworld
[b]: https://github.com/lujun9972
[1]: https://www.infoworld.com/article/3388312/move-data-to-the-cloud-with-azure-data-migration.html
@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use Ansible to document procedures)
[#]: via: (https://opensource.com/article/19/4/ansible-procedures)
[#]: author: (Marco Bravo https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo)

How to use Ansible to document procedures
======
In Ansible, the documentation is the playbook, so the documentation naturally evolves alongside the code.
![][1]

> "Documentation is a love letter that you write to your future self." —[Damian Conway][2]

I use [Ansible][3] as my personal notebook for documenting coding procedures—both the ones I use often and the ones I rarely use. This process facilitates my work and reduces the time it takes to do repetitive tasks, the ones where specific commands in a certain sequence are executed to accomplish a specific result.

By documenting with Ansible, I don't need to memorize all the parameters for each command or all the steps involved with a specific procedure, and it's easy to share the details with my teammates.

Traditional approaches for documentation, like wikis or shared drives, are useful for general documents, but inevitably they become outdated and can't keep pace with the rapid changes in infrastructure and environments. For specific procedures, it's better to document directly in the code using a tool like Ansible.

### Ansible's advantages

Before we begin, let's recap some basic Ansible concepts: a _playbook_ is a high-level organization of procedures using plays; _plays_ are specific procedures for a group of hosts; _tasks_ are specific actions; _modules_ are units of code; and an _inventory_ is a list of managed nodes.

Ansible's great advantage is that the documentation is the playbook itself, so it evolves with and is contained inside the code. This is not only useful; it's also practical because, more than just documenting solutions with Ansible, you're also coding a playbook that permits you to write your procedures and commands, reproduce them, and automate them. This way, you can look back in six months and be able to quickly understand and execute them again.

It's true that this way of resolving problems could take more time at first, but it will definitely save a lot of time in the long term. By being courageous and disciplined enough to adopt these new habits, you will improve your skills in each iteration.

Following are some other important elements and support tools that will facilitate your process.

### Use source code control

> "First do it, then do it right, then do it better." —[Addy Osmani][4]

When working with Ansible playbooks, it's very important to implement a playbook-as-code strategy. A good way to accomplish this is to use a source code control repository that will permit you to start with a simple solution and iterate to improve it.

A source code control repository provides many advantages as you collaborate with other developers, restore previous versions, and back up your work. But in creating documentation, its main advantages are that you get traceability of what you are doing and can iterate around small changes to improve your work.

The most popular source control system is [Git][5], but there are [others][6] like [Subversion][7], [Bazaar][8], [BitKeeper][9], and [Mercurial][10].
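
As a sketch of how little it takes to get started, the following initializes a Git repository for your playbooks and records a first version (the directory name and identity values are placeholders):

```shell
# Create a repository for procedure playbooks and commit a first version
mkdir -p ansible-notes
git -C ansible-notes init
git -C ansible-notes config user.email "you@example.com"  # placeholder identity
git -C ansible-notes config user.name "Your Name"
echo "# Procedure playbooks" > ansible-notes/README.md
git -C ansible-notes add README.md
git -C ansible-notes commit -m "Start documenting procedures as playbooks"
```

From here, every improvement to a procedure becomes a small, traceable commit.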

### Keep idempotency in mind

In infrastructure automation, idempotency means to reach a specific end state that remains the same, no matter how many times the process is executed. So when you are preparing to automate your procedures, keep the desired result in mind and write scripts and commands that will achieve it consistently.

This concept exists in most Ansible modules because after you specify the desired final state, Ansible will accomplish it. For instance, there are modules for creating filesystems, modifying iptables, and managing cron entries. All of these modules are idempotent by default, so you should give them preference.

If you are using some of the lower-level modules, like command or shell, or developing your own modules, be careful to write code that will be idempotent and safe to repeat many times to get the same result.

The idempotency concept is important when you prepare procedures for automation because it permits you to evaluate several scenarios and incorporate the ones that will make your code safer and create an abstraction level that points to the desired result.
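
For example, here is a minimal sketch of making a raw command task idempotent with the `creates` argument (the script and file paths are illustrative):

```
- name: Generate the report only if it does not already exist
  command: /usr/local/bin/make_report.sh /tmp/report.txt
  args:
    creates: /tmp/report.txt  # Ansible skips the task when this file exists
```

On a second run, the task reports "ok" instead of "changed", which is exactly the repeat-safe behavior described above.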
|
||||
|
||||
### Test it!
|
||||
|
||||
Testing your deployment workflow creates fewer surprises when your code arrives in production. Ansible's belief that you shouldn't need another framework to validate basic things in your infrastructure is true. But your focus should be on application testing, not infrastructure testing.
|
||||
|
||||
Ansible's documentation offers several [testing strategies for your procedures][11]. For testing Ansible playbooks, you can use [Molecule][12], which is designed to aid in the development and testing of Ansible roles. Molecule supports testing with multiple instances, operating systems/distributions, virtualization providers, test frameworks, and testing scenarios. This means Molecule will run through all the testing steps: linting verifications, checking playbook syntax, building Docker environments, running playbooks against Docker environments, running the playbook again to verify idempotence, and cleaning everything up afterward. [Testing Ansible roles with Molecule][13] is a good introduction to Molecule.
|
||||
|
||||
### Run it!
|
||||
|
||||
Running Ansible playbooks can create logs that are formatted in an unfriendly and difficult-to-read way. In those cases, the Ansible Run Analysis (ARA) is a great complementary tool for running Ansible playbooks, as it provides an intuitive interface to browse them. Read [Analyzing Ansible runs using ARA][14] for more information.
|
||||
|
||||
Remember to protect your passwords and other sensitive information with [Ansible Vault][15]. Vault can encrypt binary files, **group_vars** , **host_vars** , **include_vars** , and **var_files**. But this encrypted data is exposed when you run a playbook in **-v** (verbose) mode, so it's a good idea to combine it with the keyword **no_log** set to **true** to hide any task's information, as it indicates that the value of the argument should not be logged or displayed.
|
||||
|
||||
### A basic example
|
||||
|
||||
Do you need to connect to a server to produce a report file and copy the file to another server? Or do you need a lot of specific parameters to connect? Maybe you're not sure where to store the parameters. Or are your procedures are taking a long time because you need to collect all the parameters from several sources?
|
||||
|
||||
Suppose you have a network topology with some restrictions and you need to copy a file from a server that you can access ( **server1** ) to another server that is managed by a third party ( **server2** ). The parameters to connect are:
|
||||
|
||||
|
||||
```
|
||||
Source server: server1
|
||||
Target server: server2
|
||||
Port: 2202
|
||||
User: transfers
|
||||
SSH Key: transfers_key
|
||||
File to copy: file.log
|
||||
Remote directory: /logs/server1/
|
||||
```
|
||||
|
||||
In this scenario, you need to connect to **server1** and copy the file using these parameters. You can accomplish this using a one-line command:
|
||||
|
||||
|
||||
```
|
||||
ssh server1 "scp -P 2202 -oUser=transfers -i ~/.ssh/transfers_key file.log server2:/logs/server1/"
|
||||
```
|
||||
|
||||
Now your playbook can do the procedure.
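A minimal playbook around that one-liner could look like the following sketch (the host names come from the scenario above; the variable names are assumptions for illustration):

```yaml
---
- name: Copy the report file from server1 to server2
  hosts: server1
  vars:
    target_port: 2202
    target_user: transfers
    ssh_key: ~/.ssh/transfers_key
    log_file: file.log
    remote_dir: /logs/server1/
  tasks:
    - name: Push the log file to the third-party server
      command: >
        scp -P {{ target_port }} -oUser={{ target_user }}
        -i {{ ssh_key }} {{ log_file }} server2:{{ remote_dir }}
```

Keeping the connection parameters in **vars** (or in an external, possibly vaulted, vars file) means the procedure documents itself and the parameters live in one place.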
|
||||
|
||||
### Useful combinations
|
||||
|
||||
If you produce a lot of Ansible playbooks, you can organize all your procedures with other tools like [AWX][16] (Ansible Works Project), which provides a web-based user interface, a REST API, and a task engine built on top of Ansible so that users can better control their Ansible project use in IT environments.
|
||||
|
||||
Other interesting combinations are Ansible with [Rundeck][17], which provides procedures as self-service jobs, and [Jenkins][18] for continuous integration and continuous delivery processes.
|
||||
|
||||
### Conclusion
|
||||
|
||||
I hope that these tips for using Ansible will help you improve your automation processes, coding, and documentation. If you have more interest, dive in and learn more. And I would like to hear your ideas or questions, so please share them in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/ansible-procedures
|
||||
|
||||
作者:[Marco Bravo][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2
|
||||
[2]: https://en.wikipedia.org/wiki/Damian_Conway
|
||||
[3]: https://www.ansible.com/
|
||||
[4]: https://addyosmani.com/
|
||||
[5]: https://git-scm.com/
|
||||
[6]: https://en.wikipedia.org/wiki/Comparison_of_version_control_software
|
||||
[7]: https://subversion.apache.org/
|
||||
[8]: https://bazaar.canonical.com/en/
|
||||
[9]: https://www.bitkeeper.org/
|
||||
[10]: https://www.mercurial-scm.org/
|
||||
[11]: https://docs.ansible.com/ansible/latest/reference_appendices/test_strategies.html
|
||||
[12]: https://molecule.readthedocs.io/en/latest/
|
||||
[13]: https://opensource.com/article/18/12/testing-ansible-roles-molecule
|
||||
[14]: https://opensource.com/article/18/5/analyzing-ansible-runs-using-ara
|
||||
[15]: https://docs.ansible.com/ansible/latest/user_guide/vault.html
|
||||
[16]: https://github.com/ansible/awx
|
||||
[17]: https://www.rundeck.com/ansible
|
||||
[18]: https://www.redhat.com/en/blog/integrating-ansible-jenkins-cicd-process
|
@ -0,0 +1,388 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Inter-process communication in Linux: Sockets and signals)
|
||||
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-networking)
|
||||
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
|
||||
|
||||
Inter-process communication in Linux: Sockets and signals
|
||||
======
|
||||
|
||||
Learn how processes synchronize with each other in Linux.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3)
|
||||
|
||||
This is the third and final article in a series about [interprocess communication][1] (IPC) in Linux. The [first article][2] focused on IPC through shared storage (files and memory segments), and the [second article][3] does the same for basic channels: pipes (named and unnamed) and message queues. This article moves from IPC at the high end (sockets) to IPC at the low end (signals). Code examples flesh out the details.
|
||||
|
||||
### Sockets
|
||||
|
||||
Just as pipes come in two flavors (named and unnamed), so do sockets. IPC sockets (aka Unix domain sockets) enable channel-based communication for processes on the same physical device (host), whereas network sockets enable this kind of IPC for processes that can run on different hosts, thereby bringing networking into play. Network sockets need support from an underlying protocol such as TCP (Transmission Control Protocol) or the lower-level UDP (User Datagram Protocol).
|
||||
|
||||
By contrast, IPC sockets rely upon the local system kernel to support communication; in particular, IPC sockets communicate using a local file as a socket address. Despite these implementation differences, the IPC socket and network socket APIs are the same in the essentials. The forthcoming example covers network sockets, but the sample server and client programs can run on the same machine because the server uses the network address localhost (127.0.0.1), the standard loopback address of the local machine.
|
||||
|
||||
Sockets configured as streams (discussed below) are bidirectional, and control follows a client/server pattern: the client initiates the conversation by trying to connect to a server, which tries to accept the connection. If everything works, requests from the client and responses from the server then can flow through the channel until the channel is closed on either end, thereby breaking the connection.
|
||||
|
||||
An iterative server, which is suited for development only, handles connected clients one at a time to completion: the first client is handled from start to finish, then the second, and so on. The downside is that the handling of a particular client may hang, which then starves all the clients waiting behind. A production-grade server would be concurrent, typically using some mix of multi-processing and multi-threading. For example, the Nginx web server on my desktop machine has a pool of four worker processes that can handle client requests concurrently. The following code example keeps the clutter to a minimum by using an iterative server; the focus thus remains on the basic API, not on concurrency.
|
||||
|
||||
Finally, the socket API has evolved significantly over time as various POSIX refinements have emerged. The current sample code for server and client is deliberately simple but underscores the bidirectional aspect of a stream-based socket connection. Here's a summary of the flow of control, with the server started in a terminal then the client started in a separate terminal:
|
||||
|
||||
* The server awaits client connections and, given a successful connection, reads the bytes from the client.
|
||||
|
||||
* To underscore the two-way conversation, the server echoes back to the client the bytes received from the client. These bytes are ASCII character codes, which make up book titles.
|
||||
|
||||
* The client writes book titles to the server process and then reads the same titles echoed from the server. Both the server and the client print the titles to the screen. Here is the server's output, essentially the same as the client's:
|
||||
|
||||
```
|
||||
Listening on port 9876 for clients...
|
||||
War and Peace
|
||||
Pride and Prejudice
|
||||
The Sound and the Fury
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
#### Example 1. The socket server
|
||||
|
||||
```
|
||||
#include <string.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <unistd.h>
|
||||
#include <sys/types.h>
|
||||
#include <sys/socket.h>
|
||||
#include <netinet/tcp.h>
|
||||
#include <arpa/inet.h>
|
||||
#include "sock.h"
|
||||
|
||||
void report(const char* msg, int terminate) {
|
||||
perror(msg);
|
||||
if (terminate) exit(-1); /* failure */
|
||||
}
|
||||
|
||||
int main() {
|
||||
int fd = socket(AF_INET, /* network versus AF_LOCAL */
|
||||
SOCK_STREAM, /* reliable, bidirectional, arbitrary payload size */
|
||||
0); /* system picks underlying protocol (TCP) */
|
||||
if (fd < 0) report("socket", 1); /* terminate */
|
||||
|
||||
/* bind the server's local address in memory */
|
||||
struct sockaddr_in saddr;
|
||||
memset(&saddr, 0, sizeof(saddr)); /* clear the bytes */
|
||||
saddr.sin_family = AF_INET; /* versus AF_LOCAL */
|
||||
saddr.sin_addr.s_addr = htonl(INADDR_ANY); /* host-to-network endian */
|
||||
saddr.sin_port = htons(PortNumber); /* for listening */
|
||||
|
||||
if (bind(fd, (struct sockaddr *) &saddr, sizeof(saddr)) < 0)
|
||||
report("bind", 1); /* terminate */
|
||||
|
||||
/* listen to the socket */
|
||||
if (listen(fd, MaxConnects) < 0) /* listen for clients, up to MaxConnects */
|
||||
report("listen", 1); /* terminate */
|
||||
|
||||
fprintf(stderr, "Listening on port %i for clients...\n", PortNumber);
|
||||
/* a server traditionally listens indefinitely */
|
||||
while (1) {
|
||||
struct sockaddr_in caddr; /* client address */
|
||||
    socklen_t len = sizeof(caddr);  /* address length could change */
|
||||
|
||||
int client_fd = accept(fd, (struct sockaddr*) &caddr, &len); /* accept blocks */
|
||||
if (client_fd < 0) {
|
||||
report("accept", 0); /* don't terminate, though there's a problem */
|
||||
continue;
|
||||
}
|
||||
|
||||
/* read from client */
|
||||
int i;
|
||||
for (i = 0; i < ConversationLen; i++) {
|
||||
char buffer[BuffSize + 1];
|
||||
memset(buffer, '\0', sizeof(buffer));
|
||||
int count = read(client_fd, buffer, sizeof(buffer));
|
||||
if (count > 0) {
|
||||
puts(buffer);
|
||||
write(client_fd, buffer, sizeof(buffer)); /* echo as confirmation */
|
||||
}
|
||||
}
|
||||
close(client_fd); /* break connection */
|
||||
} /* while(1) */
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
The server program above performs the classic four-step to ready itself for client requests and then to accept individual requests. Each step is named after a system function that the server calls:
|
||||
|
||||
1. **socket(…)** : get a file descriptor for the socket connection
|
||||
2. **bind(…)** : bind the socket to an address on the server's host
|
||||
3. **listen(…)** : listen for client requests
|
||||
4. **accept(…)** : accept a particular client request
|
||||
|
||||
|
||||
|
||||
The **socket** call in full is:
|
||||
|
||||
```
|
||||
int sockfd = socket(AF_INET, /* versus AF_LOCAL */
|
||||
SOCK_STREAM, /* reliable, bidirectional */
|
||||
0); /* system picks protocol (TCP) */
|
||||
```
|
||||
|
||||
The first argument specifies a network socket as opposed to an IPC socket. There are several options for the second argument, but **SOCK_STREAM** and **SOCK_DGRAM** (datagram) are likely the most used. A stream-based socket supports a reliable channel in which lost or altered messages are reported; the channel is bidirectional, and the payloads from one side to the other can be arbitrary in size. By contrast, a datagram-based socket is unreliable (best try), unidirectional, and requires fixed-sized payloads. The third argument to **socket** specifies the protocol. For the stream-based socket in play here, there is a single choice, which the zero represents: TCP. Because a successful call to **socket** returns the familiar file descriptor, a socket is written and read with the same syntax as, for example, a local file.
|
||||
|
||||
The **bind** call is the most complicated, as it reflects various refinements in the socket API. The point of interest is that this call binds the socket to a memory address on the server machine. However, the **listen** call is straightforward:
|
||||
|
||||
```
|
||||
if (listen(fd, MaxConnects) < 0)
|
||||
```
|
||||
|
||||
The first argument is the socket's file descriptor and the second specifies how many client connections can be accommodated before the server issues a connection refused error on an attempted connection. ( **MaxConnects** is set to 8 in the header file sock.h.)
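The header sock.h itself is not reproduced in the article. A plausible minimal reconstruction, consistent with the constants the two programs use, is shown below; only **MaxConnects** (8, stated above), the port (9876, from the sample output), **Host** (localhost), and **ConversationLen** (three book titles) are confirmed by the article, while **BuffSize** is an assumption:

```
/* sock.h -- shared constants for the socket server and client (a reconstruction) */
#ifndef SOCK_H
#define SOCK_H

#define PortNumber      9876          /* port the server listens on */
#define MaxConnects        8          /* listen backlog */
#define BuffSize         256          /* read/write buffer size (assumed) */
#define ConversationLen    3          /* one exchange per book title */
#define Host         "localhost"      /* resolves to 127.0.0.1 */

#endif
```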
|
||||
|
||||
The **accept** call defaults to a blocking wait: the server does nothing until a client attempts to connect and then proceeds. The **accept** function returns **-1** to indicate an error. If the call succeeds, it returns another file descriptor—for a read/write socket in contrast to the accepting socket referenced by the first argument in the **accept** call. The server uses the read/write socket to read requests from the client and to write responses back. The accepting socket is used only to accept client connections.
|
||||
|
||||
By design, a server runs indefinitely. Accordingly, the server can be terminated with a **Ctrl+C** from the command line.
|
||||
|
||||
#### Example 2. The socket client
|
||||
|
||||
```
|
||||
#include <string.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <unistd.h>
|
||||
#include <sys/types.h>
|
||||
#include <sys/socket.h>
|
||||
#include <arpa/inet.h>
|
||||
#include <netinet/in.h>
|
||||
#include <netinet/tcp.h>
|
||||
#include <netdb.h>
|
||||
#include "sock.h"
|
||||
|
||||
const char* books[] = {"War and Peace",
|
||||
"Pride and Prejudice",
|
||||
"The Sound and the Fury"};
|
||||
|
||||
void report(const char* msg, int terminate) {
|
||||
perror(msg);
|
||||
if (terminate) exit(-1); /* failure */
|
||||
}
|
||||
|
||||
int main() {
|
||||
/* fd for the socket */
|
||||
int sockfd = socket(AF_INET, /* versus AF_LOCAL */
|
||||
SOCK_STREAM, /* reliable, bidirectional */
|
||||
0); /* system picks protocol (TCP) */
|
||||
if (sockfd < 0) report("socket", 1); /* terminate */
|
||||
|
||||
/* get the address of the host */
|
||||
struct hostent* hptr = gethostbyname(Host); /* localhost: 127.0.0.1 */
|
||||
if (!hptr) report("gethostbyname", 1); /* is hptr NULL? */
|
||||
if (hptr->h_addrtype != AF_INET) /* versus AF_LOCAL */
|
||||
report("bad address family", 1);
|
||||
|
||||
/* connect to the server: configure server's address 1st */
|
||||
struct sockaddr_in saddr;
|
||||
memset(&saddr, 0, sizeof(saddr));
|
||||
saddr.sin_family = AF_INET;
|
||||
saddr.sin_addr.s_addr =
|
||||
((struct in_addr*) hptr->h_addr_list[0])->s_addr;
|
||||
saddr.sin_port = htons(PortNumber); /* port number in big-endian */
|
||||
|
||||
if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
|
||||
report("connect", 1);
|
||||
|
||||
/* Write some stuff and read the echoes. */
|
||||
puts("Connect to server, about to write some stuff...");
|
||||
int i;
|
||||
for (i = 0; i < ConversationLen; i++) {
|
||||
if (write(sockfd, books[i], strlen(books[i])) > 0) {
|
||||
/* get confirmation echoed from server and print */
|
||||
char buffer[BuffSize + 1];
|
||||
memset(buffer, '\0', sizeof(buffer));
|
||||
if (read(sockfd, buffer, sizeof(buffer)) > 0)
|
||||
puts(buffer);
|
||||
}
|
||||
}
|
||||
puts("Client done, about to exit...");
|
||||
close(sockfd); /* close the connection */
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
The client program's setup code is similar to the server's. The principal difference between the two is that the client neither listens nor accepts, but instead connects:
|
||||
|
||||
```
|
||||
if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
|
||||
```
|
||||
|
||||
The **connect** call might fail for several reasons; for example, the client has the wrong server address or too many clients are already connected to the server. If the **connect** operation succeeds, the client writes requests and then reads the echoed responses in a **for** loop. After the conversation, both the server and the client **close** the read/write socket, although a close operation on either side is sufficient to close the connection. The client exits thereafter but, as noted earlier, the server remains open for business.
|
||||
|
||||
The socket example, with request messages echoed back to the client, hints at the possibilities of arbitrarily rich conversations between the server and the client. Perhaps this is the chief appeal of sockets. It is common on modern systems for client applications (e.g., a database client) to communicate with a server through a socket. As noted earlier, local IPC sockets and network sockets differ only in a few implementation details; in general, IPC sockets have lower overhead and better performance. The communication API is essentially the same for both.
|
||||
|
||||
### Signals
|
||||
|
||||
A signal interrupts an executing program and, in this sense, communicates with it. Most signals can be either ignored (blocked) or handled (through designated code), with **SIGSTOP** (pause) and **SIGKILL** (terminate immediately) as the two notable exceptions. Symbolic constants such as **SIGKILL** have integer values, in this case, 9.
|
||||
|
||||
Signals can arise in user interaction. For example, a user hits **Ctrl+C** from the command line to terminate a program started from the command line; **Ctrl+C** generates a **SIGINT** signal. **SIGINT** , like **SIGTERM** (terminate) and unlike **SIGKILL** , can be either blocked or handled. One process also can signal another, thereby making signals an IPC mechanism.
|
||||
|
||||
Consider how a multi-processing application such as the Nginx web server might be shut down gracefully from another process. The **kill** function:
|
||||
|
||||
```
|
||||
int kill(pid_t pid, int signum); /* declaration */
|
||||
```
|
||||
|
||||
can be used by one process to terminate another process or group of processes. If the first argument to function **kill** is greater than zero, this argument is treated as the pid (process ID) of the targeted process; if the argument is zero, the argument identifies the group of processes to which the signal sender belongs.
|
||||
|
||||
The second argument to **kill** is either a standard signal number (e.g., **SIGTERM** or **SIGKILL** ) or 0, which makes the call to **signal** a query about whether the pid in the first argument is indeed valid. The graceful shutdown of a multi-processing application thus could be accomplished by sending a terminate signal—a call to the **kill** function with **SIGTERM** as the second argument—to the group of processes that make up the application. (The Nginx master process could terminate the worker processes with a call to **kill** and then exit itself.) The **kill** function, like so many library functions, houses power and flexibility in a simple invocation syntax.
|
||||
|
||||
#### Example 3. The graceful shutdown of a multi-processing system
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <signal.h>
|
||||
#include <stdlib.h>
|
||||
#include <unistd.h>
|
||||
#include <sys/wait.h>
|
||||
|
||||
void graceful(int signum) {
|
||||
printf("\tChild confirming received signal: %i\n", signum);
|
||||
puts("\tChild about to terminate gracefully...");
|
||||
sleep(1);
|
||||
puts("\tChild terminating now...");
|
||||
_exit(0); /* fast-track notification of parent */
|
||||
}
|
||||
|
||||
void set_handler() {
|
||||
struct sigaction current;
|
||||
  sigemptyset(&current.sa_mask);       /* clear the signal set */
|
||||
current.sa_flags = 0; /* enables setting sa_handler, not sa_action */
|
||||
current.sa_handler = graceful; /* specify a handler */
|
||||
  sigaction(SIGTERM, &current, NULL);  /* register the handler */
|
||||
}
|
||||
|
||||
void child_code() {
|
||||
set_handler();
|
||||
|
||||
while (1) { /** loop until interrupted **/
|
||||
sleep(1);
|
||||
puts("\tChild just woke up, but going back to sleep.");
|
||||
}
|
||||
}
|
||||
|
||||
void parent_code(pid_t cpid) {
|
||||
puts("Parent sleeping for a time...");
|
||||
sleep(5);
|
||||
|
||||
/* Try to terminate child. */
|
||||
if (-1 == kill(cpid, SIGTERM)) {
|
||||
perror("kill");
|
||||
exit(-1);
|
||||
}
|
||||
wait(NULL); /** wait for child to terminate **/
|
||||
puts("My child terminated, about to exit myself...");
|
||||
}
|
||||
|
||||
int main() {
|
||||
pid_t pid = fork();
|
||||
if (pid < 0) {
|
||||
perror("fork");
|
||||
return -1; /* error */
|
||||
}
|
||||
if (0 == pid)
|
||||
child_code();
|
||||
else
|
||||
parent_code(pid);
|
||||
return 0; /* normal */
|
||||
}
|
||||
```
|
||||
|
||||
The shutdown program above simulates the graceful shutdown of a multi-processing system, in this case, a simple one consisting of a parent process and a single child process. The simulation works as follows:
|
||||
|
||||
* The parent process tries to fork a child. If the fork succeeds, each process executes its own code: the child executes the function **child_code** , and the parent executes the function **parent_code**.
|
||||
* The child process goes into a potentially infinite loop in which the child sleeps for a second, prints a message, goes back to sleep, and so on. It is precisely a **SIGTERM** signal from the parent that causes the child to execute the signal-handling callback function **graceful**. The signal thus breaks the child process out of its loop and sets up the graceful termination of both the child and the parent. The child prints a message before terminating.
|
||||
* The parent process, after forking the child, sleeps for five seconds so that the child can execute for a while; of course, the child mostly sleeps in this simulation. The parent then calls the **kill** function with **SIGTERM** as the second argument, waits for the child to terminate, and then exits.
|
||||
|
||||
|
||||
|
||||
Here is the output from a sample run:
|
||||
|
||||
```
|
||||
% ./shutdown
|
||||
Parent sleeping for a time...
|
||||
Child just woke up, but going back to sleep.
|
||||
Child just woke up, but going back to sleep.
|
||||
Child just woke up, but going back to sleep.
|
||||
Child just woke up, but going back to sleep.
|
||||
Child confirming received signal: 15 ## SIGTERM is 15
|
||||
Child about to terminate gracefully...
|
||||
Child terminating now...
|
||||
My child terminated, about to exit myself...
|
||||
```
|
||||
|
||||
For the signal handling, the example uses the **sigaction** library function (POSIX recommended) rather than the legacy **signal** function, which has portability issues. Here are the code segments of chief interest:
|
||||
|
||||
* If the call to **fork** succeeds, the parent executes the **parent_code** function and the child executes the **child_code** function. The parent waits for five seconds before signaling the child:
|
||||
|
||||
```
|
||||
puts("Parent sleeping for a time...");
|
||||
sleep(5);
|
||||
if (-1 == kill(cpid, SIGTERM)) {
|
||||
  perror("kill");
  exit(-1);
}
|
||||
```
|
||||
|
||||
If the **kill** call succeeds, the parent does a **wait** on the child's termination to prevent the child from becoming a permanent zombie; after the wait, the parent exits.
|
||||
|
||||
* The **child_code** function first calls **set_handler** and then goes into its potentially infinite sleeping loop. Here is the **set_handler** function for review:
|
||||
|
||||
```
|
||||
void set_handler() {
|
||||
struct sigaction current; /* current setup */
|
||||
  sigemptyset(&current.sa_mask);       /* clear the signal set */
|
||||
current.sa_flags = 0; /* for setting sa_handler, not sa_action */
|
||||
current.sa_handler = graceful; /* specify a handler */
|
||||
  sigaction(SIGTERM, &current, NULL);  /* register the handler */
|
||||
}
|
||||
```
|
||||
|
||||
The first three lines are preparation. The fourth statement sets the handler to the function **graceful** , which prints some messages before calling **_exit** to terminate. The fifth and last statement then registers the handler with the system through the call to **sigaction**. The first argument to **sigaction** is **SIGTERM** for terminate, the second is the current **sigaction** setup, and the last argument ( **NULL** in this case) can be used to save a previous **sigaction** setup, perhaps for later use.
|
||||
|
||||
|
||||
|
||||
|
||||
Using signals for IPC is indeed a minimalist approach, but a tried-and-true one at that. IPC through signals clearly belongs in the IPC toolbox.
|
||||
|
||||
### Wrapping up this series
|
||||
|
||||
These three articles on IPC have covered the following mechanisms through code examples:
|
||||
|
||||
* Shared files
|
||||
* Shared memory (with semaphores)
|
||||
* Pipes (named and unnamed)
|
||||
* Message queues
|
||||
* Sockets
|
||||
* Signals
|
||||
|
||||
|
||||
|
||||
Even today, when thread-centric languages such as Java, C#, and Go have become so popular, IPC remains appealing because concurrency through multi-processing has an obvious advantage over multi-threading: every process, by default, has its own address space, which rules out memory-based race conditions in multi-processing unless the IPC mechanism of shared memory is brought into play. (Shared memory must be locked in both multi-processing and multi-threading for safe concurrency.) Anyone who has written even an elementary multi-threading program with communication via shared variables knows how challenging it can be to write thread-safe yet clear, efficient code. Multi-processing with single-threaded processes remains a viable—indeed, quite appealing—way to take advantage of today's multi-processor machines without the inherent risk of memory-based race conditions.
|
||||
|
||||
There is no simple answer, of course, to the question of which among the IPC mechanisms is the best. Each involves a trade-off typical in programming: simplicity versus functionality. Signals, for example, are a relatively simple IPC mechanism but do not support rich conversations among processes. If such a conversion is needed, then one of the other choices is more appropriate. Shared files with locking is reasonably straightforward, but shared files may not perform well enough if processes need to share massive data streams; pipes or even sockets, with more complicated APIs, might be a better choice. Let the problem at hand guide the choice.
|
||||
|
||||
Although the sample code ([available on my website][4]) is all in C, other programming languages often provide thin wrappers around these IPC mechanisms. The code examples are short and simple enough, I hope, to encourage you to experiment.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/interprocess-communication-linux-networking
|
||||
|
||||
作者:[Marty Kalin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mkalindepauledu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Inter-process_communication
|
||||
[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
|
||||
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2
|
||||
[4]: http://condor.depaul.edu/mkalin
|
99
sources/tech/20190417 Managing RAID arrays with mdadm.md
Normal file
99
sources/tech/20190417 Managing RAID arrays with mdadm.md
Normal file
@ -0,0 +1,99 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Managing RAID arrays with mdadm)
|
||||
[#]: via: (https://fedoramagazine.org/managing-raid-arrays-with-mdadm/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
Managing RAID arrays with mdadm
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Mdadm stands for Multiple Disk and Device Administration. It is a command line tool that can be used to manage software [RAID][2] arrays on your Linux PC. This article outlines the basics you need to get started with it.
|
||||
|
||||
The following five commands allow you to make use of mdadm’s most basic features:
|
||||
|
||||
1. **Create a RAID array** :
|
||||
`# mdadm --create /dev/md/test --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1`
|
||||
2. **Assemble (and start) a RAID array** :
|
||||
`# mdadm --assemble /dev/md/test /dev/sda1 /dev/sdb1`
|
||||
3. **Stop a RAID array** :
|
||||
`# mdadm --stop /dev/md/test`
|
||||
4. **Delete a RAID array** :
|
||||
`# mdadm --zero-superblock /dev/sda1 /dev/sdb1`
|
||||
5. **Check the status of all assembled RAID arrays** :
|
||||
`# cat /proc/mdstat`
|
||||
|
||||
|
||||
|
||||
#### Notes on features
|
||||
|
||||
##### `mdadm --create`
|
||||
|
||||
The _create_ command shown above includes the following four parameters in addition to the create parameter itself and the device names:
|
||||
|
||||
1. **\--homehost** :
|
||||
By default, mdadm stores your computer’s name as an attribute of the RAID array. If your computer name does not match the stored name, the array will not automatically assemble. This feature is useful in server clusters that share hard drives because file system corruption usually occurs if multiple servers attempt to access the same drive at the same time. The name _any_ is reserved and disables the _homehost_ restriction.
|
||||
2. **\--metadata** :
|
||||
_mdadm_ reserves a small portion of each RAID device to store information about the RAID array itself. The _metadata_ parameter specifies the format and location of the information. The value _1.0_ indicates to use version-1 formatting and store the metadata at the end of the device.
|
||||
3. **\--level** :
|
||||
The _level_ parameter specifies how the data should be distributed among the underlying devices. Level _1_ indicates each device should contain a complete copy of all the data. This level is also known as [disk mirroring][3].
|
||||
4. **\--raid-devices** :
|
||||
The _raid-devices_ parameter specifies the number of devices that will be used to create the RAID array.
|
||||
|
||||
|
||||
|
||||
By using _level=1_ (mirroring) in combination with _metadata=1.0_ (store the metadata at the end of the device), you create a RAID1 array whose underlying devices appear normal if accessed without the aid of the mdadm driver. This is useful in the case of disaster recovery, because you can access the device even if the new system doesn’t support mdadm arrays. It’s also useful in case a program needs _read-only_ access to the underlying device before mdadm is available. For example, the [UEFI][4] firmware in a computer may need to read the bootloader from the [ESP][5] before mdadm is started.
|
||||
|
||||
##### `mdadm --assemble`
|
||||
|
||||
The _assemble_ command above fails if a member device is missing or corrupt. To force the RAID array to assemble and start when one of its members is missing, use the following command:
|
||||
|
||||
```
|
||||
# mdadm --assemble --run /dev/md/test /dev/sda1
|
||||
```
|
||||
|
||||
#### Other important notes
|
||||
|
||||
Avoid writing directly to any devices that underlie an mdadm RAID1 array. That causes the devices to become out-of-sync, and mdadm won't know that they are out-of-sync. If you access a RAID1 array with a device that's been modified out-of-band, you can cause file system corruption. If you modify a RAID1 device out-of-band and need to force the array to re-synchronize, delete the mdadm metadata from the device to be overwritten and then re-add it to the array as demonstrated below:
|
||||
|
||||
```
|
||||
# mdadm --zero-superblock /dev/sdb1
|
||||
# mdadm --assemble --run /dev/md/test /dev/sda1
|
||||
# mdadm /dev/md/test --add /dev/sdb1
|
||||
```
|
||||
|
||||
These commands completely overwrite the contents of sdb1 with the contents of sda1.
|
||||
|
||||
To have RAID arrays activate automatically when your computer starts, list them in an _/etc/mdadm.conf_ configuration file.
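A simple way to generate that file is to capture the definitions of the currently assembled arrays (the appended line below is a sketch; the UUID shown is illustrative, not a real value):

```
# mdadm --detail --scan >> /etc/mdadm.conf
#
# which appends a line similar to:
# ARRAY /dev/md/test metadata=1.0 name=any:test UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```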
For the most up-to-date and detailed information, check the man pages:

```
$ man mdadm
$ man mdadm.conf
```

The next article of this series will show a step-by-step guide on how to convert an existing single-disk Linux installation to a mirrored-disk installation that will continue running even if one of its hard drives suddenly stops working!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/

作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/mdadm-816x345.jpg
[2]: https://en.wikipedia.org/wiki/RAID
[3]: https://en.wikipedia.org/wiki/Disk_mirroring
[4]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
[5]: https://en.wikipedia.org/wiki/EFI_system_partition
@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Electronics designed in 5 different countries with open hardware)
[#]: via: (https://opensource.com/article/19/4/hardware-international)
[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg)

Electronics designed in 5 different countries with open hardware
======
This month's open source hardware column looks at certified open hardware from five countries that may surprise you.

![Gadgets and open hardware][1]

The Open Source Hardware Association's [Hardware Registry][2] lists hardware from 29 different countries on five continents, demonstrating the broad, international footprint of certified open source hardware.

![Open source hardware map][3]

In some ways, this international reach shouldn't be a surprise. Like many other open source communities, the open source hardware community is built on top of the internet, not grounded in any specific geographical location. The focus on documentation, sharing, and openness makes it easy for people in different places with different backgrounds to connect and work together to develop new hardware. Even the community-developed open source hardware [definition][4] has been translated into 11 languages from the original English.

Even if you're familiar with the international nature of open source hardware, it can still be refreshing to step back and remember what it means in practice. While it may not surprise you that there are many certifications from the United States, Germany, and India, some of the other countries boasting certifications might be a bit less expected. Let's look at six such projects from five of those countries.

### Bulgaria

Bulgaria may have the highest per-capita open source hardware certification rate of any country on earth. That distinction is mostly due to the work of two companies: [ANAVI Technology][5] and [Olimex][6].

ANAVI focuses mostly on IoT projects built on top of the Raspberry Pi and ESP8266. The concept of "creator contribution" means that these projects can be certified open source even though they are built upon non-open bases. That is because all of ANAVI's work to develop the hardware on top of these platforms (ANAVI's "creator contribution") has been open sourced in compliance with the certification requirements.

The [ANAVI Light pHAT][7] was the first piece of Bulgarian hardware to be certified by OSHWA. The Light pHAT makes it easy to add a 12V RGB LED strip to a Raspberry Pi.

![ANAVI-Light-pHAT][8]

[ANAVI-Light-pHAT][9]

Olimex's first OSHWA certification was for the [ESP32-PRO][10], a highly connectable IoT board built around an ESP32 microcontroller.

![Olimex ESP32-PRO][11]

[Olimex ESP32-PRO][12]

### China

While most people know China is a hotbed for hardware development, fewer realize that it is also the home to a thriving _open source_ hardware culture. One of the reasons is the tireless advocacy of Naomi Wu (also known as [SexyCyborg][13]). It is fitting that the first piece of certified hardware from China is one she helped develop: the [sino:bit][14]. The sino:bit is designed to help introduce students to programming and includes China-specific features like an LED matrix big enough to represent Chinese characters.

![sino:bit][15]

[sino:bit][16]

### Mexico

Mexico has also produced a range of certified open source hardware. A recent certification is the [Meow Meow][17], a capacitive touch interface from [Electronic Cats][18]. Meow Meow makes it easy to use a wide range of objects—bananas are always a favorite—as controllers for your computer.

![Meow Meow][19]

[Meow Meow][20]

### Saudi Arabia

Saudi Arabia jumped into open source hardware earlier this year with the [M1 Rover][21]. The robot is an unmanned vehicle that you can build (and build upon). It is compatible with a number of different packages designed for specific purposes, so you can customize it for a wide range of applications.

![M1-Rover][22]

[M1-Rover][23]

### Sri Lanka

This project from Sri Lanka is part of a larger effort to improve traffic flow in urban areas. The team behind the [Traffic Wave Disruptor][24] read research about how many traffic jams are caused by drivers slamming on their brakes when they drive too close to the car in front of them, producing a ripple of rapid braking on the road behind them. This stop/start effect can be avoided if cars maintain a consistent, optimal distance from one another. If you reduce the stop/start pattern, you also reduce the number of traffic jams.

![Traffic Wave Disruptor][25]

[Traffic Wave Disruptor][26]

But how can drivers know if they are keeping an optimal distance? The prototype Traffic Wave Disruptor aims to give drivers feedback when they fail to keep optimal spacing. Wider adoption could help increase traffic flow without building new highways or reducing the number of cars using them.

* * *

You may have noticed that all the hardware featured here is based on electronics. In next month's open source hardware column, we will take a look at open source hardware for the outdoors, away from batteries and plugs. Until then, [certify][27] your open source hardware project (especially if your country is not yet on the registry). It might be featured in a future column.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/hardware-international

作者:[Michael Weinberg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mweinberg
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openhardwaretools_0.png?itok=NUIvc-R1 (Gadgets and open hardware)
[2]: https://certification.oshwa.org/list.html
[3]: https://opensource.com/sites/default/files/uploads/opensourcehardwaremap.jpg (Open source hardware map)
[4]: https://www.oshwa.org/definition/
[5]: http://anavi.technology/
[6]: https://www.olimex.com/
[7]: https://certification.oshwa.org/bg000001.html
[8]: https://opensource.com/sites/default/files/uploads/anavi-light-phat.png (ANAVI-Light-pHAT)
[9]: http://anavi.technology/#products
[10]: https://certification.oshwa.org/bg000010.html
[11]: https://opensource.com/sites/default/files/uploads/olimex-esp32-pro.png (Olimex ESP32-PRO)
[12]: https://www.olimex.com/Products/IoT/ESP32/ESP32-PRO/open-source-hardware
[13]: https://www.youtube.com/channel/UCh_ugKacslKhsGGdXP0cRRA
[14]: https://certification.oshwa.org/cn000001.html
[15]: https://opensource.com/sites/default/files/uploads/sinobit.png (sino:bit)
[16]: https://github.com/sinobitorg/hardware
[17]: https://certification.oshwa.org/mx000003.html
[18]: https://electroniccats.com/
[19]: https://opensource.com/sites/default/files/uploads/meowmeow.png (Meow Meow)
[20]: https://electroniccats.com/producto/meowmeow/
[21]: https://certification.oshwa.org/sa000001.html
[22]: https://opensource.com/sites/default/files/uploads/m1-rover.png (M1-Rover )
[23]: https://www.hackster.io/AhmedAzouz/m1-rover-362c05
[24]: https://certification.oshwa.org/lk000001.html
[25]: https://opensource.com/sites/default/files/uploads/traffic-wave-disruptor.png (Traffic Wave Disruptor)
[26]: https://github.com/Aightm8/Traffic-wave-disruptor
[27]: https://certification.oshwa.org/
@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to organize with Calculist: Ideas, events, and more)
[#]: via: (https://opensource.com/article/19/4/organize-calculist)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

How to organize with Calculist: Ideas, events, and more
======
Give structure to your ideas and plans with Calculist, an open source web app for creating outlines.

![Team checklist][1]

Thoughts. Ideas. Plans. We all have a few of them. Often, more than a few. And all of us want to make some or all of them a reality.

Far too often, however, those thoughts and ideas and plans are a jumble inside our heads. They refuse to take a discernable shape, preferring instead to rattle around here, there, and everywhere in our brains.

One solution to that problem is to put everything into [an outline][2]. An outline can be a great way to organize what you need to organize and give it the shape you need to take it to the next step.

A number of people I know rely on a popular web-based tool called WorkFlowy for their outlining needs. If you prefer your applications (including web ones) to be open source, you'll want to take a look at [Calculist][3].

The brainchild of [Dan Allison][4], Calculist is billed as _the thinking tool for problem solvers_. It does much of what WorkFlowy does, and it has a few features that its rival is missing.

Let's take a look at using Calculist to organize your ideas (and more).

### Getting started

If you have a server, you can try to [install Calculist][5] on it. If, like me, you don't have a server or just don't have the technical chops, you can turn to the [hosted version][6] of Calculist.

[Sign up][7] for a no-cost account, then log in. Once you've done that, you're ready to go.

### Creating a basic outline

What you use Calculist for really depends on your needs. I use Calculist to create outlines for articles and essays, to create lists of various sorts, and to plan projects. Regardless of what I'm doing, every outline I create follows the same pattern.

To get started, click the **New List** button. This creates a blank outline (which Calculist calls a _list_).

![Create a new list in Calculist][8]

The outline is a blank slate waiting for you to fill it up. Give the outline a name, then press Enter. When you do that, Calculist adds the first blank line for your outline. Use that as your starting point.

![A new outline in Calculist][9]

Add a new line by pressing Enter. To indent a line, press the Tab key while on that line. If you need to create a hierarchy, you can indent lines as far as you need to indent them. Press Shift+Tab to outdent a line.

Keep adding lines until you have a completed outline. Calculist saves your work every few seconds, so you don't need to worry about that.

![Calculist outline][10]

### Editing an outline

Outlines are fluid. They morph. They grow and shrink. Individual items in an outline change. Calculist makes it easy for you to adapt and make those changes.

You already know how to add an item to an outline. If you don't, go back a few paragraphs for a refresher. To edit text, click on an item and start typing. Don't double-click (more on this in a few moments). If you accidentally double-click on an item, press Esc on your keyboard and all will be well.

Sometimes you need to move an item somewhere else in the outline. Do that by clicking and holding the bullet for that item. Drag the item and drop it wherever you want it. Anything indented below the item moves with it.

At the moment, Calculist doesn't support adding notes or comments to an item in an outline. A simple workaround I use is to add a line indented one level deeper than the item where I want to add the note. That's not the most elegant solution, but it works.

### Let your keyboard do the walking

Not everyone likes to use their mouse to perform actions in an application. As with a good desktop application, Calculist doesn't leave you at the mercy of your mouse. It has many keyboard shortcuts that you can use to move around your outlines and manipulate them.

The keyboard shortcuts I mentioned a few paragraphs ago are just the beginning. There are a couple of dozen keyboard shortcuts that you can use.

For example, you can focus on a single portion of an outline by pressing Ctrl+Right Arrow. To get back to the full outline, press Ctrl+Left Arrow. There are also shortcuts for moving up and down in your outline, expanding and collapsing lists, and deleting items.

You can view the list of shortcuts by clicking on your user name in the upper-right corner of the Calculist window and clicking **Preferences**. You can also find a list of [keyboard shortcuts][11] in the Calculist GitHub repository.

If you need or want to, you can change the shortcuts on the **Preferences** page. Click on the shortcut you want to change—you can, for example, change the shortcut for zooming in on an item to Ctrl+0.

### The power of commands

Calculist's keyboard shortcuts are useful, but they're only the beginning. The application has a command mode that enables you to perform basic actions and do some interesting and complex tasks.

To use a command, double-click an item in your outline or press Ctrl+Enter while on it. The item turns black. Type a letter or two, and a list of commands displays. Scroll down to find the command you want to use, then press Enter. There's also a [list of commands][12] in the Calculist GitHub repository.

![Calculist commands][13]

The commands are quite comprehensive. While in command mode, you can, for example, delete an item in an outline or delete an entire outline. You can import or export outlines, sort and group items in an outline, or change the application's theme or font.

### Final thoughts

I've found that Calculist is a quick, easy, and flexible way to create and view outlines. It works equally well on my laptop and my phone, and it packs not only the features I regularly use but many others (including support for [LaTeX math expressions][14] and a [table/spreadsheet mode][15]) that more advanced users will find useful.

That said, Calculist isn't for everyone. If you prefer your outlines on the desktop, then check out [TreeLine][16], [Leo][17], or [Emacs org-mode][18].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/organize-calculist

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
[2]: https://en.wikipedia.org/wiki/Outline_(list)
[3]: https://calculist.io/
[4]: https://danallison.github.io/
[5]: https://github.com/calculist/calculist-web
[6]: https://app.calculist.io/
[7]: https://app.calculist.io/join
[8]: https://opensource.com/sites/default/files/uploads/calculist-new-list.png (Create a new list in Calculist)
[9]: https://opensource.com/sites/default/files/uploads/calculist-getting-started.png (A new outline in Calculist)
[10]: https://opensource.com/sites/default/files/uploads/calculist-outline.png (Calculist outline)
[11]: https://github.com/calculist/calculist/wiki/Keyboard-Shortcuts
[12]: https://github.com/calculist/calculist/wiki/Command-Mode
[13]: https://opensource.com/sites/default/files/uploads/calculist-commands.png (Calculist commands)
[14]: https://github.com/calculist/calculist/wiki/LaTeX-Expressions
[15]: https://github.com/calculist/calculist/issues/32
[16]: https://opensource.com/article/18/1/creating-outlines-treeline
[17]: http://www.leoeditor.com/
[18]: https://orgmode.org/
@ -0,0 +1,196 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Level up command-line playgrounds with WebAssembly)
[#]: via: (https://opensource.com/article/19/4/command-line-playgrounds-webassembly)
[#]: author: (Robert Aboukhalil https://opensource.com/users/robertaboukhalil)

Level up command-line playgrounds with WebAssembly
======
WebAssembly is a powerful tool for bringing command line utilities to the web and giving people the chance to tinker with tools.

![Various programming languages in use][1]

[WebAssembly][2] (Wasm) is a new low-level language designed with the web in mind. Its main goal is to enable developers to compile code written in other languages—such as C, C++, and Rust—into WebAssembly and run that code in the browser. In an environment where JavaScript has traditionally been the only option, WebAssembly is an appealing counterpart, and it enables portability along with the promise of near-native runtimes. WebAssembly has also already been used to port lots of tools to the web, including [desktop applications][3], [games][4], and even [data science tools written in Python][5]!

Another application of WebAssembly is command line playgrounds, where users are free to play with a simulated version of a command line tool. In this article, we'll explore a concrete example of leveraging WebAssembly for this purpose, specifically to port the tool **[jq][6]**—which is normally confined to the command line—to run directly in the browser.

If you haven't heard of it, jq is a very powerful tool for querying, modifying, and wrangling JSON objects on the command line.
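To give a quick feel for it, here is a minimal taste of jq (assuming jq is installed locally; the JSON inputs are made up for illustration):

```shell
# Extract a single field from a JSON object
echo '{"name": "jq", "stars": 20000}' | jq '.name'
# prints: "jq"

# Filter an array: keep only the entries where "ok" is true, and collect their ids
echo '[{"id": 1, "ok": true}, {"id": 2, "ok": false}]' | jq -c '[.[] | select(.ok) | .id]'
# prints: [1]
```

The `-c` flag prints compact (single-line) output instead of jq's default pretty-printing.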
### Why WebAssembly?

Aside from WebAssembly, there are two other approaches we can take to build a jq playground:

  1. **Set up a sandboxed environment** on your server that executes queries and returns the result to the user via API calls. Although this means your users get to play with the real thing, the thought of hosting, securing, and sanitizing user inputs for such an application is worrisome. Aside from security, the other concern is responsiveness; the additional round trips to the server can introduce noticeable latencies and negatively impact the user experience.
  2. **Simulate the command line environment using JavaScript**, where you define a series of steps that the user can take. Although this approach is more secure than option 1, it involves _a lot_ more work, as you need to rewrite the logic of the tool in JavaScript. This method is also limiting: when I'm learning a new tool, I'm not just interested in the "happy path"; I want to break things!

These two solutions are not ideal because we have to choose between security and a meaningful learning experience. Ideally, we could simply run the command line tool directly in the browser, with no servers and no simulations. Lucky for us, WebAssembly is just the solution we need to achieve that.

### Set up your environment

In this article, we'll use the [Emscripten tool][7] to port jq from C to WebAssembly. Conveniently, it provides us with drop-in replacements for the most common C/C++ build tools, including gcc, make, and configure.

Instead of [installing Emscripten from scratch][8] (the build process can take a long time), we'll use a Docker image I put together that comes prepackaged with everything you'll need for this article (and beyond!).

Let's start by pulling the image and creating a container from it:

```
# Fetch docker image containing Emscripten
docker pull robertaboukhalil/emsdk:1.38.26

# Create container from that image
docker run -dt --name wasm robertaboukhalil/emsdk:1.38.26

# Enter the container
docker exec -it wasm bash

# Make sure we can run emcc, Emscripten's wrapper around gcc
emcc --version
```

If you see the Emscripten version on the screen, you're good to go!

### Porting jq to WebAssembly

Next, let's clone the jq repository:

```
git clone https://github.com/stedolan/jq.git
cd jq
git checkout 9fa2e51
```

Note that we're checking out a specific commit, just in case the jq code changes significantly after this article is published.

Before we compile jq to WebAssembly, let's first consider how we would normally compile jq to binary for use on the command line.

From the [README file][9], here is what we need to build jq to binary (don't type this in yet):

```
# Fetch jq dependencies
git submodule update --init

# Generate ./configure file
autoreconf -fi

# Run ./configure
./configure \
  --with-oniguruma=builtin \
  --disable-maintainer-mode

# Build jq executable
make LDFLAGS=-all-static
```

Instead, to compile jq to WebAssembly, we'll leverage Emscripten's drop-in replacements for the configure and make build tools (note the differences here from the previous entry: **emconfigure** and **emmake** in the Run and Build statements, respectively):

```
# Fetch jq dependencies
git submodule update --init

# Generate ./configure file
autoreconf -fi

# Run ./configure
emconfigure ./configure \
  --with-oniguruma=builtin \
  --disable-maintainer-mode

# Build jq executable
emmake make LDFLAGS=-all-static
```

If you type the commands above inside the Wasm container we created earlier, you'll notice that emconfigure and emmake will make sure jq is compiled using emcc instead of gcc (Emscripten also has a g++ replacement called em++).

So far, this was surprisingly easy: we just prepended a handful of commands with Emscripten tools and ported a codebase—comprising tens of thousands of lines—from C to WebAssembly. Note that it won't always be this easy, especially for more complex codebases and graphical applications, but that's for [another article][10].

Another advantage of Emscripten is that it can generate some JavaScript glue code for us that handles initializing the WebAssembly module, calling C functions from JavaScript, and even providing a [virtual filesystem][11].

Let's generate that glue code from the executable file jq that emmake outputs:

```
# But first, rename the jq executable to a .o file; otherwise,
# emcc complains that the "file has an unknown suffix"
mv jq jq.o

# Generate .js and .wasm files from jq.o
# Disable errors on undefined symbols to avoid warnings about llvm_fma_f64
emcc jq.o -o jq.js \
  -s ERROR_ON_UNDEFINED_SYMBOLS=0
```

To make sure it works, let's try an example from the [jq tutorial][12] directly on the command line:

```
# Output the description of the latest commit on the jq repo
$ curl -s "https://api.github.com/repos/stedolan/jq/commits?per_page=5" | \
  node jq.js '.[0].commit.message'
"Restore cfunction arity in builtins/0\n\nCount arguments up-front at definition/invocation instead of doing it at\nbind time, which comes after generating builtins/0 since e843a4f"
```

And just like that, we are now ready to run jq in the browser!
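As a rough, untested sketch of what that can look like in a web page: Emscripten's glue code reads its settings from a global `Module` object defined before `jq.js` is loaded (`arguments`, `print`, and `preRun` are standard Emscripten `Module` fields; the file name and JSON content below are made up for illustration):

```
<!-- Hypothetical minimal page; assumes jq.js and jq.wasm are served next to it -->
<script>
  var Module = {
    // argv passed to jq's main(): a filter and an input file
    arguments: ['.name', '/input.json'],
    // called once per line of jq's stdout
    print: function (text) { document.body.textContent += text + '\n'; },
    // runs before main(): create the input file in the virtual filesystem
    preRun: [function () {
      FS.writeFile('/input.json', '{"name": "wasm"}');
    }]
  };
</script>
<script src="jq.js"></script>
```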
### The result

Using the output of emcc above, we can put together a user interface that calls jq on a JSON blob the user provides. This is the approach I took to build [jqkungfu][13] (source code [available on GitHub][14]):

![jqkungfu screenshot][15]

jqkungfu, a playground built by compiling jq to WebAssembly

Although there are similar web apps that let you execute arbitrary jq queries in the browser, they are generally implemented as server-side applications that execute user queries in a sandbox (option #1 above).

Instead, by compiling jq from C to WebAssembly, we get the best of both worlds: the flexibility of the server and the security of the browser. Specifically, the benefits are:

  1. **Flexibility**: Users can "choose their own adventure" and use the app with fewer limitations
  2. **Speed**: Once the Wasm module is loaded, executing queries is extremely fast because all the magic happens in the browser
  3. **Security**: No backend means we don't have to worry about our servers being compromised or used to mine Bitcoins
  4. **Convenience**: Since we don't need a backend, jqkungfu is simply hosted as static files on a cloud storage platform

### Conclusion

WebAssembly is a powerful tool for bringing existing command line utilities to the web. When included as part of a tutorial, such playgrounds can become powerful teaching tools. They can even allow your users to test-drive your tool before they bother installing it.

If you want to dive further into WebAssembly and learn how to build applications like jqkungfu (or games like Pacman!), check out my book [_Level up with WebAssembly_][16].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/command-line-playgrounds-webassembly

作者:[Robert Aboukhalil][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/robertaboukhalil
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_language_c.png?itok=mPwqDAD9 (Various programming languages in use)
[2]: https://webassembly.org/
[3]: https://www.figma.com/blog/webassembly-cut-figmas-load-time-by-3x/
[4]: http://www.continuation-labs.com/projects/d3wasm/
[5]: https://hacks.mozilla.org/2019/03/iodide-an-experimental-tool-for-scientific-communicatiodide-for-scientific-communication-exploration-on-the-web/
[6]: https://stedolan.github.io/jq/
[7]: https://emscripten.org/
[8]: https://emscripten.org/docs/getting_started/downloads.html
[9]: https://github.com/stedolan/jq/blob/9fa2e51099c55af56e3e541dc4b399f11de74abe/README.md
[10]: https://medium.com/@robaboukhalil/porting-games-to-the-web-with-webassembly-70d598e1a3ec?sk=20c835664031227eae5690b8a12514f0
[11]: https://emscripten.org/docs/porting/files/file_systems_overview.html
[12]: https://stedolan.github.io/jq/tutorial/
[13]: http://jqkungfu.com
[14]: https://github.com/robertaboukhalil/jqkungfu/
[15]: https://opensource.com/sites/default/files/uploads/jqkungfu.gif (jqkungfu screenshot)
[16]: http://levelupwasm.com/
@ -0,0 +1,167 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Simplifying organizational change: A guide for the perplexed)
|
||||
[#]: via: (https://opensource.com/open-organization/19/4/simplifying-change)
|
||||
[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner)
|
||||
|
||||
Simplifying organizational change: A guide for the perplexed
|
||||
======
|
||||
Here's a 4-step, open process for making change easier—both for you and
|
||||
your organization.
|
||||
![][1]
|
||||
|
||||
Most organizational leaders have encountered a certain paralysis around efforts to implement culture change—perhaps because of perceived difficulty or the time necessary for realizing our work. But change is only as difficult as we choose to make it. In order to lead successful change efforts, we must simplify our understanding and approach to change.
Change isn't something rare. We live everyday life in a continuous state of change—from grappling with the speed of innovation to simply interacting with the environment around us. Quite simply, *change is how we process, disseminate, and adopt new information.* And whether you're leading a team or an organization—or are simply breathing—you'll benefit from a more focused, simplified approach to change. Here's a process that can save you time and reduce frustration.

### Three interactions with change

Everyone interacts with change in different ways. Those differences are based on who we are, our own unique experiences, and our core beliefs. In fact, [only 5% of decision making involves conscious processing][2]. Even when you don't _think_ you're making a decision, you are actually making a decision (that is, to not take action).

So you see, two actors are at play in situations involving change. The first is the human decision maker. The second is the information _coming to_ the decision maker. Both are present in three sets of interactions at varying stages in the decision-making process.

#### **Engaging change**

First, we must understand that uncertainty is really the result of "new information" we must process. We must accept where we are, at that moment, while waiting for additional information. Engaging with change requires us to trust—at the very least, ourselves and our capacity to manage—as new information continues to arrive. Everyone will respond to new information differently, and those responses are based on multiple factors: general hardwiring, unconscious needs that must be met to feel safe, and so on. How do you feel safe in periods of uncertainty? Are you routine driven? Do you need details, or do you need to assess risk? Are you good at figuring it out on the fly? Or does safety feel like creating something brand new?

#### **Navigating change**

"Navigating" doesn't necessarily mean "going around" something safely. It's knowing how to "get through it." Navigating change truly requires "all hands on deck" in order to keep everything intact and moving forward as we encounter each oncoming wave of new information. Everyone around you has something to contribute to the process of navigation; leverage them for "smooth sailing."

#### **Adopting change**

Only a small set of members in your organization will be truly comfortable with adopting change. But that committed and confident minority can spread the fire of change and help you grow some innovative ideas within your organization. Consider taking advantage of what researchers call "[the pendulum effect][3]," which holds that a group as small as 5% of an organization's population can influence a crowd's direction (the other 95% will follow along without realizing it). Moreover, [scientists at Rensselaer Polytechnic Institute have found][4] that when just 10% of a population holds an unshakable belief, that belief will always be adopted by a majority. Findings from this cognitive study have implications for the spread of innovations and movements within a collective group of people. Opportunities for mass adoption are directly related to your influence with the external parties around you.
### A useful matrix to guide culture change

So far, we've identified three "interactions" every person, team, or department will experience with change: "engaging," "navigating," and "adopting." When we examine the work of _implementing_ change in the broader context of an organization (any kind), we can also identify _three relationships_ that drive the success of each interaction: "people," "capacity," and "information."

Here's a brief list of considerations you should make—at every moment and with every relationship—to help you build roadmaps thoughtfully.

#### **Engaging—People**

Organizational success comes from the overlap of awareness and action of the "I" and the "We."

* _Individuals (I)_ are aware of and engage based on their [natural response strength][5].
* _Teams (We)_ are aware of and balance their responsibilities based on individual strengths, initiative by initiative.
* _Leaders (I/We)_ leverage insight based on knowing both the individual (I) and the collective (We).
#### **Engaging—Capacity**

"Capacity" applies to the skills, processes, and culture that are clearly structured, documented, and accessible within your organization. It is the "space" within which you operate and achieve solutions.

* _Current state_ awareness allows you to use what and who you have available and accessible through your known operational capacity.
* _Future state_ needs will show you what you are required to learn, _or stretch_, in order to bridge any gaps; essentially, you will design the recoding of your organization.

#### **Engaging—Information**

* _Access to information_ is readily available to all based on appropriate needs within protocols.
* _Communication flows_ easily and is reciprocated at all levels.
* _Communication flow_ is timely and transparent.
#### **Navigating—People**

* Balancing responses from both individuals and the collective will impact your outcomes.
* Balance the _I_ with the _We_. This allows responses to co-exist in a seamless, collaborative way—which fuels every project.

#### **Navigating—Capacity**

* _Skills:_ Assuring a continuous state of assessment and learning through various modalities allows you to navigate with ease as each person advances their understanding in preparation for the next iteration of change.
* _Culture:_ Be clear on goals and mission, with a supported ecosystem in which your teams can operate by contributing their best efforts when working together.
* _Processes:_ Review existing processes and let go of anything that prevents you from evolving. Open practices and methodologies allow for a higher rate of adaptability and decision making.
* _Utilize talent:_ Discover who is already in your organization and how you can leverage their talent in new ways. Go beyond your known teams and seek out sources of new perspectives.
#### **Navigating—Information**

* Be clear on your mission.
* Be very clear on your desired endgame, so everyone knows what you are navigating toward (without clearly defined and posted directions, it's easy to waste time, money, and effort, resulting in missed targets).

#### **Adopting—People**

* _Behaviors_ have a critical impact on influence and adoption.
* For _internal adoption_, consider the [pendulum of thought][3] swung by the committed few.
#### **Adopting—Capacity**

* _Sustainability:_ Leverage people who are more routine- and legacy-oriented to help stabilize and embed your new initiatives.
* Allow your innovators and co-creators to move into the next phase of development and begin solving problems, while other team members perform follow-through efforts.

#### **Adopting—Information**

* Be open and transparent with your external communication.
* Lead the way in _what_ you do and _how_ you do it to create a tidal wave of change.
* Remember that mass adoption has a tipping point of 10%.

[**Download a one-page guide to this model on GitHub.**][6]

---
### Four steps to simplify change

You now understand what change is and how you process it. You've seen how you and your organization can reframe various interactions with it. Now, let's examine the four steps to simplify how you interact with and implement change as an individual, team leader, or organizational executive.

#### **1\. Understand change**

Change is receiving and processing new information and determining how to respond and participate with it (think of a personal or organizational operating system). Change is a _reciprocal_ action between yourself and incoming new information (think of a system interface). Change is an evolutionary process that happens in layers and stages in a continuous cycle (think of data processing, bug fixes, and program iterations).

#### **2\. Know your people**

Change is personal, and responses vary by context. People's responses to change are not indicators of the speed of adoption. Knowing how your people and your teams interact with change allows you to balance and optimize your efforts toward solving problems, building solutions, and sustaining implementations. Are they change makers, fast followers, innovators, stabilizers? When you know how you, _or others_, process change, you can leverage your risk mitigators to sweep for potential pitfalls, and your routine-minded folks to be responsible for implementation follow-through.

#### **3\. Know your capacity**

Your capacity to implement widespread change will depend on your culture, your processes, and your decision-making models. Get familiar with your operational capacity and guardrails (process and policy).

#### **4\. Prepare for interaction**

Each interaction uses your people, your (operational) capacity, and your information flow. Working with the stages of change is not always a linear process and may overlap at certain points along the way. Understand that [_people_ feed all engagement, navigation, and adoption actions][7].

Humans are built for adaptation to our environments. Yes, any kind of change can be scary at first. But it need not involve some major new implementation with a large, looming deadline that throws you off. Knowing that you can take a simplified approach to change, hopefully you're able to engage new information with ease. Using this approach over time—and integrating it as habit—allows both the _I_ and the _We_ to experience continuous cycles of change without the tensions of old.
_Want to learn more about simplifying change? [View additional resources on GitHub][8]._

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/4/simplifying-change

作者:[Jen Kelchner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jenkelchner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_2dot0.png?itok=bKJ41T85
[2]: http://www.simplifyinginterfaces.com/2008/08/01/95-percent-of-brain-activity-is-beyond-our-conscious-awareness/
[3]: http://www.leeds.ac.uk/news/article/397/sheep_in_human_clothing__scientists_reveal_our_flock_mentality
[4]: https://news.rpi.edu/luwakkey/2902
[5]: https://opensource.com/open-organization/18/7/transformation-beyond-digital-2
[6]: https://github.com/jenkelchner/simplifying-change/blob/master/Visual_%20Simplifying%20Change%20(1).pdf
[7]: https://opensource.com/open-organization/17/7/digital-transformation-people-1
[8]: https://github.com/jenkelchner/simplifying-change
@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 19.04 ‘Disco Dingo’ Has Arrived: Downloads Available Now!)
[#]: via: (https://itsfoss.com/ubuntu-19-04-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Ubuntu 19.04 ‘Disco Dingo’ Has Arrived: Downloads Available Now!
======
It’s time to disco! Why? Well, Ubuntu 19.04 ‘Disco Dingo’ is here and finally available to download. Although we have already covered the [new features in Ubuntu 19.04][1], I will mention a few important things below and point you to the official links for downloading it and getting started.

### Ubuntu 19.04: What You Need To Know

Here are a few things you should know about the Ubuntu 19.04 Disco Dingo release.

#### Ubuntu 19.04 is not an LTS Release

Unlike Ubuntu 18.04 LTS, this release will not be [supported for 10 years][2]. Instead, the non-LTS 19.04 will be supported for **9 months, until January 2020.**

So, if you have a production environment, we do not recommend upgrading it right away. For example, if you have a server that runs on Ubuntu 18.04 LTS, it may not be a good idea to upgrade it to 19.04 just because it is an exciting release.

However, users who want the latest and greatest installed on their machines can try it out.

![][3]
#### Ubuntu 19.04 is a sweet update for NVIDIA GPU Owners

_Martin Wimpress_ (from Canonical) mentioned that Ubuntu 19.04 is a particularly big deal for NVIDIA GPU owners in the final release notes of Ubuntu MATE 19.04 (one of the Ubuntu flavors) on [GitHub][4].

In other words, while installing the proprietary graphics driver, the installer now selects the best driver compatible with your particular GPU model.

#### Ubuntu 19.04 Features

Even though we have already discussed the [best features of Ubuntu 19.04][1] Disco Dingo, it is worth mentioning that I’m excited about the desktop updates (GNOME 3.32) and the Linux kernel (5.0) that come as major changes in this release.
#### Upgrading from Ubuntu 18.10 to 19.04

If you have Ubuntu 18.10 installed, you should upgrade for obvious reasons: 18.10 will reach its end of life in July 2019, so we recommend you upgrade it to 19.04.

To do that, you can simply head to the "**Software and Updates**" settings and then navigate to the "**Updates**" tab.

Now, change the option **Notify me of a new Ubuntu version** to "_For any new version_".

When you run the update manager now, you should see that Ubuntu 19.04 is available.

![][5]
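If you prefer a terminal over the GUI, the same notification setting lives in a config file. As a sketch (check the file on your own system before editing, since its comments and defaults can differ between releases), `/etc/update-manager/release-upgrades` controls which releases the upgrader offers:

```
# /etc/update-manager/release-upgrades
# Prompt=normal offers every new release; Prompt=lts offers only LTS releases.
[DEFAULT]
Prompt=normal
```

With `Prompt=normal` set, running `sudo do-release-upgrade` should offer the 19.04 upgrade from the command line.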
#### Upgrading from Ubuntu 18.04 to 19.04

It is not recommended to upgrade directly from 18.04 to 19.04, because you would have to update the OS to 18.10 first and then proceed to get 19.04 on board.

Instead, you can simply download the official ISO image of Ubuntu 19.04 and then re-install Ubuntu on your system.

### Ubuntu 19.04: Downloads Available for all flavors

As per the [release notes][6], Ubuntu 19.04 is available to download now. You can get the torrent or the ISO file on its official release download page.

[Download Ubuntu 19.04][7]
If you need a different desktop environment or need something specific, you should check out the official flavors of Ubuntu:

* [Ubuntu MATE][8]
* [Kubuntu][9]
* [Lubuntu][10]
* [Ubuntu Budgie][11]
* [Ubuntu Studio][12]
* [Xubuntu][13]

Some of the above-mentioned Ubuntu flavors haven’t put the 19.04 release on their download pages yet. But you can [still find the ISOs on Ubuntu’s release notes webpage][6]. Personally, I use Ubuntu with the GNOME desktop. You can choose whatever you like.

**Wrapping Up**

What do you think about Ubuntu 19.04 Disco Dingo? Are the new features exciting enough? Have you tried it yet? Let us know in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-19-04-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-19-04-release-features/
[2]: https://itsfoss.com/ubuntu-18-04-ten-year-support/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/ubuntu-19-04-Disco-Dingo-default-wallpaper.jpg?resize=800%2C450&ssl=1
[4]: https://github.com/ubuntu-mate/ubuntu-mate.org/blob/master/blog/20190418-ubuntu-mate-disco-final-release.md
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ubuntu-19-04-upgrade-available.jpg?ssl=1
[6]: https://wiki.ubuntu.com/DiscoDingo/ReleaseNotes
[7]: https://www.ubuntu.com/download/desktop
[8]: https://ubuntu-mate.org/download/
[9]: https://kubuntu.org/getkubuntu/
[10]: https://lubuntu.me/cosmic-released/
[11]: https://ubuntubudgie.org/downloads
[12]: https://ubuntustudio.org/2019/04/ubuntu-studio-19-04-released/
[13]: https://xubuntu.org/download/
@ -0,0 +1,110 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for April 2019)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2019/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)

4 cool new projects to try in COPR for April 2019
======
![][1]
COPR is a [collection][2] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

### Joplin

[Joplin][3] is a note-taking and to-do app. Notes are written in the Markdown format, and organized by sorting them into various notebooks and using tags.

Joplin can import notes from any Markdown source or notes exported from Evernote. In addition to the desktop app, there’s an Android version with the ability to synchronize notes between them — using Nextcloud, Dropbox or other cloud services. Finally, there’s a browser extension for Chrome and Firefox to save web pages and screenshots.

![][4]

#### Installation instructions

The [repo][5] currently provides Joplin for Fedora 29 and 30, and for EPEL 7. To install Joplin, use these commands [with sudo][6]:

```
sudo dnf copr enable taw/joplin
sudo dnf install joplin
```
### Fzy

[Fzy][7] is a command-line utility for fuzzy string searching. It reads from standard input and sorts the lines based on which is most likely the sought-after text, then prints the selected line. In addition to the command line, fzy can also be used within vim. You can try fzy in this online [demo][8].

#### Installation instructions

The [repo][9] currently provides fzy for Fedora 29, 30, and Rawhide, and other distributions. To install fzy, use these commands:

```
sudo dnf copr enable lehrenfried/fzy
sudo dnf install fzy
```
### Fondo

Fondo is a program for browsing many photographs from the [unsplash.com][10] website. It has a simple interface that allows you to look for pictures of one of several themes, or all of them at once. You can then set a found picture as a wallpaper with a single click, or share it.

![][11]

#### Installation instructions

The [repo][12] currently provides Fondo for Fedora 29, 30, and Rawhide. To install Fondo, use these commands:

```
sudo dnf copr enable atim/fondo
sudo dnf install fondo
```
### YACReader

[YACReader][13] is a digital comic book reader that supports many comic and image formats, such as _cbz_, _cbr_, _pdf_, and others. YACReader keeps track of reading progress, and can download comics’ information from [Comic Vine][14]. It also comes with a YACReader Library for organizing and browsing your comic book collection.

![][15]

#### Installation instructions

The [repo][16] currently provides YACReader for Fedora 29, 30, and Rawhide. To install YACReader, use these commands:

```
sudo dnf copr enable atim/yacreader
sudo dnf install yacreader
```

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2019/

作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://joplin.cozic.net/
[4]: https://fedoramagazine.org/wp-content/uploads/2019/04/joplin.png
[5]: https://copr.fedorainfracloud.org/coprs/taw/joplin/
[6]: https://fedoramagazine.org/howto-use-sudo/
[7]: https://github.com/jhawthorn/fzy
[8]: https://jhawthorn.github.io/fzy-demo/
[9]: https://copr.fedorainfracloud.org/coprs/lehrenfried/fzy/
[10]: https://unsplash.com/
[11]: https://fedoramagazine.org/wp-content/uploads/2019/04/fondo.png
[12]: https://copr.fedorainfracloud.org/coprs/atim/fondo/
[13]: https://www.yacreader.com/
[14]: https://comicvine.gamespot.com/
[15]: https://fedoramagazine.org/wp-content/uploads/2019/04/yacreader.png
[16]: https://copr.fedorainfracloud.org/coprs/atim/yacreader/
@ -0,0 +1,294 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building scalable social media sentiment analysis services in Python)
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable)
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)

Building scalable social media sentiment analysis services in Python
======
Learn how you can use spaCy, vaderSentiment, Flask, and Python to add sentiment analysis capabilities to your work.
![Tall building with windows][1]
The [first part][2] of this series provided some background on how sentiment analysis works. Now let's investigate how to add these capabilities to your designs.

### Exploring spaCy and vaderSentiment in Python

#### Prerequisites

* A terminal shell
* Python language binaries (version 3.4+) in your shell
* The **pip** command for installing Python packages
* (optional) A [Python Virtualenv][3] to keep your work isolated from the system

#### Configure your environment

Before you begin writing code, you will need to set up the Python environment by installing the [spaCy][4] and [vaderSentiment][5] packages and downloading a language model to assist your analysis. Thankfully, most of this is relatively easy to do from the command line.

In your shell, type the following command to install the spaCy and vaderSentiment packages:
```
pip install spacy vaderSentiment
```

After the command completes, install a language model that spaCy can use for text analysis. The following command will use the spaCy module to download and install the English language [model][6]:

```
python -m spacy download en_core_web_sm
```
With these libraries and models installed, you are now ready to begin coding.

#### Do a simple text analysis

Use the [Python interpreter interactive mode][7] to write some code that will analyze a single text fragment. Begin by starting the Python environment:

```
$ python
Python 3.6.8 (default, Jan 31 2019, 09:38:34)
[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

_(Your Python interpreter version print might look different than this.)_
1. Import the necessary modules:

    ```
    >>> import spacy
    >>> from vaderSentiment import vaderSentiment
    ```

2. Load the English language model from spaCy:

    ```
    >>> english = spacy.load("en_core_web_sm")
    ```

3. Process a piece of text. This example shows a very simple sentence that we expect to return a slightly positive sentiment:

    ```
    >>> result = english("I like to eat applesauce with sugar and cinnamon.")
    ```

4. Gather the sentences from the processed result. SpaCy has identified and processed the entities within the phrase; this step generates sentiment for each sentence (even though there is only one sentence in this example):

    ```
    >>> sentences = [str(s) for s in result.sents]
    ```

5. Create an analyzer using vaderSentiment:

    ```
    >>> analyzer = vaderSentiment.SentimentIntensityAnalyzer()
    ```

6. Perform the sentiment analysis on the sentences:

    ```
    >>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
    ```
The sentiment variable now contains the polarity scores for the example sentence. Print out the value to see how it analyzed the sentence.

```
>>> print(sentiment)
[{'neg': 0.0, 'neu': 0.737, 'pos': 0.263, 'compound': 0.3612}]
```

What does this structure mean?

On the surface, this is an array with a single dictionary object; had there been multiple sentences, there would be a dictionary for each one. There are four keys in the dictionary that correspond to different types of sentiment. The **neg** key represents negative sentiment, of which none has been reported in this text, as evidenced by the **0.0** value. The **neu** key represents neutral sentiment, which has gotten a fairly high score of **0.737** (with a maximum of **1.0**). The **pos** key represents positive sentiment, which has a moderate score of **0.263**. Last, the **compound** key represents an overall score for the text; this can range from negative to positive, with the value **0.3612** representing a sentiment on the positive side.
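Applications usually reduce these four values to a single label, and the **compound** score is the number most often used for that. As an illustration (this helper is not part of the article; the function name and the ±0.05 cutoffs, which follow the thresholds commonly suggested by VADER's authors, are assumptions), a small self-contained classifier might look like:

```python
def label_sentiment(compound):
    """Map a VADER-style compound score (-1.0..1.0) to a coarse label.

    The +/-0.05 thresholds follow the convention commonly suggested by
    the VADER authors; tune them for your own application.
    """
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

# Score taken from the example output above
print(label_sentiment(0.3612))  # positive
print(label_sentiment(0.0))     # neutral
```

This keeps the raw scores available for ranking or aggregation while still giving end users a readable label.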
To see how these values might change, you can run a small experiment using the code you already entered. The following block demonstrates an evaluation of sentiment scores on a similar sentence.

```
>>> result = english("I love applesauce!")
>>> sentences = [str(s) for s in result.sents]
>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
>>> print(sentiment)
[{'neg': 0.0, 'neu': 0.182, 'pos': 0.818, 'compound': 0.6696}]
```

You can see that by changing the example sentence to something overwhelmingly positive, the sentiment values have changed dramatically.
### Building a sentiment analysis service

Now that you have assembled the basic building blocks for doing sentiment analysis, let's turn that knowledge into a simple service.

For this demonstration, you will create a [RESTful][8] HTTP server using the Python [Flask package][9]. This service will accept text data in English and return the sentiment analysis. Please note that this example service is for learning the technologies involved and not something to put into production.

#### Prerequisites

* A terminal shell
* The Python language binaries (version 3.4+) in your shell
* The **pip** command for installing Python packages
* The **curl** command
* A text editor
* (optional) A [Python Virtualenv][3] to keep your work isolated from the system

#### Configure your environment

This environment is nearly identical to the one in the previous section. The only difference is the addition of the Flask package to Python.
1. Install the necessary dependencies:

    ```
    pip install spacy vaderSentiment flask
    ```

2. Install the English language model for spaCy:

    ```
    python -m spacy download en_core_web_sm
    ```
#### Create the application file

Open your editor and create a file named **app.py**. Add the following contents to it _(don't worry, we will review every line)_:
```
import flask
import spacy
import vaderSentiment.vaderSentiment as vader


app = flask.Flask(__name__)
analyzer = vader.SentimentIntensityAnalyzer()
english = spacy.load("en_core_web_sm")


def get_sentiments(text):
    result = english(text)
    sentences = [str(sent) for sent in result.sents]
    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
    return sentiments


@app.route("/", methods=["POST", "GET"])
def index():
    if flask.request.method == "GET":
        return "To access this service send a POST request to this URL with" \
               " the text you want analyzed in the body."
    body = flask.request.data.decode("utf-8")
    sentiments = get_sentiments(body)
    return flask.json.dumps(sentiments)
```
Although this is not an overly large source file, it is quite dense. Let's walk through the pieces of this application and describe what they are doing.

```
import flask
import spacy
import vaderSentiment.vaderSentiment as vader
```

The first three lines bring in the packages needed for performing the language analysis and the HTTP framework.

```
app = flask.Flask(__name__)
analyzer = vader.SentimentIntensityAnalyzer()
english = spacy.load("en_core_web_sm")
```

The next three lines create a few global variables. The first variable, **app**, is the main entry point that Flask uses for creating HTTP routes. The second variable, **analyzer**, is the same type used in the previous example, and it will be used to generate the sentiment scores. The last variable, **english**, is also the same type used in the previous example, and it will be used to annotate and tokenize the initial text input.

You might be wondering why these variables have been declared globally. In the case of the **app** variable, this is standard procedure for many Flask applications. But, in the case of the **analyzer** and **english** variables, the decision to make them global is based on the load times associated with the classes involved. Although the load time might appear minor, when it's run in the context of an HTTP server, these delays can negatively impact performance.
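The cost trade-off behind that design choice can be made concrete with a toy model (this sketch is not from the article; `expensive_load` simply simulates a slow model constructor): load once at module scope and every request reuses the object, load per request and every request pays the full cost again.

```python
import time

def expensive_load():
    """Stand-in for a slow model constructor such as spacy.load()."""
    time.sleep(0.05)  # simulate 50 ms of load time
    return object()

# Loaded once at module scope, as the service does with `english` and `analyzer`.
MODEL = expensive_load()

def handler_with_global():
    return MODEL                 # reuses the preloaded object

def handler_with_local():
    return expensive_load()      # pays the full load cost on every request

start = time.perf_counter()
for _ in range(10):
    handler_with_global()
global_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(10):
    handler_with_local()
local_time = time.perf_counter() - start

print(local_time > global_time)  # True: per-request loading is far slower
```

With a real spaCy model the per-request penalty is much larger than 50 ms, which is why the service loads everything once at startup.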
```
def get_sentiments(text):
    result = english(text)
    sentences = [str(sent) for sent in result.sents]
    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
    return sentiments
```

The next piece is the heart of the service—a function for generating sentiment values from a string of text. You can see that the operations in this function correspond to the commands you ran in the Python interpreter earlier. Here they're wrapped in a function definition, with the source text passed in as the **text** variable, and finally the **sentiments** variable returned to the caller.
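Because this function only splits text into sentences and scores each one, its shape can be unit-tested without loading the heavy models. The stand-ins below are invented for illustration (they are not part of the article's service, and the fake analyzer's scoring rule is arbitrary):

```python
def fake_english(text):
    """Stand-in for the spaCy pipeline: naive sentence split on '. '."""
    class Doc:
        def __init__(self, sents):
            self.sents = sents
    return Doc([s for s in text.split(". ") if s])

class FakeAnalyzer:
    """Stand-in for SentimentIntensityAnalyzer: scores exclamations higher."""
    def polarity_scores(self, sentence):
        return {"compound": 0.5 if "!" in sentence else 0.0}

def get_sentiments(text, english=fake_english, analyzer=FakeAnalyzer()):
    # Same aggregation shape as the service: one score dict per sentence.
    result = english(text)
    sentences = [str(sent) for sent in result.sents]
    return [analyzer.polarity_scores(str(s)) for s in sentences]

scores = get_sentiments("I like apples. I love applesauce!")
print(scores)  # [{'compound': 0.0}, {'compound': 0.5}]
```

Passing the pipeline and analyzer as parameters like this is one way to keep the aggregation logic testable while the real objects are injected in production.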
```
@app.route("/", methods=["POST", "GET"])
def index():
    if flask.request.method == "GET":
        return "To access this service send a POST request to this URL with" \
               " the text you want analyzed in the body."
    body = flask.request.data.decode("utf-8")
    sentiments = get_sentiments(body)
    return flask.json.dumps(sentiments)
```
The last function in the source file contains the logic that will instruct Flask how to configure the HTTP server for the service. It starts with a line that will associate an HTTP route **/** with the request methods **POST** and **GET**.

After the function definition line, the **if** clause will detect if the request method is **GET**. If a user sends this request to the service, the following line will return a text message instructing how to access the server. This is largely included as a convenience to end users.

The next line uses the **flask.request** object to acquire the body of the request, which should contain the text string to be processed. The **decode** function will convert the array of bytes into a usable, formatted string. The decoded text message is now passed to the **get_sentiments** function to generate the sentiment scores. Last, the scores are returned to the user through the HTTP framework.

You should now save the file, if you have not done so already, and return to the shell.

#### Run the sentiment service

With everything in place, running the service is quite simple with Flask's built-in debugging server. To start the service, enter the following command from the same directory as your source file:

```
FLASK_APP=app.py flask run
```

You will now see some output from the server in your shell, and the server will be running. To test that the server is running, you will need to open a second shell and use the **curl** command.

First, check to see that the instruction message is printed by entering this command:

```
curl http://localhost:5000
```

You should see the instruction message:

```
To access this service send a POST request to this URL with the text you want analyzed in the body.
```

Next, send a test message to see the sentiment analysis by running the following command:

```
curl http://localhost:5000 --header "Content-Type: application/json" --data "I love applesauce!"
```

The response you get from the server should be similar to the following:

```
[{"compound": 0.6696, "neg": 0.0, "neu": 0.182, "pos": 0.818}]
```
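
The response is plain JSON, so any client can consume it with standard tooling. As a minimal sketch (the `body` string below is just the example response above), the scores can be parsed with Python's standard library:

```python
import json

# Example response body from the service (one score dict per sentence)
body = '[{"compound": 0.6696, "neg": 0.0, "neu": 0.182, "pos": 0.818}]'

scores = json.loads(body)
# The compound field summarizes the overall polarity of the sentence
compound = scores[0]["compound"]
print(compound)  # 0.6696
```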

Congratulations! You have now implemented a RESTful HTTP sentiment analysis service. You can find a link to a [reference implementation of this service and all the code from this article on GitHub][10].

### Continue exploring

Now that you have an understanding of the principles and mechanics behind natural language processing and sentiment analysis, here are some ways to further your discovery of this topic.

#### Create a streaming sentiment analyzer on OpenShift

While creating local applications to explore sentiment analysis is a convenient first step, having the ability to deploy your applications for wider usage is a powerful next step. By following the instructions and code in this [workshop from Radanalytics.io][11], you will learn how to create a sentiment analyzer that can be containerized and deployed to a Kubernetes platform. You will also see how Apache Kafka is used as a framework for event-driven messaging and how Apache Spark can be used as a distributed computing platform for sentiment analysis.

#### Discover live data with the Twitter API

Although the [Radanalytics.io][12] lab generated synthetic tweets to stream, you are not limited to synthetic data. In fact, anyone with a Twitter account can access the Twitter streaming API and perform sentiment analysis on tweets with the [Tweepy Python][13] package.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable

作者:[Michael McCune][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/elmiko/users/jschlessman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-1
[3]: https://virtualenv.pypa.io/en/stable/
[4]: https://pypi.org/project/spacy/
[5]: https://pypi.org/project/vaderSentiment/
[6]: https://spacy.io/models
[7]: https://docs.python.org/3.6/tutorial/interpreter.html
[8]: https://en.wikipedia.org/wiki/Representational_state_transfer
[9]: http://flask.pocoo.org/
[10]: https://github.com/elmiko/social-moments-service
[11]: https://github.com/radanalyticsio/streaming-lab
[12]: http://Radanalytics.io
[13]: https://github.com/tweepy/tweepy
@ -0,0 +1,117 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with social media sentiment analysis in Python)
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python)
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)

Getting started with social media sentiment analysis in Python
======
Learn the basics of natural language processing and explore two useful Python packages.

![Raspberry Pi and Python][1]

Natural language processing (NLP) is a type of machine learning that addresses the correlation between spoken/written languages and computer-aided analysis of those languages. We experience numerous innovations from NLP in our daily lives, from writing assistance and suggestions to real-time speech translation and interpretation.

This article examines one specific area of NLP: sentiment analysis, with an emphasis on determining the positive, negative, or neutral nature of the input language. This part will explain the background behind NLP and sentiment analysis and explore two open source Python packages. [Part 2][2] will demonstrate how to begin building your own scalable sentiment analysis services.

When learning sentiment analysis, it is helpful to have an understanding of NLP in general. This article won't dig into the mathematical guts; rather, our goal is to clarify key concepts in NLP that are crucial to incorporating these methods into your solutions in practical ways.

### Natural language and text data

A reasonable place to begin is defining: "What is natural language?" It is the means by which we, as humans, communicate with one another. The primary modalities for communication are verbal and text. We can take this a step further and focus solely on text communication; after all, living in an age of pervasive Siri, Alexa, etc., we know speech is a group of computations away from text.

### Data landscape and challenges

Limiting ourselves to textual data, what can we say about language and text? First, language, particularly English, is fraught with exceptions to rules, plurality of meanings, and contextual differences that can confuse even a human interpreter, let alone a computational one. In elementary school, we learn articles of speech and punctuation, and from speaking our native language, we acquire intuition about which words have less significance when searching for meaning. Examples of the latter would be articles of speech such as "a," "the," and "or," which in NLP are referred to as _stop words_ , since traditionally an NLP algorithm's search for meaning stops when reaching one of these words in a sequence.
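
The effect of stop-word removal can be sketched in a few lines (the stop-word set below is a tiny illustrative subset, not any standard list):

```python
# Tiny illustrative stop-word set; real NLP libraries ship curated lists
stop_words = {"a", "an", "the", "or", "and", "is", "of"}

tokens = "the rain or the sun is a gift".split()
content_words = [t for t in tokens if t not in stop_words]
print(content_words)  # ['rain', 'sun', 'gift']
```

The surviving tokens are the ones an algorithm would treat as carrying meaning.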

Since our goal is to automate the classification of text as belonging to a sentiment class, we need a way to work with text data in a computational fashion. Therefore, we must consider how to represent text data to a machine. As we know, the rules for utilizing and interpreting language are complicated, and the size and structure of input text can vary greatly. We'll need to transform the text data into numeric data, the form of choice for machines and math. This transformation falls under the area of _feature extraction_.

Upon extracting numeric representations of input text data, one refinement might be, given an input body of text, to determine a set of quantitative statistics for the articles of speech listed above and perhaps classify documents based on them. For example, a glut of adverbs might make a copywriter bristle, or excessive use of stop words might be helpful in identifying term papers with content padding. Admittedly, this may not have much bearing on our goal of sentiment analysis.

### Bag of words

When you assess a text statement as positive or negative, what are some contextual clues you use to assess its polarity (i.e., whether the text has positive, negative, or neutral sentiment)? One way is connotative adjectives: something called "disgusting" is viewed as negative, but if the same thing were called "beautiful," you would judge it as positive. Colloquialisms, by definition, give a sense of familiarity and often positivity, whereas curse words could be a sign of hostility. Text data can also include emojis, which carry inherent sentiments.

Understanding the polarity influence of individual words provides a basis for the [_bag-of-words_][3] (BoW) model of text. It considers a set of words or vocabulary and extracts measures about the presence of those words in the input text. The vocabulary is formed by considering text where the polarity is known, referred to as _labeled training data_. Features are extracted from this set of labeled data, then the relationships between the features are analyzed and labels are associated with the data.
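
The extraction step can be sketched with the standard library alone (the two "documents" below are invented for illustration):

```python
from collections import Counter

# Two tiny "documents" invented for illustration
docs = ["this movie was beautiful beautiful", "this movie was disgusting"]

# Build the vocabulary from every word in the training text
vocab = sorted({word for doc in docs for word in doc.split()})

# Each document becomes a vector of per-word counts over the vocabulary
vectors = [[Counter(doc.split())[word] for word in vocab] for doc in docs]
print(vocab)    # ['beautiful', 'disgusting', 'movie', 'this', 'was']
print(vectors)  # [[2, 0, 1, 1, 1], [0, 1, 1, 1, 1]]
```

A classifier would then learn which positions in these vectors correlate with which sentiment labels.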

The name "bag of words" illustrates what it utilizes: namely, individual words without consideration of spatial locality or context. A vocabulary typically is built from all words appearing in the training set, which tends to be pruned afterward. Stop words, if not cleaned prior to training, are removed due to their high frequency and low contextual utility. Rarely used words can also be removed, given the lack of information they provide for general input cases.

It is important to note, however, that you can (and should) go further and consider the appearance of words beyond their use in an individual instance of training data, or what is called [_term frequency_][4] (TF). You should also consider the counts of a word through all instances of input data; typically the infrequency of words among all documents is notable, which is called the [_inverse document frequency_][5] (IDF). These metrics are bound to be mentioned in other articles and software packages on this subject, so having an awareness of them can only help.
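
Both measures are simple to compute; here is a bare-bones sketch using the common log-scaled IDF variant (the toy documents are invented for illustration):

```python
import math

# Toy tokenized documents, invented for illustration
docs = [["good", "movie"], ["bad", "movie"], ["good", "day"]]

def term_frequency(term, doc):
    # Fraction of this document's tokens that are the term
    return doc.count(term) / len(doc)

def inverse_document_frequency(term, docs):
    # Log of (total documents / documents containing the term)
    containing = sum(1 for doc in docs if term in doc)
    return math.log(len(docs) / containing)

# "movie" appears in 2 of 3 documents, so its IDF (and TF-IDF) is low
tf_idf = term_frequency("movie", docs[0]) * inverse_document_frequency("movie", docs)
print(round(tf_idf, 3))  # 0.203
```

Words that appear everywhere score near zero, which is exactly the down-weighting behavior described above.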

BoW is useful in a number of document classification applications; however, in the case of sentiment analysis, things can be gamed when the lack of contextual awareness is leveraged. Consider the following sentences:

  * We are not enjoying this war.
  * I loathe rainy days, good thing today is sunny.
  * This is not a matter of life and death.

The sentiment of these phrases is questionable for human interpreters, and by strictly focusing on instances of individual vocabulary words, it's difficult for a machine interpreter as well.

Groupings of words, called _n-grams_, can also be considered in NLP. A bigram considers groups of two adjacent words instead of (or in addition to) the single BoW. This should alleviate situations such as "not enjoying" above, but it will remain open to gaming due to its loss of contextual awareness. Furthermore, in the second sentence above, the sentiment context of the second half of the sentence could be perceived as negating the first half. Thus, spatial locality of contextual clues also can be lost in this approach.

Complicating matters from a pragmatic perspective is the sparsity of features extracted from a given input text. For a thorough and large vocabulary, a count is maintained for each word, which can be considered an integer vector. Most documents will have a large number of zero counts in their vectors, which adds unnecessary space and time complexity to operations. While a number of clever approaches have been proposed for reducing this complexity, it remains an issue.
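
A bigram extractor is a one-liner; note how it captures the unit "not enjoying" from the first sentence above:

```python
def bigrams(tokens):
    # Pair each token with its immediate successor
    return list(zip(tokens, tokens[1:]))

pairs = bigrams("we are not enjoying this war".split())
print(pairs[2])  # ('not', 'enjoying')
```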

### Word embeddings

Word embeddings are a distributed representation that allows words with a similar meaning to have a similar representation. This is based on using a real-valued vector to represent words in connection with the company they keep, as it were. The focus is on the manner that words are used, as opposed to simply their existence. In addition, a huge pragmatic benefit of word embeddings is their focus on dense vectors; by moving away from a word-counting model with commensurate amounts of zero-valued vector elements, word embeddings provide a more efficient computational paradigm with respect to both time and storage.
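
Because embeddings are dense real-valued vectors, similarity of meaning is usually measured as similarity of direction, i.e., cosine similarity. A sketch with invented three-dimensional vectors (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: dot product over norms
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy "embeddings" for three words
good = [0.9, 0.1, 0.2]
great = [0.85, 0.15, 0.25]
awful = [-0.8, 0.3, 0.1]

# Words used in similar contexts should point in similar directions
print(cosine_similarity(good, great) > cosine_similarity(good, awful))  # True
```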

Following are two prominent word embedding approaches.

#### Word2vec

The first of these word embeddings, [Word2vec][6], was developed at Google. You'll probably see this embedding method mentioned as you go deeper in your study of NLP and sentiment analysis. It utilizes either a _continuous bag of words_ (CBOW) or a _continuous skip-gram_ model. In CBOW, a word's context is learned during training based on the words surrounding it. Continuous skip-gram learns the words that tend to surround a given word. Although this is more than what you'll probably need to tackle, if you're ever faced with having to generate your own word embeddings, the author of Word2vec advocates the CBOW method for speed and assessment of frequent words, while the skip-gram approach is better suited for embeddings where rare words are more important.

#### GloVe

The second word embedding, [_Global Vectors for Word Representation_][7] (GloVe), was developed at Stanford. It's an extension to the Word2vec method that attempts to combine the information gained through classical global text statistical feature extraction with the local contextual information determined by Word2vec. In practice, GloVe has outperformed Word2vec for some applications, while falling short of Word2vec's performance in others. Ultimately, the targeted dataset for your word embedding will dictate which method is optimal; as such, it's good to know the existence and high-level mechanics of each, as you'll likely come across them.

#### Creating and using word embeddings

Finally, it's useful to know how to obtain word embeddings; in part 2, you'll see that we are standing on the shoulders of giants, as it were, by leveraging the substantial work of others in the community. This is one method of acquiring a word embedding: namely, using an existing trained and proven model. Indeed, myriad models exist for English and other languages, and it's possible that one does what your application needs out of the box!

If not, the opposite end of the spectrum in terms of development effort is training your own standalone model without consideration of your application. In essence, you would acquire substantial amounts of labeled training data and likely use one of the approaches above to train a model. Even then, you are still only at the point of acquiring understanding of your input-text data; you then need to develop a model specific for your application (e.g., analyzing sentiment valence in software version-control messages), which, in turn, requires its own time and effort.

You also could train a word embedding on data specific to your application; while this could reduce time and effort, the word embedding would be application-specific, which would reduce reusability.

### Available tooling options

You may wonder how you'll ever get to a point of having a solution for your problem, given the intensive time and computing power needed. Indeed, the complexities of developing solid models can be daunting; however, there is good news: there are already many proven models, tools, and software libraries available that may provide much of what you need. We will focus on [Python][8], which conveniently has a plethora of tooling in place for these applications.

#### SpaCy

[SpaCy][9] provides a number of language models for parsing input text data and extracting features. It is highly optimized and touted as the fastest library of its kind. Best of all, it's open source! SpaCy performs tokenization, parts-of-speech classification, and dependency annotation. It contains word embedding models for performing this and other feature extraction operations for over 46 languages. You will see how it can be used for text analysis and feature extraction in the second article in this series.

#### vaderSentiment

The [vaderSentiment][10] package provides a measure of positive, negative, and neutral sentiment. As the [original paper][11]'s title ("VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text") indicates, the models were developed and tuned specifically for social media text data. VADER was trained on a thorough set of human-labeled data, which included common emoticons, UTF-8 encoded emojis, and colloquial terms and abbreviations (e.g., meh, lol, sux).

For given input text data, vaderSentiment returns a 3-tuple of polarity score percentages. It also provides a single scoring measure, referred to as _vaderSentiment's compound metric_. This is a real-valued measurement within the range **[-1, 1]** wherein sentiment is considered positive for values greater than **0.05** , negative for values less than **-0.05** , and neutral otherwise.
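
Those thresholds translate directly into code; a small helper (the function name is my own) might look like:

```python
def label_compound(compound):
    # Thresholds as described for vaderSentiment's compound metric
    if compound > 0.05:
        return "positive"
    if compound < -0.05:
        return "negative"
    return "neutral"

print(label_compound(0.6696), label_compound(-0.4), label_compound(0.01))
# positive negative neutral
```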

In [part 2][2], you will learn how to use these tools to add sentiment analysis capabilities to your designs.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python

作者:[Michael McCune][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/elmiko/users/jschlessman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python)
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-2
[3]: https://en.wikipedia.org/wiki/Bag-of-words_model
[4]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Term_frequency
[5]: https://en.wikipedia.org/wiki/Tf%E2%80%93idf#Inverse_document_frequency
[6]: https://en.wikipedia.org/wiki/Word2vec
[7]: https://en.wikipedia.org/wiki/GloVe_(machine_learning)
[8]: https://www.python.org/
[9]: https://pypi.org/project/spacy/
[10]: https://pypi.org/project/vaderSentiment/
[11]: http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf
@ -0,0 +1,78 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (This is how System76 does open hardware)
[#]: via: (https://opensource.com/article/19/4/system76-hardware)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

This is how System76 does open hardware
======
What sets the new Thelio line of desktops apart from the rest.

![metrics and data shown on a computer screen][1]

Most people know very little about the hardware in their computers. As a long-time Linux user, I've had my share of frustration while getting my wireless cards, video cards, displays, and other hardware working with my chosen distribution. Proprietary hardware often makes it difficult to determine why an Ethernet controller, wireless controller, or mouse performs differently than we expect. As Linux distributions have matured, this has become less of a problem, but we still see some quirks with touchpads and other peripherals, especially when we don't know much—if anything—about our underlying hardware.

Companies like [System76][2] aim to take these types of problems out of the Linux user experience. System76 manufactures a line of Linux laptops, desktops, and servers, and even offers its own Linux distro, [Pop!_OS][3], as an option for buyers. Recently I had the privilege of visiting System76's plant in Denver for [the unveiling][4] of [Thelio][5], its new desktop product line.

### About Thelio

System76 says Thelio's open hardware daughterboard, named Thelio Io after the fifth moon of Jupiter, is one thing that makes the computer unique in the marketplace. Thelio Io is certified [OSHWA #us000145][6] and has four SATA ports for storage and an embedded controller for fan and power button control. Thelio Io SAS is certified [OSHWA #us000146][7] and has four U.2 ports for storage and no embedded controller. During a demonstration, System76 showed how these components adjust fans throughout the chassis to optimize the unit's performance.

The controller also runs the power button and the LED ring around the button, which glows at 100% brightness when it is pressed. This provides both tactile and visual confirmation that the unit is being powered on. While the computer is in use, the button is set to 35% brightness, and when it's in suspend mode, it pulses between 2.35% and 25%. When the computer is off, the LED remains dimly lit so that it's easy to find the power control in a dark room.

Thelio's embedded controller is a low-power [ATmega32U4][8] microchip, and the controller's setup can be prototyped with an Arduino Micro. The number of Thelio Io boards changes depending on which Thelio model you purchase.

Thelio is also perhaps the best-designed computer case and system I have ever seen. You'll probably agree if you have ever skinned your knuckles trying to operate inside a typical PC case. I have done this a number of times, and I have the scars to prove it.

### Why open hardware?

The boards were designed in [KiCAD][9], and you can access all of Thelio's design files under GPL on [GitHub][10]. So, why would a company that competes with other PC manufacturers design a unique interface and then license it openly? It's because the company recognizes the value of open design and the ability to share and adjust an I/O board to your needs, even if you're a competitor in the marketplace.

![Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.][11]

Don Watkins speaks with System76 CEO Carl Richell at the [Thelio launch event][12].

I asked [Carl Richell][13], System76's founder and CEO, whether the company is concerned that openly licensing its hardware designs means someone could take its unique design and use it to drive System76 out of business. He said:

> Open hardware benefits all of us. It's how we further advance technology and make it more available to everyone. We welcome anyone who wishes to improve on Thelio's design to do so. Opening the hardware not only helps advance improvements of our computers more quickly, but it also empowers our customers to truly own 100% of their device. Our goal is to remove as much proprietary functioning as we can, while still producing a competitive Linux computer for customers.
>
> We've been working with the Linux community for over 13 years to create a flawless and digestible experience on all of our laptops, desktops, and servers. Our long tenure serving the Linux community, providing our customers with a high level of service, and our personability are what makes System76 unique.

I also asked Carl why open hardware makes sense for System76 and the PC business in general. He replied:

> System76 was founded on the idea that technology should be open and accessible to everyone. We're not yet at the point where we can create a computer that is 100% open source, but with open hardware, we're one large, essential step closer to reaching that point.
>
> We live in an era where technology has become a utility. Computers are tools for people at every level of education and across many industries. With everyone's needs specific, each person has their own ideas on how they might improve the computer and its software as their primary tool. Having our computers open allows these ideas to come to fruition, which in turn makes the technology a more powerful tool. In an open environment, we constantly get to iterate a better PC. And that's kind of cool.

We wrapped up our conversation by talking about System76's roadmap, which includes open hardware mini PCs and, eventually, laptops. Existing mini PCs and laptops sold under the System76 brand are manufactured by other vendors and are not based on open hardware (although their Linux software is, of course, open source).

Designing and supporting open hardware is a game-changer in the PC business, and it is what sets System76's new Thelio line of desktop computers apart.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/system76-hardware

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://system76.com/
[3]: https://opensource.com/article/18/1/behind-scenes-popos-linux
[4]: /article/18/11/system76-thelio-desktop-computer
[5]: https://system76.com/desktops
[6]: https://certification.oshwa.org/us000145.html
[7]: https://certification.oshwa.org/us000146.html
[8]: https://www.microchip.com/wwwproducts/ATmega32u4
[9]: http://kicad-pcb.org/
[10]: https://github.com/system76/thelio-io
[11]: https://opensource.com/sites/default/files/uploads/don_system76_ceo.jpg (Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.)
[12]: https://trevgstudios.smugmug.com/System76/121418-Thelio-Press-Event/i-FKWFxFv
[13]: https://www.linkedin.com/in/carl-richell-9435781
@ -0,0 +1,147 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (New Features Coming to Debian 10 Buster Release)
[#]: via: (https://itsfoss.com/new-features-coming-to-debian-10-buster-release/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)

New Features Coming to Debian 10 Buster Release
======

Debian 10 Buster is nearing its release. The first release candidate is already out and we should see the final release, hopefully, in a few weeks.

If you are excited about this major new release, let me tell you what’s in it for you.

### Debian 10 Buster Release Schedule

There is no set release date for [Debian 10 Buster][1]. Why is that so? Unlike other distributions, [Debian][2] doesn’t do time-based releases. It instead focuses on fixing release-critical bugs. Release-critical bugs are bugs that either have security issues ([CVEs][3]) or other critical problems that prevent Debian from releasing.

Debian’s archive has three parts, called main, contrib, and non-free. Of the three, Debian Developers and Release Managers are most concerned that the packages which form the bedrock of the distribution, i.e., main, are rock stable. Packages are also given priority values such as Essential, Required, Important, Standard, Optional, and Extra. More on this in a later Debian article.

This is necessary because Debian is used as a server in many different environments, and people have come to depend on it. Debian also pays close attention to upgrade cycles, asking people to test upgrades, check whether anything breaks, and report any breakage back to Debian.

This commitment to stability is one of the [many reasons why I love to use Debian][4].

### What’s new in Debian 10 Buster Release

Here are a few visual and under-the-hood changes in the upcoming major release of Debian.

#### New theme and wallpaper

The Debian theme for Buster is called [FuturePrototype][5] and can be seen below:

![Debian Buster FuturePrototype Theme][6]

#### 1\. GNOME Desktop 3.30

The GNOME desktop, which was at version 3.22 in Debian Stretch, is updated to 3.30 in Buster. Some of the new packages included in this GNOME desktop release are gnome-todo and tracker (instead of tracker-gui); a dependency on gstreamer1.0-packagekit enables automatic codec installation for playing movies and the like. The big move is that all packages have been ported from libgtk2+ to libgtk3+.

#### 2\. Linux Kernel 4.19.0-4

Debian uses LTS kernel versions, so you can expect much better hardware support and a long 5-year maintenance and support cycle from Debian. The kernel has moved from 4.9.0-3 to 4.19.0-4.

```
$ uname -r
4.19.0-4-amd64
```

#### 3\. OpenJDK 11.0

For a long time Debian was stuck on OpenJDK 8.0. Now, in Debian Buster, it has moved to OpenJDK 11.0, and there is a team that will take care of new versions.

#### 4\. AppArmor Enabled by Default

In Debian Buster, [AppArmor][7] will be enabled by default. While this is a good thing, care will have to be taken by system administrators to enable the correct policies. This is only the first step; many profiles will probably need fixing before AppArmor is as useful for users as envisioned.

#### 5\. Nodejs 10.15.2

For a long time Debian had Nodejs 4.8 in the repo. In this cycle Debian has moved to Nodejs 10.15.2. In fact, Debian Buster carries many JavaScript libraries, such as yarnpkg (an npm alternative) and many others.

Of course, you can [install the latest Nodejs in Debian][8] from the project’s repository, but it’s good to see a newer version in the Debian repository.

#### 6\. NFtables replaces iptables

Debian Buster provides nftables as a full replacement for iptables, which means better and easier syntax, better support for dual-stack IPv4/IPv6 firewalls, and more.

#### 7\. Support for many ARM64 and ARMHF SBC boards

There has been a constant stream of new SBC boards that Debian supports. The latest among these are pine64_plus and pinebook for ARM64, and Firefly-RK3288 and u-boot-rockchip for ARMHF, as well as the Odroid HC1/HC2 boards, SolidRun Cubox-i Dual/Quad (1.5som), SolidRun Cubox-i Dual/Quad (1.5som+emmc) boards, and Cubietruckplus. There is support for Rock 64, Banana Pi M2 Berry, Pine A64 LTS Board, and Olimex A64 Teres-1, as well as Raspberry Pi 1, Zero, and Pi 3. Support will be out of the box for RISC-V systems as well.

#### 8\. Python 2 is dead, long live Python 3

Python 2 will be [deprecated][9] on January 1, 2020 by python.org. While Debian does still have Python 2.7, efforts are underway to move all packages to Python 3 so that it can be removed from the repository. This may happen either at the Buster release or in a future point release, but it is imminent. So Python developers are encouraged to make their code bases compatible with Python 3. At the time of writing, both Python 2 and Python 3 are supported in Debian Buster.

#### 9\. Mailman 3

Mailman 3 is finally available in Debian. [Mailman][10] has been split up into components; to get the whole stack, install mailman3-full.

#### 10\. Any Existing PostgreSQL Databases Will Need To Be Reindexed

Due to updates in the glibc locale data, the way information is sorted in text indexes has changed, so it is beneficial to reindex your data to avoid data corruption in the near future.

#### 11\. Bash 5.0 by Default

You have probably already heard about the [new features in Bash 5.0][11]; this version is already in Debian.

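One small illustration (chosen by me, not from the linked article) of a Bash 5.0 addition is the `EPOCHSECONDS` variable, which exposes the current Unix time without forking `date`:

```
# EPOCHSECONDS requires Bash >= 5.0; prints the current Unix timestamp.
bash -c 'echo "$EPOCHSECONDS"'
```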
#### 12\. Debian implementing /usr/merge

An excellent freedesktop [primer][12] on what /usr/merge brings has already been shared. A couple of things to note, though: while Debian would like to do the whole transition, there is a possibility that, due to unforeseen circumstances, some binaries may not be in a position to make the change. One point to note is that /var and /etc will be left alone, so people who are using containers or the cloud do not have to worry too much. :)

#### 13\. Secure-boot support

With Buster RC1, Debian now has Secure Boot support. This means machines that have the Secure Boot bit turned on should be able to install Debian easily. No need to disable or work around Secure Boot anymore. :)

#### 14\. Calamares Live-installer for Debian-Live Images

For the Debian Buster live images, Debian introduces the [Calamares installer][13] instead of the good old debian-installer. While debian-installer has many more features than Calamares, for newbies Calamares provides a fresh, easier alternative. Some screenshots from the installation process:

![Calamares Partitioning Stage][14]

As can be seen, it is pretty easy to install Debian with Calamares: only five stages to go through, and you can have Debian installed at your end.

### Download Debian 10 Live Images (only for testing)

Don't use it on production machines just yet. Try it on a test machine or a virtual machine.

You can get Debian 64-bit and 32-bit images from the Debian Live [directory][15]. If you want 64-bit, look into the 64-bit directory; if you want 32-bit, look into the 32-bit directory.

[Debian 10 Buster Live Images][15]

If you upgrade from the existing stable release and something breaks, check whether it has been reported against the [upgrade-reports][16] pseudo-package, using [reportbug][17] with the package you saw the issue in. If the bug has not been reported against that package, report it and share as much information as you can.

**In Conclusion**

Thousands of packages have been updated, and it is virtually impossible to list them all, but I have tried to list some of the major changes you can look for in Debian Buster. What do you think of it?

--------------------------------------------------------------------------------

via: https://itsfoss.com/new-features-coming-to-debian-10-buster-release/

作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://wiki.debian.org/DebianBuster
[2]: https://www.debian.org/
[3]: https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures
[4]: https://itsfoss.com/reasons-why-i-love-debian/
[5]: https://wiki.debian.org/DebianArt/Themes/futurePrototype
[6]: https://itsfoss.com/wp-content/uploads/2019/04/debian-buster-theme-800x450.png
[7]: https://wiki.debian.org/AppArmor
[8]: https://itsfoss.com/install-nodejs-ubuntu/
[9]: https://www.python.org/dev/peps/pep-0373/
[10]: https://www.list.org/
[11]: https://itsfoss.com/bash-5-release/
[12]: https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
[13]: https://calamares.io/about/
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/calamares-partitioning-wizard.jpg?fit=800%2C538&ssl=1
[15]: https://cdimage.debian.org/cdimage/weekly-live-builds/
[16]: https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=upgrade-reports;dist=unstable
[17]: https://itsfoss.com/bug-report-debian/
@ -0,0 +1,341 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?)
[#]: via: (https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?
======

Do you know which tools we can use to troubleshoot or monitor real-time disk activity in Linux?

If **[Linux system performance][1]** slows down, we may use the **[top command][2]** to see which processes are consuming the most resources on the server. It is widely used by Linux administrators in the real world.

If you don't see much difference in the process output, you still have other things to check. I would advise you to look at the `wa` status in the top output, because most of the time server performance is degraded by heavy I/O reads and writes on the hard disk. If it is high or fluctuating, that could be the cause, so we need to check the I/O activity on the hard drive.

We can monitor disk I/O statistics for all disks and filesystems on a Linux system using the `iotop` and `iostat` commands.

### What Is iotop?

iotop is a top-like utility for displaying real-time disk activity.

iotop watches I/O usage information output by the Linux kernel and displays a table of current I/O usage by processes or threads on the system.

It displays the I/O bandwidth read and written by each process/thread. It also displays the percentage of time the thread/process spent while swapping in and while waiting on I/O.

Total DISK READ and Total DISK WRITE values represent total read and write bandwidth between processes and kernel threads on the one side and the kernel block device subsystem on the other.

Actual DISK READ and Actual DISK WRITE values represent the corresponding bandwidths for actual disk I/O between the kernel block device subsystem and the underlying hardware (HDD, SSD, etc.).

### How To Install iotop In Linux?

We can easily install it with the help of a package manager, since the package is available in all Linux distribution repositories.

For **`Fedora`** systems, use the **[DNF Command][3]** to install iotop.

```
$ sudo dnf install iotop
```

For **`Debian/Ubuntu`** systems, use the **[APT-GET Command][4]** or **[APT Command][5]** to install iotop.

```
$ sudo apt install iotop
```

For **`Arch Linux`** based systems, use the **[Pacman Command][6]** to install iotop.

```
$ sudo pacman -S iotop
```

For **`RHEL/CentOS`** systems, use the **[YUM Command][7]** to install iotop.

```
$ sudo yum install iotop
```

For **`openSUSE Leap`** systems, use the **[Zypper Command][8]** to install iotop.

```
$ sudo zypper install iotop
```

### How To Monitor Disk I/O Activity/Statistics In Linux Using iotop Command?

There are many options available in the iotop command to check various statistics about disk I/O.

Run the iotop command without any arguments to see each process's or thread's current I/O usage.

```
# iotop
```

[![][9]![][9]][10]

If you would like to check which processes are actually doing I/O, then run the iotop command with the `-o` or `--only` option.

```
# iotop --only
```

[![][9]![][9]][11]

**Details:**

  * **`IO:`** It shows the I/O utilization of each process, which includes disk and swap.
  * **`SWAPIN:`** It shows only the swap usage of each process.

### What Is iostat?

iostat is used to report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.

The iostat command monitors system input/output device loading by observing the time the devices are active in relation to their average transfer rates.

The iostat command generates reports that can be used to change the system configuration to better balance the input/output load between physical disks.

All statistics are reported each time the iostat command is run. The report consists of a CPU header row followed by a row of CPU statistics.

On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed, followed by a line of statistics for each configured device.

The iostat command generates two types of reports: the CPU Utilization report and the Device Utilization report.

### How To Install iostat In Linux?

The iostat tool is part of the sysstat package, so we can easily install it with the help of a package manager, since the package is available in all Linux distribution repositories.

For **`Fedora`** systems, use the **[DNF Command][3]** to install sysstat.

```
$ sudo dnf install sysstat
```

For **`Debian/Ubuntu`** systems, use the **[APT-GET Command][4]** or **[APT Command][5]** to install sysstat.

```
$ sudo apt install sysstat
```

For **`Arch Linux`** based systems, use the **[Pacman Command][6]** to install sysstat.

```
$ sudo pacman -S sysstat
```

For **`RHEL/CentOS`** systems, use the **[YUM Command][7]** to install sysstat.

```
$ sudo yum install sysstat
```

For **`openSUSE Leap`** systems, use the **[Zypper Command][8]** to install sysstat.

```
$ sudo zypper install sysstat
```

### How To Monitor Disk I/O Activity/Statistics In Linux Using iostat Command?

There are many options available in the iostat command to check various statistics about disk I/O and CPU.

Run the iostat command without any arguments to see the complete statistics of the system.

```
# iostat

Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
29.45 0.02 16.47 0.12 0.00 53.94

Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.95 124.97 0.00 58420014 57507206 0
sda 0.18 6.77 80.24 0.00 3115036 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```

Run the iostat command with the `-d` option to see the I/O statistics for all devices.

```
# iostat -d

Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)

Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.95 124.97 0.00 58420030 57509090 0
sda 0.18 6.77 80.24 0.00 3115292 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```

Run the iostat command with the `-p` option to see the I/O statistics for all devices and their partitions.

```
# iostat -p

Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
29.42 0.02 16.45 0.12 0.00 53.99

Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.94 124.96 0.00 58420062 57512278 0
nvme0n1p1 6.40 124.46 118.36 0.00 57279753 54474898 0
nvme0n1p2 0.27 2.47 6.60 0.00 1138069 3037380 0
sda 0.18 6.77 80.23 0.00 3116060 36924764 0
sda1 0.00 0.01 0.00 0.00 3224 0 0
sda2 0.18 6.76 80.23 0.00 3111508 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```

Run the iostat command with the `-x` option to see detailed I/O statistics for all devices.

```
# iostat -x

Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
29.41 0.02 16.45 0.12 0.00 54.00

Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
nvme0n1 2.45 126.93 0.60 19.74 0.40 51.74 4.23 124.96 5.12 54.76 3.16 29.54 0.00 0.00 0.00 0.00 0.00 0.00 0.31 30.28
sda 0.06 6.77 0.00 0.00 8.34 119.20 0.12 80.23 19.94 99.40 31.84 670.73 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13
loop0 0.00 0.00 0.00 0.00 0.08 19.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.40 12.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.38 19.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
```

Run the iostat command with the `-p [Device_Name]` option to see the I/O statistics of a particular device and its partitions.

```
# iostat -p [Device_Name]

# iostat -p sda

Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
29.38 0.02 16.43 0.12 0.00 54.05

Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
sda 0.18 6.77 80.21 0.00 3117468 36924764 0
sda2 0.18 6.76 80.21 0.00 3112916 36924764 0
sda1 0.00 0.01 0.00 0.00 3224 0 0
```

Run the iostat command with the `-m` option to see the I/O statistics in MB for all devices instead of KB (the default).

```
# iostat -m

Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
29.36 0.02 16.41 0.12 0.00 54.09

Device tps MB_read/s MB_wrtn/s MB_dscd/s MB_read MB_wrtn MB_dscd
nvme0n1 6.68 0.12 0.12 0.00 57050 56176 0
sda 0.18 0.01 0.08 0.00 3045 36059 0
loop0 0.00 0.00 0.00 0.00 2 0 0
loop1 0.00 0.00 0.00 0.00 1 0 0
loop2 0.00 0.00 0.00 0.00 1 0 0
```

To run the iostat command at a certain interval, use the following format. In this example, we are going to capture a total of two reports at a five-second interval.

```
# iostat [Interval] [Number Of Reports]

# iostat 5 2

Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
29.35 0.02 16.41 0.12 0.00 54.10

Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 6.68 126.89 124.95 0.00 58420116 57525344 0
sda 0.18 6.77 80.20 0.00 3118492 36924764 0
loop0 0.00 0.00 0.00 0.00 2160 0 0
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0

avg-cpu: %user %nice %system %iowait %steal %idle
3.71 0.00 2.51 0.05 0.00 93.73

Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
nvme0n1 19.00 0.20 311.40 0.00 1 1557 0
sda 0.20 25.60 0.00 0.00 128 0 0
loop0 0.00 0.00 0.00 0.00 0 0 0
loop1 0.00 0.00 0.00 0.00 0 0 0
loop2 0.00 0.00 0.00 0.00 0 0 0
```
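Reports like these lend themselves to quick scripting. As a small, self-contained sketch (the sample lines below are pasted in from the report above, so it runs anywhere awk is available), you could filter a device report for devices writing more than 100 kB/s:

```
awk '$1 != "Device" && $4 > 100 { print $1 }' <<'EOF'
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
nvme0n1 6.68 126.95 124.97 58420014 57507206
sda 0.18 6.77 80.24 3115036 36924764
EOF
```

In real use you would pipe the output of something like `iostat -d 5` into a filter like this instead of a here-document.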

Run the iostat command with the `-N` option to see the LVM disk I/O statistics report.

```
# iostat -N

Linux 4.15.0-47-generic (Ubuntu18.2daygeek.com) Thursday 18 April 2019 _x86_64_ (2 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
0.38 0.07 0.18 0.26 0.00 99.12

Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 3.60 57.07 69.06 968729 1172340
sdb 0.02 0.33 0.00 5680 0
sdc 0.01 0.12 0.00 2108 0
2g-2gvol1 0.00 0.07 0.00 1204 0
```

Run the nfsiostat command to see the I/O statistics for the Network File System (NFS).

```
# nfsiostat
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/monitoring-tools/
[2]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
[3]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[7]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[8]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[10]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-1.jpg
[11]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-2.jpg
99
translated/tech/20171226 The shell scripting trap.md
Normal file
@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (jdh8383)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The shell scripting trap)
[#]: via: (https://arp242.net/weblog/shell-scripting-trap.html)
[#]: author: (Martin Tournoij https://arp242.net/)

Shell 脚本陷阱
======

Shell 脚本很棒,你可以非常轻松地写出有用的东西来。甚至像是下面这个傻瓜式的命令:

```
# 用含有 Go 的词汇起名字:
$ grep -i ^go /usr/share/dict/american-english /usr/share/dict/british /usr/share/dict/british-english /usr/share/dict/catala /usr/share/dict/catalan /usr/share/dict/cracklib-small /usr/share/dict/finnish /usr/share/dict/french /usr/share/dict/german /usr/share/dict/italian /usr/share/dict/ngerman /usr/share/dict/ogerman /usr/share/dict/spanish /usr/share/dict/usa /usr/share/dict/words | cut -d: -f2 | sort -R | head -n1
goldfish
```

如果用其他编程语言,就需要花费更多的脑力,用多行代码实现,比如用 Ruby 的话:

```
puts(Dir['/usr/share/dict/*-english'].map do |f|
  File.open(f)
    .readlines
    .select { |l| l[0..1].downcase == 'go' }
end.flatten.sample.chomp)
```

Ruby 版本的代码虽然不是那么长,也并不复杂。但是 shell 版是如此简单,我甚至不用实际测试就可以确保它是正确的。而 Ruby 版的我就没法确定它不会出错了,必须得测试一下。而且它要长一倍,看起来也更复杂。
这就是人们使用 Shell 脚本的原因,它简单却实用。下面是另一个例子:

```
curl https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_gemeenten |
  grep '^<li><a href=' |
  sed -r 's|<li><a href="/wiki/.+" title=".+">(.+)</a>.*</li>|\1|' |
  grep -Ev '(^Tabel van|^Lijst van|Nederland)'
```

这个脚本可以从维基百科上获取荷兰基层政权的列表。几年前我写了这个临时的脚本,用来快速生成一个数据库,到现在它仍然可以正常运行,当时写它并没有花费我多少精力。但要用 Ruby 完成同样的功能则会麻烦得多。

这个脚本可以从维基百科上获取荷兰基层政权的列表。几年前我写了这个临时的脚本,用来快速生成一个数据库,到现在它仍然可以正常运行,当时写它并没有花费我多少精力。但要用 Ruby 完成同样的功能则会麻烦得多。

现在来说说 shell 的缺点吧。随着代码量的增加,你的脚本会变得越来越难以维护,但你也不会想用别的语言重写一遍,因为你已经在这个 shell 版上花费了很多时间。

我把这种情况称为“Shell 脚本陷阱”,这是[沉没成本谬论][1]的一种特例(沉没成本谬论是一个经济学概念,可以简单理解为,对已经投入的成本可能被浪费而念念不忘)。

实际上许多脚本会增长到超出预期的大小,你经常会花费过多的时间来“修复某个 bug”,或者“添加一个小功能”。如此循环往复,让人头大。

如果你从一开始就使用 Python、Ruby 或是其他类似的语言来写这个程序,你可能会在写第一版的时候多花些时间,但以后维护起来就容易很多,bug 也肯定会少很多。

以我的 [packman.vim][2] 脚本为例。它起初只包含一个简单的用来遍历所有目录的 `for` 循环,外加一个 `git pull`,但在这之后就刹不住车了,它现在有 200 行左右的代码,这肯定不能算是最复杂的脚本,但假如我一上来就按计划用 Go 来编写它的话,那么增加一些像“打印状态”或者“从配置文件里克隆新的 git 库”这样的功能就会轻松很多;添加“并行克隆”的支持也几乎不算个事儿了,而在 shell 脚本里却很难实现(尽管不是不可能)。事后看来,我本可以节省时间,并且获得更好的结果。

出于类似的原因,我很后悔写出了许多这样的 shell 脚本,而我在 2018 年的新年誓言就是不要再犯类似的错误了。

#### 附录:问题汇总

需要指出的是,shell 编程的确存在一些实际的限制。下面是一些例子:

  * 在处理一些包含“空格”或者其他“特殊”字符的文件名时,需要特别注意细节。绝大多数脚本都会犯错,即使是那些经验丰富的作者(比如我)编写的脚本,因为太容易写错了,[只添加引号是不够的][3]。
  * 有许多所谓“正确”和“错误”的做法。你应该用 `which` 还是 `command`?该用 `$@` 还是 `$*`,是不是得加引号?你是该用 `cmd $arg` 还是 `cmd "$arg"`?等等等等。
  * 你没法在变量里存储空字节(0x00);shell 脚本处理二进制数据很麻烦。
  * 虽然你可以非常快速地写出有用的东西,但实现更复杂的算法则要痛苦许多,即使用 ksh/zsh/bash 扩展也是如此。我上面那个解析 HTML 的脚本临时用用是可以的,但你真的不会想在生产环境中使用这种脚本。
  * 很难写出跨平台的通用型 shell 脚本。`/bin/sh` 可能是 `dash` 或者 `bash`,不同的 shell 有不同的运行方式。外部工具如 `grep`、`sed` 等,不一定能支持同样的参数。你能确定你的脚本可以适用于 Linux、macOS 和 Windows 的所有版本吗(过去、现在和将来)?
  * 调试 shell 脚本会很难,特别是你眼中的语法可能会很快变得模糊起来,并不是所有人都熟悉 shell 编程的语境。
  * 处理错误会很棘手(检查 `$?` 或是 `set -e`),排查一些超过“出了个小错”级别的复杂错误几乎是不可能的。
  * 除非你使用了 `set -u`,变量未定义将不会报错,而这会导致一些“搞笑事件”,比如 `rm -r ~/$undefined` 会删除用户的整个家目录([瞅瞅 Github 上的这个悲剧][4])。
  * 所有东西都是字符串。一些 shell 引入了数组,能用,但是语法非常丑陋和模糊。带分数的数字运算仍然难以应付,并且依赖像 `bc` 或 `dc` 这样的外部工具(`$(( .. ))` 这种方式只能对付一下整数)。
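
关于上面提到的引号问题,下面是一个假设的小演示(非原文内容),展示转发参数时加引号的 `"$@"` 与不加引号的 `$*` 的差别:

```
#!/bin/sh
# 假设的演示脚本:统计函数收到的参数个数
count_args() {
    echo "$#"
}
set -- "one arg" "two words here" three
count_args "$@"   # 输出 3:每个参数都被完整保留
count_args $*     # 输出 6:未加引号导致按空白拆分
```

可见只有加引号的 `"$@"` 才能把参数原样传递下去。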
**反馈**

你可以发邮件到 [martin@arp242.net][5],或者[在 GitHub 上创建 issue][6] 来向我反馈,提问等。

--------------------------------------------------------------------------------

via: https://arp242.net/weblog/shell-scripting-trap.html

作者:[Martin Tournoij][a]
选题:[lujun9972][b]
译者:[jdh8383](https://github.com/jdh8383)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://arp242.net/
[b]: https://github.com/lujun9972
[1]: https://youarenotsosmart.com/2011/03/25/the-sunk-cost-fallacy/
[2]: https://github.com/Carpetsmoker/packman.vim
[3]: https://dwheeler.com/essays/filenames-in-shell.html
[4]: https://github.com/ValveSoftware/steam-for-linux/issues/3671
[5]: mailto:martin@arp242.net
[6]: https://github.com/Carpetsmoker/arp242.net/issues/new

@ -1,102 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to run PostgreSQL on Kubernetes)
[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)

怎样在 Kubernetes 上运行 PostgreSQL
======

创建统一管理的、具备灵活性的云原生生产部署,来部署一个人性化的数据库即服务。

![cubes coming together to create a larger cube][1]

通过在 [Kubernetes][3] 上运行 [PostgreSQL][2] 数据库,你能创建统一管理的、具备灵活性的云原生生产部署应用,来部署一个为你的特定需求量身定制的个性化数据库即服务。

对于 Kubernetes,使用 Operator 允许你提供额外的上下文去[管理有状态应用][4]。当使用像 PostgreSQL 这样的开源数据库去执行包括配置、扩容、高可用和用户管理在内的操作时,Operator 也很有帮助。

让我们来探索如何在 Kubernetes 上启动并运行 PostgreSQL。

### 安装 PostgreSQL Operator

将 PostgreSQL 和 Kubernetes 结合使用的第一步是安装一个 Operator。在针对 Linux 系统的 Crunchy [快速启动脚本][6]的帮助下,你可以在任意基于 Kubernetes 的环境下启动并运行开源的 [Crunchy PostgreSQL Operator][5]。

快速启动脚本有一些必要前提:

  * 已安装 [Wget][7] 工具。
  * 已安装 [kubectl][8] 工具。
  * 你的 Kubernetes 中已经定义了一个 [StorageClass][9]。
  * 拥有集群权限的、可访问 Kubernetes 的用户账号,安装 Operator 的 [RBAC][10] 规则需要这个权限。
  * 拥有一个用于管理 PostgreSQL Operator 的[命名空间][11]。

执行这个脚本将为你部署一个默认的 PostgreSQL Operator,它假设你采用 [dynamic storage][12],且存储类的名字为 **standard**。脚本允许用户用自定义的值覆盖这些默认值。

通过下列命令,你能下载这个快速启动脚本并把它的权限设置为可执行:

```
wget https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/examples/quickstart.sh
chmod +x ./quickstart.sh
```

然后运行快速启动脚本:

```
./examples/quickstart.sh
```

在脚本提示你输入相关的 Kubernetes 集群基本信息后,它将执行下列操作:

  * 下载 Operator 配置文件
  * 将 **$HOME/.pgouser** 这个文件设置为默认设置
  * 以 Kubernetes [Deployment][13] 的方式部署 Operator
  * 设置你的 **.bashrc** 文件,使其包含 Operator 环境变量
  * 将你的 **$HOME/.bash_completion** 文件设置为 **pgo bash_completion** 文件

在快速启动脚本的执行期间,你将会被提示为你的 Kubernetes 集群设置 RBAC 规则。请在另一个终端中,执行快速启动脚本所提示的命令。

一旦这个脚本执行完成,你将会得到关于打开一个通往 PostgreSQL Operator pod 的端口转发的信息。请在另一个终端中执行该端口转发;之后你就可以开始对 PostgreSQL Operator 执行命令了!尝试输入下列命令创建集群:

```
pgo create cluster mynewcluster
```

你能输入下列命令测试你的集群运行状况:

```
pgo test mynewcluster
```

现在,你能在 Kubernetes 环境下管理你的 PostgreSQL 数据库了!你可以在[官方文档][14]中找到非常全面的命令,包括扩容、高可用、备份等等。

* * *

这篇文章部分参考了该作者为 Crunchy 博客所写的[在 Kubernetes 上开始运行 PostgreSQL][15]。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes

作者:[Jonathan S. Katz][a]
选题:[lujun9972][b]
译者:[arrowfeng](https://github.com/arrowfeng)
校对:[校对ID](https://github.com/校对ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jkatz05
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://www.postgresql.org/
[3]: https://kubernetes.io/
[4]: https://opensource.com/article/19/2/scaling-postgresql-kubernetes-operators
[5]: https://github.com/CrunchyData/postgres-operator
[6]: https://crunchydata.github.io/postgres-operator/stable/installation/#quickstart-script
[7]: https://www.gnu.org/software/wget/
[8]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[9]: https://kubernetes.io/docs/concepts/storage/storage-classes/
[10]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[11]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[12]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
[13]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[14]: https://crunchydata.github.io/postgres-operator/stable/#documentation
[15]: https://info.crunchydata.com/blog/get-started-runnning-postgresql-on-kubernetes

@ -1,186 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Raverstern)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fixing Ubuntu Freezing at Boot Time)
[#]: via: (https://itsfoss.com/fix-ubuntu-freezing/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

解决 Ubuntu 在启动时冻结的问题
======

_**本文将向您一步步展示如何通过安装 NVIDIA 专有驱动来处理 Ubuntu 在启动过程中冻结的问题。本教程仅在一个新安装的 Ubuntu 系统上操作验证过,不过在其他情况下也理应可用。**_

不久前我买了台[宏碁掠夺者][1](此为[广告联盟][2]链接)笔记本电脑来测试各种 Linux 发行版。这台庞大且笨重的机器与我喜欢的,类似[戴尔 XPS][3]那般小巧轻便的笔记本电脑大相径庭。

我即便不打游戏也选择这台电竞笔记本电脑的原因,就是为了 [NVIDIA 的显卡][4]。宏碁掠夺者 Helios 300 上搭载了一块 [NVIDIA Geforce][5] GTX 1050Ti 显卡。

NVIDIA 那糟糕的 Linux 兼容性为人们所熟知。过去很多 It’s FOSS 的读者都向我求助过关于 NVIDIA 笔记本电脑的问题,而我当时无能为力,因为我手头上没有使用 NVIDIA 显卡的系统。

所以当我决定搞一台专门的设备来测试 Linux 发行版时,我选择了带有 NVIDIA 显卡的笔记本电脑。

这台笔记本原装的 Windows 10 系统安装在 120 GB 的固态硬盘上,并另外配有 1 TB 的机械硬盘来存储数据。在此之上我配置好了 [Windows 10 和 Ubuntu 18.04 双系统][6]。整个的安装过程舒适、方便、快捷。

随后我启动了 [Ubuntu][7]。那熟悉的紫色界面展现了出来,然后我就发现它卡在那儿了。鼠标一动不动,我也输入不了任何东西,然后除了长按电源键强制关机以外我啥事儿都做不了。

然后再次尝试启动,结果一模一样。整个系统就一直卡在那个紫色界面,随后的登录界面也出不来。

这听起来很耳熟吧?下面就让我来告诉您如何解决这个 Ubuntu 在启动过程中冻结的问题。

要不您考虑考虑抛弃 Ubuntu?

请注意,尽管是在 Ubuntu 18.04 上操作的,本教程应该也能用于其他基于 Ubuntu 的发行版,例如 Linux Mint、elementary OS 等等。关于这点我已经在 Zorin OS 上确认过。

### 解决 Ubuntu 启动中由 NVIDIA 驱动引起的冻结问题

![][8]

我介绍的解决方案适用于配有 NVIDIA 显卡的系统,因为您所面临的系统冻结问题是由开源的 [NVIDIA Nouveau 驱动][9]所导致的。

事不宜迟,让我们马上来看看如何解决这个问题。

#### 步骤 1:编辑 Grub

在启动系统的过程中,请您在如下图所示的 Grub 界面上停下。如果您没看到这个界面,在启动电脑时请按住 Shift 键。

在这个界面上,按“E”键进入编辑模式。

![按“E”按键][10]

您应该看到一些如下图所示的代码。此刻您应关注于以 Linux 开头的那一行。

![前往 Linux 开头的那一行][11]

#### 步骤 2:在 Grub 中临时修改 Linux 内核参数

回忆一下,我们的问题出在 NVIDIA 显卡驱动上,是开源版 NVIDIA 驱动的不适配导致了我们的问题。所以此处我们能做的就是禁用这些驱动。

此刻,您有多种方式可以禁用这些驱动。我最喜欢的方式是通过 nomodeset 来禁用所有显卡的驱动。

请把下列文本添加到以 Linux 开头的那一行的末尾。此处您应该可以正常输入。请确保您把这段文本加到了行末。

```
nomodeset
```

现在您屏幕上的显示应如下图所示:

![通过向内核添加 nomodeset 来禁用显卡驱动][12]

按 Ctrl+X 或 F10 保存并退出。下次您就将以修改后的内核参数来启动。

对以上操作的解释(点击展开)

所以我们究竟做了些啥?那个 nomodeset 又是个什么玩意儿?让我来向您简单地解释一下。

通常来说,显卡是在 X 或者是其他显示服务开始执行后才被启用的,也就是在您登录系统并看到图形界面以后。

但最近,视频模式的设置被移植进了内核。这么做的众多优点之一就是能让您看到一个漂亮且高清的启动画面。

若您往内核中加入 nomodeset 参数,它就会指示内核在显示服务启动后才加载显卡驱动。

换句话说,您在此时禁止了视频驱动的加载,由此产生的冲突也会随之消失。您在登录进系统以后,还是能看到一切如旧,那是因为显卡驱动在随后的过程中被加载了。

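顺带一提,如果您想让 nomodeset 参数在每次启动时都生效,可以编辑 /etc/default/grub 中的 GRUB_CMDLINE_LINUX_DEFAULT 一行,然后执行 `sudo update-grub`。注意:这只是一个假设性的示意,并非本文推荐的最终方案;为了不动真实系统,下面在一个临时文件副本上演示这一修改:

```
# 在临时副本上演示;实际操作时请编辑 /etc/default/grub 并运行 sudo update-grub
tmp=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$tmp"
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"/' "$tmp"
cat "$tmp"
```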
#### 步骤 3:更新您的系统并安装 NVIDIA 专有驱动
|
||||
|
||||
别因为现在可以登录系统了就过早地高兴起来。您之前所做的只是临时措施,在下次启动的时候,您的系统依旧会尝试加载 Nouveau 驱动而因此冻结。
|
||||
|
||||
这是否意味着您将不得不在 Grub 界面上不断地编辑内核?可喜可贺,答案是否定的。
|
||||
|
||||
您可以在 Ubuntu 上为 NVIDIA 显卡[安装额外的驱动][13]。在使用专有驱动后,Ubuntu 将不会在启动过程中冻结。
|
||||
|
||||
我假设这是您第一次登录到一个新安装的系统。这意味着在做其他事情之前您必须先[更新 Ubuntu][14]。通过 Ubuntu 的 Ctrl+Alt+T [系统快捷键][15]打开一个终端,并输入以下命令:
|
||||
|
||||
```
|
||||
sudo apt update && sudo apt upgrade -y
|
||||
```
|
||||
|
||||
在上述命令执行完以后,您可以尝试安装额外的驱动。不过根据我的经验,在安装新驱动之前您需要先重启一下您的系统。在您重启时,您还是需要按我们之前做的那样修改内核参数。
|
||||
|
||||
当您的系统已经更新和重启完毕,按下 Windows 键打开一个菜单栏,并搜索“软件与更新”(Software & Updates)。
|
||||
|
||||
![点击“软件与更新”(Software & Updates)][16]
|
||||
|
||||
然后切换到“额外驱动”(Additional Drivers)标签页,并等待数秒。然后您就能看到可供系统使用的专有驱动了。在这个列表上您应该可以找到 NVIDIA。
|
||||
|
||||
选择专有驱动并点击“应用更改”(Apply Changes)。
|
||||
|
||||
![NVIDIA 驱动安装中][17]
|
||||
|
||||
新驱动的安装会费点时间。若您的系统启用了 UEFI 安全启动,您将被要求设置一个密码。_您可以将其设置为任何容易记住的密码_。它的用处我将在步骤 4 中说明。
|
||||
|
||||
![您可能需要设置一个安全启动密码][18]
|
||||
|
||||
安装完成后,您会被要求重启系统以令之前的更改生效。
|
||||
|
||||
![在新驱动安装好后重启您的系统][19]
|
||||
|
||||
#### 步骤 4:处理 MOK(仅针对启用了 UEFI 安全启动的设备)
|
||||
|
||||
如果您之前被要求设置安全启动密码,此刻您会看到一个蓝色界面,上面写着“MOK management”。这是个复杂的概念,我试着长话短说。
|
||||
|
||||
对 MOK([设备所有者密码][20])的要求是因为安全启动的功能要求所有内核模块都必须被签名。Ubuntu 中所有随 ISO 镜像发行的内核模块都已经签了名。由于您安装了一个新模块(也就是那额外的驱动),或者您对内核模块做了修改,您的安全系统可能视之为一个未经验证的外部修改,从而拒绝启动。
|
||||
|
||||
因此,您可以自己对系统模块进行签名(以告诉 UEFI 系统莫要大惊小怪,这些修改是您做的),或者您也可以简单粗暴地[禁用安全启动][21]。
|
||||
|
||||
现在你对[安全启动和 MOK ][22]有了一定了解,那咱们就来看看在遇到这个蓝色界面后该做些什么。
|
||||
|
||||
如果您选择“继续启动”,您的系统将有很大概率如往常一样启动,并且您啥事儿也不用做。不过在这种情况下,新驱动的有些功能有可能工作不正常。
|
||||
|
||||
这就是为什么,您应该**选择注册 MOK **。
|
||||
|
||||
![][23]
它会在下一个页面让您点击“继续”,然后要您输入一串密码。请输入在上一步中安装额外驱动时设置的密码。

别担心!

如果您错过了这个关于 MOK 的蓝色界面,或不小心点了“继续启动”而不是“注册 MOK”,不必惊慌。您的主要目的是能够成功启动系统,而通过禁用 Nouveau 显卡驱动,您已经成功地实现了这一点。

最坏的情况也不过就是您的系统切换到 Intel 集成显卡而不再使用 NVIDIA 显卡。您可以在之后的任何时间安装 NVIDIA 显卡驱动。您的首要任务是启动系统。

#### 步骤 5:享受安装了专有 NVIDIA 驱动的 Linux 系统

当新驱动安装好后,您需要再次重启系统。别担心!目前的情况应该已经好起来了,并且您不必再去修改内核参数,而是能够直接启动 Ubuntu 系统了。

我希望本教程帮助您解决了 Ubuntu 系统在启动中冻结的问题,并让您能够成功启动 Ubuntu 系统。

如果您有任何问题或建议,请在下方评论区给我留言。

--------------------------------------------------------------------------------

via: https://itsfoss.com/fix-ubuntu-freezing/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[Raverstern](https://github.com/Raverstern)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://amzn.to/2YVV6rt
[2]: https://itsfoss.com/affiliate-policy/
[3]: https://itsfoss.com/dell-xps-13-ubuntu-review/
[4]: https://www.nvidia.com/en-us/
[5]: https://www.nvidia.com/en-us/geforce/
[6]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[7]: https://www.ubuntu.com/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/fixing-frozen-ubuntu.png?resize=800%2C450&ssl=1
[9]: https://nouveau.freedesktop.org/wiki/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/edit-grub-menu.jpg?resize=800%2C393&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue.jpg?resize=800%2C343&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/editing-grub-to-fix-nvidia-issue-2.jpg?resize=800%2C320&ssl=1
[13]: https://itsfoss.com/install-additional-drivers-ubuntu/
[14]: https://itsfoss.com/update-ubuntu/
[15]: https://itsfoss.com/ubuntu-shortcuts/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/activities_software_updates_search-e1551416201782-800x228.png?resize=800%2C228&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-nvidia-driver-ubuntu.jpg?resize=800%2C520&ssl=1
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/secure-boot-nvidia.jpg?ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/nvidia-drivers-installed-Ubuntu.jpg?resize=800%2C510&ssl=1
[20]: https://firmware.intel.com/blog/using-mok-and-uefi-secure-boot-suse-linux
[21]: https://itsfoss.com/disable-secure-boot-in-acer/
[22]: https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/MOK-Secure-boot.jpg?resize=800%2C350&ssl=1
[#]: url: ( )
[#]: subject: (Getting started with Mercurial for version control)
[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

开始使用 Mercurial 进行版本控制
======

了解 Mercurial 的基础知识,它是一个用 Python 写的分布式版本控制系统。

![][1]
[Mercurial][2] 是一个用 Python 编写的分布式版本控制系统。因为它是用高级语言编写的,所以你可以用几个 Python 函数编写一个 Mercurial 扩展。

在[官方文档][3]中说明了几种安装 Mercurial 的方法。我最喜欢的一种方法不在其中:使用 **pip**。这是开发本地扩展的最合适方式!

目前,Mercurial 仅支持 Python 2.7,因此你需要创建一个 Python 2.7 虚拟环境:

```
python2 -m virtualenv mercurial-env
./mercurial-env/bin/pip install mercurial
```

为了有一个简短的命令,也为了满足人们对化学幽默永不满足的需求,这个命令被称为 **hg**。

```
$ source mercurial-env/bin/activate
(mercurial-env)$
```

由于还没有任何文件,因此状态为空。添加几个文件:

```
date: Fri Mar 29 12:42:43 2019 -0700
summary: Adding stuff
```

**addremove** 命令很有用:它将任何未被忽略的新文件添加到托管文件列表中,并移除任何已删除的文件。

如我所说,Mercurial 扩展是用 Python 写成的,它们只是常规的 Python 模块。

这是一个简短的 Mercurial 扩展示例:

```
def say_hello(ui, repo, **opts):
    ui.write("hello ", opts['whom'], "\n")
```

一个简单的测试方法是手动把它放进虚拟环境中的一个文件里:

```
$ vi ../mercurial-env/lib/python2.7/site-packages/hello_ext.py
```

然后你需要_启用_这个扩展。你可以先只在当前仓库中启用它:

```
$ cat >> .hg/hgrc
[extensions]
hello_ext =
```

现在,可以打招呼了:

```
hello world
```
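除了仓库级的 `.hg/hgrc`,扩展也可以在用户级配置中全局启用。下面是一个小示例(`hello_ext` 沿用上文的扩展名;`$HOME/.hgrc` 是 Mercurial 默认读取的用户配置文件):

```shell
# 把扩展写入用户级配置 ~/.hgrc,对该用户的所有仓库生效
printf '[extensions]\nhello_ext =\n' >> "$HOME/.hgrc"
# 确认配置已写入
grep -n 'hello_ext' "$HOME/.hgrc"
```

这样启用后,该用户的任何仓库中都可以使用这个扩展提供的命令。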
大多数扩展会做更多有用的事情,甚至可能是和 Mercurial 有关的事情。**repo** 对象是一个 **mercurial.hg.repository** 对象。

有关 Mercurial API 的更多信息,请参阅[官方文档][5]。并访问[官方仓库][6]获取更多示例和灵感。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/getting-started-mercurial

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Troubleshooting slow WiFi on Linux)
[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)

排查 Linux 上的 WiFi 慢速问题
======

我对 [Linux 系统][1]上的硬件问题进行诊断并不陌生。虽然我过去几年的大部分工作都涉及虚拟化,但我仍然喜欢蹲在桌子下,摆弄设备和内存模块。好吧,除开“蹲在桌子下”的部分。但这些都不意味着持久而神秘的 bug 不令人沮丧。我最近遇到了我的 Ubuntu 18.04 工作站上的一个 bug,这个 bug 几个月来一直没有解决。

在这里,我将分享我的问题和许多我尝试过的方法。即使你可能永远不会遇到我的具体问题,但故障排查的过程可能会有所帮助。此外,你会对我在这些无用的线索上浪费的时间和精力感到好笑。

更多阅读:[OpenSource.com][2]

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux

作者:[OpenSource.com][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/USERS/OPENSOURCECOM
[b]: https://github.com/lujun9972
[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
translated/tech/20190416 How to Install MySQL in Ubuntu Linux.md
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install MySQL in Ubuntu Linux)
[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)

怎样在 Ubuntu Linux 上安装 MySQL
======

_**简要:本教程教你如何在基于 Ubuntu 的 Linux 发行版上安装 MySQL。对于首次使用的用户,你将会学习到如何验证你的安装,以及如何第一次连接 MySQL。**_

**[MySQL][1]** 是一个典型的数据库管理系统。它被用于许多技术栈中,包括流行的 **[LAMP][2]**(Linux、Apache、MySQL、PHP)技术栈。它的稳定性已经得到证实。另一个让 **MySQL** 受欢迎的原因是它是开源的。

**MySQL** 是**关系型数据库**(基本上就是**表格数据**)。以这种方式,数据很容易存储、组织和访问。它使用 **SQL**(**结构化查询语言**)来管理数据。

在这篇文章中,我将向你展示如何在 Ubuntu 18.04 上安装和使用 MySQL 8.0。让我们一起来看看吧!

### 在 Ubuntu 上安装 MySQL

![][3]

我将会介绍两种在 Ubuntu 18.04 上安装 **MySQL** 的方法:

1. 从 Ubuntu 仓库安装 MySQL。非常简单,但不是最新版(5.7)。
2. 从官方仓库安装 MySQL。步骤会多一些,但不用担心,你将会拥有最新版的 MySQL(8.0)。

有必要的时候,我将会提供屏幕截图来引导你。但本文的大部分步骤,我将直接在**终端**(默认快捷键:Ctrl+Alt+T)中输入命令。别害怕!

#### 方法 1:从 Ubuntu 仓库安装 MySQL

首先,输入下列命令确保你的仓库已经更新:
```
sudo apt update
```

现在,安装 **MySQL 5.7**,只需输入下列命令:

```
sudo apt install mysql-server -y
```

就是这样!简单且高效。

#### 方法 2:使用官方仓库安装 MySQL

虽然这个方法多了一些步骤,但我将逐一介绍,并尝试写下清晰的笔记。

首先浏览 MySQL 官方网站的[下载页面][4]。

![][5]

在这里,选择 **DEB Package**,并点击**下载链接**。

![][6]

滑到关于 Oracle 网站信息的底部,右键点击 **No thanks, just start my download.**,然后选择 **Copy link location**。

现在回到终端,我们将使用 **[curl][7]** 命令下载这个软件包:

```
curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
```

**<https://dev.mysql.com/get/mysql-apt-config\_0.8.12-1\_all.deb>** 是我刚刚从网页上复制的链接。根据当前的 MySQL 版本,它有可能不同。接下来使用 **dpkg** 开始安装 MySQL:

```
sudo dpkg -i mysql-apt-config*
```

更新你的仓库:

```
sudo apt update
```

要实际安装 MySQL,我们将使用与第一种方法中同样的命令:

```
sudo apt install mysql-server -y
```

这样做会在终端中打开**软件包配置**的提示界面。使用**向下箭头**选择 **Ok** 选项。

![][8]

按下**回车**键。这会提示你输入**密码**:实际上就是为 MySQL 设置 root 密码。注意不要与 [Ubuntu 的 root 密码][9]混淆。

![][10]

输入密码,然后按 **Tab** 键选择 **<Ok\>**。按下**回车**键后,你需要**重新输入密码**。之后,再次按 **Tab** 键选择 **<Ok\>**,并按下**回车**键。

![][11]

接着会展示一些关于 MySQL 服务器的配置信息。再次按 **Tab** 键选择 **<Ok\>** 并按下**回车**键:

![][12]

这里你需要选择**默认身份验证插件**。确保选中了 **Use Strong Password Encryption**,然后按 **Tab** 键和**回车**键。

就是这样!你已经成功地安装了 MySQL。

#### 验证你的 MySQL 安装

要验证 MySQL 已经正确安装,使用下列命令:

```
sudo systemctl status mysql.service
```

这将展示一些关于 MySQL 服务的信息:

![][13]

你应该在输出中看到 **Active: active (running)**。如果没有看到,使用下列命令启动该**服务**:

```
sudo systemctl start mysql.service
```

#### 配置/保护 MySQL

对于新安装的 MySQL,你应该运行它提供的安全相关命令:

```
sudo mysql_secure_installation
```

这样做首先会询问你是否想使用 **VALIDATE PASSWORD COMPONENT**(密码验证组件)。如果要使用它,你需要选择一个最低密码强度(**0 – 低,1 – 中,2 – 高**),之后你将无法输入任何不符合所选规则的密码。如果你没有使用强密码的习惯(本应该使用),这可能会派上用场。如果你认为它有帮助,那就输入 **y** 或者 **Y**,按下**回车**键,然后为你的密码选择一个**强度等级**,并输入你想使用的密码。如果成功,你将继续后面的**安全设置**过程;否则你需要重新输入密码。

但是,如果你不想要此功能(我就不想),只需按**回车**或**任何其他键**即可跳过它。

对于其他选项,我建议**开启**它们(对于每一步输入 **y** 或者 **Y** 并按下**回车**)。它们依次是:**移除匿名用户、禁止 root 远程登录、移除测试数据库及其访问权限、立即重新加载权限表**。

#### 连接与断开 MySQL 服务器

为了能够运行 SQL 查询,你首先需要连接到 MySQL 服务器,并使用 MySQL 提示符。执行此操作的命令是:

```
mysql -h host_name -u user -p
```

* **-h** 用来指定**主机名**(如果服务安装在其他机器上,这会有用;如果没有,忽略它即可)
* **-u** 指定登录的**用户名**
* **-p** 指定你想输入的**密码**

你可以把密码直接写在命令行中 **-p** 的后面,但出于安全原因,并不建议这样做。如果用户 **test_user** 的密码是 **1234**,而你要连接的是本机上的 MySQL,你可以这样使用:

```
mysql -u test_user -p1234
```
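顺带一提,如果不想每次都在命令行上明文输入密码(命令行参数可能被同一台机器上的其他用户看到),可以把客户端凭据写入用户主目录下的 `~/.my.cnf` 配置文件,mysql 客户端会自动读取它。下面是一个小示例(其中的 `test_user`/`1234` 沿用上文的演示账号,仅作演示):

```shell
# 把客户端登录凭据写入 ~/.my.cnf,之后直接运行 mysql 即可免输密码
cat > "$HOME/.my.cnf" <<'EOF'
[client]
user=test_user
password=1234
EOF
chmod 600 "$HOME/.my.cnf"   # 收紧权限,仅限本用户读写
ls -l "$HOME/.my.cnf"
```

写好后,直接运行 `mysql` 就会以 `test_user` 的身份连接本机服务器,不必再使用 `-u` 和 `-p` 参数。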
如果你成功输入了必要的参数,你将会看到 **MySQL shell 提示符**(**mysql >**)的欢迎信息:

![][14]

要从服务器断开连接并离开 mysql 提示符,输入:

```
QUIT
```

输入 **quit**(MySQL 不区分大小写)或者 **\q** 也可以。按下**回车**键退出。

使用下面这个简单的命令,你也可以输出版本信息:

```
sudo mysqladmin -u root version -p
```

如果你想查看**选项列表**,使用:

```
mysql --help
```

#### 卸载 MySQL

如果你决定要使用较新版本,或只是想停止使用 MySQL:

首先,关闭服务:

```
sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
```

确保备份了你的数据库,以防之后还想使用它们。然后你可以通过运行下列命令卸载 MySQL:

```
sudo apt purge mysql*
```

清理依赖:

```
sudo apt autoremove
```
**小结**

在这篇文章中,我介绍了如何在 Ubuntu Linux 上**安装 MySQL**。如果这篇文章能帮助到那些正为此挣扎的用户或刚刚入门的用户,我会很高兴。

如果你觉得这篇文章很有用,请在评论里告诉我们。你用 MySQL 来做什么?我们渴望收到你的任何反馈、印象和建议。感谢阅读,欢迎尝试这个不可思议的工具!

--------------------------------------------------------------------------------

via: https://itsfoss.com/install-mysql-ubuntu/

作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[arrowfeng](https://github.com/arrowfeng)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://www.mysql.com/
[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
[4]: https://dev.mysql.com/downloads/repo/apt/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
[7]: https://linuxhandbook.com/curl-command-examples/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
[9]: https://itsfoss.com/change-password-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1