commit
4b7250f52b
@ -0,0 +1,99 @@
|
||||
5 款适合程序员的开源字体
|
||||
======
|
||||
|
||||
> 编程字体有些在普通字体中没有的特点,这五种字体你可以看看。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn)
|
||||
|
||||
什么是最好的编程字体呢?首先,你需要考虑到字体被设计出来的初衷可能并不相同。当选择一款用于休闲阅读的字体时,读者希望该字体的字母能够顺滑地衔接,提供一种轻松愉悦的体验。一款标准字体的每个字符,类似于拼图的一块,它需要被仔细的设计,从而与整个字体的其他部分融合在一起。
|
||||
|
||||
然而,在编写代码时,通常来说对字体的要求更具功能性。这也是为什么大多数程序员在选择时更偏爱使用固定宽度的等宽字体。选择一款带有容易分辨的数字和标点的字体在美学上令人愉悦;但它是否拥有满足你需求的版权许可也是非常重要的。
|
||||
|
||||
某些功能使得字体更适合编程。首先要清楚是什么使得等宽字体看上去井然有序。这里,让我们对比一下字母 `w` 和字母 `i`。当选择一款字体时,重要的是要考虑字母本身及周围的空白。在纸质的书籍和报纸中,有效地利用空间是极为重要的,为瘦小的 `i` 分配较小的空间,为宽大的字母 `w` 分配较大的空间是有意义的。
|
||||
|
||||
然而在终端中,你没有这些限制。每个字符享有相等的空间将非常有用。这么做的首要好处是你可以随意扫过一段代码来“估测”代码的长度。第二个好处是能够轻松地对齐字符和标点,高亮在视觉上更加明显。另外打印纸张上的等宽字体比均衡字体更加容易通过 OCR 识别。
|
||||
|
||||
在本篇文章中,我们将探索 5 款卓越的开源字体,使用它们来编程和写代码都非常理想。
|
||||
|
||||
### 1、FiraCode:最佳整套编程字体
|
||||
|
||||
![FiraCode 示例][1]
|
||||
|
||||
*FiraCode, Andrew Lekashman*
|
||||
|
||||
在我们列表上的首款字体是 [FiraCode][3],一款真正符合甚至超越了其职责的编程字体。FiraCode 是 Fira 的扩展,而后者是由 Mozilla 委托设计的开源字体族。使得 FiraCode 与众不同的原因是它修改了在代码中常使用的一些符号的组合或连字,使得它看上去更具可读性。这款字体有几种不同的风格,特别是还包含 Retina 选项。你可以在它的 [GitHub][3] 主页中找到它被使用到多种编程语言中的例子。
|
||||
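如果你想立即试用 FiraCode,下面是一个在大多数使用 fontconfig 的 Linux 发行版上手动安装 TTF 字体的简单示意(假设你已经从上面提到的 GitHub 主页下载了字体文件):

```
$ mkdir -p ~/.local/share/fonts
$ cp FiraCode-*.ttf ~/.local/share/fonts/    # 假设字体文件已下载到当前目录
$ fc-cache -f
$ fc-list | grep -i "fira code"              # 确认字体已被系统识别
```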
|
||||
![FiraCode compared to Fira Mono][2]
|
||||
|
||||
*FiraCode 与 Fira Mono 的对比,[Nikita Prokopov][3],源自 GitHub*
|
||||
|
||||
### 2、Inconsolata:优雅且由卓越设计者创造
|
||||
|
||||
![Inconsolata 示例][4]
|
||||
|
||||
*Inconsolata, Andrew Lekashman*
|
||||
|
||||
[Inconsolata][5] 是最为漂亮的等宽字体之一。从 2006 年开始它便一直是一款开源和可免费获取的字体。它的创造者 Raph Levien 在设计 Inconsolata 时秉承的一个基本原则是:等宽字体并不应该那么糟糕。使得 Inconsolata 如此优秀的两个原因是:对于 `0` 和 `o` 这两个字符它们有很大的不同,另外它还特别地设计了标点符号。
|
||||
|
||||
### 3、DejaVu Sans Mono:许多 Linux 发行版的标准配置,庞大的字形覆盖率
|
||||
|
||||
![DejaVu Sans Mono example][6]
|
||||
|
||||
*DejaVu Sans Mono, Andrew Lekashman*
|
||||
|
||||
受在 GNOME 中使用的带有版权和闭源的 Vera 字体的启发,[DejaVu Sans Mono][7] 是一个非常受欢迎的编程字体,几乎每个现代的 Linux 发行版中都带有它。DejaVu 的 Book 风格拥有惊人的 3310 个字形,而一般的字体只含有 100 个左右的字形。在工作中你将不会遇到缺少某些字符的情况,它覆盖了 Unicode 的绝大部分,并且一直在活跃地增长着。
|
||||
|
||||
### 4、Source Code Pro:优雅、可读性强,由 Adobe 中一个小巧但天才的团队打造
|
||||
|
||||
![Source Code Pro example][8]
|
||||
|
||||
*Source Code Pro, Andrew Lekashman*
|
||||
|
||||
由 Paul Hunt 和 Teo Tuominen 设计的 [Source Code Pro][9] 是[由 Adobe 创造的][10]首款开源字体。Source Code Pro 值得注意的地方在于它极具可读性,且对于容易混淆的字符和标点,它有着非常好的区分度。Source Code Pro 也是一个字体族,有 7 种不同的风格:Extralight、Light、Regular、Medium、Semibold、Bold 和 Black,每种风格都还有斜体变体。
|
||||
|
||||
![Differentiating potentially confusable characters][11]
|
||||
|
||||
*潜在易混淆的字符之间的区别,[Paul D. Hunt][10] 源自 Adobe Typekit 博客。*
|
||||
|
||||
![Metacharacters with special meaning in computer languages][12]
|
||||
|
||||
*在计算机领域中有特别含义的特殊元字符, [Paul D. Hunt][10] 源自 Adobe Typekit 博客。*
|
||||
|
||||
### 5、Noto Mono:巨量的语言覆盖率,由 Google 中的一个大团队打造
|
||||
|
||||
![Noto Mono example][13]
|
||||
|
||||
*Noto Mono, Andrew Lekashman*
|
||||
|
||||
在我们列表上的最后一款字体是 [Noto Mono][14],这是 Google 打造的庞大 Noto 字体族中的等宽版本。尽管它并不是专为编程所设计,但它支持 209 种语言(包括表情符号 emoji!),并且一直在维护和更新。该项目非常庞大,是 Google 宣称 “组织全世界信息” 的使命的延续。假如你想更多地了解它,可以查看这个绝妙的[关于这些字体的视频][15]。
|
||||
|
||||
### 选择合适的字体
|
||||
|
||||
无论你选择哪个字体,你每天都可能花费数小时面对它,所以请确保它在审美和哲学层面上与你产生共鸣。选择正确的开源字体是确保你拥有最佳生产环境的一个重要部分。这些字体都是很棒的选择,每个都具有让它脱颖而出的强大特性。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/how-select-open-source-programming-font
|
||||
|
||||
作者:[Andrew Lekashman][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com
|
||||
[1]:https://opensource.com/sites/default/files/u128651/firacode.png (FiraCode example)
|
||||
[2]:https://opensource.com/sites/default/files/u128651/firacode2.png (FiraCode compared to Fira Mono)
|
||||
[3]:https://github.com/tonsky/FiraCode
|
||||
[4]:https://opensource.com/sites/default/files/u128651/inconsolata.png (Inconsolata example)
|
||||
[5]:http://www.levien.com/type/myfonts/inconsolata.html
|
||||
[6]:https://opensource.com/sites/default/files/u128651/dejavu_sans_mono.png (DejaVu Sans Mono example)
|
||||
[7]:https://dejavu-fonts.github.io/
|
||||
[8]:https://opensource.com/sites/default/files/u128651/source_code_pro.png (Source Code Pro example)
|
||||
[9]:https://github.com/adobe-fonts/source-code-pro
|
||||
[10]:https://blog.typekit.com/2012/09/24/source-code-pro/
|
||||
[11]:https://opensource.com/sites/default/files/u128651/source_code_pro2.png (Differentiating potentially confusable characters)
|
||||
[12]:https://opensource.com/sites/default/files/u128651/source_code_pro3.png (Metacharacters with special meaning in computer languages)
|
||||
[13]:https://opensource.com/sites/default/files/u128651/noto.png (Noto Mono example)
|
||||
[14]:https://www.google.com/get/noto/#mono-mono
|
||||
[15]:https://www.youtube.com/watch?v=AAzvk9HSi84
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (liujing97)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10727-1.html)
|
||||
[#]: subject: (7 Methods To Identify Disk Partition/FileSystem UUID On Linux)
|
||||
[#]: via: (https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-unique-identifier-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
@ -10,27 +10,23 @@
|
||||
Linux 中获取硬盘分区或文件系统的 UUID 的七种方法
|
||||
======
|
||||
|
||||
作为一个 Linux 系统管理员,你应该知道如何去查看分区的 UUID 或文件系统的 UUID。
|
||||
|
||||
因为大多数的 Linux 系统使用 UUID 挂载分区。在 `/etc/fstab` 文件中可以验证此的内容。
|
||||
作为一个 Linux 系统管理员,你应该知道如何去查看分区的 UUID 或文件系统的 UUID,因为现在大多数的 Linux 系统都使用 UUID 挂载分区。你可以在 `/etc/fstab` 文件中验证这一点。
|
||||
|
||||
有许多可用的实用程序可以查看 UUID。本文我们将会向你展示多种查看 UUID 的方法,并且你可以选择一种适合于你的方法。
|
||||
|
||||
### 何为 UUID?
|
||||
|
||||
UUID 代表着通用唯一识别码,它帮助 Linux 系统去识别一个磁盘驱动分区而不是块设备文件。
|
||||
UUID 意即<ruby>通用唯一识别码<rt>Universally Unique Identifier</rt></ruby>,它可以帮助 Linux 系统识别一个磁盘分区而不是块设备文件。
|
||||
|
||||
libuuid 是内核 2.15.1 中 util-linux-ng 包中的一部分,它被默认安装在 Linux 系统中。
|
||||
自内核 2.15.1 起,libuuid 就是 util-linux-ng 包中的一部分,它被默认安装在 Linux 系统中。UUID 由该库生成,可以合理地认为在一个系统中 UUID 是唯一的,并且在所有系统中也是唯一的。
|
||||
|
||||
UUID 由该库生成,可以合理地认为它在一个系统中是唯一的,并且在所有系统中也是唯一的。
|
||||
这是在计算机系统中用来标识信息的一个 128 位(比特)的数字。UUID 最初被用在<ruby>阿波罗网络计算机系统<rt>Apollo Network Computing System</rt></ruby>(NCS)中,之后 UUID 被<ruby>开放软件基金会<rt>Open Software Foundation</rt></ruby>(OSF)标准化,成为<ruby>分布式计算环境<rt>Distributed Computing Environment</rt></ruby>(DCE)的一部分。
|
||||
|
||||
在计算机系统中使用了 128 位数字去标识信息。UUID 最初被用在 Apollo 网络计算机系统(NCS)中,之后 UUID 被开放软件基金会(OSF)标准化,成为分布式计算环境(DCE)的一部分。
|
||||
UUID 以 32 个十六进制数字表示,以连字符分割为 5 组显示,总共 36 个字符,格式为 8-4-4-4-12(32 个字母或数字加 4 个连字符)。
|
||||
|
||||
UUID 以 32 个十六进制(基数为 16)的数字表示,被连字符分割为 5 组显示,总共的 36 个字符格式为 8-4-4-4-12(32 个字母或数字和 4 个连字符)。
|
||||
例如: `d92fa769-e00f-4fd7-b6ed-ecf7224af7fa`
|
||||
|
||||
例如:d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
|
||||
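顺带一提,util-linux 提供的 `uuidgen` 工具可以随手生成一个 UUID,方便观察上述 8-4-4-4-12 的格式(其输出是随机的,下面的结果仅为示意):

```
$ uuidgen
d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
```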
|
||||
我的 /etc/fstab 文件示例。
|
||||
我的 `/etc/fstab` 文件示例:
|
||||
|
||||
```
|
||||
# cat /etc/fstab
|
||||
@ -48,19 +44,17 @@ UUID=a2092b92-af29-4760-8e68-7a201922573b swap swap defaults,noatime 0 2
|
||||
|
||||
我们可以使用下面的 7 个命令来查看。
|
||||
|
||||
* **`blkid 命令:`** 定位或打印块设备的属性。
|
||||
* **`lsblk 命令:`** lsblk 列出所有可用的或指定的块设备的信息。
|
||||
* **`hwinfo 命令:`** hwinfo 表示硬件信息工具,是另外一个很好的实用工具,用于查询系统中已存在硬件。
|
||||
* **`udevadm 命令:`** udev 管理工具
|
||||
* **`tune2fs 命令:`** 调整 ext2/ext3/ext4 文件系统上的可调文件系统参数。
|
||||
* **`dumpe2fs 命令:`** 查询 ext2/ ext3/ext4 文件系统的信息。
|
||||
* **`使用 by-uuid 路径:`** 该目录下包含有 UUID 和实际的块设备文件,UUID 与实际的块设备文件链接在一起。
|
||||
|
||||
|
||||
* `blkid` 命令:定位或打印块设备的属性。
|
||||
* `lsblk` 命令:列出所有可用的或指定的块设备的信息。
|
||||
* `hwinfo` 命令:硬件信息工具,是另外一个很好的实用工具,用于查询系统中已存在硬件。
|
||||
* `udevadm` 命令:udev 管理工具
|
||||
* `tune2fs` 命令:调整 ext2/ext3/ext4 文件系统上的可调文件系统参数。
|
||||
* `dumpe2fs` 命令:查询 ext2/ext3/ext4 文件系统的信息。
|
||||
* 使用 `by-uuid` 路径:该目录下包含有 UUID 和实际的块设备文件,UUID 与实际的块设备文件链接在一起。
|
||||
|
||||
### Linux 中如何使用 blkid 命令查看磁盘分区或文件系统的 UUID?
|
||||
|
||||
blkid 是定位或打印块设备属性的命令行实用工具。它利用 libblkid 库在 Linux 系统中获得到磁盘分区的 UUID。
|
||||
`blkid` 是定位或打印块设备属性的命令行实用工具。它利用 libblkid 库在 Linux 系统中获得到磁盘分区的 UUID。
|
||||
|
||||
```
|
||||
# blkid
|
||||
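## 也可以只查询某个指定的块设备(示意,假设系统中存在 /dev/sda1):
# blkid /dev/sda1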
@ -72,24 +66,24 @@ blkid 是定位或打印块设备属性的命令行实用工具。它利用 libb
|
||||
|
||||
### Linux 中如何使用 lsblk 命令查看磁盘分区或文件系统的 UUID?
|
||||
|
||||
lsblk 列出所有有关可用或指定块设备的信息。lsblk 命令读取 sysfs 文件系统和 udev 数据库以收集信息。
|
||||
`lsblk` 列出所有有关可用或指定块设备的信息。`lsblk` 命令读取 sysfs 文件系统和 udev 数据库以收集信息。
|
||||
|
||||
如果 udev 数据库是不可用的或者编译的 lsblk 是不支持 udev 的,它会试图从块设备中读取 LABEL,UUID 和文件系统类型。这种情况下,必须为 root 身份。该命令默认会以类似于树的格式打印出所有的块设备(RAM 盘除外)。
|
||||
如果 udev 数据库不可用或者编译的 lsblk 不支持 udev,它会试图从块设备中读取卷标、UUID 和文件系统类型。这种情况下,必须以 root 身份运行。该命令默认会以类似于树的格式打印出所有的块设备(RAM 盘除外)。
|
||||
|
||||
```
|
||||
# lsblk -o name,mountpoint,size,uuid
|
||||
NAME MOUNTPOINT SIZE UUID
|
||||
sda 30G
|
||||
└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
|
||||
sdb 10G
|
||||
sdc 10G
|
||||
├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
|
||||
├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
|
||||
├─sdc4 1K
|
||||
└─sdc5 1G
|
||||
sdd 10G
|
||||
sde 10G
|
||||
sr0 1024M
|
||||
NAME MOUNTPOINT SIZE UUID
|
||||
sda 30G
|
||||
└─sda1 / 20G d92fa769-e00f-4fd7-b6ed-ecf7224af7fa
|
||||
sdb 10G
|
||||
sdc 10G
|
||||
├─sdc1 1G d17e3c31-e2c9-4f11-809c-94a549bc43b7
|
||||
├─sdc3 1G ca307aa4-0866-49b1-8184-004025789e63
|
||||
├─sdc4 1K
|
||||
└─sdc5 1G
|
||||
sdd 10G
|
||||
sde 10G
|
||||
sr0 1024M
|
||||
```
|
||||
|
||||
### Linux 中如何使用 by-uuid 路径查看磁盘分区或文件系统的 UUID?
|
||||
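该目录下的条目是指向实际块设备文件的符号链接,直接列出它就能看到 UUID 与设备的对应关系(以下输出为示意):

```
# ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1
```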
@ -106,7 +100,7 @@ lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> .
|
||||
|
||||
### Linux 中如何使用 hwinfo 命令查看磁盘分区或文件系统的 UUID?
|
||||
|
||||
**[hwinfo][1]** 表示硬件信息工具,是另外一种很好的实用工具。它被用来检测系统中已存在的硬件,并且以可读的格式显示各种硬件组件的细节信息。
|
||||
[hwinfo][1] 意即硬件信息工具,是另外一种很好的实用工具。它被用来检测系统中已存在的硬件,并且以可读的格式显示各种硬件组件的细节信息。
|
||||
|
||||
```
|
||||
# hwinfo --block | grep by-uuid | awk '{print $3,$7}'
|
||||
@ -117,16 +111,16 @@ lrwxrwxrwx 1 root root 10 Jan 29 08:34 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> .
|
||||
|
||||
### Linux 中如何使用 udevadm 命令查看磁盘分区或文件系统的 UUID?
|
||||
|
||||
udevadm 需要命令和命令特定的操作。它控制 systemd-udevd 的运行时的行为,请求内核事件、管理事件队列并且提供简单的调试机制。
|
||||
`udevadm` 需要命令和命令特定的操作。它控制 systemd-udevd 的运行时行为,请求内核事件、管理事件队列并且提供简单的调试机制。
|
||||
|
||||
```
|
||||
udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
|
||||
# udevadm info -q all -n /dev/sdc1 | grep -i by-uuid | head -1
|
||||
S: disk/by-uuid/d17e3c31-e2c9-4f11-809c-94a549bc43b7
|
||||
```
|
||||
|
||||
### Linux 中如何使用 tune2fs 命令查看磁盘分区或文件系统的 UUID?
|
||||
|
||||
tune2fs 允许系统管理员在 Linux 的 ext2, ext3, ext4 文件系统中调整各种可调的文件系统参数。这些选项的当前值可以使用选项 -l 显示。
|
||||
`tune2fs` 允许系统管理员在 Linux 的 ext2、ext3、ext4 文件系统中调整各种可调的文件系统参数。这些选项的当前值可以使用选项 `-l` 显示。
|
||||
|
||||
```
|
||||
# tune2fs -l /dev/sdc1 | grep UUID
|
||||
@ -135,7 +129,7 @@ Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7
|
||||
|
||||
### Linux 中如何使用 dumpe2fs 命令查看磁盘分区或文件系统的 UUID?
|
||||
|
||||
dumpe2fs 打印出现在设备文件系统中的超级块和块组的信息。
|
||||
`dumpe2fs` 打印出现在设备文件系统中的超级块和块组的信息。
|
||||
|
||||
```
|
||||
# dumpe2fs /dev/sdc1 | grep UUID
|
||||
@ -150,7 +144,7 @@ via: https://www.2daygeek.com/check-partitions-uuid-filesystem-uuid-universally-
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[liujing97](https://github.com/liujing97)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,53 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10731-1.html)
|
||||
[#]: subject: (How to contribute to the Raspberry Pi community)
|
||||
[#]: via: (https://opensource.com/article/19/3/contribute-raspberry-pi-community)
|
||||
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva/users/kepler22b/users/ansilva)
|
||||
|
||||
树莓派使用入门:如何为树莓派社区做出贡献
|
||||
======
|
||||
|
||||
> 在我们的入门系列的第 13 篇文章中,发现参与树莓派社区的方法。
|
||||
|
||||
![][1]
|
||||
|
||||
这个系列已经逐渐接近尾声,写作它的过程充满乐趣。我最希望的是它能帮助人们使用树莓派进行教育或娱乐。也许这些文章能说服你买下你的第一个树莓派,或者让你重新用起抽屉里吃灰的设备。如果其中任何一条成真了,那么我认为这个系列就是成功的。
|
||||
|
||||
如果你买了一台,想要宣传这块绿色的小板子有多么多才多艺,这里有几个方法可以帮你与树莓派社区建立连接:
|
||||
|
||||
* 帮助改进[官方文档][2]
|
||||
* 贡献代码给依赖的[项目][3]
|
||||
* 向 Raspbian 报告 [bug][4]
|
||||
* 报告不同 ARM 架构发行版的 bug
|
||||
* 看一眼英国国内的树莓派基金会的[代码俱乐部][5]或英国境外的[国际代码俱乐部][6],帮助孩子学习编码
|
||||
* 帮助[翻译][7]
|
||||
* 在 [Raspberry Jam][8] 当志愿者
|
||||
|
||||
这些只是你可以为树莓派社区做贡献的几种方式。最后但同样重要的是,你可以加入我并[投稿文章][9]到你最喜欢的开源网站 [Opensource.com][10]。 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/contribute-raspberry-pi-community
|
||||
|
||||
作者:[Anderson Silva (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva/users/kepler22b/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_community.jpg?itok=dcKwb5et
|
||||
[2]: https://www.raspberrypi.org/documentation/CONTRIBUTING.md
|
||||
[3]: https://www.raspberrypi.org/github/
|
||||
[4]: https://www.raspbian.org/RaspbianBugs
|
||||
[5]: https://www.codeclub.org.uk/
|
||||
[6]: https://www.codeclubworld.org/
|
||||
[7]: https://www.raspberrypi.org/translate/
|
||||
[8]: https://www.raspberrypi.org/jam/
|
||||
[9]: https://opensource.com/participate
|
||||
[10]: http://Opensource.com
|
@ -0,0 +1,75 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10734-1.html)
|
||||
[#]: subject: (14 days of celebrating the Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/happy-pi-day)
|
||||
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva)
|
||||
|
||||
树莓派使用入门:庆祝树莓派的 14 天
|
||||
======
|
||||
|
||||
> 在我们关于树莓派入门系列的第 14 篇也是最后一篇文章中,回顾一下我们学到的所有东西。
|
||||
|
||||
![][1]
|
||||
|
||||
### 派节快乐!
|
||||
|
||||
每年的 3 月 14 日,我们这些极客都会庆祝派节。我们这样缩写日期:`MM/DD`,3 月 14 日于是写成 03/14,它在数字上提醒我们 3.14,也就是 [π][2] 的前三位数字。许多美国人没有意识到的是,世界上几乎没有其他国家使用这种[日期格式][3],因此派节几乎只属于美国,尽管它在全球范围内得到了庆祝。
|
||||
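用 `date` 命令就能直观地看到这种格式(以下输出以 3 月 14 日当天为例):

```
$ date +%m/%d
03/14
```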
|
||||
无论你身在何处,让我们一起庆祝树莓派,并通过回顾过去两周我们所涉及的主题来结束本系列:
|
||||
|
||||
* 第 1 天:[你应该选择哪种树莓派?][4]
|
||||
* 第 2 天:[如何购买树莓派][5]
|
||||
* 第 3 天:[如何启动一个新的树莓派][6]
|
||||
* 第 4 天:[用树莓派学习 Linux][7]
|
||||
* 第 5 天:[教孩子们用树莓派学编程的 5 种方法][8]
|
||||
* 第 6 天:[可以使用树莓派学习的 3 种流行编程语言][9]
|
||||
* 第 7 天:[如何更新树莓派][10]
|
||||
* 第 8 天:[如何使用树莓派来娱乐][11]
|
||||
* 第 9 天:[树莓派上的模拟器和原生 Linux 游戏][12]
|
||||
* 第 10 天:[进入物理世界 —— 如何使用树莓派的 GPIO 针脚][13]
|
||||
* 第 11 天:[通过树莓派和 kali Linux 学习计算机安全][14]
|
||||
* 第 12 天:[在树莓派上使用 Mathematica 进行高级数学运算][15]
|
||||
* 第 13 天:[如何为树莓派社区做出贡献][16]
|
||||
|
||||
![Pi Day illustration][18]
|
||||
|
||||
我将结束本系列,感谢所有关注的人,尤其是那些在过去 14 天里从中学到了东西的人!我还想鼓励大家不断扩展他们对树莓派以及围绕它构建的所有开源(和闭源)技术的了解。
|
||||
|
||||
我还鼓励你了解其他文化、哲学、宗教和世界观。让我们成为人类的,正是这种惊人的(有时是有趣的)能力:我们不仅要适应外部环境,而且要适应智力环境。
|
||||
|
||||
不管你做什么,保持学习!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/happy-pi-day
|
||||
|
||||
作者:[Anderson Silva (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA
|
||||
[2]: https://www.piday.org/million/
|
||||
[3]: https://en.wikipedia.org/wiki/Date_format_by_country
|
||||
[4]: https://linux.cn/article-10611-1.html
|
||||
[5]: https://linux.cn/article-10615-1.html
|
||||
[6]: https://linux.cn/article-10644-1.html
|
||||
[7]: https://linux.cn/article-10645-1.html
|
||||
[8]: https://linux.cn/article-10653-1.html
|
||||
[9]: https://linux.cn/article-10661-1.html
|
||||
[10]: https://linux.cn/article-10665-1.html
|
||||
[11]: https://linux.cn/article-10669-1.html
|
||||
[12]: https://linux.cn/article-10682-1.html
|
||||
[13]: https://linux.cn/article-10687-1.html
|
||||
[14]: https://linux.cn/article-10690-1.html
|
||||
[15]: https://linux.cn/article-10711-1.html
|
||||
[16]: https://linux.cn/article-10731-1.html
|
||||
[17]: /file/426561
|
||||
[18]: https://opensource.com/sites/default/files/uploads/raspberrypi_14_piday.jpg (Pi Day illustration)
|
@ -92,7 +92,7 @@ via: https://itsfoss.com/history-of-firefox
|
||||
[3]: https://en.wikipedia.org/wiki/Tim_Berners-Lee
|
||||
[4]: https://www.w3.org/DesignIssues/TimBook-old/History.html
|
||||
[5]: http://viola.org/
|
||||
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser
|
||||
[6]: https://en.wikipedia.org/wiki/Mosaic_(web_browser)
|
||||
[7]: http://www.computinghistory.org.uk/det/1789/Marc-Andreessen/
|
||||
[8]: http://www.davetitus.com/mozilla/
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Mozilla_boxing.jpg?ssl=1
|
||||
@ -110,7 +110,7 @@ via: https://itsfoss.com/history-of-firefox
|
||||
[21]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
|
||||
[22]: http://gs.statcounter.com/browser-market-share/desktop/worldwide/#monthly-201901-201901-bar
|
||||
[23]: https://en.wikipedia.org/wiki/Red_panda
|
||||
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser
|
||||
[24]: https://en.wikipedia.org/wiki/Flock_(web_browser)
|
||||
[25]: https://www.windowscentral.com/microsoft-building-chromium-powered-web-browser-windows-10
|
||||
[26]: https://itsfoss.com/why-firefox/
|
||||
[27]: https://itsfoss.com/firefox-quantum-ubuntu/
|
||||
|
@ -1,24 +1,24 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10732-1.html)
|
||||
[#]: subject: (Sweet Home 3D: An open source tool to help you decide on your dream home)
|
||||
[#]: via: (https://opensource.com/article/19/3/tool-find-home)
|
||||
[#]: author: (Jeff Macharyas (Community Moderator) )
|
||||
|
||||
Sweet Home 3D:一个帮助你决定梦想家庭的开源工具
|
||||
Sweet Home 3D:一个帮助你寻找梦想家庭的开源工具
|
||||
======
|
||||
|
||||
室内设计应用可以轻松渲染你喜欢的房子,不管是真实的或是想象的。
|
||||
> 室内设计应用可以轻松渲染你喜欢的房子,不管是真实的或是想象的。
|
||||
|
||||
![Houses in a row][1]
|
||||
|
||||
我最近接受了一份在弗吉尼亚州的新工作。由于我妻子一直在纽约工作,看着我们在纽约的房子直至出售,我有责任出去为我们和我们的猫找一所新房子。在我们搬进去之前她不会看到的房子!
|
||||
我最近接受了一份在弗吉尼亚州的新工作。由于我妻子一直在纽约工作,看着我们在纽约的房子直至出售,我有责任出去为我们和我们的猫找一所新房子。在我们搬进去之前她看不到新房子。
|
||||
|
||||
我和一个房地产经纪人签约,并看了几间房子,拍了许多照片,写下了潦草的笔记。晚上,我会将照片上传到 Google Drive 文件夹中,我和我老婆会通过手机同时查看这些照片,同时我还想记住房间是在右边还是左边,是否有风扇等。
|
||||
我和一个房地产经纪人签约,并看了几间房子,拍了许多照片,写下了潦草的笔记。晚上,我会将照片上传到 Google Drive 文件夹中,我和我老婆会通过手机同时查看这些照片,同时我还要记住房间是在右边还是左边,是否有风扇等。
|
||||
|
||||
由于这是一个相当繁琐且不太准确的方式来展示我的发现,我因此去寻找一个开源解决方案,以更好地展示我们未来的梦想之家将会是什么样的,而不会取决于我的模糊记忆和模糊的照片。
|
||||
由于这是一个相当繁琐且不太准确的展示我的发现的方式,我因此去寻找一个开源解决方案,以更好地展示我们未来的梦想之家将会是什么样的,而不会取决于我的模糊记忆和模糊的照片。
|
||||
|
||||
[Sweet Home 3D][2] 完全满足了我的要求。Sweet Home 3D 可在 Sourceforge 上获取,并在 GNU 通用公共许可证下发布。它的[网站][3]信息非常丰富,我能够立即启动并运行。Sweet Home 3D 由总部位于巴黎的 eTeks 的 Emmanuel Puybaret 开发。
|
||||
|
||||
@ -32,19 +32,19 @@ Sweet Home 3D:一个帮助你决定梦想家庭的开源工具
|
||||
|
||||
现在我画完了“内墙”,我从网站下载了各种“家具”,其中包括实际的家具以及门、窗、架子等。每个项目都以 ZIP 文件的形式下载,因此我创建了一个包含所有未压缩文件的文件夹。我可以自定义每件家具和重复的物品比如门,可以方便地复制粘贴到指定的地方。
|
||||
|
||||
在我将所有墙壁和门窗都布置完后,我就使用应用的 3D 视图浏览房屋。根据照片和记忆,我对所有物体进行了调整直到接近房屋的样子。我可以花更多时间添加纹理,附属家具和物品,但这已经达到了我需要的程度。
|
||||
在我将所有墙壁和门窗都布置完后,我就使用这个应用的 3D 视图浏览房屋。根据照片和记忆,我对所有物体进行了调整,直到接近房屋的样子。我可以花更多时间添加纹理,附属家具和物品,但这已经达到了我需要的程度。
|
||||
|
||||
![Sweet Home 3D floorplan][7]
|
||||
|
||||
完成之后,我将计划导出为 OBJ 文件,它可在各种程序中打开,例如 [Blender][8] 和 Mac 上的 Preview,方便旋转房屋并从各个角度查看。视频功能最有用,我可以创建一个起点,然后在房子中绘制一条路径,并记录“旅程”。我将视频导出为 MOV 文件,并使用 QuickTime 在 Mac 上打开和查看。
|
||||
完成之后,我将该项目导出为 OBJ 文件,它可在各种程序中打开,例如 [Blender][8] 和 Mac 上的“预览”中,方便旋转房屋并从各个角度查看。视频功能最有用,我可以创建一个起点,然后在房子中绘制一条路径,并记录“旅程”。我将视频导出为 MOV 文件,并使用 QuickTime 在 Mac 上打开和查看。
|
||||
|
||||
我的妻子能够(几乎)所有我看到的,我们甚至可以开始在搬家前布置家具。现在,我所要做的就是装上卡车搬到新家。
|
||||
我的妻子(几乎)能看到所有我看到的东西,我们甚至可以在搬家前开始布置家具。现在,我所要做的就是把行李装上卡车搬到新家。
|
||||
|
||||
Sweet Home 3D 在我的新工作中也是有用的。我正在寻找一种方法来改善学院建筑的地图,并计划在 [Inkscape][9] 或 Illustrator 或其他软件中重新绘制它。但是,由于我有平面地图,我可以使用 Sweet Home 3D 创建平面图的 3D 版本并将其上传到我们的网站以便更方便地找到地方。
|
||||
|
||||
### 开源犯罪现场?
|
||||
|
||||
一件有趣的事:根据 [Sweet Home 3D 的博客][10],“法国法医办公室(科学警察)最近选择 Sweet Home 3D 作为设计计划表示路线和犯罪现场的工具。这是法国政府建议优先考虑免费开源解决方案的具体应用。“
|
||||
一件有趣的事:根据 [Sweet Home 3D 的博客][10],“法国法医办公室(科学警察)最近选择 Sweet Home 3D 作为设计规划表示路线和犯罪现场的工具。这是法国政府建议优先考虑自由开源解决方案的具体应用。“
|
||||
|
||||
这是公民和政府如何利用开源解决方案创建个人项目、解决犯罪和建立世界的又一点证据。
|
||||
|
||||
@ -55,11 +55,11 @@ via: https://opensource.com/article/19/3/tool-find-home
|
||||
作者:[Jeff Macharyas (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:
|
||||
[a]: https://opensource.com/users/jeffmacharyas
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
|
||||
[2]: https://sourceforge.net/projects/sweethome3d/
|
@ -1,18 +1,16 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (liujing97)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10746-1.html)
|
||||
[#]: subject: (How To Configure sudo Access In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Linux 中如何配置 sudo 访问权限?
|
||||
如何在 Linux 中配置 sudo 访问权限
|
||||
======
|
||||
|
||||
Linux 系统中 root 用户拥有所有的控制权力。
|
||||
|
||||
Linux 系统中 root 是拥有最高权力的用户,可以在系统中实施任意的行为。
|
||||
Linux 系统中的 root 用户拥有全部的控制权力,是权力最高的用户,可以在系统中实施任意的行为。
|
||||
|
||||
如果其他用户想实施一些操作,我们不能为所有人都提供 root 访问权限;因为一旦他或她做了一些错误的操作,就没有办法去纠正它。
|
||||
|
||||
@ -20,43 +18,40 @@ Linux 系统中 root 是拥有最高权力的用户,可以在系统中实施
|
||||
|
||||
我们可以把 sudo 权限发放给相应的用户来克服这种情况。
|
||||
|
||||
sudo 命令提供了一种机制,它可以在不用分享 root 用户的密码的前提下,为信任的用户提供系统的管理权限。
|
||||
`sudo` 命令提供了一种机制,它可以在不用分享 root 用户的密码的前提下,为信任的用户提供系统的管理权限。
|
||||
|
||||
他们可以执行大部分的管理操作,但又不像 root 一样有全部的权限。
|
||||
|
||||
### 什么是 sudo?
|
||||
|
||||
sudo 是一个程序,普通用户可以使用它以超级用户或其他用户的身份执行命令,是由安全策略指定的。
|
||||
`sudo` 是一个程序,普通用户可以使用它,以安全策略所许可的超级用户或其他用户的身份执行命令。
|
||||
|
||||
sudo 用户的访问权限是由 `/etc/sudoers` 文件控制的。
|
||||
|
||||
### sudo 用户有什么优点?
|
||||
|
||||
在 Linux 系统中,如果你不熟悉一个命令,sudo 是运行它的一个安全方式。
|
||||
在 Linux 系统中,如果你不熟悉一个命令,`sudo` 是运行它的一个安全方式。
|
||||
|
||||
* Linux 系统在 `/var/log/secure` 和 `/var/log/auth.log` 文件中保留日志,并且你可以验证 sudo 用户实施了哪些行为操作。
|
||||
* 每一次它都为当前的操作提示输入密码。所以,你将会有时间去验证这个操作是不是你想要执行的。如果你发觉它是不正确的行为,你可以安全地退出而且没有执行此操作。
|
||||
* Linux 系统在 `/var/log/secure` 和 `/var/log/auth.log` 文件中保留日志,并且你可以验证 sudo 用户实施了哪些行为操作。
|
||||
* 每一次它都为当前的操作提示输入密码。所以,你将会有时间去验证这个操作是不是你想要执行的。如果你发觉它是不正确的行为,你可以安全地退出而且没有执行此操作。
|
||||
|
||||
基于 RHEL 的系统(如 Redhat (RHEL)、 CentOS 和 Oracle Enterprise Linux (OEL))和基于 Debian 的系统(如 Debian、Ubuntu 和 LinuxMint)在这点是不一样的。
|
||||
|
||||
基于 RHEL 的系统(如 Redhat (RHEL), CentOS 和 Oracle Enterprise Linux (OEL))和基于 Debian 的系统(如 Debian, Ubuntu 和 LinuxMint)在这点是不一样的。
|
||||
|
||||
我们将会教你如何在本文中的两种发行版中执行该操作。
|
||||
我们将会教你如何在本文中提及的两种发行版中执行该操作。
|
||||
|
||||
这里有三种方法可以应用于两个发行版本。
|
||||
|
||||
* 增加用户到相应的组。基于 RHEL 的系统,我们需要添加用户到 `wheel` 组。基于 Debain 的系统,我们添加用户到 `sudo` 或 `admin` 组。
|
||||
* 手动添加用户到 `/etc/group` 文件中。
|
||||
* 用 visudo 命令添加用户到 `/etc/sudoers` 文件中。
|
||||
|
||||
|
||||
* 增加用户到相应的组。基于 RHEL 的系统,我们需要添加用户到 `wheel` 组。基于 Debain 的系统,我们添加用户到 `sudo` 或 `admin` 组。
|
||||
* 手动添加用户到 `/etc/group` 文件中。
|
||||
* 用 `visudo` 命令添加用户到 `/etc/sudoers` 文件中。
|
||||
|
||||
### 如何在 RHEL/CentOS/OEL 系统中配置 sudo 访问权限?
|
||||
|
||||
在基于 RHEL 的系统中(如 Redhat (RHEL), CentOS 和 Oracle Enterprise Linux (OEL)),使用下面的三个方法就可以做到。
|
||||
在基于 RHEL 的系统中(如 Redhat (RHEL)、 CentOS 和 Oracle Enterprise Linux (OEL)),使用下面的三个方法就可以做到。
|
||||
|
||||
### 方法 1:在 Linux 中如何使用 wheel 组为普通用户授予超级用户访问权限?
|
||||
#### 方法 1:在 Linux 中如何使用 wheel 组为普通用户授予超级用户访问权限?
|
||||
|
||||
Wheel 是基于 RHEL 的系统中的一个特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
|
||||
wheel 是基于 RHEL 的系统中的一个特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
|
||||
|
||||
注意,应该在 `/etc/sudoers` 文件中激活 `wheel` 组来获得该访问权限。
|
||||
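具体来说,就是用 `visudo` 打开 `/etc/sudoers`,确保类似下面这一行没有被注释掉(这是基于 RHEL 的系统中的默认写法):

```
%wheel        ALL=(ALL)       ALL
```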
|
||||
@ -70,7 +65,7 @@ Wheel 是基于 RHEL 的系统中的一个特殊组,它提供额外的权限
|
||||
|
||||
假设我们已经创建了一个用户账号来执行这些操作。在此,我将会使用 `daygeek` 这个用户账号。
|
||||
|
||||
执行下面的命令,添加用户到 wheel 组。
|
||||
执行下面的命令,添加用户到 `wheel` 组。
|
||||
|
||||
```
|
||||
# usermod -aG wheel daygeek
|
||||
@ -87,10 +82,10 @@ wheel:x:10:daygeek
|
||||
|
||||
```
|
||||
$ tail -5 /var/log/secure
|
||||
tail: cannot open _/var/log/secure_ for reading: Permission denied
|
||||
tail: cannot open /var/log/secure for reading: Permission denied
|
||||
```
|
||||
|
||||
当我试图以普通用户身份访问 `/var/log/secure` 文件时出现错误。 我将使用 sudo 访问同一个文件,让我们看看这个魔术。
|
||||
当我试图以普通用户身份访问 `/var/log/secure` 文件时出现错误。 我将使用 `sudo` 访问同一个文件,让我们看看这个魔术。
|
||||
|
||||
```
|
||||
$ sudo tail -5 /var/log/secure
|
||||
@ -102,9 +97,9 @@ Mar 17 07:05:10 CentOS7 sudo: daygeek : TTY=pts/0 ; PWD=/home/daygeek ; USER=roo
|
||||
Mar 17 07:05:10 CentOS7 sudo: pam_unix(sudo:session): session opened for user root by daygeek(uid=0)
|
||||
```
|
||||
|
||||
### 方法 2:在 RHEL/CentOS/OEL 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
|
||||
#### 方法 2:在 RHEL/CentOS/OEL 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
|
||||
|
||||
我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 wheel 组。
|
||||
我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 `wheel` 组。
|
||||
|
||||
只需打开该文件,并在恰当的组后追加相应的用户就可完成这一点。
|
||||
|
||||
@ -115,7 +110,7 @@ wheel:x:10:daygeek,user1
|
||||
|
||||
在该例中,我将使用 `user1` 这个用户账号。
|
||||
|
||||
我将要通过在系统中重启 `Apache` 服务来检查用户 `user1` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
|
||||
我将要通过在系统中重启 Apache httpd 服务来检查用户 `user1` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart httpd
|
||||
@ -128,11 +123,11 @@ Mar 17 07:10:40 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ;
|
||||
Mar 17 07:12:35 CentOS7 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/grep -i httpd /var/log/secure
|
||||
```
|
||||
|
||||
### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
|
||||
#### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
|
||||
|
||||
sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 wheel 组下的 sudoers 文件中即可。
|
||||
sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 `sudoers` 文件中 的 `wheel` 组下即可。
|
||||
|
||||
只需通过 visudo 命令将期望的用户追加到 /etc/sudoers 文件中。
|
||||
只需通过 `visudo` 命令将期望的用户追加到 `/etc/sudoers` 文件中。
|
||||
|
||||
```
|
||||
# grep -i user2 /etc/sudoers
|
||||
@ -141,7 +136,7 @@ user2 ALL=(ALL) ALL
|
||||
|
||||
在该例中,我将使用 `user2` 这个用户账号。
|
||||
|
||||
我将要通过在系统中重启 `MariaDB` 服务来检查用户 `user2` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
|
||||
我将要通过在系统中重启 MariaDB 服务来检查用户 `user2` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart mariadb
|
||||
@ -155,11 +150,11 @@ Mar 17 07:26:52 CentOS7 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ;
|
||||
|
||||
### 在 Debian/Ubuntu 系统中如何配置 sudo 访问权限?
|
||||
|
||||
在基于 Debian 的系统中(如 Debian, Ubuntu 和 LinuxMint),使用下面的三个方法就可以做到。
|
||||
在基于 Debian 的系统中(如 Debian、Ubuntu 和 LinuxMint),使用下面的三个方法就可以做到。
|
||||
|
||||
### 方法 1:在 Linux 中如何使用 sudo 或 admin 组为普通用户授予超级用户访问权限?
|
||||
#### 方法 1:在 Linux 中如何使用 sudo 或 admin 组为普通用户授予超级用户访问权限?
|
||||
|
||||
sudo 或 admin 是基于 Debian 的系统中的特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
|
||||
`sudo` 或 `admin` 是基于 Debian 的系统中的特殊组,它提供额外的权限,可以授权用户像超级用户一样执行受到限制的命令。
|
||||
|
||||
注意,应该在 `/etc/sudoers` 文件中激活 `sudo` 或 `admin` 组来获得该访问权限。
|
||||
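同样地,用 `visudo` 检查 `/etc/sudoers` 中是否存在类似下面的行(这是 Debian/Ubuntu 中的默认写法):

```
%sudo   ALL=(ALL:ALL) ALL
```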
|
||||
@ -175,7 +170,7 @@ sudo 或 admin 是基于 Debian 的系统中的特殊组,它提供额外的权
|
||||
|
||||
假设我们已经创建了一个用户账号来执行这些操作。在此,我将会使用 `2gadmin` 这个用户账号。
|
||||
|
||||
执行下面的命令,添加用户到 sudo 组。
|
||||
执行下面的命令,添加用户到 `sudo` 组。
|
||||
|
||||
```
|
||||
# usermod -aG sudo 2gadmin
|
||||
@ -195,7 +190,7 @@ $ less /var/log/auth.log
|
||||
/var/log/auth.log: Permission denied
|
||||
```
|
||||
|
||||
当我试图以普通用户身份访问 `/var/log/auth.log` 文件时出现错误。 我将要使用 sudo 访问同一个文件,让我们看看这个魔术。
|
||||
当我试图以普通用户身份访问 `/var/log/auth.log` 文件时出现错误。 我将要使用 `sudo` 访问同一个文件,让我们看看这个魔术。
|
||||
|
||||
```
|
||||
$ sudo tail -5 /var/log/auth.log
|
||||
@ -209,7 +204,7 @@ Mar 17 20:40:48 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user r
|
||||
|
||||
或者,我们可以通过添加用户到 `admin` 组来执行相同的操作。
|
||||
|
||||
运行下面的命令,添加用户到 admin 组。
|
||||
运行下面的命令,添加用户到 `admin` 组。
|
||||
|
||||
```
|
||||
# usermod -aG admin user1
|
||||
@ -231,9 +226,9 @@ Mar 17 20:53:36 Ubuntu18 sudo: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ;
|
||||
Mar 17 20:53:36 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user1(uid=0)
|
||||
```
|
||||
|
||||
### 方法 2:在 Debian/Ubuntu 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
|
||||
#### 方法 2:在 Debian/Ubuntu 中如何使用 /etc/group 文件为普通用户授予超级用户访问权限?
|
||||
|
||||
我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 sudo 组或 admin 组。
|
||||
我们可以通过编辑 `/etc/group` 文件来手动地添加用户到 `sudo` 组或 `admin` 组。
|
||||
|
||||
只需打开该文件,并在恰当的组后追加相应的用户就可完成这一点。
|
||||
|
||||
@ -244,7 +239,7 @@ sudo:x:27:2gadmin,user2
|
||||
|
||||
在该例中,我将使用 `user2` 这个用户账号。
|
||||
|
||||
我将要通过在系统中重启 `Apache` 服务来检查用户 `user2` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
|
||||
我将要通过在系统中重启 Apache httpd 服务来检查用户 `user2` 是不是拥有 `sudo` 访问权限。让我们看看这个魔术。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart apache2
|
||||
@ -257,11 +252,11 @@ Mar 17 21:01:04 Ubuntu18 systemd: pam_unix(systemd-user:session): session opened
|
||||
Mar 17 21:01:33 Ubuntu18 sudo: user2 : TTY=pts/0 ; PWD=/home/user2 ; USER=root ; COMMAND=/bin/systemctl restart apache2
|
||||
```
|
||||
|
||||
### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
|
||||
#### 方法 3:在 Linux 中如何使用 /etc/sudoers 文件为普通用户授予超级用户访问权限?
|
||||
|
||||
sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 sudo 或 admin 组下的 sudoers 文件中即可。
|
||||
sudo 用户的访问权限是被 `/etc/sudoers` 文件控制的。因此,只需将用户添加到 `sudoers` 文件中的 `sudo` 或 `admin` 组下即可。
|
||||
|
||||
只需通过 visudo 命令将期望的用户追加到 /etc/sudoers 文件中。
|
||||
只需通过 `visudo` 命令将期望的用户追加到 `/etc/sudoers` 文件中。
|
||||
|
||||
```
|
||||
# grep -i user3 /etc/sudoers
|
||||
@ -270,7 +265,7 @@ user3 ALL=(ALL:ALL) ALL
|
||||
|
||||
在该例中,我将使用 `user3` 这个用户账号。
|
||||
|
||||
我将要通过在系统中重启 `MariaDB` 服务来检查用户 `user3` 是不是拥有 sudo 访问权限。让我们看看这个魔术。
|
||||
我将要通过在系统中重启 MariaDB 服务来检查用户 `user3` 是不是拥有 `sudo` 访问权限。让我们看看这个魔术。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart mariadb
|
||||
@ -285,6 +280,7 @@ Mar 17 21:12:53 Ubuntu18 sudo: pam_unix(sudo:session): session closed for user r
|
||||
Mar 17 21:13:08 Ubuntu18 sudo: user3 : TTY=pts/0 ; PWD=/home/user3 ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log
|
||||
Mar 17 21:13:08 Ubuntu18 sudo: pam_unix(sudo:session): session opened for user root by user3(uid=0)
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
|
||||
@ -292,7 +288,7 @@ via: https://www.2daygeek.com/how-to-configure-sudo-access-in-linux/
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[liujing97](https://github.com/liujing97)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10742-1.html)
|
||||
[#]: subject: (Parallel computation in Python with Dask)
|
||||
[#]: via: (https://opensource.com/article/19/4/parallel-computation-python-dask)
|
||||
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
|
||||
|
||||
使用 Dask 在 Python 中进行并行计算
|
||||
======
|
||||
|
||||
> Dask 库可以将 Python 计算扩展到多个核心甚至是多台机器。
|
||||
|
||||
![Pair programming][1]
|
||||
|
||||
关于 Python 性能的一个常见抱怨是[全局解释器锁][2](GIL)。由于 GIL,同一时刻只能有一个线程执行 Python 字节码。因此,即使在现代的多核机器上,使用线程也不会加速计算。
|
||||
|
||||
但当你需要并行化到多核时,你不需要放弃使用 Python:[Dask][3] 库可以将计算扩展到多个内核甚至多个机器。某些设置可以在数千台机器上配置 Dask,每台机器都有多个内核。虽然存在扩展规模的限制,但一般达不到。
|
||||
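Dask 可以直接通过 pip 安装;下面的命令只是一个示意,其中 `dask[complete]` 会把数组、数据帧等可选组件一并装上:

```
$ pip install "dask[complete]"
```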
|
||||
虽然 Dask 有许多内置的数组操作,但举一个非内置的例子,我们可以计算[偏度][4]:
|
||||
|
||||
```
|
||||
import numpy
import dask
from dask import array as darray

# 假设 my_data 是一个已定义好的 Python 序列或 NumPy 数组
arr = darray.from_array(numpy.array(my_data), chunks=(1000,))
mean = darray.mean(arr)
stddev = darray.std(arr)
unnormalized_moment = darray.mean(arr * arr * arr)

## See formula in wikipedia:
skewness = ((unnormalized_moment - (3 * mean * stddev ** 2) - mean ** 3) /
            stddev ** 3)
|
||||
```
|
||||
|
||||
请注意,每个操作将根据需要使用尽可能多的内核。这将在所有核心上并行化执行,即使在计算数十亿个元素时也是如此。
|
||||
|
||||
当然,并不是我们所有的操作都可由这个库并行化,有时我们需要自己实现并行性。
|
||||
|
||||
为此,Dask 有一个“延迟”功能:
|
||||
|
||||
```
|
||||
import dask
|
||||
|
||||
def is_palindrome(s):
|
||||
return s == s[::-1]
|
||||
|
||||
palindromes = [dask.delayed(is_palindrome)(s) for s in string_list]
|
||||
total = dask.delayed(sum)(palindromes)
|
||||
result = total.compute()
|
||||
```
|
||||
|
||||
这将计算字符串是否是回文并返回回文的数量。
|
||||
|
||||
虽然 Dask 是为数据科学家创建的,但它绝不仅限于数据科学。每当我们需要在 Python 中并行化任务时,我们可以使用 Dask —— 无论有没有 GIL。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/parallel-computation-python-dask
|
||||
|
||||
作者:[Moshe Zadka (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 (Pair programming)
|
||||
[2]: https://wiki.python.org/moin/GlobalInterpreterLock
|
||||
[3]: https://github.com/dask/dask
|
||||
[4]: https://en.wikipedia.org/wiki/Skewness#Definition
|
@ -0,0 +1,89 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10741-1.html)
|
||||
[#]: subject: (Streaming internet radio with RadioDroid)
|
||||
[#]: via: (https://opensource.com/article/19/4/radiodroid-internet-radio-player)
|
||||
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen)
|
||||
|
||||
使用 RadioDroid 流传输网络广播
|
||||
======
|
||||
|
||||
> 通过简单的设置使用你家中的音响收听你最爱的网络电台。
|
||||
|
||||
![][1]
|
||||
|
||||
最近网络媒体对 [Google 的 Chromecast 音频设备的下架][2]发出叹息。该设备在音频媒体界备受[好评][3],因此我已经在考虑入手一个。基于 Chromecast 退场的消息,我决定在它们全部被打包扔进垃圾堆之前以一个合理价位买一个。
|
||||
|
||||
我在 [MobileFun][4] 上找到一台,并下了订单。设备最终到货了,它被包在一个普普通通、简简单单的 Google 包装袋中,外面印着非常简短的使用指南。
|
||||
|
||||
![Google Chromecast 音频][5]
|
||||
|
||||
我通过我的数模转换器的光纤 S/PDIF 连接接入到家庭音响,希望以此能提供最佳的音质。
|
||||
|
||||
安装过程并无纰漏,在五分钟后我就可以播放一些音乐了。我知道一些安卓应用支持 Chromecast,因此我决定用 Google Play Music 测试它。意料之中,它工作得不错,音乐效果听上去也相当好。然而作为一个具有开源精神的人,我决定看看我能找到什么开源播放器能兼容 Chromecast。
|
||||
|
||||
### RadioDroid 的救赎
|
||||
|
||||
[RadioDroid 安卓应用][6] 满足条件。它是开源的,并且可从 [GitHub][7]、Google Play 以及 [F-Droid][8] 上获取。根据帮助文档,RadioDroid 从 [Community Radio Browser][9] 网页寻找播放流。因此我决定在我的手机上安装尝试一下。
|
||||
|
||||
![RadioDroid][10]
|
||||
|
||||
安装过程快速顺利,RadioDroid 打开展示当地电台十分迅速。你可以在这个屏幕截图的右上方附近看到 Chromecast 按钮(看上去像一个有着波阵面的长方形图标)。
|
||||
|
||||
我尝试了几个当地电台。这个应用可靠地在我手机喇叭上播放了音乐,不过我得摆弄一下 Chromecast 按钮,才能把音乐流传输到 Chromecast 上。但它确实可以做到。
|
||||
|
||||
我决定找一下我喜爱的网络广播电台:法国马赛的 [格雷诺耶广播电台][11]。在 RadioDroid 上有许多找到电台的方法。其中一种是使用标签——“当地”、“最流行”等——就在电台列表上方。其中一个标签是国家,我找到法国,在其 1500 个电台中划来划去寻找格雷诺耶广播电台。另一种办法是使用屏幕上方的查询按钮;查询迅速找到了那家美妙的电台。我尝试了其它几次查询它们都返回了合理的信息。
|
||||
|
||||
回到“当地”标签,我在列表中翻来覆去,发现“当地”的定义似乎是“在同一个国家”。因此尽管西雅图、波特兰、旧金山、洛杉矶和朱诺比多伦多更靠近我的家,我并没有在“当地”标签中看到它们。然而通过使用查询功能,我可以发现所有名字中带有西雅图的电台。
|
||||
|
||||
“语言”标签使我找到所有用葡语(及葡语方言)播报的电台。我很快发现了另一个最爱的电台 [91 Rock Curitiba][12]。
|
||||
|
||||
接着灵感来了,虽然现在是春天了,但又如何呢?让我们听一些圣诞音乐。意料之中,搜寻圣诞把我引到了 [181.FM – Christmas Blender][13]。不错,一两分钟的欣赏对我就够了。
|
||||
|
||||
因此总的来说,我推荐把 RadioDroid 和 Chromecast 的组合作为一种用家庭音响以合理价位播放网络电台的良好方式。
|
||||
|
||||
### 对于音乐方面……
|
||||
|
||||
最近我从 [Blue Coast Music][16] 商店里选了一个 [Qua Continuum][15] 创作的叫作 [Continuum One][14] 的有趣的氛围(甚至无节拍)音乐专辑。
|
||||
|
||||
Blue Coast 有许多适合开源音乐爱好者的东西。音乐可以直接下载,无需经过那些奇怪的平台专用下载管理器(有时也提供实体介质)。它通常提供几种格式,包括 WAV、FLAC 和 DSD;WAV 和 FLAC 还提供不同的字长和比特率,包括 16/44.1、24/96 和 24/192,DSD 则有 2.8、5.6 和 11.2 MHz。音乐是用优秀的设备精心录制的。不幸的是,我并没有找到许多符合我口味的音乐,尽管我喜欢 Blue Coast 上的几位艺术家,包括 Qua Continuum、[Art Lande][17] 以及 [Alex De Grassi][18]。
|
||||
|
||||
在 [Bandcamp][19] 上,我挑选了 [Emancipator's Baralku][20] 和 [Framework's Tides][21],两个都是我喜欢的。两位艺术家创作的音乐符合我的口味——电音但又(总体来说)不是舞蹈,它们的音乐旋律优美,副歌也很好听。有许多可以让开源音乐发烧友爱上 Bandcamp 的东西,比如买前试听整首歌的服务;没有垃圾软件下载器;与大量音乐家的合作;以及对 [Creative Commons music][22] 的支持。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/radiodroid-internet-radio-player
|
||||
|
||||
作者:[Chris Hermansen (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/clhermansen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (woman programming)
|
||||
[2]: https://www.theverge.com/2019/1/11/18178751/google-chromecast-audio-discontinued-sale
|
||||
[3]: https://www.whathifi.com/google/chromecast-audio/review
|
||||
[4]: https://www.mobilefun.com/google-chromecast-audio-black-70476
|
||||
[5]: https://opensource.com/sites/default/files/uploads/internet-radio_chromecast.png (Google Chromecast Audio)
|
||||
[6]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
|
||||
[7]: https://github.com/segler-alex/RadioDroid
|
||||
[8]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
|
||||
[9]: http://www.radio-browser.info/gui/#!/
|
||||
[10]: https://opensource.com/sites/default/files/uploads/internet-radio_radiodroid.png (RadioDroid)
|
||||
[11]: http://www.radiogrenouille.com/
|
||||
[12]: https://91rock.com.br/
|
||||
[13]: http://player.181fm.com/?station=181-xblender
|
||||
[14]: https://www.youtube.com/watch?v=PqLCQXPS8iQ
|
||||
[15]: https://bluecoastmusic.com/artists/qua-continuum
|
||||
[16]: https://bluecoastmusic.com/store
|
||||
[17]: https://bluecoastmusic.com/store?f%5B0%5D=search_api_multi_aggregation_1%3Aart%20lande
|
||||
[18]: https://bluecoastmusic.com/store?f%5B0%5D=search_api_multi_aggregation_1%3Aalex%20de%20grassi
|
||||
[19]: https://bandcamp.com/
|
||||
[20]: https://emancipator.bandcamp.com/album/baralku
|
||||
[21]: https://frameworksuk.bandcamp.com/album/tides
|
||||
[22]: https://bandcamp.com/tag/creative-commons
|
@ -1,44 +1,45 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10725-1.html)
|
||||
[#]: subject: (Bash vs. Python: Which language should you use?)
|
||||
[#]: via: (https://opensource.com/article/19/4/bash-vs-python)
|
||||
[#]: author: (Archit Modi Red Hat https://opensource.com/users/architmodi/users/greg-p/users/oz123)
|
||||
|
||||
|
||||
Bash vs Python: 该使用哪种语言?
|
||||
Bash vs Python:你该使用哪个?
|
||||
======
|
||||
两种编程语言都各有优缺点,一些任务使得它们在某些方面互有胜负。
|
||||
|
||||
> 两种编程语言都各有优缺点,它们在某些任务方面互有胜负。
|
||||
|
||||
![][1]
|
||||
|
||||
[Bash][2] 和 [Python][3] 是大多数自动化工程师最喜欢的编程语言。它们都各有优缺点,有时很难选择应该使用哪一个。诚实的答案是:这取决于任务、范围、背景和任务的复杂性。
|
||||
[Bash][2] 和 [Python][3] 是大多数自动化工程师最喜欢的编程语言。它们都各有优缺点,有时很难选择应该使用哪一个。所以,最诚实的答案是:这取决于任务、范围、背景和任务的复杂性。
|
||||
|
||||
让我们来比较一下这两种语言,以便更好地理解它们各自的优点。
|
||||
|
||||
### Bash
|
||||
|
||||
* 是一种 Linux/Unix shell 命令语言
|
||||
* 非常适合编写使用命令行界面(CLI)实用程序的 shell 脚本,利用一个命令的输出传递给另一个命令(管道),以及执行简单的任务(最多 100 行代码)
|
||||
* 非常适合编写使用命令行界面(CLI)实用程序的 shell 脚本,利用一个命令的输出传递给另一个命令(管道),以及执行简单的任务(可以多达 100 行代码)
|
||||
* 可以按原样使用命令行命令和实用程序
|
||||
* 启动时间比 Python 快,但执行时间性能差
|
||||
* Windows 默认没有安装。你的脚本可能不会兼容多个操作系统,但是 Bash 是大多数 Linux/Unix 系统的默认 shell
|
||||
* 与其它 shell (如 csh、zsh、fish) _不_ 完全兼容。
|
||||
* 管道 ("|") CLI 实用程序如 sed、awk、grep 等可以降低其性能
|
||||
* 缺少很多函数、对象、数据结构和多线程,这限制了它在复杂脚本或编程中的使用
|
||||
* 启动时间比 Python 快,但执行时性能差
|
||||
* Windows 中默认没有安装。你的脚本可能不会兼容多个操作系统,但是 Bash 是大多数 Linux/Unix 系统的默认 shell
|
||||
* 与其它 shell (如 csh、zsh、fish) *不* 完全兼容。
|
||||
* 通过管道(`|`)传递 CLI 实用程序如 `sed`、`awk`、`grep` 等会降低其性能
|
||||
* 缺少很多函数、对象、数据结构和多线程支持,这限制了它在复杂脚本或编程中的使用
|
||||
* 缺少良好的调试工具和实用程序
|
||||
|
||||
### Python
|
||||
|
||||
* 是一种面对对象编程语言(OOP),因此它比 Bash 更加通用
|
||||
* 几乎可以用于任何任务
|
||||
* 适用于大多数操作系统,默认情况下它安装在大多数 Unix/Linux 系统中
|
||||
* 适用于大多数操作系统,默认情况下它在大多数 Unix/Linux 系统中都有安装
|
||||
* 与伪代码非常相似
|
||||
* 具有简单、清晰、易于学习和阅读的语法
|
||||
* 拥有大量的库、文档以及一个活跃的社区
|
||||
* 提供比 Bash 更友好的错误处理特性
|
||||
* 有比 Bash 更好的调试工具和实用程序,这使得它在开发涉及很多行代码的复杂软件应用程序中是一种很棒的语言
|
||||
* 有比 Bash 更好的调试工具和实用程序,这使得它在开发涉及到很多行代码的复杂软件应用程序时是一种很棒的语言
|
||||
* 应用程序(或脚本)可能包含许多第三方依赖项,这些依赖项必须在执行前安装
|
||||
* 对于简单任务,需要编写比 Bash 更多的代码(见下面的小例子)
|
||||
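举一个小例子来体会最后这条区别:统计当前目录下 `.log` 文件的个数(命令仅为示意):

```
$ ls *.log | wc -l                                            # Bash:一条管道搞定
$ python3 -c 'import glob; print(len(glob.glob("*.log")))'    # Python:代码略多一些
```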
|
||||
@ -53,7 +54,7 @@ via: https://opensource.com/article/19/4/bash-vs-python
|
||||
作者:[Archit Modi (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10747-1.html)
|
||||
[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
|
||||
[#]: via: (https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
思科、谷歌重新赋能多/混合云共同开发
|
||||
======
|
||||
> 思科、VMware、HPE 等公司开始采用了新的 Google Cloud Anthos 云技术。
|
||||
|
||||
![Thinkstock][1]
|
||||
|
||||
思科与谷歌已扩展它们的混合云开发活动,以帮助其客户可以在从本地数据中心到公共云上的任何地方更轻松地搭建安全的多云以及混合云应用。
|
||||
|
||||
这次扩张围绕着谷歌被称作 Anthos 的新的开源混合云包展开,它是在这周的 Google Next 活动上推出的。Anthos 基于并取代了谷歌现有的谷歌云服务测试版。Anthos 将让客户们无须修改应用就可以在现有的本地硬件或公共云上运行应用。据谷歌说,它可以在[谷歌云平台][5](GCP)与[谷歌 Kubernetes 引擎][6](GKE)一同使用,或者在数据中心中与 [GKE On-Prem][7] 一同使用。谷歌说,Anthos 首次让客户们可以无需让管理员和开发者了解不同的环境和 API,就能从谷歌平台上管理运行在第三方云(如 AWS 和 Azure)上的工作负荷。
|
||||
|
||||
关键在于,Anthos 提供了一个单一的托管服务,它使得客户们无须担心不同的环境或 API 就能跨云管理、部署工作负荷。
|
||||
|
||||
作为首秀的一部分,谷歌也宣布一个叫做 [Anthos Migrate][8] 的测试计划,它能够从本地环境或者其它云自动迁移虚拟机到 GKE 上的容器中。谷歌说,“这种独特的迁移技术使你无须修改原来的虚拟机或者应用就能以一种行云流水般的方式迁移、更新你的基础设施”。谷歌称它给予了公司按客户节奏转移本地应用到云环境的灵活性。
|
||||
|
||||
### 思科和谷歌
|
||||
|
||||
就思科来说,它宣布对 Anthos 的支持并承诺将它紧密集成进思科的数据中心技术中,例如 HyperFlex 超融合包、应用中心基础设施(思科的旗舰 SDN 方案)、SD-WAN 和 StealthWatch 云。思科说,无论是本地的还是在云端的,这次集成将通过自动更新到最新版本和安全补丁,给予一种一致的、云般的感觉。
|
||||
|
||||
“谷歌云在容器(Kubernetes)和<ruby>服务网格<rt>service mesh</rt></ruby>(Istio)上的专业与它们在开发者社区的领导力,再加上思科的企业级网络、计算、存储和安全产品及服务,将为我们的顾客促成一次强强联合。”思科的云平台和解决方案集团资深副总裁 Kip Compton 这样[写道][9],“思科对于 Anthos 的集成将会帮助顾客跨本地数据中心和公共云搭建、管理多云/混合云应用,让他们专注于创新和灵活性,同时不会影响安全性或增加复杂性。”
|
||||
|
||||
### 谷歌云和思科
|
||||
|
||||
谷歌云工程副总裁 Eyal Manor [写道][10],通过思科对 Anthos 的支持,客户将能够:
|
||||
|
||||
* 受益于全托管服务例如 GKE 以及思科的超融合基础设施、网络和安全技术;
|
||||
* 在企业数据中心和云中一致运行
|
||||
* 在企业数据中心使用云服务
|
||||
* 用最新的云技术更新本地基础设施
|
||||
|
||||
思科和谷歌从 2017 年 10 月就在紧密合作,当时他们表示正在开发一个能够连接本地基础设施和云环境的开放混合云平台。该套件,即[思科为谷歌云打造的混合云平台][11],大致在 2018 年 9 月上市。它使得客户们能通过谷歌云托管 Kubernetes 容器开发企业级功能,包含思科网络和安全技术以及来自 Istio 的服务网格监控。
|
||||
|
||||
谷歌说,开源的 Istio 的容器和微服务优化技术给开发者提供了一种一致的方式,通过服务级的 mTLS(双向传输层安全)身份验证访问控制来跨云连接、保护、管理和监控微服务。因此,客户能够轻松实施新的可移植的服务,并集中配置和管理这些服务。
|
||||
|
||||
思科不是唯一宣布对 Anthos 支持的供应商。谷歌表示,至少 30 家大型合作商包括 [VMware][12]、[Dell EMC][13]、[HPE][14]、Intel 和联想致力于为他们的客户在它们自己的超融合基础设施上提供 Anthos 服务。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.techhive.com/images/article/2016/12/hybrid_cloud-100700390-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
|
||||
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
|
||||
[4]: https://www.networkworld.com/newsletters/signup.html
|
||||
[5]: https://cloud.google.com/
|
||||
[6]: https://cloud.google.com/kubernetes-engine/
|
||||
[7]: https://cloud.google.com/gke-on-prem/
|
||||
[8]: https://cloud.google.com/contact/
|
||||
[9]: https://blogs.cisco.com/news/next-phase-cisco-google-cloud
|
||||
[10]: https://cloud.google.com/blog/topics/partners/google-cloud-partners-with-cisco-on-hybrid-cloud-next19?utm_medium=unpaidsocial&utm_campaign=global-googlecloud-liveevent&utm_content=event-next
|
||||
[11]: https://cloud.google.com/cisco/
|
||||
[12]: https://blogs.vmware.com/networkvirtualization/2019/04/vmware-and-google-showcase-hybrid-cloud-deployment.html/
|
||||
[13]: https://www.dellemc.com/en-us/index.htm
|
||||
[14]: https://www.hpe.com/us/en/newsroom/blog-post/2019/04/hpe-and-google-cloud-join-forces-to-accelerate-innovation-with-hybrid-cloud-solutions-optimized-for-containerized-applications.html
|
||||
[15]: https://www.facebook.com/NetworkWorld/
|
||||
[16]: https://www.linkedin.com/company/network-world
|
||||
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10751-1.html)
|
||||
[#]: subject: (How To Install And Enable Flatpak Support On Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-install-and-enable-flatpak-support-on-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
@ -10,35 +10,30 @@
|
||||
如何在 Linux 上安装并启用 Flatpak 支持?
|
||||
======
|
||||
|
||||
<to 校正:之前似乎发表过跟这个类似的一篇 https://linux.cn/article-10459-1.html>
|
||||
|
||||
目前,我们都在使用 Linux 发行版的官方软件包管理器来安装所需的软件包。
|
||||
|
||||
在 Linux 中,它做得很好,没有任何问题。(它很好地完成了它应该做的工作,同时它没有任何妥协)
|
||||
在 Linux 中,它做得很好,没有任何问题。(它不打折扣地很好地完成了它应该做的工作)
|
||||
|
||||
在一些方面它也有一些限制,所以会让我们考虑其他替代解决方案来解决。
|
||||
但在一些方面它也有一些限制,所以会让我们考虑其他替代解决方案来解决。
|
||||
|
||||
是的,默认情况下,我们不会从发行版官方软件包管理器获取最新版本的软件包,因为这些软件包是在构建当前 OS 版本时构建的。它们只会提供安全更新,直到下一个主要版本发布。
|
||||
是的,默认情况下,我们不能从发行版官方软件包管理器获取到最新版本的软件包,因为这些软件包是在构建当前 OS 版本时构建的。它们只会提供安全更新,直到下一个主要版本发布。
|
||||
|
||||
那么,这种情况有什么解决办法吗?
|
||||
|
||||
是的,我们有多种解决方案,而且我们大多数人已经开始使用其中的一些了。
|
||||
那么,这种情况有什么解决办法吗?是的,我们有多种解决方案,而且我们大多数人已经开始使用其中的一些了。
|
||||
|
||||
它们是什么,又有什么好处呢?
|
||||
|
||||
* **对于基于 Ubuntu 的系统:** PPAs
|
||||
* **对于基于 RHEL 的系统:** [EPEL Repository][1]、[ELRepo Repository][2]、[nux-dextop Repository][3]、[IUS Community Repo][4]、[RPMfusion Repository][5] 和 [Remi Repository][6]
|
||||
* **对于基于 Ubuntu 的系统:** PPA
|
||||
* **对于基于 RHEL 的系统:** [EPEL 仓库][1]、[ELRepo 仓库][2]、[nux-dextop 仓库][3]、[IUS 社区仓库][4]、[RPMfusion 仓库][5] 和 [Remi 仓库][6]
|
||||
|
||||
|
||||
使用上面的仓库,我们将获得最新的软件包。这些软件包通常都得到了很好的维护,还有大多数社区的建议。但这对于操作系统来说应该是适当的,因为它们可能并不安全。
|
||||
使用上面的仓库,我们将获得最新的软件包。这些软件包通常都得到了很好的维护,还有大多数社区的推荐。但这些只是建议,可能并不总是安全的。
|
||||
|
||||
近年来,出现了一些通用的软件包封装格式,并且得到了广泛的应用。
|
||||
|
||||
* **`Flatpak:`** 它是独立于发行版的包格式,主要贡献者是 Fedora 项目团队。大多数主要的 Linux 发行版都采用了 Flatpak 框架。
|
||||
* **`Snaps:`** Snappy 是一种通用的软件包封装格式,最初由 Canonical 为 Ubuntu 手机及其操作系统设计和构建的。后来,大多数发行版都进行了改编。
|
||||
* **`AppImage:`** AppImage 是一种可移植的包格式,可以在不安装或不需要 root 权限的情况下运行。
|
||||
* Flatpak:它是独立于发行版的包格式,主要贡献者是 Fedora 项目团队。大多数主要的 Linux 发行版都采用了 Flatpak 框架。
|
||||
* Snaps:Snappy 是一种通用的软件包封装格式,最初由 Canonical 为 Ubuntu 手机及其操作系统设计和构建的。后来,更多的发行版都接纳了它。
|
||||
* AppImage:AppImage 是一种可移植的包格式,可以在不安装和不需要 root 权限的情况下运行。
|
||||
|
||||
我们之前已经介绍过 **[Snap 包管理器和包封装格式][7]**。今天我们将讨论 Flatpak 包封装格式。
|
||||
我们之前已经介绍过 [Snap 包管理器和包封装格式][7]。今天我们将讨论 Flatpak 包封装格式。
|
||||
|
||||
### 什么是 Flatpak?
|
||||
|
||||
@ -56,13 +51,13 @@ Flatpak 的一个缺点是不像 Snap 和 AppImage 那样支持服务器操作
|
||||
|
||||
大多数 Linux 发行版官方仓库都提供 Flatpak 软件包。因此,可以使用它们来进行安装。
|
||||
|
||||
对于 **`Fedora`** 系统,使用 **[DNF 命令][8]** 来安装 flatpak。
|
||||
对于 Fedora 系统,使用 [DNF 命令][8] 来安装 flatpak。
|
||||
|
||||
```
|
||||
$ sudo dnf install flatpak
|
||||
```
|
||||
|
||||
对于 **`Debian/Ubuntu`** 系统,使用 **[APT-GET 命令][9]** 或 **[APT 命令][10]** 来安装 flatpak。
|
||||
对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][9] 或 [APT 命令][10] 来安装 flatpak。
|
||||
|
||||
```
|
||||
$ sudo apt install flatpak
|
||||
@ -76,19 +71,19 @@ $ sudo apt update
|
||||
$ sudo apt install flatpak
|
||||
```
|
||||
|
||||
对于基于 **`Arch Linux`** 的系统,使用 **[Pacman 命令][11]** 来安装 flatpak。
|
||||
对于基于 Arch Linux 的系统,使用 [Pacman 命令][11] 来安装 flatpak。
|
||||
|
||||
```
|
||||
$ sudo pacman -S flatpak
|
||||
```
|
||||
|
||||
对于 **`RHEL/CentOS`** 系统,使用 **[YUM 命令][12]** 来安装 flatpak。
|
||||
对于 RHEL/CentOS 系统,使用 [YUM 命令][12] 来安装 flatpak。
|
||||
|
||||
```
|
||||
$ sudo yum install flatpak
|
||||
```
|
||||
|
||||
对于 **`openSUSE Leap`** 系统,使用 **[Zypper 命令][13]** 来安装 flatpak。
|
||||
对于 openSUSE Leap 系统,使用 [Zypper 命令][13] 来安装 flatpak。
|
||||
|
||||
```
|
||||
$ sudo zypper install flatpak
|
||||
@ -96,9 +91,7 @@ $ sudo zypper install flatpak
|
||||
|
||||
### 如何在 Linux 上启用 Flathub 支持?
|
||||
|
||||
Flathub 网站是一个应用程序商店,你可以在其中找到 flatpak。
|
||||
|
||||
它是一个中央仓库,所有的 flatpak 应用程序都可供用户使用。
|
||||
Flathub 网站是一个应用程序商店,你可以在其中找到 flatpak 软件包。它是一个中央仓库,所有的 flatpak 应用程序都可供用户使用。
|
||||
|
||||
运行以下命令在 Linux 上启用 Flathub 支持:
|
||||
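按照 Flathub 官方的说明,该命令如下:

```
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```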
|
||||
@ -226,7 +219,7 @@ org.gnome.Platform/x86_64/3.30 system,runtime
|
||||
|
||||
### 如何查看有关已安装应用程序的详细信息?
|
||||
|
||||
运行以下命令以查看有关已安装应用程序的详细信息。
|
||||
运行以下命令以查看有关已安装应用程序的详细信息:
|
||||
|
||||
```
|
||||
$ flatpak info com.github.muriloventuroso.easyssh
|
||||
@ -264,6 +257,7 @@ $ flatpak update com.github.muriloventuroso.easyssh
|
||||
### 如何移除已安装的应用程序?
|
||||
|
||||
运行以下命令来移除已安装的应用程序:
|
||||
|
||||
```
|
||||
$ sudo flatpak uninstall com.github.muriloventuroso.easyssh
|
||||
```
|
||||
@ -281,7 +275,7 @@ via: https://www.2daygeek.com/how-to-install-and-enable-flatpak-support-on-linux
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,45 +1,35 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (heguangzhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10736-1.html)
|
||||
[#]: subject: (How To Check The List Of Open Ports In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
|
||||
如何检查Linux中的开放端口列表?
|
||||
如何检查 Linux 中的开放端口列表?
|
||||
======
|
||||
|
||||
最近,我们就同一主题写了两篇文章。
|
||||
最近,我们就同一主题写了两篇文章。这些文章可以帮助你检查远程服务器中给定的端口是否打开。
|
||||
|
||||
这些文章内容帮助您如何检查远程服务器中给定的端口是否打开。
|
||||
如果你想 [检查远程 Linux 系统上的端口是否打开][1] 请点击链接浏览。如果你想 [检查多个远程 Linux 系统上的端口是否打开][2] 请点击链接浏览。如果你想 [检查多个远程 Linux 系统上的多个端口状态][2] 请点击链接浏览。
|
||||
|
||||
如果您想 **[检查远程 Linux 系统上的端口是否打开][1]** 请点击链接浏览。
|
||||
但是本文帮助你检查本地系统上的开放端口列表。
|
||||
|
||||
如果您想 **[检查多个远程 Linux 系统上的端口是否打开][2]** 请点击链接浏览。
|
||||
在 Linux 中很少有用于此目的的实用程序。然而,我提供了四个最重要的 Linux 命令来检查这一点。
|
||||
|
||||
如果您想 **[检查多个远程Linux系统上的多个端口状态][2]** 请点击链接浏览。
|
||||
|
||||
但是本文帮助您检查本地系统上的开放端口列表。
|
||||
|
||||
在 Linux 中很少有用于此目的的实用程序。
|
||||
|
||||
然而,我提供了四个最重要的 Linux 命令来检查这一点。
|
||||
|
||||
您可以使用以下四个命令来完成这个工作。这些命令是非常出名的并被 Linux 管理员广泛使用。
|
||||
|
||||
* **`netstat:`** netstat (“network statistics”) 是一个显示网络连接(进和出)相关信息命令行工具,例如:路由表, 伪装连接,多点传送成员和网络端口。
|
||||
* **`nmap:`** Nmap (“Network Mapper”) 是一个网络探索与安全审计的开源工具。它旨在快速扫描大型网络。
|
||||
* **`ss:`** ss 被用于转储套接字统计信息。它也可以类似 netstat 使用。相比其他工具它可以展示更多的TCP状态信息。
|
||||
* **`lsof:`** lsof 是 List Open File 的缩写. 它用于输出被某个进程打开的所有文件。
|
||||
你可以使用以下四个命令来完成这个工作。这些命令是非常出名的并被 Linux 管理员广泛使用。
|
||||
|
||||
* `netstat`:netstat(“network statistics”)是一个显示网络连接(进和出)相关信息的命令行工具,例如:路由表、伪装连接、多播成员和网络端口。
* `nmap`:Nmap(“Network Mapper”)是一个网络探索与安全审计的开源工具。它旨在快速扫描大型网络。
* `ss`:ss 用于转储套接字统计信息。它也可以显示类似 netstat 的信息。相比其他工具它可以展示更多的 TCP 状态信息。
* `lsof`:lsof 是 List Open Files 的缩写,它用于输出被某个进程打开的所有文件。
|
||||
|
||||
### 如何使用 Linux 命令 netstat 检查系统中的开放端口列表
|
||||
|
||||
netstat 是 Network Statistics 的缩写,是一个显示网络连接(进和出)相关信息命令行工具,例如:路由表, 伪装连接,多点传送成员和网络端口。
|
||||
`netstat` 是 Network Statistics 的缩写,是一个显示网络连接(进和出)相关信息命令行工具,例如:路由表、伪装连接、多播成员和网络端口。
|
||||
|
||||
它可以列出所有的 tcp, udp 连接 和所有的 unix 套接字连接。
|
||||
它可以列出所有的 tcp、udp 连接和所有的 unix 套接字连接。
|
||||
|
||||
它用于发现发现网络问题,确定网络连接数量。
|
||||
|
||||
@ -81,7 +71,7 @@ eth0 1 ff02::1
|
||||
eth0 1 ff01::1
|
||||
```
|
||||
|
||||
您也可以使用下面的命令检查特定的端口。
|
||||
你也可以使用下面的命令检查特定的端口。
|
||||
|
||||
```
|
||||
# # netstat -tplugn | grep :22
|
||||
@ -92,7 +82,7 @@ tcp6 0 0 :::22 :::* LISTEN
|
||||
|
||||
### 如何使用 Linux 命令 ss 检查系统中的开放端口列表?
|
||||
|
||||
ss 被用于转储套接字统计信息。它也可以类似 netstat 使用。相比其他工具它可以展示更多的TCP状态信息。
|
||||
`ss` 被用于转储套接字统计信息。它也可以显示类似 `netstat` 的信息。相比其他工具它可以展示更多的 TCP 状态信息。
|
||||
|
||||
```
|
||||
# ss -lntu
|
||||
@ -121,7 +111,7 @@ tcp LISTEN 0 100 :::25
|
||||
tcp LISTEN 0 128 :::22 :::*
|
||||
```
|
||||
|
||||
您也可以使用下面的命令检查特定的端口。
|
||||
你也可以使用下面的命令检查特定的端口。
|
||||
|
||||
```
|
||||
# # ss -lntu | grep ':25'
|
||||
@ -132,12 +122,11 @@ tcp LISTEN 0 100 :::25 :::*
|
||||
|
||||
### 如何使用 Linux 命令 nmap 检查系统中的开放端口列表?
|
||||
|
||||
|
||||
Nmap (“Network Mapper”) 是一个网络探索与安全审计的开源工具。它旨在快速扫描大型网络,当然它也可以工作在独立主机上。
|
||||
|
||||
Nmap使用裸 IP 数据包以一种新颖的方式来确定网络上有哪些主机可用,这些主机提供什么服务(应用程序名称和版本),它们运行什么操作系统(操作系统版本),使用什么类型的数据包过滤器/防火墙,以及许多其他特征。
|
||||
Nmap 使用裸 IP 数据包以一种新颖的方式来确定网络上有哪些主机可用,这些主机提供什么服务(应用程序名称和版本),它们运行什么操作系统(版本),使用什么类型的数据包过滤器/防火墙,以及许多其他特征。
|
||||
|
||||
虽然 Nmap 通常用于安全审计,但许多系统和网络管理员发现它对于日常工作也非常有用,例如网络清点、管理服务升级计划以及监控主机或服务正常运行时间。
|
||||
虽然 Nmap 通常用于安全审计,但许多系统和网络管理员发现它对于日常工作也非常有用,例如网络资产清点、管理服务升级计划以及监控主机或服务正常运行时间。
|
||||
|
||||
```
|
||||
# nmap -sTU -O localhost
|
||||
@ -166,9 +155,7 @@ OS detection performed. Please report any incorrect results at http://nmap.org/s
|
||||
Nmap done: 1 IP address (1 host up) scanned in 1.93 seconds
|
||||
```
|
||||
|
||||
|
||||
您也可以使用下面的命令检查特定的端口。
|
||||
|
||||
你也可以使用下面的命令检查特定的端口。
|
||||
|
||||
```
|
||||
# nmap -sTU -O localhost | grep 123
|
||||
@ -176,10 +163,9 @@ Nmap done: 1 IP address (1 host up) scanned in 1.93 seconds
|
||||
123/udp open ntp
|
||||
```
|
||||
|
||||
|
||||
### 如何使用 Linux 命令 lsof 检查系统中的开放端口列表?
|
||||
|
||||
它向您显示系统上打开的文件列表以及打开它们的进程。还会向您显示与文件相关的其他信息。
|
||||
它向你显示系统上打开的文件列表以及打开它们的进程。还会向你显示与文件相关的其他信息。
|
||||
|
||||
```
|
||||
# lsof -i
|
||||
@ -214,8 +200,7 @@ httpd 13374 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
|
||||
httpd 13375 apache 3u IPv4 20337 0t0 TCP *:http (LISTEN)
|
||||
```
|
||||
|
||||
您也可以使用下面的命令检查特定的端口。
|
||||
|
||||
你也可以使用下面的命令检查特定的端口。
|
||||
|
||||
```
|
||||
# lsof -i:80
|
||||
@ -236,11 +221,11 @@ via: https://www.2daygeek.com/linux-scan-check-open-ports-using-netstat-ss-nmap/
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[heguangzhi](https://github.com/heguangzhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/how-to-check-whether-a-port-is-open-on-the-remote-linux-system-server/
|
||||
[1]: https://linux.cn/article-10675-1.html
|
||||
[2]: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/
|
443
published/20190413 The Fargate Illusion.md
Normal file
443
published/20190413 The Fargate Illusion.md
Normal file
@ -0,0 +1,443 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-10740-1.html)
|
||||
[#]: subject: (The Fargate Illusion)
|
||||
[#]: via: (https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html)
|
||||
[#]: author: (Lee Briggs https://leebriggs.co.uk/)
|
||||
|
||||
破除对 AWS Fargate 的幻觉
|
||||
======
|
||||
|
||||
我在 $work 建立了一个基于 Kubernetes 的平台已经快一年了,而且有点像 Kubernetes 的布道者了。真的,我认为这项技术太棒了。然而我并没有对它的运营和维护的困难程度抱过什么幻想。今年早些时候我读了[这样][1]的一篇文章,并对其中某些观点深以为然。如果我在一家规模较小、只有 10 到 15 个工程师的公司,有人建议管理和维护一批 Kubernetes 集群,那我会感到害怕,因为它的运维开销太高了!
|
||||
|
||||
尽管我现在对 Kubernetes 的一切都很感兴趣,但我仍然对“<ruby>无服务器<rt>Serverless</rt></ruby>”计算会消灭运维工程师的说法抱有好奇。这种好奇心主要来源于我希望在未来仍然能有一份有报酬的工作 —— 如果我们前景光明的未来不需要运维工程师,那我得明白到底是怎么回事。我已经在 Lambda 和 Google Cloud Functions 上做了一些实验,结果让我印象十分深刻,但我仍然坚信无服务器解决方案只是解决了一部分问题。
|
||||
|
||||
我关注 [AWS Fargate][2] 已经有一段时间了,这就是 $work 的开发人员所推崇为“无服务器计算”的东西 —— 主要是因为 Fargate,用它你就可以无需管理底层节点而运行你的 Docker 容器。我想看看它到底意味着什么,所以我开始尝试从头开始在 Fargate 上运行一个应用,看看是否可以成功。这里我对成功的定义是一个与“生产级”应用程序相近的东西,我想应该包含以下内容:
|
||||
|
||||
* 一个在 Fargate 上运行的容器
|
||||
* 配置信息以环境变量的形式下推
|
||||
* “秘密信息” 不能是明文的
|
||||
* 位于负载均衡器之后
|
||||
* 有效的 SSL 证书的 TLS 通道
|
||||
|
||||
我以“基础设施即代码”的角度来开始整个任务,不遵循默认的 AWS 控制台向导,而是使用 terraform 来定义基础架构。这很可能让整个事情变得复杂,但我想确保任何部署都是可重现的,任何想照着做的人都能看到我是怎么得出这些结论的。
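对于不熟悉 terraform 的读者,它的典型工作流大致如下(这是通用命令,并非原文内容):

```
terraform init    # 下载 provider 和模块
terraform plan    # 预览将要创建、修改或删除的资源
terraform apply   # 确认后真正执行变更
```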
|
||||
|
||||
上述所有标准通常都可以通过基于 Kubernetes 的平台使用一些外部的附加组件和插件来实现,所以我确实是以一种比较的心态来处理整个任务的,因为我要将它与我的常用工作流程进行比较。我的主要目标是看看 Fargate 有多容易,特别是与 Kubernetes 相比时。结果让我感到非常惊讶。
|
||||
|
||||
### AWS 是有开销的
|
||||
|
||||
我有一个干净的 AWS 账户,并决定从零到部署一个 webapp。与 AWS 中的其它基础设施一样,我必须首先使基本的基础设施正常工作起来,因此我需要先定义一个 VPC。
|
||||
|
||||
遵循最佳实践,我将这个 VPC 划分为跨可用区(AZ)的子网,每个可用区各有一个公共子网和一个私有子网。这时我想到,只要这种设置基础设施的需求存在,我就能找到一份这种工作。AWS 是“免”运维的这一概念一直让我感到愤怒。开发者社区中的许多人理所当然地认为,设置和定义一个设计良好的 AWS 账户和基础设施不需要付出多少工作和努力。而这种想当然甚至在开始谈论多帐户架构*之前*就有了 —— 现在我仍然使用单一帐户,就已经必须定义好基础设施和传统的网络设备。
|
||||
|
||||
这里也值得记住,我已经做了很多次,所以我*很清楚*该做什么。我可以在我的帐户中使用默认的 VPC 以及预先提供的子网,我觉得很多刚开始的人也可以使用它。这大概花了我半个小时才运行起来,但我不禁想到,即使我想运行 lambda 函数,我仍然需要某种连接和网络。定义 NAT 网关和在 VPC 中路由根本不会让你觉得很“Serverless”,但要往下进行这就是必须要做的。
|
||||
|
||||
### 运行简单的容器
|
||||
|
||||
在我启动运行了基本的基础设施之后,现在我想让我的 Docker 容器运行起来。我开始翻阅 Fargate 文档并浏览 [入门][3] 文档,这些就马上就展现在了我面前:
|
||||
|
||||
![][4]
|
||||
|
||||
等等,只是让我的容器运行就至少要有**三个**步骤?这完全不像我所想的,不过还是让我们开始吧。
|
||||
|
||||
#### 任务定义
|
||||
|
||||
“<ruby>任务定义<rt>Task Definition</rt></ruby>”用来定义要运行的实际容器。我在这里遇到的问题是,任务定义这件事非常复杂。这里有很多选项都很简单,比如指定 Docker 镜像和内存限制,但我还必须定义一个网络模型以及我并不熟悉的其它各种选项。真需要这样吗?如果我完全没有 AWS 方面的知识就进入到这个过程里,那么在这个阶段我会感觉非常的不知所措。可以在 AWS 页面上找到这些 [参数][5] 的完整列表,这个列表很长。我知道我的容器需要一些环境变量,它需要暴露一个端口。所以我首先在一个神奇的 [terraform 模块][6] 的帮助下定义了这一点,这真的让这件事更容易了。如果没有这个模块,我就得手写 JSON 来定义我的容器定义。
|
||||
|
||||
首先我定义了一些环境变量:
|
||||
|
||||
```
|
||||
container_environment_variables = [
|
||||
{
|
||||
name = "USER"
|
||||
value = "${var.user}"
|
||||
},
|
||||
{
|
||||
name = "PASSWORD"
|
||||
value = "${var.password}"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
然后我使用上面提及的模块组成了任务定义:
|
||||
|
||||
```
|
||||
module "container_definition_app" {
|
||||
source = "cloudposse/ecs-container-definition/aws"
|
||||
version = "v0.7.0"
|
||||
|
||||
container_name = "${var.name}"
|
||||
container_image = "${var.image}"
|
||||
|
||||
container_cpu = "${var.ecs_task_cpu}"
|
||||
container_memory = "${var.ecs_task_memory}"
|
||||
container_memory_reservation = "${var.container_memory_reservation}"
|
||||
|
||||
port_mappings = [
|
||||
{
|
||||
containerPort = "${var.app_port}"
|
||||
hostPort = "${var.app_port}"
|
||||
protocol = "tcp"
|
||||
},
|
||||
]
|
||||
|
||||
environment = "${local.container_environment_variables}"
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
在这一点上我非常困惑,我需要在这里定义很多配置才能运行,而这时什么都没有开始呢,但这是必要的 —— 运行 Docker 容器肯定需要了解一些容器配置的知识。我 [之前写过][7] 关于 Kubernetes 和配置管理的问题的文章,在这里似乎遇到了同样的问题。
|
||||
|
||||
接下来,我用上面的模块定义了任务定义(幸好它帮我抽象掉了所需的 JSON —— 如果我不得不手写 JSON,我可能已经放弃了)。
|
||||
|
||||
当我定义模块参数时,我突然意识到我漏掉了一些东西。我需要一个 IAM 角色!好吧,让我来定义:
|
||||
|
||||
```
|
||||
resource "aws_iam_role" "ecs_task_execution" {
|
||||
name = "${var.name}-ecs_task_execution"
|
||||
|
||||
assume_role_policy = <<EOF
|
||||
{
|
||||
"Version": "2008-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Action": "sts:AssumeRole",
|
||||
"Principal": {
|
||||
"Service": "ecs-tasks.amazonaws.com"
|
||||
},
|
||||
"Effect": "Allow"
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
}
|
||||
|
||||
resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
|
||||
count = "${length(var.policies_arn)}"
|
||||
|
||||
role = "${aws_iam_role.ecs_task_execution.id}"
|
||||
policy_arn = "${element(var.policies_arn, count.index)}"
|
||||
}
|
||||
```
|
||||
|
||||
这样做是有意义的,我需要在 Kubernetes 中定义一个 RBAC 策略,所以在这里我还未完全错失或获得任何东西。这时我开始觉得从 Kubernetes 的角度来看,这种感觉非常熟悉。
|
||||
|
||||
```
|
||||
resource "aws_ecs_task_definition" "app" {
|
||||
family = "${var.name}"
|
||||
network_mode = "awsvpc"
|
||||
requires_compatibilities = ["FARGATE"]
|
||||
cpu = "${var.ecs_task_cpu}"
|
||||
memory = "${var.ecs_task_memory}"
|
||||
execution_role_arn = "${aws_iam_role.ecs_task_execution.arn}"
|
||||
task_role_arn = "${aws_iam_role.ecs_task_execution.arn}"
|
||||
|
||||
container_definitions = "${module.container_definition_app.json}"
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
到现在为止,我已经写了很多行代码来让它运行起来,也阅读了很多 ECS 文档,但我所做的还只是定义了一个任务定义,还没有让这个东西真正跑起来。在这一点上,我真的很困惑,它与基于 Kubernetes 的平台相比到底增值了什么,但我还是继续前行。
|
||||
|
||||
#### 服务
|
||||
|
||||
服务,一定程度上定义了容器如何暴露给外部,以及它应该拥有的副本数量。我的第一个想法是“啊!这就像一个 Kubernetes 服务!”于是我开始着手映射端口等。这是我写出的第一版 terraform 代码:
|
||||
|
||||
```
|
||||
resource "aws_ecs_service" "app" {
|
||||
name = "${var.name}"
|
||||
cluster = "${module.ecs.this_ecs_cluster_id}"
|
||||
task_definition = "${data.aws_ecs_task_definition.app.family}:${max(aws_ecs_task_definition.app.revision, data.aws_ecs_task_definition.app.revision)}"
|
||||
desired_count = "${var.ecs_service_desired_count}"
|
||||
launch_type = "FARGATE"
|
||||
deployment_maximum_percent = "${var.ecs_service_deployment_maximum_percent}"
|
||||
deployment_minimum_healthy_percent = "${var.ecs_service_deployment_minimum_healthy_percent}"
|
||||
|
||||
network_configuration {
|
||||
subnets = ["${values(local.private_subnets)}"]
|
||||
security_groups = ["${module.app.this_security_group_id}"]
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
当我必须定义允许访问所需端口的安全组时,我再次感到沮丧,当我这样做了并将其插入到网络配置中后,我就像被扇了一巴掌。
|
||||
|
||||
我还需要定义自己的负载均衡器?
|
||||
|
||||
什么?
|
||||
|
||||
不会吧?
|
||||
|
||||
##### 负载均衡器从未远离
|
||||
|
||||
老实说,我很满意,我甚至不确定为什么。我已经习惯了 Kubernetes 的服务和 Ingress 对象,我一心认为用 Kubernetes 将我的应用程序放到网上是多么容易。当然,我们在 $work 花了几个月的时间建立一个平台,以便更轻松。我是 [external-dns][8] 和 [cert-manager][9] 的重度用户,它们可以自动填充 Ingress 对象上的 DNS 条目并自动化 TLS 证书,我非常了解进行这些设置所需的工作,但老实说,我认为在 Fargate 上做这件事会更容易。我认识到 Fargate 并没有声称自己是“如何运行应用程序”这件事的全部和最终目的,它只是抽象出节点管理,但我一直被告知这比 Kubernetes *更加容易*。我真的很惊讶。定义负载均衡器(即使你不想使用 Ingress 和 Ingress Controller)也是向 Kubernetes 部署服务的重要组成部分,我不得不在这里再次做同样的事情。这一切都让人觉得如此熟悉。
|
||||
|
||||
我现在意识到我需要:
|
||||
|
||||
* 一个负载均衡器
|
||||
* 一个 TLS 证书
|
||||
* 一个 DNS 名字
|
||||
|
||||
所以我着手做了这些。我使用了一些流行的 terraform 模块,并做了这个:
|
||||
|
||||
```
|
||||
# Define a wildcard cert for my app
|
||||
module "acm" {
|
||||
source = "terraform-aws-modules/acm/aws"
|
||||
version = "v1.1.0"
|
||||
|
||||
create_certificate = true
|
||||
|
||||
domain_name = "${var.route53_zone_name}"
|
||||
zone_id = "${data.aws_route53_zone.this.id}"
|
||||
|
||||
subject_alternative_names = [
|
||||
"*.${var.route53_zone_name}",
|
||||
]
|
||||
|
||||
|
||||
tags = "${local.tags}"
|
||||
|
||||
}
|
||||
# Define my loadbalancer
|
||||
resource "aws_lb" "main" {
|
||||
name = "${var.name}"
|
||||
subnets = [ "${values(local.public_subnets)}" ]
|
||||
security_groups = ["${module.alb_https_sg.this_security_group_id}", "${module.alb_http_sg.this_security_group_id}"]
|
||||
}
|
||||
|
||||
resource "aws_lb_target_group" "main" {
|
||||
name = "${var.name}"
|
||||
port = "${var.app_port}"
|
||||
protocol = "HTTP"
|
||||
vpc_id = "${local.vpc_id}"
|
||||
target_type = "ip"
|
||||
depends_on = [ "aws_lb.main" ]
|
||||
}
|
||||
|
||||
# Redirect all traffic from the ALB to the target group
|
||||
resource "aws_lb_listener" "main" {
|
||||
load_balancer_arn = "${aws_lb.main.id}"
|
||||
port = "80"
|
||||
protocol = "HTTP"
|
||||
|
||||
default_action {
|
||||
target_group_arn = "${aws_lb_target_group.main.id}"
|
||||
type = "forward"
|
||||
}
|
||||
}
|
||||
|
||||
resource "aws_lb_listener" "main-tls" {
|
||||
load_balancer_arn = "${aws_lb.main.id}"
|
||||
port = "443"
|
||||
protocol = "HTTPS"
|
||||
certificate_arn = "${module.acm.this_acm_certificate_arn}"
|
||||
|
||||
default_action {
|
||||
target_group_arn = "${aws_lb_target_group.main.id}"
|
||||
type = "forward"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
我必须承认,我在这里搞砸了好几次。我不得不在 AWS 控制台中四处翻弄,以弄清楚我做错了什么。这当然不是一个“轻松”的过程,而且我之前已经做过很多次了。老实说,在这一点上,Kubernetes 看起来对我很有启发性,但我意识到这是因为我对它非常熟悉。幸运的是我能够使用托管的 Kubernetes 平台(预装了 external-dns 和 cert-manager),我真的很想知道我漏掉了 Fargate 什么增值的地方。它真的感觉不那么简单。
|
||||
|
||||
经过一番折腾,我现在有一个可以工作的 ECS 服务。包括服务在内的最终定义如下所示:
|
||||
|
||||
```
|
||||
data "aws_ecs_task_definition" "app" {
|
||||
task_definition = "${var.name}"
|
||||
depends_on = ["aws_ecs_task_definition.app"]
|
||||
}
|
||||
|
||||
resource "aws_ecs_service" "app" {
|
||||
name = "${var.name}"
|
||||
cluster = "${module.ecs.this_ecs_cluster_id}"
|
||||
task_definition = "${data.aws_ecs_task_definition.app.family}:${max(aws_ecs_task_definition.app.revision, data.aws_ecs_task_definition.app.revision)}"
|
||||
desired_count = "${var.ecs_service_desired_count}"
|
||||
launch_type = "FARGATE"
|
||||
deployment_maximum_percent = "${var.ecs_service_deployment_maximum_percent}"
|
||||
deployment_minimum_healthy_percent = "${var.ecs_service_deployment_minimum_healthy_percent}"
|
||||
|
||||
network_configuration {
|
||||
subnets = ["${values(local.private_subnets)}"]
|
||||
security_groups = ["${module.app_sg.this_security_group_id}"]
|
||||
}
|
||||
|
||||
load_balancer {
|
||||
target_group_arn = "${aws_lb_target_group.main.id}"
|
||||
container_name = "app"
|
||||
container_port = "${var.app_port}"
|
||||
}
|
||||
|
||||
depends_on = [
|
||||
"aws_lb_listener.main",
|
||||
]
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
我觉得我已经接近完成了,但这时我想起,最初的“入门”文档所需的 3 个步骤我只完成了 2 个,我仍然需要定义 ECS 集群。
|
||||
|
||||
#### 集群
|
||||
|
||||
感谢这个[定义良好的模块][10],定义用来运行所有这些东西的集群实际上非常简单。
|
||||
|
||||
```
|
||||
module "ecs" {
|
||||
source = "terraform-aws-modules/ecs/aws"
|
||||
version = "v1.1.0"
|
||||
|
||||
name = "${var.name}"
|
||||
}
|
||||
```
|
||||
|
||||
这里让我感到惊讶的是为什么我必须定义一个集群。作为一个相当熟悉 ECS 的人,你会觉得你需要一个集群,但我试图从一个必须经历这个过程的新人的角度来考虑这一点 —— 对我来说,Fargate 标榜自己 “Serverless”,而你仍需要定义集群,这似乎很令人惊讶。当然这是一个小细节,但它确实盘旋在我的脑海里。
|
||||
|
||||
### 告诉我你的 Secret
|
||||
|
||||
在这个阶段,我很高兴我成功地运行了一些东西。然而,我的原始的成功标准缺少一些东西。如果我们回到任务定义那里,你会记得我的应用程序有一个存放密码的环境变量:
|
||||
|
||||
```
|
||||
container_environment_variables = [
|
||||
{
|
||||
name = "USER"
|
||||
value = "${var.user}"
|
||||
},
|
||||
{
|
||||
name = "PASSWORD"
|
||||
value = "${var.password}"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
如果我在 AWS 控制台中查看我的任务定义,我的密码就在那里,明晃晃的明文。我希望不要这样,所以我开始尝试将其转化为其他东西,类似于 [Kubernetes 的 Secrets 管理][11]。
|
||||
|
||||
#### AWS SSM
|
||||
|
||||
Fargate / ECS 执行<ruby>secret 管理<rt>secret management</rt></ruby>部分的方式是使用 [AWS SSM][12](此服务的全名是 <ruby>AWS 系统管理器参数存储库<rt>AWS Systems Manager Parameter Store</rt></ruby>,但我不想使用这个名称,因为坦率地说这个名字太愚蠢了)。
|
||||
|
||||
AWS 文档很好地[涵盖了这个内容][13],因此我开始将其转换为 terraform。
|
||||
|
||||
##### 指定秘密信息
|
||||
|
||||
首先,你必须定义一个参数并为其命名。在 terraform 中,它看起来像这样:
|
||||
|
||||
```
|
||||
resource "aws_ssm_parameter" "app_password" {
|
||||
name = "${var.app_password_param_name}" # The name of the value in AWS SSM
|
||||
type = "SecureString"
|
||||
value = "${var.app_password}" # The actual value of the password, like correct-horse-battery-stable
|
||||
}
|
||||
```
|
||||
|
||||
显然,这里的关键部分是 “SecureString” 类型。这会使用默认的 AWS KMS 密钥来加密数据,这对我来说并不是很直观。这比 Kubernetes 的 Secret 管理具有巨大优势,默认情况下,这些 Secret 在 etcd 中是不加密的。
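要验证参数确实是加密存放的,可以用 AWS CLI 读取它(这里的参数名 `/app/password` 只是示例;对于 SecureString 类型,不加 `--with-decryption` 时返回的是密文):

```
# 读取并解密一个 SecureString 参数(参数名为示例值)
aws ssm get-parameter --name "/app/password" --with-decryption
```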
|
||||
|
||||
然后我为 ECS 指定了另一个本地值映射,并将其作为 Secret 参数传递:
|
||||
|
||||
```
|
||||
container_secrets = [
|
||||
{
|
||||
name = "PASSWORD"
|
||||
valueFrom = "${var.app_password_param_name}"
|
||||
},
|
||||
]
|
||||
|
||||
module "container_definition_app" {
|
||||
source = "cloudposse/ecs-container-definition/aws"
|
||||
version = "v0.7.0"
|
||||
|
||||
container_name = "${var.name}"
|
||||
container_image = "${var.image}"
|
||||
|
||||
container_cpu = "${var.ecs_task_cpu}"
|
||||
container_memory = "${var.ecs_task_memory}"
|
||||
container_memory_reservation = "${var.container_memory_reservation}"
|
||||
|
||||
port_mappings = [
|
||||
{
|
||||
containerPort = "${var.app_port}"
|
||||
hostPort = "${var.app_port}"
|
||||
protocol = "tcp"
|
||||
},
|
||||
]
|
||||
|
||||
environment = "${local.container_environment_variables}"
|
||||
  secrets = "${local.container_secrets}"
}
|
||||
```
|
||||
|
||||
##### 出了个问题
|
||||
|
||||
此刻,我重新部署了我的任务定义,并且非常困惑。为什么任务没有正确拉起?当新的任务定义(版本 8)可用时,我一直在控制台中看到正在运行的应用程序仍在使用先前的任务定义(版本 7)。解决这件事花费的时间比我预期的要长,但是在控制台的事件屏幕上,我注意到了 IAM 错误。我错过了一个步骤,容器无法从 AWS SSM 中读取 Secret 信息,因为它没有正确的 IAM 权限。这是我第一次真正对整个这件事情感到沮丧。从用户体验的角度来看,这里的反馈非常*糟糕*。如果我没有发觉的话,我会认为一切都很好,因为仍然有一个任务正在运行,我的应用程序仍然可以通过正确的 URL 访问 —— 只不过是旧的配置而已。
|
||||
|
||||
在 Kubernetes 里,我会清楚地看到 pod 定义中的错误。Fargate 可以确保我的应用不会停止,这绝对是太棒了,但作为一名运维,我需要一些关于发生了什么的实际反馈。这真的不够好。我真的希望 Fargate 团队的人能够读到这篇文章,改善这种体验。
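作为一种变通办法,这类部署失败的线索通常可以从服务的事件流中找到,例如用 AWS CLI(集群名和服务名均为示例值):

```
# 查看 ECS 服务最近的几条事件,部署卡住时这里往往有线索
aws ecs describe-services --cluster my-cluster --services my-app \
  --query 'services[0].events[:5]'
```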
|
||||
|
||||
### 就这样了
|
||||
|
||||
到这里就结束了,我的应用程序正在运行,也符合我的所有标准。我确实意识到我做了一些改进,其中包括:
|
||||
|
||||
* 定义一个 cloudwatch 日志组,这样我就可以正确地写日志了
|
||||
* 添加了一个 route53 托管区域,使整个事情从 DNS 角度更容易自动化
|
||||
* 修复并重新调整了 IAM 权限,这里太宽泛了
|
||||
|
||||
但老实说,现在我想反思一下这段经历。我写了一个关于我的经历的 [推特会话][14],然后花了其余时间思考我在这里的真实感受。
|
||||
|
||||
### 代价
|
||||
|
||||
经过一夜的反思,我意识到无论你是使用 Fargate 还是 Kubernetes,这个过程都大致相同。最让我感到惊讶的是,尽管我经常听说 Fargate “更容易”,但我真的没有看到任何超过 Kubernetes 平台的好处。现在,如果你正在构建 Kubernetes 集群,我绝对可以看到这里的价值 —— 管理节点和控制平面只是不必要的开销,问题是 —— 基于 Kubernetes 的平台的大多数消费者都*没有*这样做。如果你很幸运能够使用 GKE,你几乎不需要考虑集群的管理,你可以使用单个 `gcloud` 命令来运行集群。我经常使用 Digital Ocean 的 Kubernetes 托管服务,我可以肯定地说它就像操作 Fargate 集群一样简单,实际上在某种程度上它更容易。
|
||||
|
||||
必须定义一些基础设施来运行你的容器就是此时的代价。谷歌本周可能刚刚使用他们的 [Google Cloud Run][15] 产品改变了游戏规则,但他们在这一领域的领先优势远远领先于其他所有人。
|
||||
|
||||
从这整个经历中,我可以肯定的说:*大规模运行容器仍然很难。*它需要思考,需要领域知识,需要运维和开发人员之间的协作。它还需要一个基础来构建 —— 任何基于 AWS 的操作都需要事先定义和运行一些基础架构。我对一些公司似乎渴望的 “NoOps” 概念非常感兴趣。我想如果你正在运行一个无状态应用程序,你可以把它全部放在一个 lambda 函数和一个 API 网关中,这可能不错,但我们是否真的适合在任何一种企业环境中这样做?我真的不这么认为。
|
||||
|
||||
#### 公平比较
|
||||
|
||||
令我印象深刻的另一个现实是,技术 A 和技术 B 之间的比较通常不太公平,我经常在 AWS 上看到这一点。实际情况往往与 Jeff Barr 的博客文章所描绘的截然不同。如果你是一家足够小的公司,你可以使用 AWS 控制台在 AWS 中部署你的应用程序并接受所有默认值,这绝对更容易。但是,我不想使用默认值,因为默认值几乎是不适用于生产环境的。一旦你开始剥离掉云服务商服务的层面,你就会开始意识到最终你仍然是在运行软件 —— 它仍然需要设计良好、部署良好、运行良好。我相信 AWS 和 Kubernetes 以及所有其他云服务商的增值服务使得它更容易运行、设计和操作,但它绝对不是免费的。
|
||||
|
||||
#### Kubernetes 的争议
|
||||
|
||||
最后就是:如果你将 Kubernetes 纯粹视为一个容器编排工具,你可能会喜欢 Fargate。然而,随着我对 Kubernetes 越来越熟悉,我开始意识到它作为一种技术的重要性 —— 不仅因为它是一个伟大的容器编排工具,还因为它的设计模式 —— 它是声明性的、API 驱动的平台。在*整个* Fargate 体验期间,有一件小事让我注意到:如果我删除这里的某个东西,Fargate 不一定会为我重新创建它。自动缩放很不错,不需要管理服务器和操作系统的补丁及更新也很棒,但我觉得因为无法使用 Kubernetes 的自我修复和 API 驱动模型而失去了很多。当然,Kubernetes 有一个学习曲线,但从这里的体验来看,Fargate 也是如此。
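作为对照,下面这个小实验可以直观地看到 Kubernetes 的自我修复(假设已有一个带 `app=demo` 标签的 Deployment,名称均为示例):

```
kubectl get pods -l app=demo     # 记下当前的 Pod 名
kubectl delete pod <pod-name>    # 删除其中一个 Pod(替换为实际名称)
kubectl get pods -l app=demo     # Deployment 会自动补充一个新的 Pod
```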
|
||||
|
||||
### 总结
|
||||
|
||||
尽管我在这个过程中遭遇了困惑,但我确实很喜欢这种体验。我仍然相信 Fargate 是一项出色的技术,AWS 团队对 ECS/Fargate 所做的工作确实非常出色。然而,我的观点是,这绝对不比 Kubernetes “更容易”,只是……难点不同。
|
||||
|
||||
在生产环境中运行容器时出现的问题大致相同。如果你从这篇文章中有所收获,它应该是这样的:*不管你选择的哪种方式都有运维开销*。不要相信你选择一些东西你的世界就变得更轻松。我个人的意见是:如果你有一个运维团队,而你的公司要为多个应用程序团队部署容器 —— 选择一种技术并围绕它构建流程和工具以使其更容易。
|
||||
|
||||
人们说的一点肯定没错:用点技巧可以更容易地使用某种技术。在这个阶段,谈到 Fargate,下面的漫画总结了我的感受:
|
||||
|
||||
![][16]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html
|
||||
|
||||
作者:[Lee Briggs][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Bestony](https://github.com/Bestony)
|
||||
校对:[wxy](https://github.com/wxy), 临石(阿里云智能技术专家)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://leebriggs.co.uk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://matthias-endler.de/2019/maybe-you-dont-need-kubernetes/
|
||||
[2]: https://aws.amazon.com/fargate/
|
||||
[3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html
|
||||
[4]: https://i.imgur.com/YfMyXBdl.png
|
||||
[5]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
|
||||
[6]: https://github.com/cloudposse/terraform-aws-ecs-container-definition
|
||||
[7]: https://leebriggs.co.uk/blog/2018/05/08/kubernetes-config-mgmt.html
|
||||
[8]: https://github.com/kubernetes-incubator/external-dns
|
||||
[9]: https://github.com/jetstack/cert-manager
|
||||
[10]: https://github.com/terraform-aws-modules/terraform-aws-ecs
|
||||
[11]: https://kubernetes.io/docs/concepts/configuration/secret/
|
||||
[12]: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
|
||||
[13]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
|
||||
[14]: https://twitter.com/briggsl/status/1116870900719030272
|
||||
[15]: https://cloud.google.com/run/
|
||||
[16]: https://i.imgur.com/Bx7Q50Jl.jpg
|
@ -1,46 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Which programming languages should you learn?)
|
||||
[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
|
||||
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
|
||||
|
||||
Which programming languages should you learn?
|
||||
======
|
||||
Learning a new programming language is a great way to get ahead in your career. But which one?
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0)
|
||||
|
||||
If you want to get started or get ahead in your programming career, learning a new language is a smart idea. But the huge number of languages in active use invites the question: Which programming language is the best one to know? To answer that, let's start with a simplifying question: What sort of programming do you want to do?
|
||||
|
||||
If you want to do web programming on the client side, then the specialized languages HTML, CSS, and JavaScript—in one of its seemingly infinite dialects—are de rigueur.
|
||||
|
||||
If you want to do web programming on the server side, the options include all of the familiar general-purpose languages: C++, Golang, Java, C#, Node.js, Perl, Python, Ruby, and so on. As a matter of course, server-side programs interact with datastores, such as relational and other databases, which means query languages such as SQL may come into play.
|
||||
|
||||
If you're writing native apps for mobile devices, knowing the target platform is important. For Apple devices, Swift has supplanted Objective C as the language of choice. For Android devices, Java (with dedicated libraries and toolsets) remains the dominant language. There are special languages such as Xamarin, used with C#, that can generate platform-specific code for Apple, Android, and Windows devices.
|
||||
|
||||
What about general-purpose languages? There are various choices within the usual pigeonholes. Among the dynamic or scripting languages (e.g., Perl, Python, and Ruby), there are newer offerings such as Node.js. Java and C#, which are more alike than their fans like to admit, remain the dominant statically compiled languages targeted at a virtual machine (the JVM and CLR, respectively). Among languages that compile into native executables, C++ is still in the mix, along with later arrivals such as Golang and Rust. General-purpose functional languages abound (e.g., Clojure, Haskell, Erlang, F#, Lisp, and Scala), often with passionately devoted communities. It's worth noting that object-oriented languages such as Java and C# have added functional constructs (in particular, lambdas), and the dynamic languages have had functional constructs from the start.
|
||||
|
||||
Let me end with a pitch for C, which is a small, elegant, and extensible language not to be confused with C++. Modern operating systems are written mostly in C, with the rest in assembly language. The standard libraries on any platform are likewise mostly in C. For example, any program that issues the Hello, world! greeting does so through a call to the C library function named **write**.
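If you want to see this for yourself on Linux, a quick sketch (assuming strace is installed) traces the write system call behind that greeting:

```
# Trace only write() calls made by echo; the greeting shows up as the
# buffer argument of a write to file descriptor 1 (standard output).
strace -e trace=write echo 'Hello, world!'
```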
|
||||
|
||||
C serves as a portable assembly language, exposing details about the underlying system that other high-level languages deliberately hide. To understand C is thus to gain a better grasp of how programs contend for the shared system resources (processors, memory, and I/O devices) required for execution. C is at once high-level and close-to-the-metal, so unrivaled in performance—except, of course, for assembly language. Finally, C is the lingua franca among programming languages, and almost every general-purpose language supports C calls in one form or another.
|
||||
|
||||
For a modern introduction to C, consider my book [C Programming: Introducing Portable Assembler][1]. No matter how you go about it, learn C and you'll learn a lot more than just another programming language.
|
||||
|
||||
What programming languages do you think are important to know? Do you agree or disagree with these recommendations? Let us know in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
|
||||
|
||||
作者:[Marty Kalin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mkalindepauledu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
|
@ -1,130 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (zgj1024 )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why DevOps is the most important tech strategy today)
|
||||
[#]: via: (https://opensource.com/article/19/3/devops-most-important-tech-strategy)
|
||||
[#]: author: (Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht)
|
||||
|
||||
Why DevOps is the most important tech strategy today
|
||||
======
|
||||
Clearing up some of the confusion about DevOps.
|
||||
![CICD with gears][1]
|
||||
|
||||
Many people first learn about [DevOps][2] when they see one of its outcomes and ask how it happened. It's not necessary to understand why something is part of DevOps to implement it, but knowing that—and why a DevOps strategy is important—can mean the difference between being a leader or a follower in an industry.
|
||||
|
||||
Maybe you've heard some of the incredible outcomes attributed to DevOps, such as production environments that are so resilient they can handle thousands of releases per day while a "[Chaos Monkey][3]" is running around randomly unplugging things. This is impressive, but on its own, it's a weak business case, essentially burdened with [proving a negative][4]: The DevOps environment is resilient because a serious failure hasn't been observed… yet.
|
||||
|
||||
There is a lot of confusion about DevOps and many people are still trying to make sense of it. Here's an example from someone in my LinkedIn feed:
|
||||
|
||||
> Recently attended few #DevOps sessions where some speakers seemed to suggest #Agile is a subset of DevOps. Somehow, my understanding was just the opposite.
|
||||
>
|
||||
> Would like to hear your thoughts. What do you think is the relationship between Agile and DevOps?
|
||||
>
|
||||
> 1. DevOps is a subset of Agile
|
||||
> 2. Agile is a subset of DevOps
|
||||
> 3. DevOps is an extension of Agile, starts where Agile ends
|
||||
> 4. DevOps is the new version of Agile
|
||||
>
|
||||
|
||||
|
||||
Tech industry professionals have been weighing in on the LinkedIn post with a wide range of answers. How would you respond?
|
||||
|
||||
### DevOps' roots in lean and agile
|
||||
|
||||
DevOps makes a lot more sense if we start with the strategies of Henry Ford and the Toyota Production System's refinements of Ford's model. Within this history is the birthplace of lean manufacturing, which has been well studied. In [_Lean Thinking_][5], James P. Womack and Daniel T. Jones distill it into five principles:
|
||||
|
||||
1. Specify the value desired by the customer
|
||||
2. Identify the value stream for each product providing that value and challenge all of the wasted steps currently necessary to provide it
|
||||
3. Make the product flow continuously through the remaining value-added steps
|
||||
4. Introduce pull between all steps where continuous flow is possible
|
||||
5. Manage toward perfection so that the number of steps and the amount of time and information needed to serve the customer continually falls
|
||||
|
||||
|
||||
|
||||
Lean seeks to continuously remove waste and increase the flow of value to the customer. This is easily recognizable and understood through a core tenet of lean: single piece flow. We can do a number of activities to learn why moving single pieces at a time is magnitudes faster than batches of many pieces; the [Penny Game][6] and the [Airplane Game][7] are two of them. In the Penny Game, if a batch of 20 pennies takes two minutes to get to the customer, they get the whole batch after waiting two minutes. If you move one penny at a time, the customer gets the first penny in about five seconds and continues getting pennies until the 20th penny arrives approximately 25 seconds later.
|
||||
|
||||
This is a huge difference, but not everything in life is as simple and predictable as the penny in the Penny Game. This is where agile comes in. We certainly see lean principles on high-performing agile teams, but these teams need more than lean to do what they do.
|
||||
|
||||
To be able to handle the unpredictability and variance of typical software development tasks, agile methodology focuses on awareness, deliberation, decision, and action to adjust course in the face of a constantly changing reality. For example, agile frameworks (like scrum) increase awareness with ceremonies like the daily standup and the sprint review. If the scrum team becomes aware of a new reality, the framework allows and encourages them to adjust course if necessary.
|
||||
|
||||
For teams to make these types of decisions, they need to be self-organizing in a high-trust environment. High-performing agile teams working this way achieve a fast flow of value while continuously adjusting course, removing the waste of going in the wrong direction.
|
||||
|
||||
### Optimal batch size
|
||||
|
||||
To understand the power of DevOps in software development, it helps to understand the economics of batch size. Consider the following U-curve optimization illustration from Donald Reinertsen's _[Principles of Product Development Flow][8]:_
|
||||
|
||||
![U-curve optimization illustration of optimal batch size][9]
|
||||
|
||||
This can be explained with an analogy about grocery shopping. Suppose you need to buy some eggs and you live 30 minutes from the store. Buying one egg (far left on the illustration) at a time would mean a 30-minute trip each time. This is your _transaction cost_. The _holding cost_ might represent the eggs spoiling and taking up space in your refrigerator over time. The _total cost_ is the _transaction cost_ plus your _holding cost_. This U-curve explains why, for most people, buying a dozen eggs at a time is their _optimal batch size_. If you lived next door to the store, it'd cost you next to nothing to walk there, and you'd probably buy a smaller carton each time to save room in your refrigerator and enjoy fresher eggs.
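One way to make the U-curve concrete (a sketch with assumed symbols, not taken from Reinertsen): if each batch incurs a fixed transaction cost T and the average holding cost grows with batch size b as H·b/2, the per-item total cost is roughly C(b) = T/b + H·b/2, which is minimized at b* = sqrt(2T/H). Anything that cuts T, such as automation, shrinks b* and shifts the optimum to the left.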
|
||||
|
||||
This U-curve optimization illustration can shed some light on why productivity increases significantly in successful agile transformations. Consider the effect of agile transformation on decision making in an organization. In traditional hierarchical organizations, decision-making authority is centralized. This leads to larger decisions made less frequently by fewer people. An agile methodology will effectively reduce an organization's transaction cost for making decisions by decentralizing the decisions to where the awareness and information is the best known: across the high-trust, self-organizing agile teams.
|
||||
|
||||
The following animation shows how reducing transaction cost shifts the optimal batch size to the left. You can't overstate the value to an organization of making faster decisions more frequently.
|
||||
|
||||
![U-curve optimization illustration][10]
|
||||
|
||||
### Where does DevOps fit in?
|
||||
|
||||
Automation is one of the things DevOps is most known for. The previous illustration shows the value of automation in great detail. Through automation, we reduce our transaction costs to nearly zero, essentially getting our testing and deployments for free. This lets us take advantage of smaller and smaller batch sizes of work. Smaller batches of work are easier to understand, commit to, test, review, and know when they are done. These smaller batch sizes also contain less variance and risk, making them easier to deploy and, if something goes wrong, to troubleshoot and recover from. With automation combined with a solid agile practice, we can get our feature development very close to single piece flow, providing value to customers quickly and continuously.
|
||||
|
||||
More traditionally, DevOps is understood as a way to knock down the walls of confusion between the dev and ops teams. In this model, development teams develop new features, while operations teams keep the system stable and running smoothly. Friction occurs because new features from development introduce change into the system, increasing the risk of an outage, which the operations team doesn't feel responsible for—but has to deal with anyway. DevOps is not just trying to get people working together, it's more about trying to make more frequent changes safely in a complex environment.
|
||||
|
||||
We can look to [Ron Westrum][11] for research about achieving safety in complex organizations. In researching why some organizations are safer than others, he found that an organization's culture is predictive of its safety. He identified three types of culture: Pathological, Bureaucratic, and Generative. He found that the Pathological culture was predictive of less safety and the Generative culture was predictive of more safety (e.g., far fewer plane crashes or accidental hospital deaths in his main areas of research).
|
||||
|
||||
![Three types of culture identified by Ron Westrum][12]
|
||||
|
||||
Effective DevOps teams achieve a Generative culture with lean and agile practices, showing that speed and safety are complementary, or two sides of the same coin. By reducing the optimal batch sizes of decisions and features to become very small, DevOps achieves a faster flow of information and value while removing waste and reducing risk.
|
||||
|
||||
In line with Westrum's research, change can happen easily with safety and reliability improving at the same time. When an agile DevOps team is trusted to make its own decisions, we get the tools and techniques DevOps is most known for today: automation and continuous delivery. Through this automation, transaction costs are reduced further than ever, and a near single piece lean flow is achieved, creating the potential for thousands of decisions and releases per day, as we've seen happen in high-performing DevOps organizations.
|
||||
|
||||
### Flow, feedback, learning
|
||||
|
||||
DevOps doesn't stop there. We've mainly been talking about DevOps achieving a revolutionary flow, but lean and agile practices are further amplified through similar efforts that achieve faster feedback loops and faster learning. In the [_DevOps Handbook_][13], the authors explain in detail how, beyond its fast flow, DevOps achieves telemetry across its entire value stream for fast and continuous feedback. Further, leveraging the [kaizen][14] bursts of lean and the [retrospectives][15] of scrum, high-performing DevOps teams will continuously drive learning and continuous improvement deep into the foundations of their organizations, achieving a lean manufacturing revolution in the software product development industry.
|
||||
|
||||
### Start with a DevOps assessment
|
||||
|
||||
The first step in leveraging DevOps is, either after much study or with the help of a DevOps consultant and coach, to conduct an assessment across a suite of dimensions consistently found in high-performing DevOps teams. The assessment should identify weak or non-existent team norms that need improvement. Evaluate the assessment's results to find quick wins—focus areas with high chances for success that will produce high-impact improvement. Quick wins are important for gaining the momentum needed to tackle more challenging areas. The teams should generate ideas that can be tried quickly and start to move the needle on the DevOps transformation.
|
||||
|
||||
After some time, the team should reassess on the same dimensions to measure improvements and identify new high-impact focus areas, again with fresh ideas from the team. A good coach will consult, train, mentor, and support as needed until the team owns its own continuous improvement and achieves near consistency on all dimensions by continually reassessing, experimenting, and learning.
|
||||
|
||||
In the [second part][16] of this article, we'll look at results from a DevOps survey in the Drupal community and see where the quick wins are most likely to be found.
|
||||
|
||||
* * *
|
||||
|
||||
_Rob Bayliss and Kelly Albrecht will present [DevOps: Why, How, and What][17] and host a follow-up [Birds of a Feather discussion][18] at [DrupalCon 2019][19] in Seattle, April 8-12._
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
|
||||
|
||||
作者:[Kelly AlbrechtWilly-Peter Schaub][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
|
||||
[2]: https://opensource.com/resources/devops
|
||||
[3]: https://github.com/Netflix/chaosmonkey
|
||||
[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
|
||||
[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
|
||||
[6]: https://youtu.be/5t6GhcvKB8o?t=54
|
||||
[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
|
||||
[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
|
||||
[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif (U-curve optimization illustration of optimal batch size)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif (U-curve optimization illustration)
|
||||
[11]: https://en.wikipedia.org/wiki/Ron_Westrum
|
||||
[12]: https://opensource.com/sites/default/files/uploads/information_flow.png (Three types of culture identified by Ron Westrum)
|
||||
[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
|
||||
[14]: https://en.wikipedia.org/wiki/Kaizen
|
||||
[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
|
||||
[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
|
||||
[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
|
||||
[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
|
||||
[19]: https://events.drupal.org/seattle2019
|
@ -0,0 +1,76 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software)
|
||||
[#]: via: (https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Gov’t warns on VPN security bug in Cisco, Palo Alto, F5, Pulse software
|
||||
======
|
||||
VPN packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies
|
||||
![Getty Images][1]
|
||||
|
||||
The Department of Homeland Security has issued a warning that some [VPN][2] packages from Cisco, Palo Alto, F5 and Pulse may improperly secure tokens and cookies, allowing nefarious actors an opening to invade and take control over an end user’s system.
|
||||
|
||||
The DHS’s Cybersecurity and Infrastructure Security Agency (CISA) [warning][3] comes on the heels of a notice from Carnegie Mellon's CERT that multiple VPN applications store the authentication and/or session cookies insecurely in memory and/or log files.
|
||||
|
||||
**[Also see:[What to consider when deploying a next generation firewall][4]. Get regularly scheduled insights by [signing up for Network World newsletters][5]]**
|
||||
|
||||
“If an attacker has persistent access to a VPN user's endpoint or exfiltrates the cookie using other methods, they can replay the session and bypass other authentication methods,” [CERT wrote][6]. “An attacker would then have access to the same applications that the user does through their VPN session.”
|
||||
|
||||
According to the CERT warning, the following products and versions store the cookie insecurely in log files:
|
||||
|
||||
  * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS ([CVE-2019-1573][7])
|
||||
* Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
|
||||
|
||||
|
||||
|
||||
The following products and versions store the cookie insecurely in memory:
|
||||
|
||||
  * Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS.
|
||||
* Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2.
|
||||
* Cisco AnyConnect 4.7.x and prior.
|
||||
|
||||
|
||||
|
||||
CERT says that Palo Alto Networks GlobalProtect version 4.1.1 [patches][8] this vulnerability.
|
||||
|
||||
In the CERT warning F5 stated it has been aware of the insecure memory storage since 2013, and it has not yet been patched. More information can be found [here][9]. F5 also stated it has been aware of the insecure log storage since 2017 and fixed it in versions 12.1.3 and 13.1.0 and onwards. More information can be found [here][10].
|
||||
|
||||
**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][11] ]**
|
||||
|
||||
CERT said it is unaware of any patches at the time of publishing for Cisco AnyConnect and Pulse Secure Connect Secure.
|
||||
|
||||
CERT credited the [National Defense ISAC Remote Access Working Group][12] for reporting the vulnerability.
|
||||
|
||||
Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3388646/gov-t-warns-on-vpn-security-bug-in-cisco-palo-alto-f5-pulse-software.html#tk.rss_all
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2018/10/broken-chain_metal_link_breach_security-100777433-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html
|
||||
[3]: https://www.us-cert.gov/ncas/current-activity/2019/04/12/Vulnerability-Multiple-VPN-Applications
|
||||
[4]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
|
||||
[5]: https://www.networkworld.com/newsletters/signup.html
|
||||
[6]: https://www.kb.cert.org/vuls/id/192371/
|
||||
[7]: https://nvd.nist.gov/vuln/detail/CVE-2019-1573
|
||||
[8]: https://securityadvisories.paloaltonetworks.com/Home/Detail/146
|
||||
[9]: https://support.f5.com/csp/article/K14969
|
||||
[10]: https://support.f5.com/csp/article/K45432295
|
||||
[11]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
|
||||
[12]: https://ndisac.org/workinggroups/
|
||||
[13]: https://www.facebook.com/NetworkWorld/
|
||||
[14]: https://www.linkedin.com/company/network-world
|
75
sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md
Normal file
75
sources/talk/20190415 Nyansa-s Voyance expands to the IoT.md
Normal file
@ -0,0 +1,75 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Nyansa’s Voyance expands to the IoT)
|
||||
[#]: via: (https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all)
|
||||
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
|
||||
|
||||
Nyansa’s Voyance expands to the IoT
|
||||
======
|
||||
|
||||
![Brandon Mowinkel \(CC0\)][1]
|
||||
|
||||
Nyansa announced today that their flagship Voyance product can now apply its AI-based secret sauce to [IoT][2] devices, over and above the networking equipment and IT endpoints it could already manage.
|
||||
|
||||
Voyance – a network management product that leverages AI to automate the discovery of devices on the network and identify unusual behavior – has been around for two years now, and Nyansa says that it’s being used to observe a total of 25 million client devices operating across roughly 200 customer networks.
|
||||
|
||||
**More on IoT:**
|
||||
|
||||
* [Most powerful Internet of Things companies][3]
|
||||
* [10 Hot IoT startups to watch][4]
|
||||
* [The 6 ways to make money in IoT][5]
|
||||
  * [Blockchain, service-centric networking key to IoT success][7]
|
||||
* [Getting grounded in IoT networking and security][8]
|
||||
* [Building IoT-ready networks must become a priority][9]
|
||||
* [What is the Industrial IoT? [And why the stakes are so high]][10]
|
||||
|
||||
|
||||
|
||||
It’s a software-only product (available either via public SaaS or private cloud) that works by scanning a customer’s network and identifying every device attached to it, then establishing a behavioral baseline that will let it flag suspicious actions (e.g., sending a lot more data than other devices of its kind, connecting to unusual servers) and even perform automated root-cause analysis of network issues.
|
||||
|
||||
The process doesn’t happen instantaneously, particularly the creation of the baseline, but it’s designed to be minimally invasive to existing network management frameworks and easy to implement.
|
||||
|
||||
Nyansa said that the medical field has been one of the key targets for the newly IoT-enabled iteration of Voyance, and one early customer – Baptist Health, a Florida-based healthcare company that runs four hospitals and several other clinics and practices – said that Voyance IoT has offered a new level of visibility into the business’ complex array of connected diagnostic and treatment machines.
|
||||
|
||||
“In the past we didn’t have the ability to identify security concerns in this way, related to rogue devices on the enterprise network, and now we’re able to do that,” said CISO Thad Phillips.
|
||||
|
||||
While spiraling network complexity isn’t an issue confined to the IoT, there’s a strong argument that the number and variety of devices connected to an IoT-enabled network represent a new challenge to network management, particularly in light of the fact that many such devices aren’t particularly secure.
|
||||
|
||||
“They’re not manufactured by networking vendors or security vendors, so for a performance standpoint, they have a lot of quirks … and on the security side, that’s sort of a big problem there as well,” said Anand Srinivas, Nyansa’s co-founder and CTO.
|
||||
|
||||
Enabling the Voyance platform to identify and manage IoT devices along with traditional endpoints seems to be mostly a matter of adding new device signatures to the system, but Enterprise Management Associates research director Shamus McGillicuddy said that, while the system’s designed for automation and ease of use, AIOps products like Voyance do need to be managed to make sure that they’re functioning correctly.
|
||||
|
||||
“Anything based on machine learning is going to take a while to make sure it understands your environment and you might have to retrain it,” he said. “There’s always going to be more and more things connecting to IP networks, and it’s just going to be a question of building up a database.”
|
||||
|
||||
Voyance IoT is available now. Pricing starts at $16,000 per year, and goes up with the number of total devices managed. (Current Voyance users can manage up to 100 IoT devices at no additional cost.)
|
||||
|
||||
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3388301/nyansa-s-voyance-expands-to-the-iot.html#tk.rss_all
|
||||
|
||||
作者:[Jon Gold][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Jon-Gold/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/02/geometric_architecture_ceiling_structure_lines_connections_networks_perspective_by_brandon_mowinkel_cc0_via_unsplash_2400x1600-100788530-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
|
||||
[3]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html
|
||||
[4]: https://www.networkworld.com/article/3270961/internet-of-things/10-hot-iot-startups-to-watch.html
|
||||
[5]: https://www.networkworld.com/article/3279346/internet-of-things/the-6-ways-to-make-money-in-iot.html
|
||||
[6]: https://www.networkworld.com/article/3280225/internet-of-things/what-is-digital-twin-technology-and-why-it-matters.html
|
||||
[7]: https://www.networkworld.com/article/3276313/internet-of-things/blockchain-service-centric-networking-key-to-iot-success.html
|
||||
[8]: https://www.networkworld.com/article/3269736/internet-of-things/getting-grounded-in-iot-networking-and-security.html
|
||||
[9]: https://www.networkworld.com/article/3276304/internet-of-things/building-iot-ready-networks-must-become-a-priority.html
|
||||
[10]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html
|
||||
[11]: https://www.facebook.com/NetworkWorld/
|
||||
[12]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,59 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Two tools to help visualize and simplify your data-driven operations)
|
||||
[#]: via: (https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all)
|
||||
[#]: author: (Kent McNeil, Vice President of Software, Ciena Blue Planet )
|
||||
|
||||
Two tools to help visualize and simplify your data-driven operations
|
||||
======
|
||||
Amidst the rising complexity of networks, and influx of data, service providers are striving to keep operational complexity under control. Blue Planet’s Kent McNeil explains how they can turn this challenge into a huge opportunity, and in fact reduce operational effort by exploiting state-of-the-art graph database visualization and delta-based federation technologies.
|
||||
![danleap][1]
|
||||
|
||||
**Build the picture: Visualize your data**
|
||||
|
||||
The Internet of Things (IoT), 5G, smart technology, virtual reality – all these applications guarantee one thing for communications service providers (CSPs): more data. As networks become increasingly overwhelmed by mounds of data, CSPs are on the hunt for ways to make the most of the intelligence collected and are looking for ways to monetize their services, provide more customizable offerings, and enhance their network performance.
|
||||
|
||||
Customer analytics has gone some way towards fulfilling this need for greater insights, but with the rise in the volume and variety of consumer and IoT applications, the influx of data will increase at a phenomenal rate. The data includes not only customer-related data, but also device and network data, adding complexity to the picture. CSPs must harness this information to understand the relationships between any two things, to understand the connections within their data and to ultimately, leverage it for a better customer experience.
|
||||
|
||||
**See the upward graphical trend with graph databases**
|
||||
|
||||
Traditional relational databases certainly have their use, but graph databases offer a novel perspective. The visual representation between the component parts enables CSPs to understand and analyze the characteristics, as well as to act in a timely manner when confronted with any discrepancies.
|
||||
|
||||
Graph databases can help CSPs tackle this new challenge, ensuring the data is not just stored, but also processed and analyzed. It enables complex network questions to be asked and answered, ensuring that CSPs are not sidelined as “dumb pipes” in the IoT movement.
|
||||
|
||||
The use of graph databases has started to become more mainstream, as businesses see the benefits. IBM conducted a generic industry study, entitled “The State of Graph Databases Worldwide”, which found that people are moving to graph databases for speed, performance enhancement of applications, and streamlined operations. Ways in which businesses are using, or are planning to use, graph technology is highest for network and IT operations, followed by master data management. Performance is a key factor for CSPs, as is personalization, which enables support for more tailored service offerings.
|
||||
|
||||
Another advantage of graph databases for CSPs is that of unravelling the complexity of network inventory in a clear, visualized picture – this capability gives CSPs a competitive advantage as speed and performance become increasingly paramount. This need for speed and reliability will increase tenfold as IoT continues its impressive global ramp-up. Operational complexity also grows as the influx of generated data produced by IoT will further challenge the scalability of existing operational environments.
|
||||
|
||||
**Change the tide of data with delta-based federation**
|
||||
|
||||
New data, updated data, corrected data, deleted data – all needs to be managed, in line with regulations, and instantaneously. But this capability does not exist in the reality of many CSP’s Operational Support Systems (OSS). Many still battle with updating data and relying on full uploads of network inventory in order to perform key service fulfillment and assurance tasks. This method is time-intensive and risky due to potential conflicts and inaccuracies. With data being accessed from a variety of systems, CSPs must have a way to effectively hone in on only what is required.
|
||||
|
||||
Integrating network data into one simplified system limits the impact on the legacy OSS systems. This allows each OSS to continue its specific role, yet to feed data into a single interface, hence enabling teams to see the complete picture and gain efficiencies while launching new services or pinpointing and resolving service and network issues.
|
||||
|
||||
A delta-based federation model ensures that an accurate picture is presented, and only essential changes are conducted reliably and quickly. This simplified method filters the delta changes, reducing the time involved in updating, and minimizing the system load and risks. A validation process takes place to catch any errors or issues with the data, so CSPs can apply checks and retain control over modifications.
**Ride the wave**

Gartner predicts 25 billion connected things globally by 2021, and CSPs are already struggling with current levels of data, which Gartner estimates at 14.2 billion in 2019. Over the last decade, CSPs have faced significant rises in the volume of data consumed as demand for new services and higher-bandwidth applications has taken off. This data wave is set to continue, and CSPs have two important tools at their disposal to help them ride it. First, CSPs already have specialist legacy OSS in place, which they can leverage as a basis for integrating data and implementing optimized systems. Second, they can utilize new technologies in database inventory management: graph databases and delta-based federation. Effectively integrating network data, visualizing it, and creating a clear map of the interconnections enables CSPs to make critical decisions more quickly and accurately, resulting in more optimized and informed service operations.

[Watch this video to learn more about Blue Planet][2]

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3389756/two-tools-to-help-visualize-and-simplify-your-data-driven-operations.html#tk.rss_all

作者:[Kent McNeil, Vice President of Software, Ciena Blue Planet][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-165721901-100793858-large.jpg
[2]: https://www.blueplanet.com/resources/IT-plus-network-now-a-powerhouse-combination.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVideo&utm_medium=sponsoredpost4

sources/talk/20190416 What SDN is and where it-s going.md
@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What SDN is and where it’s going)
[#]: via: (https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

What SDN is and where it’s going
======

Software-defined networking (SDN) established a foothold in cloud computing, intent-based networking, and network security, with Cisco, VMware, Juniper and others leading the charge.

![seedkin / Getty Images][1]

Hardware reigned supreme in the networking world until the emergence of software-defined networking (SDN), a category of technologies that separate the network control plane from the forwarding plane to enable more automated provisioning and policy-based management of network resources.

SDN's origins can be traced to a research collaboration between Stanford University and the University of California at Berkeley that ultimately yielded the [OpenFlow][2] protocol in the 2008 timeframe.

**[Learn more about the [difference between SDN and NFV][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4]]**

OpenFlow is only one of the first SDN canons, but it's a key component because it started the networking software revolution. OpenFlow defined a programmable network protocol that could help manage and direct traffic among routers and switches, no matter which vendor made the underlying router or switch.

In the years since its inception, SDN has evolved into a reputable networking technology offered by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open Networking Foundation develops myriad open-source SDN technologies as well.

"Datacenter SDN no longer attracts breathless hype and fevered expectations, but the market is growing healthily, and its prospects remain robust," wrote Brad Casemore, IDC research vice president, data center networks, in a recent report, [_Worldwide Datacenter Software-Defined Networking Forecast, 2018–2022_][5]. "Datacenter modernization, driven by the relentless pursuit of digital transformation and characterized by the adoption of cloudlike infrastructure, will help to maintain growth, as will opportunities to extend datacenter SDN overlays and fabrics to multicloud application environments."

SDN will be increasingly perceived as a form of established, conventional networking, Casemore said.

IDC estimates that the worldwide data center SDN market will be worth more than $12 billion in 2022, recording a CAGR of 18.5% during the 2017–2022 period. The market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from 2016.

In 2017, the physical network represented the largest segment of the worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or about 42% of the overall total revenue. In 2022, however, the physical network is expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion attributable to network virtualization overlays/SDN controller software but more than the $3.18 billion for SDN applications.

“We're now at a point where SDN is better understood, where its use cases and value propositions are familiar to most datacenter network buyers and where a growing number of enterprises are finding that SDN offerings offer practical benefits,” Casemore said. “With SDN growth and the shift toward software-based network automation, the network is regaining lost ground and moving into better alignment with a wave of new application workloads that are driving meaningful business outcomes.”

### **What is SDN?**

The idea of programmability is the basis for the most precise definition of what SDN is: technology that separates the control plane management of network devices from the underlying data plane that forwards network traffic.

IDC broadens that definition of SDN by stating: “Datacenter SDN architectures feature software-defined overlays or controllers that are abstracted from the underlying network hardware, offering intent- or policy-based management of the network as a whole. This results in a datacenter network that is better aligned with the needs of application workloads through automated (thereby faster) provisioning, programmatic network management, pervasive application-oriented visibility, and, where needed, direct integration with cloud orchestration platforms.”

The driving ideas behind the development of SDN are myriad. For example, it promises to reduce the complexity of statically defined networks; make automating network functions much easier; and allow for simpler provisioning and management of networked resources, everywhere from the data center to the campus or wide area network.

Separating the control and data planes is the most common way to think of what SDN is, but it is much more than that, said Mike Capuano, chief marketing officer for [Pluribus][6].

“At its heart SDN has a centralized or distributed intelligent entity that has an entire view of the network, that can make routing and switching decisions based on that view,” Capuano said. “Typically, network routers and switches only know about their neighboring network gear. But with a properly configured SDN environment, that central entity can control everything, from easily changing policies to simplifying configuration and automation across the enterprise.”
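That global view is the crux of the architecture. As a rough, hypothetical sketch (toy topology and rule format, not any vendor's actual controller API), a controller that sees the whole network can compute a path centrally and hand each switch only its own forwarding rule:

```
# Toy illustration of centralized SDN control: the controller holds the
# whole topology; switches only receive the rules computed for them.
from collections import deque

topology = {  # hypothetical links between switches
    "s1": ["s2", "s3"], "s2": ["s1", "s4"],
    "s3": ["s1", "s4"], "s4": ["s2", "s3"],
}

def shortest_path(src: str, dst: str) -> list:
    """Breadth-first search over the controller's global topology view."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

# The controller turns the path into per-switch forwarding rules.
path = shortest_path("s1", "s4")
rules = {hop: {"match": "flow-42", "forward_to": nxt}
         for hop, nxt in zip(path, path[1:])}
print(rules)  # each switch gets only its own next-hop rule
```
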
### How does SDN support edge computing, IoT and remote access?

A variety of networking trends have played into the central idea of SDN. Distributing computing power to remote sites, moving data center functions to the [edge][7], adopting cloud computing, and supporting [Internet of Things][8] environments – each of these efforts can be made easier and more cost-efficient via a properly configured SDN environment.

Typically in an SDN environment, customers can see all of their devices and TCP flows, which means they can slice up the network from the data or management plane to support a variety of applications and configurations, Capuano said. So users can more easily segment an IoT application from the production world if they want, for example.

Some SDN controllers have the smarts to see that the network is getting congested and, in response, pump up bandwidth or processing to make sure remote and edge components don’t suffer latency.

SDN technologies also help in distributed locations that have few IT personnel on site, such as an enterprise branch office or service provider central office, said Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks.

“Naturally these places require remote and centralized delivery of connectivity, visibility and security. SDN solutions that centralize and abstract control and automate workflows across many places in the network, and their devices, improve operational reliability, speed and experience,” Bushong said.

### **How does SDN support intent-based networking?**

Intent-based networking ([IBN][9]) has a variety of components, but it is basically about giving network administrators the ability to define what they want the network to do, and having an automated network management platform create the desired state and enforce policies to ensure what the business wants happens.

“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the provisioning paradigm and dynamic control to regulate infrastructure state is necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving away from the minutia of individual device details and imperative and reactive commands.”

IDC says that intent-based networking “represents an evolution of SDN to achieve even greater degrees of operational simplicity, automated intelligence, and closed-loop functionality.”

For that reason, IBN represents a notable milestone on the journey toward autonomous infrastructure that includes a self-driving network, which will function much like the self-driving car, producing desired outcomes based on what network operators and their organizations wish to accomplish, Casemore stated.

“While the self-driving car has been designed to deliver passengers safely to their destination with minimal human intervention, the self-driving network, as part of autonomous datacenter infrastructure, eventually will achieve similar outcomes in areas such as network provisioning, management, and troubleshooting — delivering applications and data, dynamically creating and altering network paths, and providing security enforcement with minimal need for operator intervention,” Casemore stated.

While IBN technologies are relatively young, Gartner says by 2020, more than 1,000 large enterprises will use intent-based networking systems in production, up from less than 15 in the second quarter of 2018.

### **How does SDN help customers with security?**

SDN enables a variety of security benefits. A customer can split up a network connection between an end user and the data center and have different security settings for the various types of network traffic. A network could have one public-facing, low-security segment that does not touch any sensitive information. Another segment could have much more fine-grained remote access control with software-based [firewall][10] and encryption policies on it, which allow sensitive data to traverse over it.

“For example, if a customer has an IoT group it doesn’t feel is all that mature with regards to security, via the SDN controller you can segment that group off away from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll out security policies across the network from the data center to the edge and if you do all of this on top of white boxes, deployments can be 30 – 60 percent cheaper than traditional gear.”
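As a purely illustrative sketch of that segmentation idea (made-up policy names, no real controller API), the controller's job reduces to keeping per-segment policies and checking every flow against them:

```
# Toy model of SDN segmentation: each segment carries its own policy,
# so IoT traffic never shares rules with high-value corporate traffic.
policies = {
    "iot":       {"encryption": False, "firewall": "strict", "allowed": ["telemetry"]},
    "corporate": {"encryption": True,  "firewall": "fine-grained", "allowed": ["erp", "email"]},
    "public":    {"encryption": False, "firewall": "basic", "allowed": ["web"]},
}

def admit(segment: str, app: str) -> bool:
    """A central, whitelist-style policy check applied to every flow."""
    return app in policies[segment]["allowed"]

print(admit("iot", "erp"))        # False: the IoT segment is walled off
print(admit("corporate", "erp"))  # True
```
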
The ability to look at a set of workloads and see if they match a given security policy is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice president of product management for Cisco’s Nexus and ACI product lines.

"The ability to deploy a whitelist security model like we do with ACI [Application Centric Infrastructure] that lets only specific entities access explicit resources across your network fabric is another key security element SDN enables," Scheibe said.

A growing number of SDN platforms now support [microsegmentation][11], according to Casemore.

“In fact, micro-segmentation has developed as a notable use case for SDN. As SDN platforms are extended to support multicloud environments, they will be used to mitigate the inherent complexity of establishing and maintaining consistent network and security policies across hybrid IT landscapes,” Casemore said.

### **What is SDN’s role in cloud computing?**

SDN’s role in the move toward [private cloud][12] and [hybrid cloud][13] adoption seems a natural. In fact, big SDN players such as Cisco, Juniper and VMware have all made moves to tie together enterprise data center and cloud worlds.

Cisco's ACI Anywhere package would, for example, let policies configured through Cisco's SDN APIC (Application Policy Infrastructure Controller) use native APIs offered by a public-cloud provider to orchestrate changes within both the private and public cloud environments, Cisco said.

“As organizations look to scale their hybrid cloud environments, it will be critical to leverage solutions that help improve productivity and processes,” said [Bob Laliberte][14], a senior analyst with Enterprise Strategy Group, in a recent [Network World article][15]. “The ability to leverage the same solution, like Cisco’s ACI, in your own private-cloud environment as well as across multiple public clouds will enable organizations to successfully scale their cloud environments.”

Growth of public and private clouds and enterprises' embrace of distributed multicloud application environments will have an ongoing and significant impact on data center SDN, representing both a challenge and an opportunity for vendors, said IDC’s Casemore.

“Agility is a key attribute of digital transformation, and enterprises will adopt architectures, infrastructures, and technologies that provide for agile deployment, provisioning, and ongoing operational management. In a datacenter networking context, the imperative of digital transformation drives adoption of extensive network automation, including SDN,” Casemore said.

### Where does SD-WAN fit in?

The software-defined wide area network ([SD-WAN][16]) is a natural application of SDN that extends the technology over a WAN. While the SDN architecture is typically the underpinning in a data center or campus, SD-WAN takes it a step further.

At its most basic, SD-WAN lets companies aggregate a variety of network connections – including MPLS, 4G LTE and DSL – into a branch or network edge location and have a software management platform that can turn up new sites, prioritize traffic and set security policies.

SD-WAN's driving principle is to simplify the way big companies turn up new links to branch offices, better manage the way those links are utilized – for data, voice or video – and potentially save money in the process.

[SD-WAN][17] lets networks route traffic based on centrally managed roles and rules, no matter what the entry and exit points of the traffic are, and with full security. For example, if a user in a branch office is working in Office365, SD-WAN can route their traffic directly to the closest cloud data center for that app, improving network responsiveness for the user and lowering bandwidth costs for the business.
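As a toy sketch of that behavior (made-up link names and latency figures, not a real SD-WAN product's configuration), centrally managed policy routing boils down to classifying the application and picking the best permitted path:

```
# Toy SD-WAN policy router: classify the app, then pick a permitted link
# with the lowest current latency (all values here are made up).
links = {"mpls": 30.0, "lte": 45.0, "broadband": 25.0}  # latency in ms

policy = {
    "office365": ["broadband", "mpls"],  # SaaS may break out locally
    "voice":     ["mpls"],               # voice stays on the SLA-backed link
}

def pick_link(app: str) -> str:
    allowed = policy.get(app, list(links))
    return min(allowed, key=lambda link: links[link])

print(pick_link("office365"))  # broadband: nearest cloud on-ramp
print(pick_link("voice"))      # mpls
```
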
"SD-WAN has been a promised technology for years, but in 2019 it will be a major driver in how networks are built and re-built," Anand Oswal, senior vice president of engineering in Cisco’s Enterprise Networking Business, said in a Network World [article][18] earlier this year.

It's a profoundly hot market with tons of players including [Cisco][19], VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa.

IDC says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a more than 40% yearly clip between now and then.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3209131/what-sdn-is-and-where-its-going.html#tk.rss_all

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/what-is-sdn_2_where-is-it-going_arrows_fork-in-the-road-100793314-large.jpg
[2]: https://www.networkworld.com/article/2202144/data-center-faq-what-is-openflow-and-why-is-it-needed.html
[3]: https://www.networkworld.com/article/3206709/lan-wan/what-s-the-difference-between-sdn-and-nfv.html
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.idc.com/getdoc.jsp?containerId=US43862418
[6]: https://www.networkworld.com/article/3192318/pluribus-recharges-expands-software-defined-network-platform.html
[7]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[8]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[9]: https://www.networkworld.com/article/3202699/what-is-intent-based-networking.html
[10]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[11]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[12]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
[13]: https://www.networkworld.com/article/3233132/what-is-hybrid-cloud-computing.html
[14]: https://www.linkedin.com/in/boblaliberte90/
[15]: https://www.networkworld.com/article/3336075/cisco-serves-up-flexible-data-center-options.html
[16]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[17]: https://www.networkworld.com/article/3031279/sd-wan/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[18]: https://www.networkworld.com/article/3332027/cisco-touts-5-technologies-that-will-change-networking-in-2019.html
[19]: https://www.networkworld.com/article/3322937/what-will-be-hot-for-cisco-in-2019.html
@ -1,283 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Build a game framework with Python using the module Pygame)
[#]: via: (https://opensource.com/article/17/12/game-framework-python)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Build a game framework with Python using the module Pygame
======
The first part of this series explored Python by creating a simple dice game. Now it's time to make your own game from scratch.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python2-header.png?itok=tEvOVo4A)

In my [first article in this series][1], I explained how to use Python to create a simple, text-based dice game. This time, I'll demonstrate how to use the Python module Pygame to create a graphical game. It will take several articles to get a game that actually does anything, but by the end of the series, you will have a better understanding of how to find and learn new Python modules and how to build an application from the ground up.

Before you start, you must install [Pygame][2].

### Installing new Python modules

There are several ways to install Python modules, but the two most common are:

  * From your distribution's software repository
  * Using the Python package manager, pip

Both methods work well, and each has its own set of advantages. If you're developing on Linux or BSD, leveraging your distribution's software repository ensures automated and timely updates.

However, using Python's built-in package manager gives you control over when modules are updated. Also, it is not OS-specific, meaning you can use it even when you're not on your usual development machine. Another advantage of pip is that it allows local installs of modules, which is helpful if you don't have administrative rights to a computer you're using.

### Using pip

If you have both Python and Python3 installed on your system, the command you want to use is probably `pip3`, which differentiates it from Python 2.x's `pip` command. If you're unsure, try `pip3` first.

The `pip` command works a lot like most Linux package managers. You can search for Python modules with `search`, then install them with `install`. If you don't have permission to install software on the computer you're using, you can use the `--user` option to just install the module into your home directory.

```
$ pip3 search pygame
[...]
Pygame (1.9.3) - Python Game Development
sge-pygame (1.5) - A 2-D game engine for Python
pygame_camera (0.1.1) - A Camera lib for PyGame
pygame_cffi (0.2.1) - A cffi-based SDL wrapper that copies the pygame API.
[...]
$ pip3 install Pygame --user
```

Pygame is a Python module, which means that it's just a set of libraries that can be used in your Python programs. In other words, it's not a program that you launch, like [IDLE][3] or [Ninja-IDE][4] are.

### Getting started with Pygame

A video game needs a setting, a world in which it takes place. In Python, there are two different ways to create your setting:

  * Set a background color
  * Set a background image

Your background is only an image or a color. Your video game characters can't interact with things in the background, so don't put anything too important back there. It's just set dressing.

### Setting up your Pygame script

To start a new Pygame project, create a folder on your computer. All your game files go into this directory. It's vitally important that you keep all the files needed to run your game inside of your project folder.

![](https://opensource.com/sites/default/files/u128651/project.jpg)

A Python script starts with the file type, your name, and the license you want to use. Use an open source license so your friends can improve your game and share their changes with you:

```
#!/usr/bin/env python3
# by Seth Kenlon

## GPLv3
# This program is free software: you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
```

Then you tell Python what modules you want to use. Some of the modules are common Python libraries, and of course, you want to include the one you just installed, Pygame.

```
import pygame  # load pygame keywords
import sys     # let python use your file system
import os      # help python identify your OS
```

Since you'll be working a lot with this script file, it helps to make sections within the file so you know where to put stuff. You do this with block comments, which are comments that are visible only when looking at your source code. Create three blocks in your code.

```
'''
Objects
'''

# put Python classes and functions here

'''
Setup
'''

# put run-once code here

'''
Main Loop
'''

# put game loop here
```

Next, set the window size for your game. Keep in mind that not everyone has a big computer screen, so it's best to use a screen size that fits on most people's computers.

There is a way to toggle full-screen mode, the way many modern video games do, but since you're just starting out, keep it simple and just set one size.
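If you want to experiment with that later, a minimal sketch of full-screen mode looks like the line below; Pygame accepts a `pygame.FULLSCREEN` flag as the second argument to `set_mode`. Treat this as an aside rather than part of the tutorial's script:

```
# aside: full-screen is a flag on set_mode; windowed mode is the default
world = pygame.display.set_mode([960, 720], pygame.FULLSCREEN)
```
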
```
'''
Setup
'''
worldx = 960
worldy = 720
```

The Pygame engine requires some basic setup before you can use it in a script. You must set the frame rate, start its internal clock, and start (`init`) Pygame.

```
fps = 40  # frame rate
ani = 4   # animation cycles
clock = pygame.time.Clock()
pygame.init()
```

Now you can set your background.

### Setting the background

Before you continue, open a graphics application and create a background for your game world. Save it as `stage.png` inside a folder called `images` in your project directory.

There are several free graphics applications you can use.

  * [Krita][5] is a professional-level paint materials emulator that can be used to create beautiful images. If you're very interested in creating art for video games, you can even purchase a series of online [game art tutorials][6].
  * [Pinta][7] is a basic, easy-to-learn paint application.
  * [Inkscape][8] is a vector graphics application. Use it to draw with shapes, lines, splines, and Bézier curves.

Your graphic doesn't have to be complex, and you can always go back and change it later. Once you have it, add this code in the setup section of your file:

```
world = pygame.display.set_mode([worldx,worldy])
backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
backdropbox = world.get_rect()
```

If you're just going to fill the background of your game world with a color, all you need is:

```
world = pygame.display.set_mode([worldx,worldy])
```

You also must define a color to use. In your setup section, create some color definitions using values for red, green, and blue (RGB).

```
'''
Setup
'''

BLUE = (25,25,200)
BLACK = (23,23,23)
WHITE = (254,254,254)
```

At this point, you could theoretically start your game. The problem is, it would only last for a millisecond.

To prove this, save your file as `your-name_game.py` (replace `your-name` with your actual name). Then launch your game.

If you are using IDLE, run your game by selecting `Run Module` from the Run menu.

If you are using Ninja, click the `Run file` button in the left button bar.

![](https://opensource.com/sites/default/files/u128651/ninja_run_0.png)

You can also run a Python script straight from a Unix terminal or a Windows command prompt.

```
$ python3 ./your-name_game.py
```

If you're using Windows, use this command:

```
py.exe your-name_game.py
```

However you launch it, don't expect much, because your game only lasts a few milliseconds right now. You can fix that in the next section.

### Looping

Unless told otherwise, a Python script runs once and only once. Computers are very fast these days, so your Python script runs in less than a second.

To force your game to stay open and active long enough for someone to see it (let alone play it), use a `while` loop. To make your game remain open, you can set a variable to some value, then tell a `while` loop to keep looping for as long as the variable remains unchanged.

This is often called a "main loop," and you can use the term `main` as your variable. Add this anywhere in your setup section:

```
main = True
```

During the main loop, use Pygame keywords to detect if keys on the keyboard have been pressed or released. Add this to your main loop section:

```
'''
Main loop
'''
while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit(); sys.exit()
            main = False

        if event.type == pygame.KEYDOWN:
            if event.key == ord('q'):
                pygame.quit()
                sys.exit()
                main = False
```

Also in your main loop, refresh your world's background.

If you are using an image for the background:

```
world.blit(backdrop, backdropbox)
```

If you are using a color for the background:

```
world.fill(BLUE)
```

Finally, tell Pygame to refresh everything on the screen and advance the game's internal clock.

```
pygame.display.flip()
clock.tick(fps)
```

Save your file, and run it again to see the most boring game ever created.

To quit the game, press `q` on your keyboard.
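For reference, here is one way the snippets in this article might be assembled into a single runnable script. It assumes the image-backdrop variant, with `images/stage.png` sitting next to the script, and it indents the refresh calls inside the main loop:

```
#!/usr/bin/env python3
# assembled from the snippets in this article

import pygame  # load pygame keywords
import sys     # let python use your file system
import os      # help python identify your OS

'''
Setup
'''
worldx = 960
worldy = 720
fps = 40  # frame rate
ani = 4   # animation cycles (used later in the series)
clock = pygame.time.Clock()
pygame.init()

world = pygame.display.set_mode([worldx, worldy])
backdrop = pygame.image.load(os.path.join('images', 'stage.png')).convert()
backdropbox = world.get_rect()
main = True

'''
Main Loop
'''
while main == True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        if event.type == pygame.KEYDOWN:
            if event.key == ord('q'):
                pygame.quit()
                sys.exit()

    world.blit(backdrop, backdropbox)  # refresh the background each frame
    pygame.display.flip()
    clock.tick(fps)
```
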
In the [next article][9] of this series, I'll show you how to add to your currently empty game world, so go ahead and start creating some graphics to use!

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/12/game-framework-python

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/17/10/python-101
[2]: http://www.pygame.org/wiki/about
[3]: https://en.wikipedia.org/wiki/IDLE
[4]: http://ninja-ide.org/
[5]: http://krita.org
[6]: https://gumroad.com/l/krita-game-art-tutorial-1
[7]: https://pinta-project.com/pintaproject/pinta/releases
[8]: http://inkscape.org
[9]: https://opensource.com/article/17/12/program-game-python-part-3-spawning-player
@ -1,12 +1,3 @@
[#]: collector: (lujun9972)
[#]: translator: (pityonline)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get Started with Snap Packages in Linux)
[#]: via: (https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages-linux)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)

Get Started with Snap Packages in Linux
======

@ -148,7 +139,7 @@ via: https://www.linux.com/learn/intro-to-linux/2018/5/get-started-snap-packages

作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pityonline](https://github.com/pityonline)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -1,4 +1,3 @@
DaivdMax2006 is translating
100 Best Ubuntu Apps
======
@ -1,292 +0,0 @@
translating by MjSeven

Getting started with Sensu monitoring
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)

Sensu is an open source infrastructure and application monitoring solution that monitors servers, services, and application health, and sends alerts and notifications with third-party integration. Written in Ruby, Sensu can use either [RabbitMQ][1] or [Redis][2] to handle messages. It uses Redis to store data.

If you want to monitor your cloud infrastructure in a simple and efficient manner, Sensu is a good option. It can be integrated with many of the modern DevOps stacks your organization may already be using, such as [Slack][3], [HipChat][4], or [IRC][5], and it can even send mobile/pager alerts with [PagerDuty][6].

Sensu's [modular architecture][7] means every component can be installed on the same server or on completely separate machines.

### Architecture

Sensu's main communication mechanism is the Transport. Every Sensu component must connect to the Transport in order to send messages to each other. Transport can use either RabbitMQ (recommended in production) or Redis.

Sensu Server processes event data and takes action. It registers clients and processes check results and monitoring events using filters, mutators, and handlers. The server publishes check definitions to the clients, and the Sensu API provides a RESTful API, which gives access to monitoring data and core functionality.

[Sensu Client][8] executes checks scheduled by the Sensu Server as well as local check definitions. Sensu uses a data store (Redis) to keep all the persistent data. Finally, [Uchiwa][9] is the web interface to communicate with the Sensu API.

![sensu_system.png][11]

### Installing Sensu

#### Prerequisites

  * One Linux installation to act as the server node (I used CentOS 7 for this article)
  * One or more Linux machines to monitor (clients)

#### Server side

Sensu requires Redis to be installed. To install Redis, enable the EPEL repository:

```
$ sudo yum install epel-release -y
```

Then install Redis:

```
$ sudo yum install redis -y
```

Modify `/etc/redis.conf` to disable protected mode, listen on every interface, and set a password:

```
$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
```

Enable and start the Redis service:

```
$ sudo systemctl enable redis
$ sudo systemctl start redis
```

Redis is now installed and ready to be used by Sensu.

Now let’s install Sensu.

First, configure the Sensu repository and install the packages:

```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF

$ sudo yum install sensu uchiwa -y
```

Let’s create the bare minimum configuration files for Sensu. This first one configures `sensu-api` to listen on localhost, port 4567:

```
$ sudo tee /etc/sensu/conf.d/api.json << EOF
{
  "api": {
    "host": "127.0.0.1",
    "port": 4567
  }
}
EOF
```

Next, point Sensu at Redis and set it as the transport mechanism:

```
$ sudo tee /etc/sensu/conf.d/redis.json << EOF
{
  "redis": {
    "host": "<IP of server>",
    "port": 6379,
    "password": "password123"
  }
}
EOF

$ sudo tee /etc/sensu/conf.d/transport.json << EOF
{
  "transport": {
    "name": "redis"
  }
}
EOF
```

In these two files, we configure Sensu to use Redis as the transport mechanism and the address where Redis will listen. Clients need to connect directly to the transport mechanism, so these two files will be required on each client machine as well.

```
$ sudo tee /etc/sensu/uchiwa.json << EOF
{
  "sensu": [
    {
      "name": "sensu",
      "host": "127.0.0.1",
      "port": 4567
    }
  ],
  "uchiwa": {
    "host": "0.0.0.0",
    "port": 3000
  }
}
EOF
```

In this file, we configure Uchiwa to listen on every interface (0.0.0.0) on port 3000. We also configure Uchiwa to use `sensu-api` (already configured).

For security reasons, change the owner of the configuration files you just created:

```
$ sudo chown -R sensu:sensu /etc/sensu
```

Enable and start the Sensu services:

```
$ sudo systemctl enable sensu-server sensu-api sensu-client
$ sudo systemctl start sensu-server sensu-api sensu-client
$ sudo systemctl enable uchiwa
$ sudo systemctl start uchiwa
```

Try accessing the Uchiwa website: http://<IP of server>:3000

For production environments, it’s recommended to run a cluster of RabbitMQ as the Transport instead of Redis (a Redis cluster can be used in production too), and to run more than one instance of Sensu Server and API for load balancing and high availability.

Sensu is now installed. Now let’s configure the clients.

#### Client side

To add a new client, you will need to enable the Sensu repository on the client machines by creating the file `/etc/yum.repos.d/sensu.repo`:

```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
```

With the repository enabled, install the Sensu package:

```
$ sudo yum install sensu -y
```

To configure `sensu-client`, create the same `redis.json` and `transport.json` you created on the server machine, as well as the `client.json` configuration file:

```
$ sudo tee /etc/sensu/conf.d/client.json << EOF
{
  "client": {
    "name": "rhel-client",
    "environment": "development",
    "subscriptions": [
      "frontend"
    ]
  }
}
EOF
```

In the name field, specify a name to identify this client (typically the hostname). The environment field can help you filter, and `subscriptions` defines which monitoring checks will be executed by the client.

Finally, enable and start the services and check in Uchiwa, as the new client will register automatically:

```
$ sudo systemctl enable sensu-client
$ sudo systemctl start sensu-client
```

### Sensu checks

Sensu checks have two components: a plugin and a definition.

Sensu is compatible with the [Nagios check plugin specification][12], so any check written for Nagios can be used without modification. Checks are executable files and are run by the Sensu client.

Check definitions let Sensu know how, where, and when to run the plugin.

#### Client side

Let’s install one check plugin on the client machine. Remember, this plugin will be executed on the clients.

Enable EPEL and install `nagios-plugins-http`:

```
$ sudo yum install -y epel-release
$ sudo yum install -y nagios-plugins-http
```

Now let’s explore the plugin by executing it manually. Try checking the status of a web server running on the client machine. It should fail, as we don’t have a web server running:

```
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
connect to address 127.0.0.1 and port 80: Connection refused
HTTP CRITICAL - Unable to open TCP socket
```

It failed, as expected. Check the return code of the execution:

```
$ echo $?
2
```

The Nagios check plugin specification defines four return codes for the plugin execution:

| **Plugin return code** | **State** |
|------------------------|-----------|
| 0 | OK |
| 1 | WARNING |
| 2 | CRITICAL |
| 3 | UNKNOWN |
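Because only the exit code and output line matter, writing your own plugin is straightforward. Here is a minimal, hypothetical custom check in Python that follows the same convention, flagging low disk space on the root filesystem:

```
#!/usr/bin/env python3
# hypothetical custom check: warn/critical on low root-filesystem space
import shutil
import sys

total, used, free = shutil.disk_usage("/")
pct_free = free * 100 / total

if pct_free < 5:
    print(f"DISK CRITICAL - {pct_free:.1f}% free")
    sys.exit(2)       # CRITICAL
elif pct_free < 15:
    print(f"DISK WARNING - {pct_free:.1f}% free")
    sys.exit(1)       # WARNING
print(f"DISK OK - {pct_free:.1f}% free")
sys.exit(0)           # OK
```
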
With this information, we can now create the check definition on the server.

#### Server side

On the server machine, create the file `/etc/sensu/conf.d/check_http.json`:

```
{
  "checks": {
    "check_http": {
      "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
      "interval": 10,
      "subscribers": [
        "frontend"
      ]
    }
  }
}
```

In the command field, use the command we tested before. `interval` tells Sensu how frequently, in seconds, this check should be executed. Finally, `subscribers` defines the clients where the check will be executed.

Restart both sensu-api and sensu-server and confirm that the new check is available in Uchiwa:

```
$ sudo systemctl restart sensu-api sensu-server
```

### What’s next?

Sensu is a powerful tool, and this article covers just a glimpse of what it can do. See the [documentation][13] to learn more, and visit the Sensu site to learn more about the [Sensu community][14].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution

作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mzamot
[1]:https://www.rabbitmq.com/
[2]:https://redis.io/topics/config
[3]:https://slack.com/
[4]:https://en.wikipedia.org/wiki/HipChat
[5]:http://www.irc.org/
[6]:https://www.pagerduty.com/
[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
[9]:https://uchiwa.io/#/
[10]:/file/406576
[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
[13]:https://docs.sensu.io/
[14]:https://sensu.io/community
@ -1,91 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Enjoy Netflix? You Should Thank FreeBSD
======

Netflix is one of the most popular streaming services in the world.

But you already know that. Don’t you?

What you probably did not know is that Netflix uses [FreeBSD][1] to deliver its content to you.

Yes, that’s right. Netflix relies on FreeBSD to build its in-house content delivery network (CDN).

A [CDN][2] is a group of servers located in various parts of the world. It is mainly used to deliver ‘heavy content’ like images and videos to the end user faster than a centralized server could.

Instead of opting for a commercial CDN service, Netflix has built its own in-house CDN called [Open Connect][3].

Open Connect utilizes [custom hardware][4], the Open Connect Appliance. You can see it in the image below. It can handle 40 Gb/s of data and has a storage capacity of 248 TB.

![Netflix’s Open Connect Appliance runs FreeBSD][5]

Netflix provides the Open Connect Appliance to qualifying Internet Service Providers (ISPs) for free. This way, substantial Netflix traffic gets localized and the ISPs deliver the Netflix content more efficiently.

This Open Connect Appliance runs on the FreeBSD operating system and [almost exclusively runs open source software][6].

### Open Connect uses FreeBSD “Head”

![][7]

You would expect Netflix to use a stable release of FreeBSD for such critical infrastructure, but Netflix tracks the [FreeBSD head/current version][8]. Netflix says that tracking “head” lets them “stay forward-looking and focused on innovation”.

Here are the benefits Netflix sees in tracking FreeBSD:

  * Quicker feature iteration
  * Quicker access to new FreeBSD features
  * Quicker bug fixes
  * Enables collaboration
  * Minimizes merge conflicts
  * Amortizes merge “cost”

> Running FreeBSD “head” lets us deliver large amounts of data to our users very efficiently, while maintaining a high velocity of feature development.
>
> Netflix

Remember, even [Google uses Debian][9] testing instead of Debian stable. Perhaps these enterprises prefer cutting-edge features more than anything else.

Like Google, Netflix also plans to upstream any code it can. This should help FreeBSD and other BSD distributions based on FreeBSD.

So what does Netflix achieve with FreeBSD? Here are some quick stats:

> Using FreeBSD and commodity parts, we achieve 90 Gb/s serving TLS-encrypted connections with ~55% CPU on a 16-core 2.6-GHz CPU.
>
> Netflix

If you want to know more about Netflix and FreeBSD, you can refer to [this presentation from FOSDEM][10]. You can also watch the video of the presentation [here][11].

These days big enterprises rely mostly on Linux for their server infrastructure, but Netflix has put its trust in BSD. This is a good thing for the BSD community because if an industry leader like Netflix throws its weight behind BSD, others could follow its lead. What do you think?

--------------------------------------------------------------------------------

via: https://itsfoss.com/netflix-freebsd-cdn/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.freebsd.org/
[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
[3]: https://openconnect.netflix.com/en/
[4]: https://openconnect.netflix.com/en/hardware/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
[6]: https://openconnect.netflix.com/en/software/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
[8]: https://www.bsdnow.tv/tutorials/stable-current
[9]: https://itsfoss.com/goobuntu-glinux-google/
[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (ustblixin)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,54 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to contribute to the Raspberry Pi community)
[#]: via: (https://opensource.com/article/19/3/contribute-raspberry-pi-community)
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva/users/kepler22b/users/ansilva)

How to contribute to the Raspberry Pi community
======
Find ways to get involved in the Raspberry Pi community in the 13th article in our getting-started series.
![][1]

Things are starting to wind down in this series, and as much fun as I've had writing it, mostly I hope it has helped someone out there start using a Raspberry Pi for education or entertainment. Maybe the articles convinced you to buy your first Raspberry Pi or perhaps helped you rediscover the device that was collecting dust in a drawer. If any of that is true, I'll consider the series a success.

If you now want to pay it forward and help spread the word on how versatile this little green digital board is, here are a few ways you can get connected to the Raspberry Pi community:

  * Contribute to improving the [official documentation][2]
  * Contribute code to [projects][3] the Raspberry Pi depends on
  * File [bugs][4] with Raspbian
  * File bugs with the different ARM architecture platform distributions
  * Help kids learn to code by taking a look at the Raspberry Pi Foundation's [Code Club][5] in the UK or [Code Club International][6] outside the UK
  * Help with [translation][7]
  * Volunteer at a [Raspberry Jam][8]

These are just a few of the ways you can contribute to the Raspberry Pi community. Last but not least, you can join me and [contribute articles][9] to your favorite open source website, [Opensource.com][10]. :-)

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/contribute-raspberry-pi-community

作者:[Anderson Silva (Red Hat)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ansilva/users/kepler22b/users/ansilva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry_pi_community.jpg?itok=dcKwb5et
[2]: https://www.raspberrypi.org/documentation/CONTRIBUTING.md
[3]: https://www.raspberrypi.org/github/
[4]: https://www.raspbian.org/RaspbianBugs
[5]: https://www.codeclub.org.uk/
[6]: https://www.codeclubworld.org/
[7]: https://www.raspberrypi.org/translate/
[8]: https://www.raspberrypi.org/jam/
[9]: https://opensource.com/participate
[10]: http://Opensource.com
@ -1,222 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Modrisco)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Vim: The basics)
|
||||
[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
|
||||
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
|
||||
|
||||
Getting started with Vim: The basics
|
||||
======
|
||||
|
||||
Learn to use Vim enough to get by at work or for a new project.
|
||||
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
I remember the very first time I encountered Vim. I was a university student, and the computers in the computer science department's lab were installed with Ubuntu Linux. While I had been exposed to different Linux variations (like RHEL) even before my college years (Red Hat sold its CDs at Best Buy!), this was the first time I needed to use the Linux operating system regularly, because my classes required me to do so. Once I started using Linux, like many others before and after me, I began to feel like a "real programmer."
|
||||
|
||||
![Real Programmers comic][2]
|
||||
|
||||
Real Programmers, by [xkcd][3]
|
||||
|
||||
Students could use a graphical text editor like [Kate][4], which was installed on the lab computers by default. For students who could use the shell but weren't used to the console-based editor, the popular choice was [Nano][5], which provided good interactive menus and an experience similar to Windows' graphical text editor.
|
||||
|
||||
I used Nano sometimes, but I heard awesome things about [Vi/Vim][6] and [Emacs][7] and really wanted to give them a try (mainly because they looked cool, and I was also curious to see what was so great about them). Using Vim for the first time scared me—I did not want to mess anything up! But once I got the hang of it, things became much easier and I could appreciate the editor's powerful capabilities. As for Emacs, well, I sort of gave up, but I'm happy I stuck with Vim.
|
||||
|
||||
In this article, I will walk through Vim (based on my personal experience) just enough so you can get by with it as an editor on a Linux system. This will neither make you an expert nor even scratch the surface of many of Vim's powerful capabilities. But the starting point always matters, and I want to make the beginning experience as easy as possible, and you can explore the rest on your own.
|
||||
|
||||
### Step 0: Open a console window
|
||||
|
||||
Before jumping into Vim, you need to do a little preparation. Open a console terminal from your Linux operating system. (Since Vim is also available on MacOS, Mac users can use these instructions, also.)
|
||||
|
||||
Once a terminal window is up, type the **ls** command to list the current directory. Then, type **mkdir Tutorial** to create a new directory called **Tutorial**. Go inside the directory by typing **cd Tutorial**.
|
||||
|
||||
![Create a folder][8]
|
||||
|
||||
That's it for preparation. Now it's time to move on to the fun part—starting to use Vim.
|
||||
|
||||
### Step 1: Create and close a Vim file without saving
|
||||
|
||||
Remember when I said I was scared to use Vim at first? Well, the scary part was thinking, "what if I change an existing file and mess things up?" After all, several computer science assignments required me to work on existing files by modifying them. I wanted to know: _How can I open and close a file without saving my changes?_
|
||||
|
||||
The good news is you can use the same command to create or open a file in Vim: **vim <FILE_NAME**>, where **< FILE_NAME>** represents the target file name you want to create or modify. Let's create a file named **HelloWorld.java** by typing **vim HelloWorld.java**.
|
||||
|
||||
Hello, Vim! Now, here is a very important concept in Vim, possibly the most important to remember: Vim has multiple modes. Here are three you need to know to do Vim basics:
|
||||
|
||||
Mode | Description
|
||||
---|---
|
||||
Normal | Default; for navigation and simple editing
|
||||
Insert | For explicitly inserting and modifying text
|
||||
Command Line | For operations like saving, exiting, etc.
|
||||
|
||||
Vim has other modes, like Visual, Select, and Ex-Mode, but Normal, Insert, and Command Line modes are good enough for us.
|
||||
|
||||
You are now in Normal mode. If you have text, you can move around with your arrow keys or other navigation keystrokes (which you will see later). To make sure you are in Normal mode, simply hit the **Esc** (Escape) key.
|
||||
|
||||
> **Tip:** **Esc** switches to Normal mode. Even though you are already in Normal mode, hit **Esc** just for practice's sake.
|
||||
|
||||
Now, this will be interesting. Press **:** (the colon key) followed by **q!** (i.e., **:q!** ). Your screen will look like this:
|
||||
|
||||
![Editing Vim][9]
|
||||
|
||||
Pressing the colon in Normal mode switches Vim to Command Line mode, and the **:q!** command quits the Vim editor without saving. In other words, you are abandoning all changes. You can also use **ZQ** ; choose whichever option is more convenient.
|
||||
|
||||
Once you hit **Enter** , you should no longer be in Vim. Repeat the exercise a few times, just to get the hang of it. Once you've done that, move on to the next section to learn how to make a change to this file.
|
||||
|
||||
### Step 2: Make and save modifications in Vim
|
||||
|
||||
Reopen the file by typing **vim HelloWorld.java** and pressing the **Enter** key. Insert mode is where you can make changes to a file. First, hit **Esc** to make sure you are in Normal mode, then press **i** to go into Insert mode. (Yes, that is the letter **i**.)
|
||||
|
||||
In the lower-left, you should see **\-- INSERT --**. This means you are in Insert mode.
|
||||
|
||||
![Vim insert mode][10]
|
||||
|
||||
Type some Java code. You can type anything you want, but here is an example for you to follow. Your screen will look like this:
|
||||
|
||||
|
||||
```
|
||||
public class HelloWorld {
|
||||
public static void main([String][11][] args) {
|
||||
}
|
||||
}
|
||||
```
|
||||
Very pretty! Notice how the text is highlighted in Java syntax highlight colors. Because you started the file in Java, Vim will detect the syntax color.

Save the file. Hit **Esc** to leave Insert mode and enter Command Line mode. Type **:** followed by **x!** (i.e., a colon followed by **x** and **!**), then hit **Enter** to save the file. You can also type **:wq** to perform the same operation.

Now you know how to enter text using Insert mode and save the file using **:x!** or **:wq**.
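
Put together, a minimal edit-and-save session looks like this (keys pressed inside Vim):

```
i      (enter Insert mode and type your text)
Esc    (return to Normal mode)
:wq    (write the file and quit; :x! does the same)
```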

### Step 3: Basic navigation in Vim

While you can always use your friendly Up, Down, Left, and Right arrow buttons to move around a file, that would be very difficult in a large file with almost countless lines. It's also helpful to be able to jump around within a line. Although Vim has a ton of awesome navigation features, the first one I want to show you is how to go to a specific line.

Press the **Esc** key to make sure you are in Normal mode, then type **:set number** and hit **Enter**.

Voila! You see line numbers on the left side of each line.

![Showing Line Numbers][12]

OK, you may say, "that's cool, but how do I jump to a line?" Again, make sure you are in Normal mode, then press **:<LINE_NUMBER>**, where **<LINE_NUMBER>** is the number of the line you want to go to, and press **Enter**. Try moving to line 2.

```
:2
```

Now move to line 3.

![Jump to line 3][13]

But imagine a scenario where you are dealing with a file that is 1,000 lines long and you want to go to the end of the file. How do you get there? Make sure you are in Normal mode, then type **:$** and press **Enter**.

You will be on the last line!

Now that you know how to jump among the lines, as a bonus, let's learn how to move to the end of a line. Make sure you are on a line with some text, like line 3, and type **$**.

![Go to the last character][14]

You're now at the last character on the line. In this example, the opening curly brace is highlighted to show where your cursor moved to, and the closing curly brace is highlighted because it is the opening curly brace's matching character.

That's it for basic navigation in Vim. Wait, don't exit the file, though. Let's move on to basic editing in Vim. Feel free to grab a cup of coffee or tea first.
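
Here are the navigation commands from this step in one place (all entered from Normal mode):

```
:set number    (show line numbers)
:2             (jump to line 2)
:$             (jump to the last line of the file)
$              (jump to the end of the current line)
```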

### Step 4: Basic editing in Vim

Now that you know how to navigate around a file by hopping onto the line you want, you can use that skill to do some basic editing in Vim. You could switch to Insert mode (remember how to do that, by hitting the **i** key?) and edit by using the keyboard to delete or insert characters, but Vim offers much quicker ways to edit files. For the commands below, first hit **Esc** to go back to Normal mode.

Move to line 3, where it shows **public static void main(String[] args) {**. Quickly hit the **d** key twice in succession. Yes, that is **dd**. If you did it successfully, you will see a screen like this, where line 3 is gone and every following line has moved up by one (i.e., line 4 became line 3).

![Deleting A Line][15]

That's the _delete_ command. Don't fear! Hit **u** and you will see the deleted line recovered. Whew. This is the _undo_ command.

![Undoing a change in Vim][16]

The next lesson is learning how to copy and paste text, but first, you need to learn how to highlight text in Vim. Press **v** and move your Left and Right arrow buttons to select and deselect text. This feature is also very useful when you are showing code to others and want to identify the code you want them to see.

![Highlighting text in Vim][17]

Move to line 4, where it says **System.out.println("Hello, Opensource");**. Highlight all of line 4. Done? OK, while line 4 is still highlighted, press **y**. This is called _yank_, and it copies the highlighted text into Vim's internal clipboard (a register). Next, create a new line underneath by entering **o**. Note that this will put you into Insert mode. Get out of Insert mode by pressing **Esc**, then hit **p**, which stands for _paste_. This pastes the copied text onto the new line.

![Pasting in Vim][18]

As an exercise, repeat these steps but also modify the text on your newly created lines. Also, make sure the lines are aligned well.

> **Hint:** You need to switch back and forth between Insert mode and Normal mode to accomplish this task.

Once you are finished, save the file with the **:x!** command. That's all for basic editing in Vim.
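
For reference, the editing commands covered in this step (all from Normal mode):

```
dd    (delete the current line)
u     (undo the last change)
v     (highlight text; extend the selection with the arrow keys)
y     (yank, i.e. copy, the highlighted text)
o     (open a new line below and switch to Insert mode)
p     (paste the yanked text)
```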

### Step 5: Basic searching in Vim

Imagine your team lead wants you to change a text string in a project. How can you do that quickly? You might want to search for the line using a certain keyword.

Vim's search functionality can be very useful. Press the **Esc** key to make sure you are in Normal mode, then type **/<SEARCH_KEYWORD>**, where **<SEARCH_KEYWORD>** is the text string you want to find, and hit **Enter**. Here we are searching for the keyword string "Hello."

![Searching in Vim][19]

However, a keyword can appear more than once, and this may not be the one you want. So, how do you navigate around to find the next match? You simply press the **n** key, which stands for _next_. Make sure that you aren't in Insert mode when you do this!
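
In short, a search session looks like this:

```
/Hello    (search forward for "Hello")
n         (jump to the next match)
```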

### Bonus step: Use split mode in Vim

That pretty much covers all the Vim basics. But, as a bonus, I want to show you a cool Vim feature called _split mode_.

Get out of _HelloWorld.java_ and create a new file. In a terminal window, type **vim GoodBye.java** and hit **Enter** to create a new file named _GoodBye.java_.

Enter any text you want; I decided to type "Goodbye." Save the file. (Remember you can use **:x!** or **:wq** in Command Line mode.)

In Command Line mode, type **:split HelloWorld.java**, and see what happens.

![Split mode in Vim][20]

Wow! Look at that! The **split** command created horizontally divided windows with _HelloWorld.java_ above and _GoodBye.java_ below. How can you switch between the windows? Hold **Ctrl** (the **Control** key on a Mac) and hit **ww** (i.e., **w** twice in succession).

As a final exercise, try to edit _GoodBye.java_ to match the screen below by copying and pasting from _HelloWorld.java_.

![Modify GoodBye.java file in Split Mode][21]

Save both files, and you are done!

> **TIP 1:** If you want to arrange the files vertically, use the command **:vsplit <FILE_NAME>** (instead of **:split <FILE_NAME>**), where **<FILE_NAME>** is the name of the file you want to open in Split mode.
>
> **TIP 2:** You can open more than two files by calling as many additional **split** or **vsplit** commands as you want. Try it and see how it looks.
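
In summary, the window commands from this bonus step are:

```
:split HelloWorld.java     (open a file in a horizontal split)
:vsplit HelloWorld.java    (open a file in a vertical split)
Ctrl+w w                   (cycle between the split windows)
```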

### Vim cheat sheet

In this article, you learned how to use Vim just enough to get by for work or a project. But this is just the beginning of your journey to unlock Vim's powerful capabilities. Be sure to check out other great tutorials and tips on Opensource.com.

To make things a little easier, I've summarized everything you've learned into [a handy cheat sheet][22].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/getting-started-vim

作者:[Bryant Son (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://opensource.com/sites/default/files/uploads/1_xkcdcartoon.jpg (Real Programmers comic)
[3]: https://xkcd.com/378/
[4]: https://kate-editor.org
[5]: https://www.nano-editor.org
[6]: https://www.vim.org
[7]: https://www.gnu.org/software/emacs
[8]: https://opensource.com/sites/default/files/uploads/2_createtestfolder.jpg (Create a folder)
[9]: https://opensource.com/sites/default/files/uploads/4_existingvim.jpg (Editing Vim)
[10]: https://opensource.com/sites/default/files/uploads/6_insertionmode.jpg (Vim insert mode)
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[12]: https://opensource.com/sites/default/files/uploads/10_setnumberresult_0.jpg (Showing Line Numbers)
[13]: https://opensource.com/sites/default/files/uploads/12_jumpintoline3.jpg (Jump to line 3)
[14]: https://opensource.com/sites/default/files/uploads/14_gotolastcharacter.jpg (Go to the last character)
[15]: https://opensource.com/sites/default/files/uploads/15_deletinglines.jpg (Deleting A Line)
[16]: https://opensource.com/sites/default/files/uploads/16_undoingtheline.jpg (Undoing a change in Vim)
[17]: https://opensource.com/sites/default/files/uploads/17_highlighting.jpg (Highlighting text in Vim)
[18]: https://opensource.com/sites/default/files/uploads/19_pasting.jpg (Pasting in Vim)
[19]: https://opensource.com/sites/default/files/uploads/22_searchmode.jpg (Searching in Vim)
[20]: https://opensource.com/sites/default/files/uploads/26_copytonewfiles.jpg (Split mode in Vim)
[21]: https://opensource.com/sites/default/files/uploads/27_exercise.jpg (Modify GoodBye.java file in Split Mode)
[22]: https://opensource.com/downloads/cheat-sheet-vim
@ -1,108 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to run PostgreSQL on Kubernetes)
[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)

How to run PostgreSQL on Kubernetes
======
Create uniformly managed, cloud-native production deployments with the flexibility to deploy a personalized database-as-a-service.

![cubes coming together to create a larger cube][1]

By running a [PostgreSQL][2] database on [Kubernetes][3], you can create uniformly managed, cloud-native production deployments with the flexibility to deploy a personalized database-as-a-service tailored to your specific needs.

Using an Operator allows you to provide additional context to Kubernetes to [manage a stateful application][4]. An Operator is also helpful when using an open source database like PostgreSQL to help with actions including provisioning, scaling, high availability, and user management.

Let's explore how to get PostgreSQL up and running on Kubernetes.

### Set up the PostgreSQL operator

The first step to using PostgreSQL with Kubernetes is installing an Operator. You can get up and running with the open source [Crunchy PostgreSQL Operator][5] on any Kubernetes-based environment with the help of Crunchy's [quickstart script][6] for Linux.

The quickstart script has a few prerequisites:

  * The [Wget][7] utility installed
  * [kubectl][8] installed
  * A [StorageClass][9] defined on your Kubernetes cluster
  * Access to a Kubernetes user account with cluster-admin privileges, which is required to install the Operator's [RBAC][10] rules
  * A [namespace][11] to hold the PostgreSQL Operator

Executing the script will give you a default PostgreSQL Operator deployment that assumes [dynamic storage][12] and a StorageClass named **standard**. The script allows you to supply your own values to override these defaults.

You can download the quickstart script and set it to be executable with the following commands:

```
wget https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/examples/quickstart.sh
chmod +x ./quickstart.sh
```

Then you can execute the quickstart script:

```
./quickstart.sh
```

After the script prompts you for some basic information about your Kubernetes cluster, it performs the following operations:

  * Downloads the Operator configuration files
  * Sets the **$HOME/.pgouser** file to default settings
  * Deploys the Operator as a Kubernetes [Deployment][13]
  * Sets your **.bashrc** to include the Operator environmental variables
  * Sets your **$HOME/.bash_completion** file to be the **pgo bash_completion** file

During the quickstart's execution, you'll be prompted to set up the RBAC rules for your Kubernetes cluster. In a separate terminal, execute the command that the quickstart script tells you to use.

Once the script completes, you'll get information on setting up a port forward to the PostgreSQL Operator pod. In a separate terminal, execute the port forward; this will allow you to begin executing commands to the PostgreSQL Operator! Try creating a cluster by entering:

```
pgo create cluster mynewcluster
```

You can test that your cluster is up and running by entering:

```
pgo test mynewcluster
```

You can now manage your PostgreSQL databases in your Kubernetes environment! You can find a full reference to commands, including those for scaling, high availability, backups, and more, in the [documentation][14].
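
For reference, the port forward mentioned above is an ordinary kubectl port forward. The sketch below makes assumptions about the namespace, service name, and port, so treat it only as an illustration and use the values the quickstart script actually prints:

```
kubectl port-forward svc/postgres-operator 8443:8443 -n pgo
```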

* * *

_Parts of this article are based on [Get Started Running PostgreSQL on Kubernetes][15], which the author wrote for the Crunchy blog._

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes

作者:[Jonathan S. Katz][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jkatz05
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://www.postgresql.org/
[3]: https://kubernetes.io/
[4]: https://opensource.com/article/19/2/scaling-postgresql-kubernetes-operators
[5]: https://github.com/CrunchyData/postgres-operator
[6]: https://crunchydata.github.io/postgres-operator/stable/installation/#quickstart-script
[7]: https://www.gnu.org/software/wget/
[8]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[9]: https://kubernetes.io/docs/concepts/storage/storage-classes/
[10]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[11]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[12]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
[13]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[14]: https://crunchydata.github.io/postgres-operator/stable/#documentation
[15]: https://info.crunchydata.com/blog/get-started-runnning-postgresql-on-kubernetes
@ -1,71 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Parallel computation in Python with Dask)
[#]: via: (https://opensource.com/article/19/4/parallel-computation-python-dask)
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)

Parallel computation in Python with Dask
======
The Dask library scales Python computation to multiple cores or even to multiple machines.

![Pair programming][1]

One frequent complaint about Python performance is the [global interpreter lock][2] (GIL). Because of the GIL, only one thread can execute Python byte code at a time. As a consequence, using threads does not speed up computation—even on modern, multi-core machines.

But when you need to parallelize to many cores, you don't need to stop using Python: the **[Dask][3]** library will scale computation to multiple cores or even to multiple machines. Some setups configure Dask on thousands of machines, each with multiple cores; while there are scaling limits, they are not easy to hit.

While Dask has many built-in array operations, as an example of something not built in, we can calculate the [skewness][4]:

```
import numpy
from dask import array as darray

# Wrap the NumPy data in a Dask array, split into chunks of 1,000 elements.
arr = darray.from_array(numpy.array(my_data), chunks=(1000,))
mean = darray.mean(arr)
stddev = darray.std(arr)
unnormalized_moment = darray.mean(arr * arr * arr)

# Skewness formula (see Wikipedia). The result is lazy; call
# skewness.compute() to actually run the computation.
skewness = ((unnormalized_moment - (3 * mean * stddev ** 2) - mean ** 3) /
            stddev ** 3)
```

Notice that each operation will use as many cores as needed. This will parallelize across all cores, even when calculating across billions of elements.

Of course, it is not always the case that our operations can be parallelized by the library; sometimes we need to implement parallelism on our own.

For that, Dask has a "delayed" functionality:

```
import dask

def is_palindrome(s):
    return s == s[::-1]

# Build a lazy task graph: one delayed call per string, then a delayed sum.
palindromes = [dask.delayed(is_palindrome)(s) for s in string_list]
total = dask.delayed(sum)(palindromes)
result = total.compute()
```

This will calculate whether strings are palindromes in parallel and will return a count of the palindromic ones.

While Dask was created for data scientists, it is by no means limited to data science. Whenever we need to parallelize tasks in Python, we can turn to Dask—GIL or no GIL.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/parallel-computation-python-dask

作者:[Moshe Zadka (Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1 (Pair programming)
[2]: https://wiki.python.org/moin/GlobalInterpreterLock
[3]: https://github.com/dask/dask
[4]: https://en.wikipedia.org/wiki/Skewness#Definition
@ -1,168 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Square Brackets in Bash: Part 2)
[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
[#]: author: (Paul Brown https://www.linux.com/users/bro66)

Using Square Brackets in Bash: Part 2
======

![square brackets][1]

We continue our tour of square brackets in Bash with a look at how they can act as a command.

[Creative Commons Zero][2]

Welcome back to our mini-series on square brackets. In the [previous article][3], we looked at various ways square brackets are used at the command line, including globbing. If you've not read that article, you might want to start there.

Square brackets can also be used as a command. Yep, for example, in:

```
[ "a" = "a" ]
```

which is, by the way, a valid command that you can execute, `[ ... ]` is a command. Notice that there are spaces between the opening bracket `[` and the parameters `"a" = "a"`, and then between the parameters and the closing bracket `]`. That is precisely because the brackets here act as a command, and you are separating the command from its parameters.

You would read the above line as "_test whether the string "a" is the same as string "a"_". If the premise is true, the `[ ... ]` command finishes with an exit status of 0. If not, the exit status is 1. [We talked about exit statuses in a previous article][4], and there you saw that you could access the value by checking the `$?` variable.

Try it out:

```
[ "a" = "a" ]
echo $?
```

And now try:

```
[ "a" = "b" ]
echo $?
```

In the first case, you will get a 0 (the premise is true), and running the second will give you a 1 (the premise is false). Remember that, in Bash, an exit status of 0 means the command exited normally with no errors, and that makes it `true`. If there were any errors, the exit value would be non-zero (`false`). The `[ ... ]` command follows the same rules so that it is consistent with the rest of the other commands.

The `[ ... ]` command comes in handy in `if ... then` constructs and also in loops that require a certain condition to be met (or not) before exiting, like the `while` and `until` loops.
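
For instance, here is a minimal `if ... then` construct built on the test command (the condition is just an illustration):

```
if [ "$USER" = "root" ]; then
    echo "You are root"
else
    echo "You are not root"
fi
```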

The logical operators for testing stuff are pretty straightforward:

```
[ STRING1 = STRING2 ]      => checks to see if the strings are equal
[ STRING1 != STRING2 ]     => checks to see if the strings are not equal
[ INTEGER1 -eq INTEGER2 ]  => checks to see if INTEGER1 is equal to INTEGER2
[ INTEGER1 -ge INTEGER2 ]  => checks to see if INTEGER1 is greater than or equal to INTEGER2
[ INTEGER1 -gt INTEGER2 ]  => checks to see if INTEGER1 is greater than INTEGER2
[ INTEGER1 -le INTEGER2 ]  => checks to see if INTEGER1 is less than or equal to INTEGER2
[ INTEGER1 -lt INTEGER2 ]  => checks to see if INTEGER1 is less than INTEGER2
[ INTEGER1 -ne INTEGER2 ]  => checks to see if INTEGER1 is not equal to INTEGER2
etc...
```

You can also test for some very shell-specific things. The `-f` option, for example, tests whether a file exists or not:

```
for i in {000..099}; \
do \
  if [ -f file$i ]; \
  then \
    echo file$i exists; \
  else \
    touch file$i; \
    echo I made file$i; \
  fi; \
done
```

If you run this in your test directory, line 3 tests whether each file in your long list of files exists. If a file does exist, the loop just prints a message; but if it doesn't exist, the loop creates it, to make sure the whole set is complete.

You could write the loop more compactly like this:

```
for i in {000..099}; \
do \
  if [ ! -f file$i ]; \
  then \
    touch file$i; \
    echo I made file$i; \
  fi; \
done
```

The `!` modifier in the condition inverts the premise, thus line 3 would translate to "_if the file `file$i` does not exist_".

Try it: delete some random files from the bunch you have in your test directory. Then run the loop shown above and watch how it rebuilds the list.

There are plenty of other tests you can try, including `-d` to test whether a name belongs to a directory and `-h` to test whether it is a symbolic link. You can also test whether a file belongs to a certain group of users (`-G`), whether one file is older than another (`-ot`), or even whether a file contains something or is, on the other hand, empty.

Try the following, for example. Add some content to some of your files:

```
echo "Hello World" >> file023
echo "This is a message" >> file065
echo "To humanity" >> file010
```

and then run this:

```
for i in {000..099}; \
do \
  if [ ! -s file$i ]; \
  then \
    rm file$i; \
    echo I removed file$i; \
  fi; \
done
```

And you'll remove all the files that are empty, leaving only the ones you added content to.

To find out more, check the manual page for the `test` command (a synonym for `[ ... ]`) with `man test`.

You may also see double brackets (`[[ ... ]]`) sometimes used in a similar way to single brackets. The reason for this is that double brackets give you a wider range of comparison operators. You can use `==`, for example, to compare a string to a pattern instead of just another string, or `<` and `>` to test whether a string would come before or after another in a dictionary.
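
For example, both of these work inside double brackets but not inside single ones (the strings are illustrative):

```
[[ "apple" < "banana" ]] && echo "apple sorts first"
[[ "file023" == file* ]] && echo "file023 matches the pattern"
```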

To find out more about extended operators, [check out this full list of Bash expressions][5].

### Next Time

In an upcoming article, we'll continue our tour and take a look at the role of parentheses `()` in Linux command lines. See you then!

_Read more:_

  1. [The Meaning of Dot (`.`)][6]
  2. [Understanding Angle Brackets in Bash (`<...>`)][7]
  3. [More About Angle Brackets in Bash (`<` and `>`)][8]
  4. [And, Ampersand, and & in Linux (`&`)][9]
  5. [Ampersands and File Descriptors in Bash (`&`)][10]
  6. [Logical & in Bash (`&`)][4]
  7. [All about {Curly Braces} in Bash (`{}`)][11]
  8. [Using Square Brackets in Bash: Part 1][3]
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
|
||||
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/bro66
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy (square brackets)
|
||||
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
|
||||
[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
|
||||
[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
|
||||
[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
|
||||
[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
|
||||
[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
|
||||
[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
|
||||
[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
|
||||
[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
|
||||
[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
|
@ -1,78 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (tomjlw)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco, Google reenergize multicloud/hybrid cloud joint development)
[#]: via: (https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco, Google reenergize multicloud/hybrid cloud joint development
======
Cisco, VMware, HPE and others tap into new Google Cloud Anthos cloud technology

![Thinkstock][1]

Cisco and Google have expanded their joint cloud-development activities to help customers more easily build secure multicloud and hybrid applications everywhere from on-premises data centers to public clouds.

**[Check out [what hybrid cloud computing is][2] and learn [what you need to know about multi-cloud][3]. Get regularly scheduled insights by [signing up for Network World newsletters][4].]**

The expansion centers around Google's new open-source hybrid cloud package called Anthos, which was introduced at the company's Google Next event this week. Anthos is based on – and supplants – the company's existing Google Cloud Service beta. Anthos will let customers run applications, unmodified, on existing on-premises hardware or in the public cloud and will be available on [Google Cloud Platform][5] (GCP) with [Google Kubernetes Engine][6] (GKE), and in data centers with [GKE On-Prem][7], the company says. Anthos will also let customers for the first time manage workloads running on third-party clouds such as AWS and Azure from the Google platform without requiring administrators and developers to learn different environments and APIs, Google said.

Essentially, Anthos offers a single managed service that promises to let customers manage and deploy workloads across clouds, without having to worry about dissimilar environments or APIs.

As part of the rollout, Google also announced a beta program called [Anthos Migrate][8] that Google says auto-migrates VMs from on-premises, or other clouds, directly into containers in GKE. "This unique migration technology lets you migrate and modernize your infrastructure in one streamlined motion, without upfront modifications to the original VMs or applications," Google said. It gives companies the flexibility to move on-prem apps to a cloud environment at the customers' pace, Google said.
### Cisco and Google
|
||||
|
||||
For its part Cisco announced support of Anthos and promised to tightly integrate it with Cisco data center-technologies, such as its HyperFlex hyperconverged package, Application Centric Infrastructure (Cisco’s flagship SDN offering), SD-WAN and Stealthwatch Cloud. The integrations will enable a consistent, cloud-like experience whether on-prem or in the cloud with automatic upgrades to the latest versions and security patches, Cisco stated.
|
||||
|
||||
"Google Cloud’s expertise in containerization and service mesh – Kubernetes and Istio, respectively – as well as their leadership in the developer community, combined with Cisco’s enterprise-class networking, compute, storage and security products and services makes for a winning combination for our customers," [wrote][9] Kip Compton, Cisco senior vice president, Cloud Platform and Solutions Group. “The Cisco integrations for Anthos will help customers build and manage multicloud and hybrid applications across their on-premises datacenters and public clouds and let them focus on innovation and agility without compromising security or increasing complexity.”
|
||||
|
||||
### Google Cloud and Cisco
|
||||
|
||||
Eyal Manor, vice president, engineering at Google Cloud, [wrote][10] that with Cisco’s support for Anthos, customers will be able to:
|
||||
|
||||
* Benefit from a fully-managed service, like GKE, and Cisco’s hyperconverged infrastructure, networking, and security technologies.
|
||||
* Operate consistently across an enterprise data center and the cloud.
|
||||
* Consume cloud services from an enterprise data center.
|
||||
* Modernize now on premises with the latest cloud technologies.
|
||||
|
||||
|
||||
|
||||
Cisco and Google have been working closely together since October 2017, when the companies said they were working on an open hybrid cloud platform that bridges on-premises and cloud environments. That package, [Cisco Hybrid Cloud Platform for Google Cloud][11], became generally available in September 2018. It lets customer develop enterprise-grade capabilities from Google Cloud-managed Kubernetes containers that include Cisco networking and security technology as well as service mesh monitoring from Istio.
|
||||
|
||||
Google says Istio’s open-source, container- and microservice-optimized technology offers developers a uniform way to connect, secure, manage and monitor microservices across clouds through service-to-service level mTLS [Mutual Transport Layer Security] authentication access control. As a result, customers can easily implement new, portable services and centrally configure and manage those services.
|
||||
|
||||
Cisco wasn’t the only vendor to announce support for Anthos. At least 30 other big Google partners including [VMware][12], [Dell EMC][13], [HPE][14], Intel, and Lenovo committed to delivering Anthos on their own hyperconverged infrastructure for their customers, Google stated.
|
||||
|
||||
Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3388218/cisco-google-reenergize-multicloudhybrid-cloud-joint-development.html#tk.rss_all
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.techhive.com/images/article/2016/12/hybrid_cloud-100700390-large.jpg
|
||||
[2]: https://www.networkworld.com/article/3233132/cloud-computing/what-is-hybrid-cloud-computing.html
|
||||
[3]: https://www.networkworld.com/article/3252775/hybrid-cloud/multicloud-mania-what-to-know.html
|
||||
[4]: https://www.networkworld.com/newsletters/signup.html
|
||||
[5]: https://cloud.google.com/
|
||||
[6]: https://cloud.google.com/kubernetes-engine/
|
||||
[7]: https://cloud.google.com/gke-on-prem/
|
||||
[8]: https://cloud.google.com/contact/
|
||||
[9]: https://blogs.cisco.com/news/next-phase-cisco-google-cloud
|
||||
[10]: https://cloud.google.com/blog/topics/partners/google-cloud-partners-with-cisco-on-hybrid-cloud-next19?utm_medium=unpaidsocial&utm_campaign=global-googlecloud-liveevent&utm_content=event-next
|
||||
[11]: https://cloud.google.com/cisco/
|
||||
[12]: https://blogs.vmware.com/networkvirtualization/2019/04/vmware-and-google-showcase-hybrid-cloud-deployment.html/
|
||||
[13]: https://www.dellemc.com/en-us/index.htm
|
||||
[14]: https://www.hpe.com/us/en/newsroom/blog-post/2019/04/hpe-and-google-cloud-join-forces-to-accelerate-innovation-with-hybrid-cloud-solutions-optimized-for-containerized-applications.html
|
||||
[15]: https://www.facebook.com/NetworkWorld/
|
||||
[16]: https://www.linkedin.com/company/network-world
|
@ -1,65 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enhanced security at the edge)
[#]: via: (https://www.networkworld.com/article/3388130/enhanced-security-at-the-edge.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Enhanced security at the edge
======
The risks presented by edge computing environments necessitate that companies pay special attention to security measures.

![iStock][1]

It's becoming a cliché to say that data security is a top concern among executives and boards of directors. The trouble is: the problem just won't go away.

Hackers and attackers are always finding new ways to exploit weaknesses. Just as companies start to use emerging technologies like artificial intelligence and machine learning to protect their organizations in an automated fashion, [so too are bad actors][2] using these tools to further their goals.

In a nutshell, security simply cannot be overlooked. And now, as companies [increasingly adopt][3] edge computing, there are new considerations for securing these environments.

**More risks at the edge**

As a [Network World article][4] suggests, edge computing places a new focus on physical security. That's not to dismiss the need to secure data in transit. However, it's the actual physical sites and equipment that deserve special attention.

For example, edge hardware is often situated in larger corporate or wide-open spaces, sometimes in highly accessible, shared offices and public areas. Ostensibly, this is to take advantage of the cost savings and faster access associated with data not having to travel back and forth to the data center.

However, without any level of access control, this equipment is at risk of both malicious actions and simple human error. Imagine an office cleaner accidentally turning off a device, and the resulting effects of the subsequent downtime.

Another risk is "shadow edge IT." Sometimes non-IT staff will deploy an edge site to quickly launch a project, without letting the IT department know this site is now connecting to the network. For example, a retail store might take the initiative to install its own digital signage. Or, a sales team could apply IoT sensors to TVs and deploy them on the fly at a sales demo.

In these cases, IT may have little or no visibility into these devices and edge sites, leaving the network potentially exposed.

**Securing the edge**

An easy way to avoid these risks is to deploy a micro data center (MDC).

> "Most of these [edge] environments have historically been uncontrolled," [said Kevin Brown][5], SVP Innovation and CTO for Schneider Electric's Secure Power Division. "They might be a Tier 1, but likely a Tier 0 type of design — they're like open wiring closets. They now need to be treated like micro data centers. You need to be able to manage them as you would a mission-critical data center."

Just as it sounds, this solution is a secure, self-contained enclosure that includes all the storage, processing, and networking that's required to run applications both indoors and outdoors. It also includes the necessary power, cooling, security, and management tools.

The best part is the high level of security. The unit is enclosed, with locking doors, to prevent unauthorized access. And with the right vendor, the MDC can be customized to include surveillance cameras, sensors, and monitoring technology for remote digital management.

As companies increasingly take advantage of the benefits of edge computing, it's critical they also take advantage of security solutions to protect their data and environments.

Discover how to best secure your edge computing environment at [APC.com][6].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3388130/enhanced-security-at-the-edge.html#tk.rss_all

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-1091707448-100793312-large.jpg
[2]: https://www.csoonline.com/article/3250144/6-ways-hackers-will-use-machine-learning-to-launch-attacks.html
[3]: https://www.marketwatch.com/press-release/edge-computing-market-2018-global-analysis-opportunities-and-forecast-to-2023-2018-08-20
[4]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[5]: https://www.youtube.com/watch?v=1NLk1cXEukQ
[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
@ -1,355 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( NeverKnowsTomorrow )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Four Methods To Add A User To Group In Linux)
[#]: via: (https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

Four Methods To Add A User To Group In Linux
======

Linux groups are organization units that are used to manage user accounts in Linux.

Each user and group in a Linux system has a unique numerical identification number, called a user ID (UID) and a group ID (GID), respectively. The main purpose of groups is to define a set of privileges for the members of the group.

Group members can all perform the operations granted to the group, but not others.

There are two types of groups in Linux. Each user should have exactly one primary group and any number of secondary groups.

  * **Primary Group:** The primary group is assigned to the user when the user account is created. It's typically the same as the user's name. The primary group is applied to the user when performing actions such as creating new files (or directories), modifying files, or executing commands. The user's primary group information is stored in the `/etc/passwd` file.
  * **Secondary Group:** Secondary groups are also known as supplementary groups. They allow a group of users to perform particular actions on files belonging to the same group. For example, if you would like to allow a few users to run the Apache (httpd) service command, a secondary group suits this perfectly.

You may be interested in the following articles, which are related to user management:

  * [Three Methods To Create A User Account In Linux][1]
  * [How To Create Bulk Users In Linux][2]
  * [How To Update/Change Users' Passwords In Linux Using Different Ways][3]

Adding a user to a group can be done using the following four methods:

  * **`usermod:`** The usermod command modifies the system account files to reflect the changes that are specified on the command line.
  * **`gpasswd:`** The gpasswd command is used to administer /etc/group and /etc/gshadow. Every group can have administrators, members, and a password.
  * **`Shell Script:`** Shell scripts allow the administrator to automate the required tasks.
  * **`Manual Method:`** We can manually add users to any group by editing the `/etc/group` file.

I assume that you already have the necessary groups and users for this activity. In this example, we are going to use the users `user1`, `user2`, and `user3`, and the groups `mygroup` and `mygroup1`.

Before making any changes, let's check the current user and group information. See the details below.

In the output below, you can see that each user is associated only with their own group:
```
# id user1
uid=1008(user1) gid=1008(user1) groups=1008(user1)

# id user2
uid=1009(user2) gid=1009(user2) groups=1009(user2)

# id user3
uid=1010(user3) gid=1010(user3) groups=1010(user3)
```

Likewise, no users are associated with these groups yet:

```
# getent group mygroup
mygroup:x:1012:

# getent group mygroup1
mygroup1:x:1013:
```
### Method-1: What Is The usermod Command?

The usermod command modifies the system account files to reflect the changes that are specified on the command line.

### How to Add an Existing User to a Secondary or Supplementary Group Using the usermod Command?

To add an existing user to a secondary group, use the usermod command with the `-a -G` options and the name of the group (`-a` appends the group instead of replacing the user's existing secondary groups).

Syntax:

```
# usermod [-a] [-G] [GroupName] [UserName]
```

You will get an error message if the given user or group doesn't exist in your system. If you don't get any error, then the user has been added to the corresponding group.

```
# usermod -a -G mygroup user1
```

Let's check the output using the id command. Yes, the user was added successfully.

```
# id user1
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
```

### How to Add an Existing User to Multiple Secondary or Supplementary Groups Using the usermod Command?

To add an existing user to multiple secondary groups, use the usermod command with the `-a -G` options and the names of the groups separated by commas.

Syntax:

```
# usermod [-a] [-G] [GroupName1,GroupName2] [UserName]
```

In this example, we are going to add `user2` to `mygroup` and `mygroup1`.

```
# usermod -a -G mygroup,mygroup1 user2
```

Let's check the output using the id command. Yes, `user2` was successfully added to `mygroup` and `mygroup1`.

```
# id user2
uid=1009(user2) gid=1009(user2) groups=1009(user2),1012(mygroup),1013(mygroup1)
```

### How to Change a User's Primary Group?

To change a user's primary group, use the usermod command with the `-g` option and the name of the group.

Syntax:

```
# usermod [-g] [GroupName] [UserName]
```

Note the lowercase `-g`, which changes the user's primary group.

```
# usermod -g mygroup user3
```

Let's see the output. Yes, it has been successfully changed. It now shows `mygroup` as `user3`'s primary group instead of the default group `user3`.

```
# id user3
uid=1010(user3) gid=1012(mygroup) groups=1012(mygroup)
```

### Method-2: What Is The gpasswd Command?

The gpasswd command is used to administer /etc/group and /etc/gshadow. Every group can have administrators, members, and a password.

### How to Add an Existing User to a Secondary or Supplementary Group Using the gpasswd Command?

To add an existing user to a secondary group, use the gpasswd command with the `-M` option and the name of the group. Note that `-M` sets the group's member list to exactly the users you supply; to append a single user without touching the existing list, use `-a` instead.

Syntax:

```
# gpasswd [-M] [UserName] [GroupName]
```

In this example, we are going to add `user1` to `mygroup`.

```
# gpasswd -M user1 mygroup
```

Let's check the output using the id command. Yes, `user1` was successfully added to `mygroup`.

```
# id user1
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
```

### How to Add Multiple Users to a Secondary or Supplementary Group Using the gpasswd Command?

To add multiple users to a secondary group, use the gpasswd command with the `-M` option, a comma-separated list of users, and the name of the group.

Syntax:

```
# gpasswd [-M] [UserName1,UserName2] [GroupName]
```

In this example, we are going to add `user2` and `user3` to `mygroup1`.

```
# gpasswd -M user2,user3 mygroup1
```

Let's check the output using the getent command. Yes, `user2` and `user3` were successfully added to `mygroup1`.

```
# getent group mygroup1
mygroup1:x:1013:user2,user3
```

### How to Remove a User From a Group Using the gpasswd Command?

To remove a user from a group, use the gpasswd command with the `-d` option and the names of the user and group.

Syntax:

```
# gpasswd [-d] [UserName] [GroupName]
```

In this example, we are going to remove `user1` from `mygroup`.

```
# gpasswd -d user1 mygroup
Removing user user1 from group mygroup
```

### Method-3: Using A Shell Script

Based on the above examples, the usermod command cannot add multiple users to a group in a single invocation, while the gpasswd command can do so with the `-M` option.

However, gpasswd's `-M` option overwrites the users that are currently associated with the group.

For example, `user1` is already associated with `mygroup`. If you add `user2` and `user3` to `mygroup` with the `gpasswd -M` command, it doesn't work as you might expect: it replaces the group's member list instead of appending to it.

What is the solution if you would like to add multiple users to multiple groups?

There is no default option available in either of the commands to achieve this.

Hence, we need to write a small shell script to do it.
### Method-3a: How to Add Multiple Users to a Secondary or Supplementary Group Using a Shell Script?

Create the following small shell script if you would like to add multiple users to a secondary or supplementary group with the usermod command.

First, create the list of users, each on a separate line:

```
$ cat user-lists.txt
user1
user2
user3
```

Use the following shell script to add multiple users to a single secondary group:

```
vi group-update.sh

#!/bin/bash
# Append each user from the list to mygroup.
for user in `cat user-lists.txt`
do
  usermod -a -G mygroup $user
done
```

Set executable permission on the `group-update.sh` file:

```
# chmod +x group-update.sh
```

Finally, run the script:

```
# sh group-update.sh
```

Let's check the output using the getent command. Yes, `user1`, `user2`, and `user3` were successfully added to `mygroup`.

```
# getent group mygroup
mygroup:x:1012:user1,user2,user3
```

### Method-3b: How to Add Multiple Users to Multiple Secondary or Supplementary Groups Using a Shell Script?

Create the following small shell script if you would like to add multiple users to multiple secondary or supplementary groups with the usermod command.

First, create the list of users, each on a separate line:

```
$ cat user-lists.txt
user1
user2
user3
```

Next, create the list of groups, each on a separate line:

```
$ cat group-lists.txt
mygroup
mygroup1
```

Save the following shell script as `group-update-1.sh` to add multiple users to multiple secondary groups:

```
#!/bin/sh
# For every user in the list, append the user to every group in the list.
for user in `more user-lists.txt`
do
  for group in `more group-lists.txt`
  do
    usermod -a -G $group $user
  done
done
```

Set executable permission on the `group-update-1.sh` file:

```
# chmod +x group-update-1.sh
```

Finally, run the script:

```
# sh group-update-1.sh
```

Let's check the output using the getent command. Yes, `user1`, `user2`, and `user3` were successfully added to `mygroup`.

```
# getent group mygroup
mygroup:x:1012:user1,user2,user3
```

Also, `user1`, `user2`, and `user3` were successfully added to `mygroup1`:

```
# getent group mygroup1
mygroup1:x:1013:user1,user2,user3
```

### Method-4: Manual Method To Add A User To A Group In Linux

We can manually add users to any group by editing the `/etc/group` file.

Open the `/etc/group` file and search for the group name where you want to add the users. Then add the users to the group's entry, as shown below.

```
# vi /etc/group
```
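
For example, after adding the three users, the `mygroup` entry looks like the getent output shown earlier, with the members in the comma-separated last field:

```
mygroup:x:1012:user1,user2,user3
```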

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-user-account-creation-useradd-adduser-newusers/
[2]: https://www.2daygeek.com/how-to-create-the-bulk-users-in-linux/
[3]: https://www.2daygeek.com/linux-passwd-chpasswd-command-set-update-change-users-password-in-linux-using-shell-script/
@ -1,94 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing Partitions with sgdisk)
[#]: via: (https://fedoramagazine.org/managing-partitions-with-sgdisk/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)

Managing Partitions with sgdisk
======

![][1]

[Roderick W. Smith][2]'s _sgdisk_ command can be used to manage the partitioning of your hard disk drive from the command line. The basics that you need to get started with it are demonstrated below.

The following six parameters are all that you need to know to make use of sgdisk's most basic features:

  1. **-p**
     _Print_ the partition table:
     `# sgdisk -p /dev/sda`
  2. **-d x**
     _Delete_ partition x:
     `# sgdisk -d 1 /dev/sda`
  3. **-n x:y:z**
     Create a _new_ partition numbered x, starting at y and ending at z:
     `# sgdisk -n 1:1MiB:2MiB /dev/sda`
  4. **-c x:y**
     _Change_ the name of partition x to y:
     `# sgdisk -c 1:grub /dev/sda`
  5. **-t x:y**
     Change the _type_ of partition x to y:
     `# sgdisk -t 1:ef02 /dev/sda`
  6. **--list-types**
     List the partition type codes:
     `# sgdisk --list-types`

![The SGDisk Command][3]

As you can see in the above examples, most of the commands require that the [device file name][4] of the hard disk drive to operate on be specified as the last parameter.
|
||||
|
||||
The parameters shown above can be combined so that you can completely define a partition with a single run of the sgdisk command:
|
||||
|
||||
### sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda
|
||||
|
||||
Relative values can be specified for some fields by prefixing the value with a **+** or **–** symbol. If you use a relative value, sgdisk will do the math for you. For example, the above example could be written as:
|
||||
|
||||
### sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda
|
||||
|
||||
The value **0** has a special-case meaning for several of the fields:
|
||||
|
||||
* In the _partition number_ field, 0 indicates that the next available number should be used (numbering starts at 1).
|
||||
* In the _starting address_ field, 0 indicates that the start of the largest available block of free space should be used. Some space at the start of the hard drive is always reserved for the partition table itself.
|
||||
* In the _ending address_ field, 0 indicates that the end of the largest available block of free space should be used.
|
||||
|
||||
|
||||
|
||||
By using **0** and relative values in the appropriate fields, you can create a series of partitions without having to pre-calculate any absolute values. For example, the following sequence of sgdisk commands would create all the basic partitions that are needed for a typical Linux installation if in run sequence against a blank hard drive:
|
||||
|
||||
```
# sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
# sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
# sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda
```
|
||||
|
||||
The above example shows how to partition a hard disk for a BIOS-based computer. The [grub partition][5] is not needed on a UEFI-based computer. Because sgdisk is calculating all the absolute values for you in the above example, you can just skip running the first command on a UEFI-based computer and the remaining commands can be run without modification. Likewise, you could skip creating the swap partition and the remaining commands would not need to be modified.
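In other words, on a UEFI-based computer the sequence reduces to the last three commands, run unchanged; the **0** partition-number and start fields take care of the renumbering and recalculation automatically:

```
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
# sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
# sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda
```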
|
||||
|
||||
There is also a short-cut for deleting all the partitions from a hard disk with a single command:
|
||||
|
||||
```
# sgdisk --zap-all /dev/sda
```
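Afterwards, you can confirm that the disk is now empty by printing the partition table again:

```
# sgdisk -p /dev/sda
```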
|
||||
|
||||
For the most up-to-date and detailed information, check the man page:
|
||||
|
||||
```
$ man sgdisk
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/managing-partitions-with-sgdisk/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/glb/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/managing-partitions-816x345.png
|
||||
[2]: https://www.rodsbooks.com/
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/sgdisk.jpg
|
||||
[4]: https://en.wikipedia.org/wiki/Device_file
|
||||
[5]: https://en.wikipedia.org/wiki/BIOS_boot_partition
|
@ -0,0 +1,131 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Designing posters with Krita, Scribus, and Inkscape)
|
||||
[#]: via: (https://opensource.com/article/19/4/design-posters)
|
||||
[#]: author: (Raghavendra Kamath https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath)
|
||||
|
||||
Designing posters with Krita, Scribus, and Inkscape
|
||||
======
|
||||
Graphic designers can do professional work with free and open source tools.
|
||||
![Hand drawing out the word "code"][1]
|
||||
|
||||
A few months ago, I was asked to design some posters for a local [Free Software Foundation][2] (FSF) event. Richard M. Stallman was [visiting][3] our country, and my friend [Abhas Abhinav][4] wanted to put up some posters and banners to promote his visit. I designed two posters for RMS's talk in Bangalore.
|
||||
|
||||
I create my artwork with F/LOSS (free/libre open source software) tools. Although many artists successfully use free software to create artwork, I repeatedly encounter comments in discussion forums claiming that free software is not made for creative work. This article is my effort to detail the process I typically use to create my artwork and to spread awareness that one can do professional work with the help of F/LOSS tools.
|
||||
|
||||
### Sketching some concepts
|
||||
|
||||
After understanding Abhas' initial requirements, I sat down to visualize some concepts. I am not that great of a copywriter, so I started reading the FSF website to get some copy material. I needed to finish the project in two days' time, while simultaneously working on other projects. I started sketching some rough layouts. From five layouts, I liked three. I scanned them using [Skanlite][5]; although these sketches were very rough and would need proper layout and design, they were a good base for me to work from.
|
||||
|
||||
![Skanlite][6]
|
||||
|
||||
![Poster sketches][7]
|
||||
|
||||
![Poster sketch][8]
|
||||
|
||||
I had three concepts:
|
||||
|
||||
* On the [FSF's website][2], I read about taking free software to new frontiers, which made me think about the idea of "conquering a summit." Free software work is also filled with adventures, in my opinion, and sometimes a task may seem like scaling a summit. So, I thought showing some mountaineers would resonate well.
|
||||
* I also wanted to ask people to donate to FSF, so I sketched a hand giving a heart. I didn't feel any excitement in executing this idea; nevertheless, I kept it as a backup in case I fell short of time.
|
||||
* The FSF website has a hashtag for a donation program called #thankGNU, so I thought about using this as the basis of my design. Repurposing my hand visual, I replaced the heart with a bouquet of flowers that has a heart-shaped card saying #thankGNU!
|
||||
|
||||
|
||||
|
||||
I know these are somewhat quick and safe concepts, but given the little time I had for the project, I went ahead with them.
|
||||
|
||||
My design process mostly depends on the kind of look I need in the final image. I choose my software and process according to my needs. I may use one software from start to finish or combine various software packages to accomplish what I need. For this project, I used [Krita][9] and [Scribus][10], with some minimal use of [Inkscape][11].
|
||||
|
||||
### Krita: Making the illustrations
|
||||
|
||||
I imported my sketches into [Krita][12] and started adding more defined lines and shapes.
|
||||
|
||||
For the first image, which has some mountaineers climbing, I used [vector layers][13] in Krita to add basic shapes and then used [Alpha Inheritance][14], which is similar to what is called Clipping Masks in Photoshop, to add texture and gradients inside the shapes. This helped me change the underlying base shape (in this case, the shape of the mountain in the first poster) anytime during the process. Krita also has a nice feature called the Reference Image tool, which lets you pin some references around your canvas (this helps a lot and saves many Alt+Tabs). Once I got the mountain the way I wanted, according to the layout, I started painting the mountaineers and added more details for the ice and other features. I like grungy brushes and brushes that have a texture akin to chalks and sponges. Krita has a wide range of brushes as well as a brush engine, which makes replicating a traditional medium easier. After about 3.5 hours of painting, this image was ready for further processing.
|
||||
|
||||
I wanted the second poster to have the feel of an old-style book illustration. So, I created the illustration with inked lines, somewhat similar to what we see in textbooks or novels. Inking in Krita is really a time saver; since it has stabilizer options, your wavy, hand-drawn lines will be smooth and crisp. I added a textured background and some minimal colors beneath the lines. It took me about three hours to do this illustration as well.
|
||||
|
||||
![Poster][15]
|
||||
|
||||
![Poster][16]
|
||||
|
||||
### Scribus: Adding layout and typography
|
||||
|
||||
Once my illustrations were ready, it was time to move on to the next part: adding text and other things to the layout. For this, I used Scribus. Both Scribus and Krita have CMYK support. In both applications, you can soft-proof your artwork and make changes according to the color profile you get from the printer. I mostly do my work in RGB and then, if required, I convert it to CMYK. Since most printers nowadays will do the color conversion, I don't think CMYK support is strictly required; however, it's good to be able to work in CMYK with free software tools.
|
||||
|
||||
I use open source fonts for my design work unless a client has licensed a closed font for use. A good way to browse for suitable fonts is the [Google Fonts repository][17]. (I have the entire repository cloned.) Occasionally, I also browse fonts on [Font Library][18], as it has a nice collection too. I decided to use Montserrat by Julieta Ulanovsky for the posters. Placing text was very quick in Scribus; once you create a style, you can apply it to any number of paragraphs or titles. This helped me place text in both designs quickly since I didn't have to re-create the text properties.
|
||||
|
||||
![Poster in Scribus][19]
|
||||
|
||||
I keep two layers in Scribus. One is for the illustrations, which are linked to the original files so if I change an illustration, it will update in Scribus. The other is for text and it's layered on top of the illustration layer.
|
||||
|
||||
### Inkscape: QR codes
|
||||
|
||||
I used Inkscape to generate a QR code that points to the Membership page on FSF's website. To generate a QR code, go to **Extensions > Render > Barcode > QR Code** in Inkscape's menu. The logos are also vector images; because Scribus supports vector images, you can paste things directly from Inkscape into Scribus. In a way, this helps in designing CMYK-based vector graphics.
|
||||
|
||||
![Final poster design][20]
|
||||
|
||||
![Final poster design][21]
|
||||
|
||||
With the designs ready, I exported them to layered PDFs and sent them to Abhas for feedback. He asked me to add FSF India's logo, which I did, and sent him a new PDF.
|
||||
|
||||
### Printing the posters
|
||||
|
||||
From here, Abhas took over the printing part of the process. His local printer in Bangalore printed the posters in A2 size. He was kind enough to send me some pictures of them. The prints came out well, considering I didn't even convert them to CMYK or do the color corrections and soft proofing I usually do when I have the color profile from the printer. My opinion is that 100% accurate CMYK printing is just a myth; there are too many factors to consider. If I really want perfect color reproduction, I leave this job to the printer, as they know their printer well and can do the conversion.
|
||||
|
||||
![Final poster design][22]
|
||||
|
||||
![Final poster design][23]
|
||||
|
||||
### Accessing the source files
|
||||
|
||||
When we discussed the requirements for these posters, Abhas told me to release the artwork under a Creative Commons license so others can re-use, modify, and share it. I am really glad he mentioned it. Anyone who wants to poke at the files can [download them from my Nextcloud drive][24]. If you have any improvements to make, please go ahead—and do remember to share your work with everybody.
|
||||
|
||||
Let me know what you think about this article by [emailing me][25].
|
||||
|
||||
* * *
|
||||
|
||||
_[This article][26] originally appeared on [Raghukamath.com][27] and is republished with the author's permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/design-posters
|
||||
|
||||
作者:[Raghavendra Kamath][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code")
|
||||
[2]: https://www.fsf.org/
|
||||
[3]: https://rms-tour.gnu.org.in/
|
||||
[4]: https://abhas.io/
|
||||
[5]: https://kde.org/applications/graphics/skanlite/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/skanlite.png (Skanlite)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/sketch-01.png (Poster sketches)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/sketch-02.png (Poster sketch)
|
||||
[9]: https://krita.org/
|
||||
[10]: https://www.scribus.net/
|
||||
[11]: https://inkscape.org/
|
||||
[12]: /life/16/4/nick-hamilton-linuxfest-northwest-2016-krita
|
||||
[13]: https://docs.krita.org/en/user_manual/vector_graphics.html#vector-graphics
|
||||
[14]: https://docs.krita.org/en/tutorials/clipping_masks_and_alpha_inheritance.html
|
||||
[15]: https://opensource.com/sites/default/files/uploads/poster-illo-01.jpg (Poster)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/poster-illo-02.jpg (Poster)
|
||||
[17]: https://fonts.google.com/
|
||||
[18]: https://fontlibrary.org/
|
||||
[19]: https://opensource.com/sites/default/files/uploads/poster-in-scribus.png (Poster in Scribus)
|
||||
[20]: https://opensource.com/sites/default/files/uploads/final-01.png (Final poster design)
|
||||
[21]: https://opensource.com/sites/default/files/uploads/final-02.png (Final poster design)
|
||||
[22]: https://opensource.com/sites/default/files/uploads/posters-in-action-01.jpg (Final poster design)
|
||||
[23]: https://opensource.com/sites/default/files/uploads/posters-in-action-02.jpg (Final poster design)
|
||||
[24]: https://box.raghukamath.com/cloud/index.php/s/97KPnTBP4QL4iCx
|
||||
[25]: mailto:raghu@raghukamath.com?Subject=designing-posters-with-free-software
|
||||
[26]: https://raghukamath.com/journal/designing-posters-with-free-software/
|
||||
[27]: https://raghukamath.com/
|
@ -0,0 +1,71 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How libraries are adopting open source)
|
||||
[#]: via: (https://opensource.com/article/19/4/software-libraries)
|
||||
[#]: author: (Don Watkins (Community Moderator) https://opensource.com/users/don-watkins)
|
||||
|
||||
How libraries are adopting open source
|
||||
======
|
||||
Over the past decade, ByWater Solutions has expanded its business by advocating for open source software.
|
||||
![][1]
|
||||
|
||||
Four years ago, I [interviewed Nathan Currulla][2], co-founder of ByWater Solutions, a major services and solutions provider for [Koha][3], a popular open source integrated library system (ILS). Since then, I've benefitted directly from his company's work, as my local [Chautauqua–Cattaraugus Library System][4] in western New York migrated from a proprietary software system to [ByWater Systems][5]' Koha implementation.
|
||||
|
||||
When I learned that ByWater is celebrating its 10th anniversary in 2019, I decided to reach out to Nathan to learn how the company has grown over the last decade. (Our remarks have been edited slightly for grammar and clarity.)
|
||||
|
||||
**Don Watkins** : How has ByWater grown in the last 10 years?
|
||||
|
||||
**Nathan Currulla** : Over the last 10 years, ByWater has grown by leaps and bounds. By the end of 2009, we supported five libraries with five contracts. That number shot up to 117 libraries made up of 46 contracts by the end of 2010. We now support over 1,500 libraries and 450+ contracts. We also went from having two team members to 25 in the past 10 years. The service-focused processes we have developed for migrating new libraries have been adopted by other library companies, and we have become a real market disruptor, putting pressure on other companies to provide better support and lower software subscription fees for libraries using their products. This was our goal from the outset, to change the way libraries work with the technology companies who support them, whomever they may be.
|
||||
|
||||
Since the beginning, we have been rooted in the future, while legacy systems are still rooted in the past. Ten years ago, it was a real struggle for us to overcome the barriers presented by the fear of change in libraries and the outdated perceptions of open source in general. Now, although we still have to deal with change aversion, there are enough users to disprove any misinformation that exists regarding Koha and open source. The conversation is easier now than it ever was. That said, despite the fact that the ideals and morals held by open source are directly aligned with those of libraries, we still have a long way to go until open source technologies are the norm in this marketplace.
|
||||
|
||||
**DW** : What kinds of libraries do you support?
|
||||
|
||||
**NC** : Our partners are made up of a diverse set of library types. About 35% of our partners are public libraries, 35% are academic, and the remaining 30% are made up of museum, corporate, law, school, and other special library types. Because of Koha's flexibility and diverse feature set, we can successfully provide services to a variety of library types despite the current trend of consolidation in the library technology marketplace.
|
||||
|
||||
**DW** : How does ByWater work with and help the Koha community?
|
||||
|
||||
**NC** : We are working with the rest of the Koha community to streamline workflows and further improve the process of submitting and accepting new features into Koha. The vast majority of the community is made up of volunteers; by providing paid positions within the community, we can dedicate more time to the quality assurance and sign-off processes needed to stay competitive with other systems, both open source and proprietary. The number of new features submitted to the Koha community for each release is staggering. The more resources we have to get those features out to our users, the faster Koha can evolve and further shape the library-technology marketplace.
|
||||
|
||||
**DW** : When we talked in 2015, ByWater had recently partnered with library solutions provider [EBSCO][6]. What initiatives are you working on now with EBSCO?
|
||||
|
||||
**NC** : Originally, Catalyst IT of New Zealand worked with EBSCO to create the EBSCO Discovery Service (EDS) plugin that is used by many of our customers. Unlike most discovery systems that sit on top of a library's online public access catalog (OPAC), Koha's integration with EDS uses the Koha OPAC as the frontend, with EDS feeding data into the Koha interface. This allows libraries to choose which interface they prefer (EDS or Koha as the frontend) and provides a unified library service platform (LSP). EBSCO has always been a great partner and has always shown a strong willingness to contribute to the open source initiative. They understand the importance of having fewer barriers between the ILS and the libraries' other content to provide a seamless interface to the end user.
|
||||
|
||||
Outside of Koha, ByWater is working closely with EBSCO to provide implementation, training, and support services for its [Folio LSP][7]. Folio is an open source LSP for academic libraries with the intent to provide even more seamless integration with other content providers using an extensible, open app marketplace. ByWater is developing a separate department for the implementation and ongoing support of Folio, with EBSCO providing hosting services to our mutual customers. The fact that EBSCO is investing millions in the creation of an open source platform lends further credence to the importance and validity of open source technologies in the library market.
|
||||
|
||||
**DW** : What other projects are you supporting? How do they complement Koha?
|
||||
|
||||
**NC** : ByWater also supports Libki, an open source, web-based kiosk and print management solution; Coral, an open source electronic resource management (ERM) solution; and Folio. Libki and Coral seamlessly integrate with Koha to provide a unified LSP. Folio may work in cooperation with Koha on some functionality, but it is too early to tell what that will specifically look like.
|
||||
|
||||
ByWater also offers Koha Klassmates, a program that provides free installations of Koha to over 40 library schools in the US to familiarize the next generation of librarians with open source and the tools they will use daily in the workforce. We are also rolling out a program called Koha University, which will mentor computer science students in writing and submitting code to Koha, one of the largest open source projects in the world. This will give them experience in working in such an environment and provide the opportunity for their names to be listed as official Koha contributors.
|
||||
|
||||
**DW** : What is ByWater's strategic focus over the next five years?
|
||||
|
||||
**NC** : ByWater will continue offering top-rated support to our ever-growing customer base while leveraging new open source opportunities to disprove misinformation surrounding the use of open source solutions in libraries. We will focus on making open source the norm and educating libraries that could be taking advantage of these technologies but do not because of outdated information and perceptions.
|
||||
|
||||
Additionally, our research and development efforts will be focused on analyzing machine learning for advanced education and support services. We also want to work closely with our partners on advancing the marketing efforts (through software) for small and large libraries to help cement their roles as community centers by marketing inventory, programs, and library events. We want to be community builders on different levels, both for our partner libraries and with the open source communities that we are involved in.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/software-libraries
|
||||
|
||||
作者:[Don Watkins (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_opencardcatalog.png?itok=f9PyJEe-
|
||||
[2]: https://opensource.com/business/15/5/bywater-solutions-empowering-library-tech
|
||||
[3]: http://www.koha.org/
|
||||
[4]: https://catalog.cclsny.org/
|
||||
[5]: https://bywatersolutions.com/
|
||||
[6]: https://www.ebsco.com/
|
||||
[7]: https://www.ebsco.com/products/ebsco-folio-library-services
|
@ -0,0 +1,122 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Joe Doss: How Do You Fedora?)
|
||||
[#]: via: (https://fedoramagazine.org/joe-doss-how-do-you-fedora/)
|
||||
[#]: author: (Charles Profitt https://fedoramagazine.org/author/cprofitt/)
|
||||
|
||||
Joe Doss: How Do You Fedora?
|
||||
======
|
||||
|
||||
![Joe Doss][1]
|
||||
|
||||
We recently interviewed Joe Doss on how he uses Fedora. This is part of a [series][2] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][3] to express your interest in becoming an interviewee.
|
||||
|
||||
### Who is Joe Doss?
|
||||
|
||||
Joe Doss lives in Chicago, Illinois USA and his favorite food is pizza. He is the Director of Engineering Operations at Kenna Security, Inc. Doss describes his employer this way: “Kenna uses data science to help enterprises combine their infrastructure and application vulnerability data with exploit intelligence to measure risk, predict attacks and prioritize remediation.”
|
||||
|
||||
His first Linux distribution was Red Hat Linux 5. A friend of his showed him a computer that wasn’t running Windows. Doss thought it was just a program to install on Windows when his friend gave him a Red Hat Linux 5 install disk. “I proceeded to install this Linux ‘program’ on my Father’s PC,” he says. Luckily for Doss, his father supported his interest in computers. “I ended up totally wiping out the Windows 95 install as a result and this was how I got my first computer.”
|
||||
|
||||
At Kenna, Doss’ group makes use of Fedora and [Ansible][4]: “We run Fedora Cloud in multiple VPC deployments in AWS and Google Compute with over 200 virtual machines. We use Ansible to automate everything we do with Fedora.”
|
||||
|
||||
Doss brews beer at home and contributes to open source in his free time. He also has a cat named Tibby. “I rescued Tibby off the street in the Hyde Park neighborhood of Chicago when she was 7 months old. She is not very smart, but she makes up for that with cuteness.” His favorite place to visit is his childhood home of Michigan, but Doss says, “anywhere with a warm beach, a cool drink, and the ocean is pretty nice too.”
|
||||
|
||||
![Tibby the cute cat!][5]
|
||||
|
||||
### The Fedora community
|
||||
|
||||
Doss became involved with Fedora and the Fedora community through his job at Kenna Security. When he first joined the company they were using Ubuntu and Chef in production. There was a desire to make the infrastructure more reproducible and reliable, and he says, “I was able to greenfield our deployments with Fedora Cloud and Ansible.” This project got him involved in the Fedora Cloud release.
|
||||
|
||||
When asked about his first impression of the Fedora community, Doss said, “Overwhelming to be honest. There is so much going on and it is hard to figure out who are the stakeholders of each part of Fedora.” Once he figured out who he needed to talk to he found the community very welcoming and super supportive.
|
||||
|
||||
One of the ideas he had to improve the community was to unite the various projects and teams under one bug tracking tool and community resource. “Pagure, Bugzilla, Github, Fedora Forums, Discourse Forums, Mailing lists… it is all over the place and hard to navigate at first.” Despite the initial complexity of becoming familiar with the Fedora Project, Doss feels it is amazingly rewarding to be involved. “It feels awesome to be a part of a Linux distro that impacts so many people in very positive ways. You can make a difference.”
|
||||
|
||||
Doss called out Dusty Mabe at Red Hat for helping him become involved, saying Dusty “has been an amazing mentor and resource for enabling me to contribute back to Fedora.”
|
||||
|
||||
Doss has an interesting way of explaining to non-technical friends what he does. “Imagine changing the tires on a very large bus while it is going down the highway at 70 MPH and sometimes you need to get involved with the tire manufacturer to help make this process work well.” This metaphor helps people understand what replacing 200-plus VMs across more than five production VPCs in AWS and Google Compute with every Fedora release is like.
|
||||
|
||||
Doss drew my attention to one specific incident with Fedora 29 and Vagrant. “Recently we encountered an issue where Vagrant wouldn’t set the hostname on a fresh Fedora 29 Beta VM. This was due to Fedora 29 Cloud no longer shipping the network service stub in favor of NetworkManager. This led to me working with a colleague at Kenna Security to send a patch upstream to the Vagrant project to help their developers produce a fix for Fedora 29. Vagrant usage with Fedora is a very large part of our development cycle at Kenna, and having this broken before the Fedora 29 release would have impacted us a lot.” As Doss said, “Sometimes you need to help make the tires before they go on the bus.”
|
||||
|
||||
Doss is the [COPR][6] Fedora, RHEL, and CentOS package maintainer for [WireGuard VPN][7]. “The CentOS repo just went over 60 thousand downloads last month which is pretty awesome.”
|
||||
|
||||
### What Hardware?
|
||||
|
||||
Doss uses Fedora 29 Cloud in over five VPC deployments in AWS and Google Compute. At home he has a SuperMicro SYS-5019A-FTN4 1U server that runs Fedora 29 Server with OpenShift OKD installed on it. His laptops are all Lenovo. “For laptops I use a ThinkPad T460s for work and a ThinkPad 25 at home. Both have Fedora 29 installed. ThinkPads are the best with Fedora.”
|
||||
|
||||
### What Software?
|
||||
|
||||
Doss uses GNOME 3 as his preferred desktop on Fedora Workstation. “I use Sublime Text 3 for my text editor on the desktop or vim on servers.” For development and testing he uses Vagrant. “Ansible is what I use for any kind of automation with Fedora. I maintain an [Ansible playbook][8] for setting up my workstation.”
|
||||
|
||||
### Ansible
|
||||
|
||||
I asked Doss if he had advice for people trying to learn Ansible.
|
||||
|
||||
“Start small. Automate the stuff that makes your life easier, but don’t over complicate it. [Ansible Galaxy][9] is a great resource to get things done quickly, but if you truly want to learn how to use Ansible, writing your own roles and playbooks is the path I would take.
|
||||
|
||||
“I have helped a lot of my coworkers that have joined my Operations team at Kenna get up to speed on using Ansible by buying them a copy of [Ansible for Devops][10] by Jeff Geerling. This book will give anyone new to Ansible the foundation they need to start using it everyday. #ansible on Freenode is a great resource as well along with the [official Ansible docs][11].”
|
||||
|
||||
Doss also said, “Knowing what to automate is most likely the most difficult thing to master without over complicating things. Debugging complex playbooks and roles is a close second.”
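To make the “start small” advice concrete, here is a minimal sketch of the kind of first playbook a newcomer might write; the file name and package list are hypothetical examples, not something Doss published:

```
# playbook.yml -- a hypothetical first playbook that installs a few everyday tools
---
- hosts: all
  become: true
  tasks:
    - name: Install basic workstation packages
      dnf:
        name:
          - git
          - vim
          - tmux
        state: present
```

Running it is a single command, _ansible-playbook -i inventory playbook.yml_, and it can grow one task at a time from there.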
|
||||
|
||||
### Home lab
|
||||
|
||||
He recommended setting up a home lab. “At Kenna and at home I use [Vagrant][12] with the [Vagrant-libvirt plugin][13] for developing Ansible roles and playbooks. You can iterate quickly to build your roles and playbooks on your laptop with your favorite editor and run _vagrant provision_ to run your playbook. The quick feedback loop and the ability to burn down your Vagrant VM and start over quickly make for an amazing workflow. Below is a sample Vagrant file that I keep handy to spin up a Fedora VM to test my playbooks.”
|
||||
|
||||
```
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.provision "shell", inline: "dnf install nfs-utils rpcbind @development-tools @ansible-node redhat-rpm-config gcc-c++ -y"
  config.ssh.forward_agent = true

  config.vm.define "f29", autostart: false do |f29|
    f29.vm.box = "fedora/29-cloud-base"
    f29.vm.hostname = "f29.example.com"
    f29.vm.provider "libvirt" do |vm|
      vm.memory = 2048
      vm.cpus = 2
      vm.driver = "kvm"
      vm.nic_model_type = "e1000"
    end
    config.vm.synced_folder '.', '/vagrant', disabled: true

    config.vm.provision "ansible" do |ansible|
      ansible.groups = {
      }
      ansible.playbook = "playbooks/main.yml"
      ansible.inventory_path = "inventory/development"
      ansible.extra_vars = {
        ansible_python_interpreter: "/usr/bin/python3"
      }
      # ansible.verbose = 'vvv'
    end
  end
end
```
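As a usage sketch, the iteration loop he describes might look like this from the directory containing that Vagrantfile (the machine name comes from the _config.vm.define_ line above):

```
$ vagrant up f29 --provider=libvirt   # boot the VM; provisioners run on first up
$ vagrant provision f29               # re-run the Ansible playbook after edits
$ vagrant destroy -f f29              # burn it down and start over
```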
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/joe-doss-how-do-you-fedora/
|
||||
|
||||
作者:[Charles Profitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/cprofitt/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/IMG_20181029_121944-816x345.jpg
|
||||
[2]: https://fedoramagazine.org/tag/how-do-you-fedora/
|
||||
[3]: https://fedoramagazine.org/submit-an-idea-or-tip/
|
||||
[4]: https://ansible.com
|
||||
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/IMG_20181231_110920_fixed.jpg
|
||||
[6]: https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/
|
||||
[7]: https://www.wireguard.com/install/
|
||||
[8]: https://github.com/jdoss/fedora-workstation
|
||||
[9]: https://galaxy.ansible.com/
|
||||
[10]: https://www.ansiblefordevops.com/
|
||||
[11]: https://docs.ansible.com/ansible/latest/index.html
|
||||
[12]: http://www.vagrantup.com/
|
||||
[13]: https://github.com/vagrant-libvirt/vagrant-libvirt%20plugin
|
@ -0,0 +1,116 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 2)
|
||||
[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2)
|
||||
[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
|
||||
|
||||
Linux Server Hardening Using Idempotency with Ansible: Part 2
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
[Creative Commons Zero][2]
|
||||
|
||||
In the first part of this series, we introduced something called idempotency, which can provide the ongoing improvements to your server estate’s security posture. In this article, we’ll get a little more hands-on with a look at some specific Ansible examples.
|
||||
|
||||
### Shopping List
|
||||
|
||||
You will need some Ansible experience before being able to make use of the information that follows. Rather than run through the installation and operation of Ansible, let’s instead look at some of the idempotency playbook’s content.
|
||||
|
||||
As mentioned earlier, there might be hundreds of individual system tweaks to make on just one type of host, so we’ll only explore a few suggested Ansible tasks and how I like to structure the Ansible role responsible for the compliance and hardening. You have hopefully picked up on the fact that the devil is in the detail and you should absolutely, unequivocally understand, to as high a level of detail as possible, the permutations of making changes to your server OS.
|
||||
|
||||
Be aware that I will mix and match between OSs in the Ansible examples that follow. Many examples are OS agnostic but as ever you should pay close attention to the detail. Obvious changes like “apt” to “yum” for the package manager are a given.
|
||||
|
||||
Inside a “tasks” file under our Ansible “hardening” role, or whatever you decide to name it, these named tasks represent the areas of a system with some example code to offer food for thought. In other words, each section that follows will probably be a single YAML file, such as “accounts.yml”, and each will vary in length and complexity.
|
||||
|
||||
Let’s look at some examples with ideas about what should go into each file to get you started. The contents of each file that follow are just the very beginning of a checklist and the following suggestions are far from exhaustive.
|
||||
|
||||
#### SSH Server
|
||||
|
||||
This is the application that almost all engineers immediately look to harden when asked to secure a server. It makes sense as SSH (the OpenSSH package in many cases) is usually only one of a few ports intentionally prised open and of course allows direct access to the command line. The level of hardening that you should adopt is debatable. I believe in tightening the daemon as much as possible without disruption and would usually make around fifteen changes to the standard OpenSSH server config file, “sshd_config”. These changes would include pulling in a MOTD banner (Message Of The Day) for legal compliance (warning of unauthorised access and prosecution), enforcing the permissions on the main SSHD files (so they can’t be tampered with by lesser-privileged users), ensuring the “root” user can’t log in directly, setting an idle session timeout and so on.
|
||||
|
||||
Here’s a very simple Ansible example that you can repeat within other YAML files later on, focusing on enforcing file permissions on our main, critical OpenSSH server config file. Note that you should carefully check every single file that you hard-reset permissions on before doing so. This is because there are horrifyingly subtle differences between Linux distributions. Believe me when I say that it’s worth checking first.
|
||||
|
||||
```
- name: Hard reset permissions on sshd server file
  file: owner=root group=root mode=0600 path=/etc/ssh/sshd_config
```
|
||||
|
||||
To check existing file permissions I prefer this natty little command for the job:
|
||||
|
||||
```
|
||||
$ stat -c "%a %n" /etc/ssh/sshd_config
|
||||
|
||||
644 /etc/ssh/sshd_config
|
||||
```
|
||||
|
||||
As our “stat” command shows, our Ansible snippet would be an improvement on the current permissions, because 0600 means only the “root” user can read and write to that file. Other users or groups can’t even read that file, which is of benefit because if we’ve made any mistakes in securing SSH’s config they can’t be discovered as easily by less-privileged users.
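A few of the other sshd_config changes mentioned above can be enforced in the same declarative style. The following is only a sketch using the _lineinfile_ module; _PermitRootLogin_, _ClientAliveInterval_ and _Banner_ are standard OpenSSH directives, but the values shown are example choices, not recommendations from this article:

```
- name: Disable direct root logins, set an idle timeout and pull in a banner
  lineinfile: dest="/etc/ssh/sshd_config" regexp="^{{ item.key }}" line="{{ item.key }} {{ item.value }}" state=present
  with_items:
    - { key: 'PermitRootLogin', value: 'no' }
    - { key: 'ClientAliveInterval', value: '300' }
    - { key: 'Banner', value: '/etc/issue.net' }
```

In a real role you would also notify a handler to restart the SSH daemon after any of these change.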
|
||||
|
||||
#### System Accounts
|
||||
|
||||
At a simple level this file might define how many users should be on a standard server. Usually a number of users who are admins have home directories with public keys copied into them. However this file might also include performing simple checks that the root user is the only system user with the all-powerful superuser UID 0; in case an attacker has altered user accounts on the system for example.
|
||||
|
||||
#### Kernel
|
||||
|
||||
Here’s a file that can grow arms and legs. Typically I might make between fifteen and twenty sysctl changes on an OS, which I’m satisfied won’t be disruptive to current and, all going well, any future uses of a system. These changes are again at your discretion and, at my last count, there are between five hundred and a thousand configurable kernel options using sysctl on a Debian/Ubuntu box, so you might opt to split these many changes up into different categories.
|
||||
|
||||
Such categories might include network stack tuning, stopping core dumps from filling up disk space, disabling IPv6 entirely and so on. Here’s an Ansible example of logging network packets that shouldn’t be routed out onto the Internet, namely those packets using spoofed private IP addresses, called “martians”.
|
||||
|
||||
```
- name: Keep track of traffic that shouldn't be routed onto the Internet
  lineinfile: dest="/etc/sysctl.conf" line="{{item.network}}" state=present
  with_items:
    - { network: 'net.ipv4.conf.all.log_martians = 1' }
    - { network: 'net.ipv4.conf.default.log_martians = 1' }
```
|
||||
|
||||
Pay close attention, though: you probably don’t want to use the file “/etc/sysctl.conf” but rather create a custom file under the directory “/etc/sysctl.d/” or similar. Again, check your OS’s preference, usually in the comments of the pertinent files. If you haven’t had martian logging enabled before, then type “dmesg” (sometimes only as the “root” user) to view kernel messages, and after a week or two of logging being in place you’ll probably see some traffic polluting your logs. It’s much better to know how attackers are probing your servers than not. A few log entries for reference can only be of value. When it comes to looking after servers, ignorance is certainly not bliss.
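As a sketch of that preference, the martian-logging task above could target a dedicated file instead; the file name used here is an arbitrary example:

```
- name: Keep track of traffic that shouldn't be routed onto the Internet
  lineinfile: dest="/etc/sysctl.d/90-hardening.conf" line="{{item.network}}" state=present create=yes
  with_items:
    - { network: 'net.ipv4.conf.all.log_martians = 1' }
    - { network: 'net.ipv4.conf.default.log_martians = 1' }
```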
|
||||
|
||||
#### Network
|
||||
|
||||
As mentioned, you might want to include hardening the network stack within your kernel.yml file, depending on whether there are many entries or not, or simply for greater clarity. For your network.yml file, have a think about stopping old-school broadcast attacks from flooding your LAN and ICMP oddities from changing your routing.
|
||||
|
||||
#### Services
|
||||
|
||||
Usually I would stop or start miscellaneous system services (and potentially applications) within this Ansible file. If there weren’t many services, then rather than also using a “cron.yml” file specifically for “cron” hardening, I’d include those changes here too.
|
||||
|
||||
There’s a bundle of changes you can make around cron’s file permissions etc. If you haven’t come across it, on some OSs, there’s a “cron.deny” file for example which blacklists certain users from accessing the “crontab” command. Additionally you also have a multitude of cron directories under the “/etc” directory which need permissions enforced and improved, indeed along with the file “/etc/crontab” itself. Once again check with your OS’s current settings before altering these or “bad things” ™ might happen to your uptime.
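As a starting point, enforcing permissions on “/etc/crontab” can reuse the same pattern as the earlier sshd_config task. This is a sketch only; the 0600 mode is an example value, so check your OS’s current settings first, as advised above:

```
- name: Hard reset permissions on the main crontab file
  file: owner=root group=root mode=0600 path=/etc/crontab
```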
|
||||
|
||||
Some miscellaneous services should be purposefully stopped, while certain services, such as system logging, are imperative to a healthy and secure system and must be kept running. Have a quick look at the Ansible below, which I might put in place for syslog as an example.
|
||||
|
||||
```
- name: Insist syslog is definitely installed (so we can receive upstream logs)
  apt: name=rsyslog state=present

- name: Make sure that syslog starts after a reboot
  service: name=rsyslog state=started enabled=yes
```
|
||||
|
||||
#### IPtables
|
||||
|
||||
The venerable Netfilter, which from within the Linux kernel offers the IPtables software firewall the ability to filter network packets in an exceptionally sophisticated manner, is a must if you can enable it sensibly. If you’re confident that each of your varying flavours of servers (whether it’s a webserver, database server and so on) can use the same IPtables config, then copy a file onto the filesystem via Ansible and make sure it’s always loaded up using this YAML file.
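A minimal sketch of that approach is a pair of tasks: copy a pre-built rules file into place, then load it. The paths and file names here are assumptions for illustration; _iptables-restore_ is the standard tool for loading a saved ruleset:

```
- name: Copy our standard IPtables ruleset onto the host
  copy: src=iptables.rules dest=/etc/iptables.rules owner=root group=root mode=0600

- name: Load the ruleset with iptables-restore
  shell: iptables-restore < /etc/iptables.rules
```

How the rules are persisted across reboots varies between distributions, so as ever, check your OS’s preferred mechanism.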
|
||||
|
||||
Next time, we’ll wrap up our look at specific system suggestions and talk a little more about how the playbook might be used.
|
||||
|
||||
Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][3]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2
|
||||
|
||||
作者:[Chris Binnie][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/chrisbinnie
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1280.jpg?itok=PHazitpd
|
||||
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
|
||||
[3]: https://www.devsecops.cc/
|
@ -0,0 +1,36 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (What's your primary backup strategy for the /home directory in Linux?)
|
||||
[#]: via: (https://opensource.com/poll/19/4/backup-strategy-home-directory-linux)
|
||||
[#]: author: ( https://opensource.com/users/dboth/users/don-watkins/users/greg-p)
|
||||
|
||||
What's your primary backup strategy for the /home directory in Linux?
|
||||
======
|
||||
|
||||
![Linux keys on the keyboard for a desktop computer][1]
|
||||
|
||||
I frequently upgrade to newer releases of Fedora, which is my primary distribution. I also upgrade other distros but much less frequently. I have also had many crashes of various types over the years, including a large portion of self-inflicted ones. Past experience with data loss has made me very aware of the need for good backups.
|
||||
|
||||
I back up many parts of my Linux hosts but my **/home** directory is especially important. Losing any of the data in **/home** on my primary workstation due to a crash or an upgrade could be disastrous.
|
||||
|
||||
My backup strategy for **/home** is to back up everything every day. There are other things on every Linux system to back up but **/home** is the center of everything I do on my workstation. I keep my documents and financial records there as well as off-line emails, address books for different apps, calendar and task data, and most importantly for me these days, the working copies of my next two Linux books.
|
||||
|
||||
I can think of a number of approaches to doing backups and restores of **/home** which would allow an easy and complete recovery after a data loss ranging from a single file to the entire directory. Which approach do you take? Which tools do you use?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/poll/19/4/backup-strategy-home-directory-linux
|
||||
|
||||
作者:[][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth/users/don-watkins/users/greg-p
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
|
@ -0,0 +1,100 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Working with Microsoft Exchange from your Linux Desktop)
|
||||
[#]: via: (https://itsfoss.com/microsoft-exchange-linux-desktop/)
|
||||
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)
|
||||
|
||||
Working with Microsoft Exchange from your Linux Desktop
|
||||
======
|
||||
|
||||
Recently I had to do some research (and even magic) to be able to work with my current employer’s Exchange mail server from my Ubuntu desktop. I am going to share my experience with you.
|
||||
|
||||
### Microsoft Exchange on Linux desktop
|
||||
|
||||
I guess many readers might feel confused. I mean, it shouldn’t be that hard if you simply use [Thunderbird][1] or any other [Linux email client][2] with your Office365 Exchange account, right? Well, for better or for worse, that was not the case for me.
|
||||
|
||||
Here’s my ordeal and what I did to make Microsoft Exchange work on my Linux desktop.
|
||||
|
||||
![][3]
|
||||
|
||||
#### The initial problem, no Office365
|
||||
|
||||
The first problem encountered in my situation was that we don’t currently use Office365, which is probably how the majority of people host their Exchange accounts these days; we use an on-premises Exchange server, and a very old version of it at that.
|
||||
|
||||
So, this means I didn’t have the luxury of using the automatic configuration that comes with the majority of email clients to simply connect to Office365.
|
||||
|
||||
#### Webmail is always an option… right?
|
||||
|
||||
The short answer is yes. However, as I mentioned, we are using Exchange 2010, so the webmail interface is not only outdated, it won’t even allow you to have a decent email signature, as it has a character limit in the webmail configuration. So I needed to use an email client if I really wanted to be able to use the email the way I needed.
|
||||
|
||||
#### Another problem: I am picky about my email client
|
||||
|
||||
I am a regular Google user, I have been using GMail for the past 14 years as my personal email, so I really like how it looks and works. I actually use the webmail as I don’t like to be tied to my email client or even my computer device, if something happens and I need to switch to a newer device I don’t want to have to copy things over, I just want things to be there waiting for me to use them.
|
||||
|
||||
This led me to not liking the Thunderbird, K-9 or Evolution mail clients. All of these are capable of being connected to Exchange servers (one way or the other) but again, they don’t meet the standard of a clean, easy and modern GUI I wanted, plus they couldn’t even manage my Exchange calendar well (which was a real deal breaker for me).
|
||||
|
||||
#### Found some options as email clients!
|
||||
|
||||
After some more research, I found there were a couple of email client options that I could use and that would actually work the way I expected.
|
||||
|
||||
These were: [Hiri][4], which had a very modern and innovative user interface and had Exchange Server capabilities, and there was also [Mailspring][5], which is a fork of an old foe ([Nylas Mail][6]) and which was my real favorite.
|
||||
|
||||
However, Mailspring couldn’t connect directly to an Exchange server (using Exchange’s protocol) unless you use Office365, it required [IMAP][7] (another luxury!) and the IT department at my office was reluctant to activate IMAP for “security reasons”.
|
||||
|
||||
Hiri is a good option but it’s not free.
|
||||
|
||||
#### No IMAP, no Office365, game over? Not yet!
|
||||
|
||||
I have to confess, I was really ready to give up and simply use the old webmail and learn to live with it. However, I gave my research capabilities one last shot and found a possible solution: what if I had a way to put a “man in the middle”? What if I was able to make IMAP run locally on my computer while my computer simply pulls the emails via the Exchange protocol? It was a long shot but, it could work…
|
||||
|
||||
So I started looking here and there and found [DavMail][8], which works as a gateway to “talk” with an Exchange server and then locally provide you whatever you need in order to use it. Basically, it was like a “translator” between my computer and the Exchange server, and it then provided me with whatever service I needed.
|
||||
|
||||
![DavMail Settings][9]
|
||||
|
||||
So basically I only had to give DavMail my Exchange Server’s URL (even OWA URL) and set whatever ports I wanted on my local computer to be the new ports where my email client could connect.
|
||||
|
||||
This way I was free to use basically ANY client I wanted; at least, any client capable of using the IMAP protocol would work, as long as I configured it with the same ports I set up as my local ports.
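For illustration, the relevant fragment of DavMail’s configuration file (davmail.properties) might look something like the following; the URL is a placeholder, and the port numbers are just DavMail’s defaults, which you can change:

```
# Sketch of a davmail.properties fragment (placeholder URL, default ports)
davmail.url=https://mail.example.com/owa/
davmail.imapPort=1143
davmail.smtpPort=1025
```

The email client is then pointed at localhost with those same ports for IMAP and SMTP.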
|
||||
|
||||
![Mailspring working my office’s on premises Exchange. Information has been blurred due to non-disclosure agreement at my office.][10]
|
||||
|
||||
And that was it! I was able to use Mailspring (which is my preferred choice for email client) under my unfavorable conditions.
|
||||
|
||||
#### Bonus point: this is a multi-platform solution!
|
||||
|
||||
What’s best is that this solution will work for any platform! So if you have the same problem while using Windows or macOS, DavMail has a version for all tastes!
|
||||
|
||||
![avatar][11]
|
||||
|
||||
|
||||
|
||||
### Helder Martins
|
||||
|
||||
Systems Engineer, technology evangelist, Ubuntu user, Linux enthusiast, father and husband.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/microsoft-exchange-linux-desktop/
|
||||
|
||||
作者:[It's FOSS Community][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/itsfoss/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.thunderbird.net/en-US/
|
||||
[2]: https://itsfoss.com/best-email-clients-linux/
|
||||
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/microsoft-exchange-linux-desktop.png?resize=800%2C450&ssl=1
|
||||
[4]: https://www.hiri.com/
|
||||
[5]: https://getmailspring.com/
|
||||
[6]: https://itsfoss.com/n1-open-source-email-client/
|
||||
[7]: https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol
|
||||
[8]: http://davmail.sourceforge.net/
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings.png?resize=800%2C597&ssl=1
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings-1.jpg?ssl=1
|
||||
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/helder-martins-1.jpeg?ssl=1
|
@ -0,0 +1,354 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (12 Single Board Computers: Alternative to Raspberry Pi)
|
||||
[#]: via: (https://itsfoss.com/raspberry-pi-alternatives/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
12 Single Board Computers: Alternative to Raspberry Pi
|
||||
======
|
||||
|
||||
_**Brief: Looking for a Raspberry Pi alternative? Here are some other single board computers to satisfy your DIY cravings.**_
|
||||
|
||||
Raspberry Pi is the most popular single board computer right now. You can use it for your DIY projects, as a cost-effective system for learning to code, or maybe run [media server software][1] on it to stream media at your convenience.
|
||||
|
||||
You can do a lot of things with Raspberry Pi but it is not the ultimate solution for all kinds of tinkerers. Some might be looking for a cheaper board and some might be on the lookout for a powerful one.
|
||||
|
||||
Whatever the case may be, we do need Raspberry Pi alternatives for a variety of reasons. So, in this article, we will talk about twelve single board computers that we think are the best Raspberry Pi alternatives.
|
||||
|
||||
![][2]
|
||||
|
||||
### Raspberry Pi alternatives to satisfy your DIY craving
|
||||
|
||||
The list is in no particular order of ranking. Some of the links here are affiliate links. Please read our [affiliate policy][3].
|
||||
|
||||
#### 1\. Onion Omega2+
|
||||
|
||||
![][4]
|
||||
|
||||
For just **$13** , the Omega2+ is one of the cheapest IoT single board computers you can find out there. It runs on LEDE (Linux Embedded Development Environment) Linux OS – a distribution based on [OpenWRT][5].
|
||||
|
||||
Its form factor, cost, and the flexibility that comes from running a customized version of Linux make it a perfect fit for almost any type of IoT application.
|
||||
|
||||
You can find the [Onion Omega kit on Amazon][6] or order it from their own website, though the latter would cost you extra shipping charges.
|
||||
|
||||
**Key Specifications**
|
||||
|
||||
* MT7688 SoC
|
||||
* 2.4 GHz IEEE 802.11 b/g/n WiFi
|
||||
* 128 MB DDR2 RAM
|
||||
* 32 MB on-board flash storage
|
||||
* MicroSD Slot
|
||||
* USB 2.0
|
||||
* 12 GPIO Pins
|
||||
|
||||
|
||||
|
||||
[Visit website][7]
|
||||
|
||||
#### 2\. NVIDIA Jetson Nano Developer Kit
|
||||
|
||||
![][8]
|
||||
|
||||
This is a very unique and interesting Raspberry Pi alternative from NVIDIA for just **$99**. It’s not something that everyone can make use of; rather, it is meant for a specific group of tinkerers or developers.
|
||||
|
||||
NVIDIA explains its intended use case as follows:
|
||||
|
||||
> NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. All in an easy-to-use platform that runs in as little as 5 watts.
|
||||
>
|
||||
> nvidia
|
||||
|
||||
So, basically, if you are into AI and deep learning, you can make use of the developer kit. If you are curious, the production compute module of this will be arriving in June 2019.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* CPU: Quad-core ARM A57 @ 1.43 GHz
|
||||
* GPU: 128-core Maxwell
|
||||
* RAM: 4 GB 64-bit LPDDR4 25.6 GB/s
|
||||
* Display: HDMI 2.0
|
||||
* 4 x USB 3.0 and eDP 1.4
|
||||
|
||||
|
||||
|
||||
[Visit website][9]
|
||||
|
||||
#### 3\. ASUS Tinker Board S
|
||||
|
||||
![][10]
|
||||
|
||||
ASUS Tinker Board S isn’t the most affordable Raspberry Pi alternative at **$82** (on [Amazon][11]) but it is a powerful one. It features the same 40-pin connector that you’d normally find in the standard Raspberry Pi 3 Model but offers a powerful processor and a GPU. Also, the size of the Tinker Board S is exactly the same as a standard Raspberry Pi 3.
|
||||
|
||||
The main highlight of this board is the presence of 16 GB of [eMMC][12] (in layman’s terms, it has SSD-like storage on board, which makes it faster to work with).
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Rockchip Quad-Core RK3288 processor
|
||||
* 2 GB DDR3 RAM
|
||||
* Integrated Graphics Processor
|
||||
* ARM® Mali™-T764 GPU
|
||||
* 16 GB eMMC
|
||||
* MicroSD Card Slot
|
||||
* 802.11 b/g/n, Bluetooth V4.0 + EDR
|
||||
* USB 2.0
|
||||
* 28 GPIO pins
|
||||
* HDMI Interface
|
||||
|
||||
|
||||
|
||||
[Visit website][13]
|
||||
|
||||
#### 4\. ClockworkPi
|
||||
|
||||
![][14]
|
||||
|
||||
ClockworkPi is usually part of the [GameShell Kit][15] if you are looking to assemble a modular retro gaming console. However, you can purchase the board separately for $49.
|
||||
|
||||
Its compact size, WiFi connectivity, and the presence of micro HDMI port make it a great choice for a lot of things.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Allwinner R16-J Quad-core Cortex-A7 CPU @1.2GHz
|
||||
* Mali-400 MP2 GPU
|
||||
* RAM: 1GB DDR3
|
||||
* WiFi & Bluetooth v4.0
|
||||
* Micro HDMI output
|
||||
* MicroSD Card Slot
|
||||
|
||||
|
||||
|
||||
[Visit website][16]
|
||||
|
||||
#### 5\. Arduino Mega 2560
|
||||
|
||||
![][17]
|
||||
|
||||
If you are into robotics projects, or you want something for a 3D printer, the Arduino Mega 2560 will be a handy replacement for the Raspberry Pi. Unlike the Raspberry Pi, it is based on a microcontroller rather than a microprocessor.
|
||||
|
||||
It would cost you $38.50 on their [official site][18] and around [$33 on Amazon][19].
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Microcontroller: ATmega2560
|
||||
* Clock Speed: 16 MHz
|
||||
* Digital I/O Pins: 54
|
||||
* Analog Input Pins: 16
|
||||
* Flash Memory: 256 KB of which 8 KB used by bootloader
|
||||
|
||||
|
||||
|
||||
[Visit website][18]
|
||||
|
||||
#### 6\. Rock64 Media Board
|
||||
|
||||
![][20]
|
||||
|
||||
For the same investment as you would make on a Raspberry Pi 3 B+, you get a faster processor and double the memory on the Rock64 Media Board. It also offers a cheaper alternative to the Raspberry Pi if you want the 1 GB RAM model, which costs $10 less.
|
||||
|
||||
Unlike Raspberry Pi, you do not have wireless connectivity support here but the presence of USB 3.0 and HDMI 2.0 does make a good difference if that matters to you.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Rockchip RK3328 Quad-Core ARM Cortex A53 64-Bit Processor
|
||||
* Supports up to 4GB 1600MHz LPDDR3 RAM
|
||||
* eMMC module socket
|
||||
* MicroSD Card slot
|
||||
* USB 3.0
|
||||
* HDMI 2.0
|
||||
|
||||
|
||||
|
||||
[Visit website][21]
|
||||
|
||||
#### 7\. Odroid-XU4
|
||||
|
||||
![][22]
|
||||
|
||||
Odroid-XU4 is the perfect alternative to Raspberry Pi if you have room to spend a little more ($80-$100 or even lower, depending on the store/availability).
|
||||
|
||||
It is indeed a powerful replacement and technically a bit smaller in size. The support for eMMC and USB 3.0 makes it faster to work with.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Samsung Exynos 5422 Octa ARM Cortex™-A15 Quad 2Ghz and Cortex™-A7 Quad 1.3GHz CPUs
|
||||
* 2Gbyte LPDDR3 RAM
|
||||
* GPU: Mali-T628 MP6
|
||||
* USB 3.0
|
||||
* HDMI 1.4a
|
||||
* eMMC 5.0 module socket
|
||||
* MicroSD Card Slot
|
||||
|
||||
|
||||
|
||||
[Visit website][23]
|
||||
|
||||
#### 8\. **PocketBeagle**
|
||||
|
||||
![][24]
|
||||
|
||||
It is an incredibly small SBC, similar in size to the Raspberry Pi Zero. However, it would cost you the same as a full-sized Raspberry Pi 3 model. The main highlight here is that you can use it as a USB key fob and then access the Linux terminal to work on it.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Processor: Octavo Systems OSD3358 1GHz ARM® Cortex-A8
|
||||
* RAM: 512 MB DDR3
|
||||
* 72 expansion pin headers
|
||||
* microUSB
|
||||
* USB 2.0
|
||||
|
||||
|
||||
|
||||
[Visit website][25]
|
||||
|
||||
#### 9\. Le Potato
|
||||
|
||||
![][26]
|
||||
|
||||
Le Potato, by [Libre Computer][27], is also identified by its model number, AML-S905X-CC. It would [cost you $45][28].
|
||||
|
||||
If you want double the memory along with an HDMI 2.0 interface while spending a bit more than the price of a Raspberry Pi, this would be the perfect choice. However, you won't find wireless connectivity baked in.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Amlogic S905X SoC
|
||||
* 2GB DDR3 SDRAM
|
||||
* USB 2.0
|
||||
* HDMI 2.0
|
||||
* microUSB
|
||||
* MicroSD Card Slot
|
||||
* eMMC Interface
|
||||
|
||||
|
||||
|
||||
[Visit website][29]
|
||||
|
||||
#### 10\. Banana Pi M64
|
||||
|
||||
![][30]
|
||||
|
||||
It comes loaded with 8 gigs of eMMC storage, which is the key highlight of this Raspberry Pi alternative. For that very reason, it would cost you $60.
|
||||
|
||||
The presence of an HDMI interface makes it 4K-ready. In addition, Banana Pi offers a wide variety of other open source SBCs as alternatives to the Raspberry Pi.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* 1.2 Ghz Quad-Core ARM Cortex A53 64-Bit Processor-R18
|
||||
* 2GB DDR3 SDRAM
|
||||
* 8 GB eMMC
|
||||
* WiFi & Bluetooth
|
||||
* USB 2.0
|
||||
* HDMI
|
||||
|
||||
|
||||
|
||||
[Visit website][31]
|
||||
|
||||
#### 11\. Orange Pi Zero
|
||||
|
||||
![][32]
|
||||
|
||||
The Orange Pi Zero is an incredibly cheap alternative to the Raspberry Pi. You can get it for about $10 on AliExpress or Amazon. For a [little more investment, you can get 512 MB RAM][33].
|
||||
|
||||
If that isn’t sufficient, you can also go for Orange Pi 3 with better specifications which will cost you around $25.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* H2 Quad-core Cortex-A7
|
||||
* Mali400MP2 GPU
|
||||
* RAM: Up to 512 MB
|
||||
* TF Card support
|
||||
* WiFi
|
||||
* USB 2.0
|
||||
|
||||
|
||||
|
||||
[Visit website][34]
|
||||
|
||||
#### 12\. VIM 2 SBC by Khadas
|
||||
|
||||
![][35]
|
||||
|
||||
VIM 2 by Khadas is one of the latest SBCs that you can grab with Bluetooth 5.0 on board. It [starts from $99 (the basic model) and goes up to $140][36].
|
||||
|
||||
The basic model includes 2 GB RAM, 16 GB eMMC and Bluetooth 4.1. However, the Pro/Max versions would include Bluetooth 5.0, more memory, and more eMMC storage.
|
||||
|
||||
**Key Specifications:**
|
||||
|
||||
* Amlogic S912 1.5GHz 64-bit Octa-Core CPU
|
||||
* T820MP3 GPU
|
||||
* Up to 3 GB DDR4 RAM
|
||||
* Up to 64 GB eMMC
|
||||
* Bluetooth 5.0 (Pro/Max)
|
||||
* Bluetooth 4.1 (Basic)
|
||||
* HDMI 2.0a
|
||||
* WiFi
|
||||
|
||||
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
We do know that there are different types of single board computers. Some are better than the Raspberry Pi, and some are scaled-down versions of it with a cheaper price tag. Also, SBCs like the Jetson Nano are tailored for specific uses. So, depending on what you require, you should verify the specifications of the single board computer.
|
||||
|
||||
If you know of something better than the ones mentioned above, feel free to let us know in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/raspberry-pi-alternatives/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/best-linux-media-server/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-alternatives.png?resize=800%2C450&ssl=1
|
||||
[3]: https://itsfoss.com/affiliate-policy/
|
||||
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/omega-2-plus-e1555306748755-800x444.jpg?resize=800%2C444&ssl=1
|
||||
[5]: https://openwrt.org/
|
||||
[6]: https://amzn.to/2Xj8pkn
|
||||
[7]: https://onion.io/store/omega2p/
|
||||
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Jetson-Nano-e1555306350976-800x590.jpg?resize=800%2C590&ssl=1
|
||||
[9]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/asus-tinker-board-s-e1555304945760-800x450.jpg?resize=800%2C450&ssl=1
|
||||
[11]: https://amzn.to/2XfkOFT
|
||||
[12]: https://en.wikipedia.org/wiki/MultiMediaCard
|
||||
[13]: https://www.asus.com/in/Single-Board-Computer/Tinker-Board-S/
|
||||
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/clockwork-pi-e1555305016242-800x506.jpg?resize=800%2C506&ssl=1
|
||||
[15]: https://itsfoss.com/gameshell-console/
|
||||
[16]: https://www.clockworkpi.com/product-page/cpi-v3-1
|
||||
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/arduino-mega-2560-e1555305257633.jpg?ssl=1
|
||||
[18]: https://store.arduino.cc/usa/mega-2560-r3
|
||||
[19]: https://amzn.to/2KCi041
|
||||
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/ROCK64_board-e1555306092845-800x440.jpg?resize=800%2C440&ssl=1
|
||||
[21]: https://www.pine64.org/?product=rock64-media-board-computer
|
||||
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/odroid-xu4.jpg?fit=800%2C354&ssl=1
|
||||
[23]: https://www.hardkernel.com/shop/odroid-xu4-special-price/
|
||||
[24]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/PocketBeagle.jpg?fit=800%2C450&ssl=1
|
||||
[25]: https://beagleboard.org/p/products/pocketbeagle
|
||||
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/aml-libre.-e1555306237972-800x514.jpg?resize=800%2C514&ssl=1
|
||||
[27]: https://libre.computer/
|
||||
[28]: https://amzn.to/2DpG3xl
|
||||
[29]: https://libre.computer/products/boards/aml-s905x-cc/
|
||||
[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/banana-pi-m6.jpg?fit=800%2C389&ssl=1
|
||||
[31]: http://www.banana-pi.org/m64.html
|
||||
[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/orange-pi-zero.jpg?fit=800%2C693&ssl=1
|
||||
[33]: https://amzn.to/2IlI81g
|
||||
[34]: http://www.orangepi.org/orangepizero/index.html
|
||||
[35]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/khadas-vim-2-e1555306505640-800x563.jpg?resize=800%2C563&ssl=1
|
||||
[36]: https://amzn.to/2UDvrFE
|
@ -0,0 +1,75 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Blender short film, new license for Chef, ethics in open source, and more news)
|
||||
[#]: via: (https://opensource.com/article/15/4/news-april-15)
|
||||
[#]: author: (Joshua Allen Holm (Community Moderator) https://opensource.com/users/holmja)
|
||||
|
||||
Blender short film, new license for Chef, ethics in open source, and more news
|
||||
======
|
||||
Here are some of the biggest headlines in open source in the last two weeks
|
||||
![][1]
|
||||
|
||||
In this edition of our open source news roundup, we take a look at the 12th Blender short film, Chef shifts away from open core toward a 100% open source license, SuperTuxKart's latest release candidate with online multiplayer support, and more.
|
||||
|
||||
### Blender Animation Studio releases Spring
|
||||
|
||||
[Spring][2], the latest short film from [Blender Animation Studio][3], premiered on April 4th. The [press release on Blender.org][4] describes _Spring_ as "the story of a shepherd girl and her dog, who face ancient spirits in order to continue the cycle of life." The development version of Blender 2.80, as well as other open source tools, was used to create this animated short film. The character and asset files for the film are available from [Blender Cloud][5], and tutorials, walkthroughs, and other instructional material are coming soon.
|
||||
|
||||
### The importance of ethics in open source
|
||||
|
||||
Reuven M. Lerner, writing for [Linux Journal][6], shares his thoughts about the need for teaching programmers about ethics in an article titled [Open Source Is Winning, and Now It's Time for People to Win Too][7]. Part retrospective looking back at the history of open source and part call to action for moving forward, Lerner's article discusses many issues relevant to open source beyond just coding. He argues that when we teach kids about open source, "[w]e also need to inform them of the societal parts of their work, and the huge influence and power that today's programmers have." He continues by stating, "It's sometimes okay—and even preferable—for a company to make less money deliberately, when the alternative would be to do things that are inappropriate or illegal." Overall, it is a very thought-provoking piece in which Lerner makes a solid case for remembering that the open source movement is about more than free code.
|
||||
|
||||
### Chef transitions from open core to open source
|
||||
|
||||
Chef, the company behind the well-known DevOps automation tool, [announced][8] that it will release 100% of its software as open source under the Apache 2.0 license. This move marks a departure from its current [open core model][9]. Given the tendency for companies to move in the opposite direction, Chef's move is a big one. By operating under a fully open source model, Chef builds a better, stronger relationship with the community, and the community benefits from full access to all the source code. Even developers of competing projects (and the commercial products based on those projects) benefit from being able to learn from Chef's code, as Chef can from its open source competitors. This is one of the greatest advantages of open source: the best ideas get to win, and business relationships are built around trust and quality of service, not proprietary secrets. For a more detailed look at this development, read Steven J. Vaughan-Nichols's [article for ZDNet][10].
|
||||
|
||||
### SuperTuxKart releases version 0.10 RC1 for testing
|
||||
|
||||
SuperTuxKart, the open source Mario Kart clone featuring open source mascots, is getting very close to releasing a version that supports online multiplayer. On April 5th, the SuperTuxKart blog announced the release of [SuperTuxKart 0.10 Release Candidate 1][11], which needs testing before the final release. Users who want to help test the online and LAN multiplayer options can [download the game from SourceForge][12]. In addition to the new online and LAN features, SuperTuxKart 0.10 features a couple of new tracks to race on: Ravenbridge Mansion replaces the old Mansion track, and Black Forest, which was an add-on track in earlier versions, is now part of the official track set.
|
||||
|
||||
#### In other news
|
||||
|
||||
* [My code is your code: Embracing the power of open sourcing][13]
|
||||
* [FOSS means kids can have a big impact][14]
|
||||
* [Open-source textbooks lighten students’ financial load][15]
|
||||
* [Developing the ultimate open source radio control transmitter][16]
|
||||
* [How does open source tech transform Government?][17]
|
||||
|
||||
|
||||
|
||||
_Thanks, as always, to Opensource.com staff members and moderators for their help this week._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/15/4/news-april-15
|
||||
|
||||
作者:[Joshua Allen Holm (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/holmja
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i
|
||||
[2]: https://www.youtube.com/watch?v=WhWc3b3KhnY (Spring)
|
||||
[3]: https://blender.studio/ (Blender Animation Studio)
|
||||
[4]: https://www.blender.org/press/spring-open-movie/ (Spring Open Movie)
|
||||
[5]: https://cloud.blender.org/p/spring/ (Spring on Blender Cloud)
|
||||
[6]: https://www.linuxjournal.com/ (Linux Journal)
|
||||
[7]: https://www.linuxjournal.com/content/open-source-winning-and-now-its-time-people-win-too (Open Source Is Winning, and Now It's Time for People to Win Too)
|
||||
[8]: https://blog.chef.io/2019/04/02/chef-software-announces-the-enterprise-automation-stack/ (Introducing the New Chef: 100% Open, Always)
|
||||
[9]: https://en.wikipedia.org/wiki/Open-core_model (Wikipedia: Open-core model)
|
||||
[10]: https://www.zdnet.com/article/leading-devops-program-chef-goes-all-in-with-open-source/ (Leading DevOps program Chef goes all in with open source)
|
||||
[11]: http://blog.supertuxkart.net/2019/04/supertuxkart-010-release-candidate-1.html (SuperTuxKart 0.10 Release Candidate 1 Released)
|
||||
[12]: https://sourceforge.net/projects/supertuxkart/files/SuperTuxKart/0.10-rc1/ (SourceForge: SuperTuxKart)
|
||||
[13]: https://www.forbes.com/sites/forbestechcouncil/2019/04/10/my-code-is-your-code-embracing-the-power-of-open-sourcing/ (My code is your code: Embracing the power of open sourcing)
|
||||
[14]: https://www.linuxjournal.com/content/foss-means-kids-can-have-big-impact (FOSS means kids can have a big impact)
|
||||
[15]: https://www.schoolnewsnetwork.org/2019/04/09/open-source-textbooks-lighten-students-financial-load/ (Open-source textbooks lighten students’ financial load)
|
||||
[16]: https://hackaday.com/2019/04/03/developing-the-ultimate-open-source-radio-control-transmitter/ (Developing the ultimate open source radio control transmitter)
|
||||
[17]: https://www.openaccessgovernment.org/open-source-tech-transform/62059/ (How does open source tech transform Government?)
|
@ -0,0 +1,127 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Mercurial for version control)
|
||||
[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
|
||||
[#]: author: (Moshe Zadka (Community Moderator) https://opensource.com/users/moshez)
|
||||
|
||||
Getting started with Mercurial for version control
|
||||
======
|
||||
Learn the basics of Mercurial, a distributed version control system written in Python.
|
||||
![][1]
|
||||
|
||||
[Mercurial][2] is a distributed version control system written in Python. Because it's written in a high-level language, you can write a Mercurial extension with a few Python functions.
|
||||
|
||||
There are several ways to install Mercurial, which are explained in the [official documentation][3]. My favorite one is not there: using **pip**. This is the most amenable way to develop local extensions!
|
||||
|
||||
For now, Mercurial only supports Python 2.7, so you will need to create a Python 2.7 virtual environment:
|
||||
|
||||
|
||||
```
|
||||
python2 -m virtualenv mercurial-env
|
||||
./mercurial-env/bin/pip install mercurial
|
||||
```
|
||||
|
||||
To have a short command, and to satisfy everyone's insatiable need for chemistry-based humor, the command is called **hg**.
|
||||
|
||||
|
||||
```
|
||||
$ source mercurial-env/bin/activate
|
||||
(mercurial-env)$ mkdir test-dir
|
||||
(mercurial-env)$ cd test-dir
|
||||
(mercurial-env)$ hg init
|
||||
(mercurial-env)$ hg status
|
||||
(mercurial-env)$
|
||||
```
|
||||
|
||||
The status is empty since you do not have any files. Add a couple of files:
|
||||
|
||||
|
||||
```
|
||||
(mercurial-env)$ echo 1 > one
|
||||
(mercurial-env)$ echo 2 > two
|
||||
(mercurial-env)$ hg status
|
||||
? one
|
||||
? two
|
||||
(mercurial-env)$ hg addremove
|
||||
adding one
|
||||
adding two
|
||||
(mercurial-env)$ hg commit -m 'Adding stuff'
|
||||
(mercurial-env)$ hg log
|
||||
changeset: 0:1f1befb5d1e9
|
||||
tag: tip
|
||||
user: Moshe Zadka <moshez@zadka.club>
|
||||
date: Fri Mar 29 12:42:43 2019 -0700
|
||||
summary: Adding stuff
|
||||
```
|
||||
|
||||
The **addremove** command is useful: it adds any new files that are not ignored to the list of managed files and removes any files that have been deleted from the working directory.
|
||||
|
||||
As I mentioned, Mercurial extensions are written in Python—they are just regular Python modules.
|
||||
|
||||
This is an example of a short Mercurial extension:
|
||||
|
||||
|
||||
```
|
||||
from mercurial import registrar
from mercurial.i18n import _   # i18n helper: marks strings as translatable

cmdtable = {}                  # table of commands this extension registers
command = registrar.command(cmdtable)

@command('say-hello',
         [('w', 'whom', '', _('Whom to greet'))])   # -w/--whom option
def say_hello(ui, repo, **opts):
    ui.write("hello ", opts['whom'], "\n")
|
||||
```
|
||||
|
||||
A simple way to test it is to put it in a file in the virtual environment manually:
|
||||
|
||||
|
||||
```
|
||||
$ vi ../mercurial-env/lib/python2.7/site-packages/hello_ext.py
|
||||
```
|
||||
|
||||
Then you need to _enable_ the extension. You can start by enabling it only in the current repository:
|
||||
|
||||
|
||||
```
|
||||
$ cat >> .hg/hgrc
|
||||
[extensions]
|
||||
hello_ext =
|
||||
```
|
||||
|
||||
Now, a greeting is possible:
|
||||
|
||||
|
||||
```
|
||||
(mercurial-env)$ hg say-hello --whom world
|
||||
hello world
|
||||
```
|
||||
|
||||
Most extensions will do more useful stuff—possibly even things to do with Mercurial. The **repo** object is a **mercurial.hg.repository** object.
|
||||
|
||||
Refer to the [official documentation][5] for more about Mercurial's API. And visit the [official repo][6] for more examples and inspiration.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/getting-started-mercurial
|
||||
|
||||
作者:[Moshe Zadka (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO
|
||||
[2]: https://www.mercurial-scm.org/
|
||||
[3]: https://www.mercurial-scm.org/wiki/UnixInstall
|
||||
[4]: mailto:moshez@zadka.club
|
||||
[5]: https://www.mercurial-scm.org/wiki/MercurialApi#Repositories
|
||||
[6]: https://www.mercurial-scm.org/repo/hg/file/tip/hgext
|
@ -0,0 +1,292 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Enable (UP) And Disable (DOWN) A Network Interface Port (NIC) In Linux?
|
||||
======
|
||||
|
||||
You may need to run these commands based on your requirements, and I can give you a few examples of where you would need them.

When you add a new network interface, or when you create a virtual network interface from the original physical interface, you may need to use these commands to bring up the new interface. Also, if you made any changes to an interface, or if it is down, then you need to run one of the below commands to bring it up.

This can be done in many ways, and we would like to present the five best methods, which we use in this article:
|
||||
|
||||
  * **`ifconfig Command:`** The ifconfig command is used to configure a network interface. It provides a great deal of information about a NIC.
|
||||
  * **`ifdown/up Command:`** The ifdown command takes a network interface down, and the ifup command brings a network interface up.
|
||||
  * **`ip Command:`** The ip command is used to manage a NIC. It is a replacement for the old and deprecated ifconfig command. It is similar to ifconfig but has many powerful features that aren't available in ifconfig.
|
||||
* **`nmcli Command:`** nmcli is a command-line tool for controlling NetworkManager and reporting network status.
|
||||
* **`nmtui Command:`** nmtui is a curses‐based TUI application for interacting with NetworkManager.
|
||||
|
||||
|
||||
|
||||
The below output shows the available network interface card (NIC) information in my Linux system.
|
||||
|
||||
```
|
||||
# ip a
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
valid_lft forever preferred_lft forever
|
||||
inet6 ::1/128 scope host
|
||||
valid_lft forever preferred_lft forever
|
||||
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
|
||||
link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
|
||||
inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
|
||||
valid_lft 86049sec preferred_lft 86049sec
|
||||
inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
|
||||
valid_lft forever preferred_lft forever
|
||||
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
|
||||
link/ether 08:00:27:30:5d:52 brd ff:ff:ff:ff:ff:ff
|
||||
inet 192.168.1.3/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s8
|
||||
valid_lft 86049sec preferred_lft 86049sec
|
||||
inet6 fe80::32b7:8727:bdf2:2f3/64 scope link noprefixroute
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
||||
|
||||
### 1) How To Bring UP And Bring Down A Network Interface In Linux Using ifconfig Command?
|
||||
|
||||
The ifconfig command is used to configure a network interface.

It is used at boot time to set up interfaces as necessary. It provides a great deal of information about a NIC, and we can use the ifconfig command whenever we need to make changes to a NIC.
|
||||
|
||||
Common Syntax for ifconfig:
|
||||
|
||||
```
|
||||
# ifconfig [NIC_NAME] Down/Up
|
||||
```
|
||||
|
||||
Run the following command to bring down the `enp0s3` interface in Linux. Note that you have to input your own interface name instead of ours.
|
||||
|
||||
```
|
||||
# ifconfig enp0s3 down
|
||||
```
|
||||
|
||||
Yes, the given interface is down now as per the following output.
|
||||
|
||||
```
|
||||
# ip a | grep -A 1 "enp0s3:"
|
||||
2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
|
||||
link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
|
||||
```
|
||||
|
||||
Run the following command to bring up the `enp0s3` interface in Linux.
|
||||
|
||||
```
|
||||
# ifconfig enp0s3 up
|
||||
```
|
||||
|
||||
Yes, the given interface is up now as per the following output.
|
||||
|
||||
```
|
||||
# ip a | grep -A 5 "enp0s3:"
|
||||
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
|
||||
link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
|
||||
inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
|
||||
valid_lft 86294sec preferred_lft 86294sec
|
||||
inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
||||
|
||||
### 2) How To Enable And Disable A Network Interface In Linux Using ifdown/up Command?
|
||||
|
||||
The ifdown command take a network interface down and the ifup command bring a network interface up.
|
||||
|
||||
**Note:** It doesn't work with new interface device names like `enpXXX`.
|
||||
|
||||
Common Syntax for ifdown/ifup:
|
||||
|
||||
```
|
||||
# ifdown [NIC_NAME]
|
||||
|
||||
# ifup [NIC_NAME]
|
||||
```
|
||||
|
||||
Run the following command to bring down the `eth1` interface in Linux.
|
||||
|
||||
```
|
||||
# ifdown eth1
|
||||
```
|
||||
|
||||
Yes, the given interface is down now as per the following output.
|
||||
|
||||
```
|
||||
# ip a | grep -A 3 "eth1:"
|
||||
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
|
||||
link/ether 08:00:27:d5:a0:18 brd ff:ff:ff:ff:ff:ff
|
||||
```
|
||||
|
||||
Run the following command to bring up the `eth1` interface in Linux.
|
||||
|
||||
```
|
||||
# ifup eth1
|
||||
```
|
||||
|
||||
Yes, the given interface is up now as per the following output.
|
||||
|
||||
```
|
||||
# ip a | grep -A 5 "eth1:"
|
||||
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
|
||||
link/ether 08:00:27:d5:a0:18 brd ff:ff:ff:ff:ff:ff
|
||||
inet 192.168.1.7/24 brd 192.168.1.255 scope global eth1
|
||||
inet6 fe80::a00:27ff:fed5:a018/64 scope link tentative dadfailed
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
||||
|
||||
ifup and ifdown don't support the latest `enpXXX` interface device names. I got the below message when I ran the command.
|
||||
|
||||
```
|
||||
# ifdown enp0s8
|
||||
Unknown interface enp0s8
|
||||
```
|
||||
|
||||
### 3) How To Bring UP/Bring Down A Network Interface In Linux Using ip Command?
|
||||
|
||||
The ip command is used to manage a Network Interface Card (NIC). It is a replacement for the old and deprecated ifconfig command on modern Linux systems.
|
||||
|
||||
It is similar to the ifconfig command but has many powerful features that aren't available in ifconfig.
|
||||
|
||||
Common Syntax for ip:
|
||||
|
||||
```
|
||||
# ip link set [NIC_NAME] down/up
|
||||
```
|
||||
|
||||
Run the following command to bring down the `enp0s3` interface in Linux.
|
||||
|
||||
```
|
||||
# ip link set enp0s3 down
|
||||
```
|
||||
|
||||
Yes, the given interface is down now as per the following output.
|
||||
|
||||
```
|
||||
# ip a | grep -A 1 "enp0s3:"
|
||||
2: enp0s3: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
|
||||
link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
|
||||
```
|
||||
|
||||
Run the following command to bring up the `enp0s3` interface in Linux.
|
||||
|
||||
```
|
||||
# ip link set enp0s3 up
|
||||
```
|
||||
|
||||
Yes, the given interface is up now as per the following output.
|
||||
|
||||
```
|
||||
# ip a | grep -A 5 "enp0s3:"
|
||||
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
|
||||
link/ether 08:00:27:c2:e4:e8 brd ff:ff:ff:ff:ff:ff
|
||||
inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
|
||||
valid_lft 86294sec preferred_lft 86294sec
|
||||
inet6 fe80::3899:270f:ae38:b433/64 scope link noprefixroute
|
||||
valid_lft forever preferred_lft forever
|
||||
```
|
||||
|
||||
### 4) How To Enable And Disable A Network Interface In Linux Using nmcli Command?
|
||||
|
||||
nmcli is a command-line tool for controlling NetworkManager and reporting network status.
|
||||
|
||||
It can be utilized as a replacement for nm-applet or other graphical clients. nmcli is used to create, display, edit, delete, activate, and deactivate network connections, as well as to control and display network device status.
|
||||
|
||||
Run the following command to identify the interface name, because the nmcli command performs most of its tasks using the `profile name` instead of the `device name`.
|
||||
|
||||
```
|
||||
# nmcli con show
|
||||
NAME UUID TYPE DEVICE
|
||||
Wired connection 1 3d5afa0a-419a-3d1a-93e6-889ce9c6a18c ethernet enp0s3
|
||||
Wired connection 2 a22154b7-4cc4-3756-9d8d-da5a4318e146 ethernet enp0s8
|
||||
```
|
||||
|
||||
Common Syntax for nmcli:
|
||||
|
||||
```
|
||||
# nmcli con down/up [PROFILE_NAME]
|
||||
```
|
||||
|
||||
Run the following command to bring down the `enp0s3` interface in Linux. You have to give the `profile name` instead of the `device name` to bring it down.
|
||||
|
||||
```
|
||||
# nmcli con down 'Wired connection 1'
|
||||
Connection 'Wired connection 1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
|
||||
```
|
||||
|
||||
Yes, the given interface is down now as per the following output.
|
||||
|
||||
```
|
||||
# nmcli dev status
|
||||
DEVICE TYPE STATE CONNECTION
|
||||
enp0s8 ethernet connected Wired connection 2
|
||||
enp0s3 ethernet disconnected --
|
||||
lo loopback unmanaged --
|
||||
```
|
||||
|
||||
Run the following command to bring up the `enp0s3` interface in Linux. You have to give the `profile name` instead of the `device name` to bring it up.
|
||||
|
||||
```
|
||||
# nmcli con up 'Wired connection 1'
|
||||
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
|
||||
```
|
||||
|
||||
Yes, the given interface is up now as per the following output.
|
||||
|
||||
```
|
||||
# nmcli dev status
|
||||
DEVICE TYPE STATE CONNECTION
|
||||
enp0s8 ethernet connected Wired connection 2
|
||||
enp0s3 ethernet connected Wired connection 1
|
||||
lo loopback unmanaged --
|
||||
```
|
||||
|
||||
### 5) How To Bring UP/Bring Down A Network Interface In Linux Using nmtui Command?
|
||||
|
||||
nmtui is a curses based TUI application for interacting with NetworkManager.
|
||||
|
||||
When starting nmtui, the user is prompted to choose the activity to perform unless it was specified as the first argument.
|
||||
|
||||
Run the following command to launch the nmtui interface. Select “Activate a connection” and hit “OK”.
|
||||
|
||||
```
|
||||
# nmtui
|
||||
```
|
||||
|
||||
[![][2]][2]
|
||||
|
||||
Select the interface you want to bring down, then hit the “Deactivate” button.
|
||||
[![][3]][3]
|
||||
|
||||
To activate the interface again, follow the same procedure as above.
|
||||
[![][4]][4]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/enable-disable-up-down-nic-network-interface-port-linux-using-ifconfig-ifdown-ifup-ip-nmcli-nmtui/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
|
||||
[2]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-1.png
|
||||
[3]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-2.png
|
||||
[4]: https://www.2daygeek.com/wp-content/uploads/2019/04/enable-disable-up-down-nic-network-interface-port-linux-nmtui-3.png
|
@ -0,0 +1,419 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Inter-process communication in Linux: Shared storage)
|
||||
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-storage)
|
||||
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
|
||||
|
||||
Inter-process communication in Linux: Shared storage
|
||||
======
|
||||
Learn how processes synchronize with each other in Linux.
|
||||
![Filing papers and documents][1]
|
||||
|
||||
This is the first article in a series about [interprocess communication][2] (IPC) in Linux. The series uses code examples in C to clarify the following IPC mechanisms:
|
||||
|
||||
* Shared files
|
||||
* Shared memory (with semaphores)
|
||||
* Pipes (named and unnamed)
|
||||
* Message queues
|
||||
* Sockets
|
||||
* Signals
|
||||
|
||||
|
||||
|
||||
This article reviews some core concepts before moving on to the first two of these mechanisms: shared files and shared memory.
|
||||
|
||||
### Core concepts
|
||||
|
||||
A _process_ is a program in execution, and each process has its own address space, which comprises the memory locations that the process is allowed to access. A process has one or more _threads_ of execution, which are sequences of executable instructions: a _single-threaded_ process has just one thread, whereas a _multi-threaded_ process has more than one thread. Threads within a process share various resources, in particular, address space. Accordingly, threads within a process can communicate straightforwardly through shared memory, although some modern languages (e.g., Go) encourage a more disciplined approach such as the use of thread-safe channels. Of interest here is that different processes, by default, do _not_ share memory.
|
||||
|
||||
There are various ways to launch processes that then communicate, and two ways dominate in the examples that follow:
|
||||
|
||||
* A terminal is used to start one process, and perhaps a different terminal is used to start another.
|
||||
* The system function **fork** is called within one process (the parent) to spawn another process (the child).
|
||||
|
||||
|
||||
|
||||
The first examples take the terminal approach. The [code examples][3] are available in a ZIP file on my website.
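For contrast, here is a minimal sketch of the **fork** approach (my illustration, not part of the article's ZIP bundle): the parent spawns a child, and both processes execute the code after the call, distinguished by **fork**'s return value.

```
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
  pid_t pid = fork();                            /* spawn a child process */
  if (pid < 0) return -1;                        /* fork failed */
  if (0 == pid)                                  /* zero: this is the child */
    printf("Child %d running...\n", getpid());
  else {                                         /* positive: this is the parent */
    printf("Parent %d waiting...\n", getpid());
    wait(NULL);                                  /* reap the child */
  }
  return 0;
}
```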
|
||||
|
||||
### Shared files
|
||||
|
||||
Programmers are all too familiar with file access, including the many pitfalls (non-existent files, bad file permissions, and so on) that beset the use of files in programs. Nonetheless, shared files may be the most basic IPC mechanism. Consider the relatively simple case in which one process ( _producer_ ) creates and writes to a file, and another process ( _consumer_ ) reads from this same file:
|
||||
|
||||
|
||||
```
|
||||
writes +-----------+ reads
|
||||
producer-------->| disk file |<-------consumer
|
||||
+-----------+
|
||||
```
|
||||
|
||||
The obvious challenge in using this IPC mechanism is that a _race condition_ might arise: the producer and the consumer might access the file at exactly the same time, thereby making the outcome indeterminate. To avoid a race condition, the file must be locked in a way that prevents a conflict between a _write_ operation and any other operation, whether a _read_ or a _write_. The locking API in the standard system library can be summarized as follows:
|
||||
|
||||
* A producer should gain an exclusive lock on the file before writing to the file. An _exclusive_ lock can be held by one process at most, which rules out a race condition because no other process can access the file until the lock is released.
|
||||
* A consumer should gain at least a shared lock on the file before reading from the file. Multiple _readers_ can hold a _shared_ lock at the same time, but no _writer_ can access a file when even a single _reader_ holds a shared lock.
|
||||
|
||||
|
||||
|
||||
A shared lock promotes efficiency. If one process is just reading a file and not changing its contents, there is no reason to prevent other processes from doing the same. Writing, however, clearly demands exclusive access to a file.
|
||||
|
||||
The standard system library includes a utility function named **fcntl** that can be used to inspect and manipulate both exclusive and shared locks on a file. The function works through a _file descriptor_, a non-negative integer value that, within a process, identifies a file. (Different file descriptors in different processes may identify the same physical file.) For file locking, Linux also provides the library function **flock**, a thin wrapper around **fcntl**. The first example uses the **fcntl** function to expose API details.
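Since the examples below stick to **fcntl**, here is a hedged sketch of the same exclusive-lock idea expressed with **flock** instead (my illustration; the file name matches the examples, the message is arbitrary):

```
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main() {
  int fd = open("data.dat", O_RDWR | O_CREAT, 0666);
  if (fd < 0) { perror("open"); return -1; }
  if (flock(fd, LOCK_EX) == 0) {       /* block until the exclusive lock is granted */
    write(fd, "locked write\n", 13);   /* safe: no other flock holder can interfere */
    flock(fd, LOCK_UN);                /* release the lock */
  }
  close(fd);                           /* closing the descriptor also releases it */
  return 0;
}
```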
|
||||
|
||||
#### Example 1. The _producer_ program
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
#include <string.h>
|
||||
|
||||
#define FileName "data.dat"
|
||||
#define DataString "Now is the winter of our discontent\nMade glorious summer by this sun of York\n"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
perror(msg);
exit(-1); /* EXIT_FAILURE */
|
||||
}
|
||||
|
||||
int main() {
|
||||
struct flock lock;
|
||||
lock.l_type = F_WRLCK; /* read/write (exclusive versus shared) lock */
|
||||
lock.l_whence = SEEK_SET; /* base for seek offsets */
|
||||
lock.l_start = 0; /* 1st byte in file */
|
||||
lock.l_len = 0; /* 0 here means 'until EOF' */
|
||||
lock.l_pid = getpid(); /* process id */
|
||||
|
||||
int fd; /* file descriptor to identify a file within a process */
|
||||
if ((fd = open(FileName, O_RDWR | O_CREAT, 0666)) < 0) /* -1 signals an error */
|
||||
report_and_exit("open failed...");
|
||||
|
||||
if (fcntl(fd, F_SETLK, &lock) < 0) /** F_SETLK doesn't block, F_SETLKW does **/
|
||||
report_and_exit("fcntl failed to get lock...");
|
||||
else {
|
||||
write(fd, DataString, strlen(DataString)); /* populate data file */
fprintf(stderr, "Process %d has written to data file...\n", lock.l_pid);
|
||||
}
|
||||
|
||||
/* Now release the lock explicitly. */
|
||||
lock.l_type = F_UNLCK;
|
||||
if (fcntl(fd, F_SETLK, &lock) < 0)
|
||||
report_and_exit("explicit unlocking failed...");
|
||||
|
||||
close(fd); /* close the file: would unlock if needed */
|
||||
return 0; /* terminating the process would unlock as well */
|
||||
}
|
||||
```
|
||||
|
||||
The main steps in the _producer_ program above can be summarized as follows:
|
||||
|
||||
  * The program declares a variable of type **struct flock**, which represents a lock, and initializes the structure's five fields. The first initialization: `lock.l_type = F_WRLCK; /* exclusive lock */` makes the lock an exclusive (_read-write_) rather than a shared (_read-only_) lock. If the _producer_ gains the lock, then no other process will be able to write or read the file until the _producer_ releases the lock, either explicitly with the appropriate call to **fcntl** or implicitly by closing the file. (When the process terminates, any opened files would be closed automatically, thereby releasing the lock.)
|
||||
* The program then initializes the remaining fields. The chief effect is that the _entire_ file is to be locked. However, the locking API allows only designated bytes to be locked. For example, if the file contains multiple text records, then a single record (or even part of a record) could be locked and the rest left unlocked.
|
||||
  * The first call to **fcntl**: `if (fcntl(fd, F_SETLK, &lock) < 0)` tries to lock the file exclusively, checking whether the call succeeded. In general, the **fcntl** function returns **-1** (hence, less than zero) to indicate failure. The second argument **F_SETLK** means that the call to **fcntl** does _not_ block: the function returns immediately, either granting the lock or indicating failure. If the flag **F_SETLKW** (the **W** at the end is for _wait_) were used instead, the call to **fcntl** would block until gaining the lock was possible. In the calls to **fcntl**, the first argument **fd** is the file descriptor, the second argument specifies the action to be taken (in this case, **F_SETLK** for setting the lock), and the third argument is the address of the lock structure (in this case, **&lock**).
|
||||
* If the _producer_ gains the lock, the program writes two text records to the file.
|
||||
  * After writing to the file, the _producer_ changes the lock structure's **l_type** field to the _unlock_ value: `lock.l_type = F_UNLCK;` and calls **fcntl** to perform the unlocking operation. The program finishes up by closing the file and exiting.
|
||||
|
||||
|
||||
|
||||
#### Example 2. The _consumer_ program
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
|
||||
#define FileName "data.dat"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
perror(msg);
exit(-1); /* EXIT_FAILURE */
|
||||
}
|
||||
|
||||
int main() {
|
||||
struct flock lock;
|
||||
lock.l_type = F_WRLCK; /* read/write (exclusive) lock */
|
||||
lock.l_whence = SEEK_SET; /* base for seek offsets */
|
||||
lock.l_start = 0; /* 1st byte in file */
|
||||
lock.l_len = 0; /* 0 here means 'until EOF' */
|
||||
lock.l_pid = getpid(); /* process id */
|
||||
|
||||
int fd; /* file descriptor to identify a file within a process */
|
||||
if ((fd = open(FileName, O_RDONLY)) < 0) /* -1 signals an error */
|
||||
report_and_exit("open to read failed...");
|
||||
|
||||
/* If the file is write-locked, we can't continue. */
|
||||
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
|
||||
if (lock.l_type != F_UNLCK)
|
||||
report_and_exit("file is still write locked...");
|
||||
|
||||
lock.l_type = F_RDLCK; /* prevents any writing during the reading */
|
||||
if (fcntl(fd, F_SETLK, &lock) < 0)
|
||||
report_and_exit("can't get a read-only lock...");
|
||||
|
||||
/* Read the bytes (they happen to be ASCII codes) one at a time. */
|
||||
int c; /* buffer for read bytes */
|
||||
while (read(fd, &c, 1) > 0) /* 0 signals EOF */
|
||||
write(STDOUT_FILENO, &c, 1); /* write one byte to the standard output */
|
||||
|
||||
/* Release the lock explicitly. */
|
||||
lock.l_type = F_UNLCK;
|
||||
if (fcntl(fd, F_SETLK, &lock) < 0)
|
||||
report_and_exit("explicit unlocking failed...");
|
||||
|
||||
close(fd);
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
The _consumer_ program is more complicated than necessary to highlight features of the locking API. In particular, the _consumer_ program first checks whether the file is exclusively locked and only then tries to gain a shared lock. The relevant code is:
|
||||
|
||||
|
||||
```
|
||||
lock.l_type = F_WRLCK;
|
||||
...
|
||||
fcntl(fd, F_GETLK, &lock); /* sets lock.l_type to F_UNLCK if no write lock */
|
||||
if (lock.l_type != F_UNLCK)
|
||||
report_and_exit("file is still write locked...");
|
||||
```
|
||||
|
||||
The **F_GETLK** operation specified in the **fcntl** call checks for a lock, in this case, an exclusive lock given as **F_WRLCK** in the first statement above. If the specified lock does not exist, then the **fcntl** call automatically changes the lock type field to **F_UNLCK** to indicate this fact. If the file is exclusively locked, the _consumer_ terminates. (A more robust version of the program might have the _consumer_ **sleep** a bit and try again several times.)
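Here is a hedged sketch of that sleep-and-try-again idea (my addition, not part of the article's code bundle): a helper that polls for the shared lock a few times before giving up.

```
#include <fcntl.h>
#include <unistd.h>

/* Try to gain a shared (read) lock, sleeping and retrying a few times.
   Returns 0 on success, -1 if the file stays write-locked (a sketch). */
int get_read_lock_with_retry(int fd, struct flock* lock, int attempts) {
  lock->l_type = F_RDLCK;
  while (attempts-- > 0) {
    if (fcntl(fd, F_SETLK, lock) >= 0) return 0; /* got the shared lock */
    sleep(1);                                    /* wait a bit, then retry */
  }
  return -1;                                     /* give up */
}
```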
|
||||
|
||||
If the file is not currently locked, then the _consumer_ tries to gain a shared ( _read-only_ ) lock ( **F_RDLCK** ). To shorten the program, the **F_GETLK** call to **fcntl** could be dropped because the **F_RDLCK** call would fail if a _read-write_ lock already were held by some other process. Recall that a _read-only_ lock does prevent any other process from writing to the file, but allows other processes to read from the file. In short, a _shared_ lock can be held by multiple processes. After gaining a shared lock, the _consumer_ program reads the bytes one at a time from the file, prints the bytes to the standard output, releases the lock, closes the file, and terminates.
|
||||
|
||||
Here is the output from the two programs launched from the same terminal with **%** as the command line prompt:
|
||||
|
||||
|
||||
```
|
||||
% ./producer
|
||||
Process 29255 has written to data file...
|
||||
|
||||
% ./consumer
|
||||
Now is the winter of our discontent
|
||||
Made glorious summer by this sun of York
|
||||
```
|
||||
|
||||
In this first code example, the data shared through IPC is text: two lines from Shakespeare's play _Richard III_. Yet, the shared file's contents could be voluminous, arbitrary bytes (e.g., a digitized movie), which makes file sharing an impressively flexible IPC mechanism. The downside is that file access is relatively slow, whether the access involves reading or writing. As always, programming comes with tradeoffs. The next example shows the upside of IPC through shared memory rather than shared files, with a corresponding boost in performance.
|
||||
|
||||
### Shared memory
|
||||
|
||||
Linux systems provide two separate APIs for shared memory: the legacy System V API and the more recent POSIX one. These APIs should never be mixed in a single application, however. A downside of the POSIX approach is that features are still in development and dependent upon the installed kernel version, which impacts code portability. For example, the POSIX API, by default, implements shared memory as a _memory-mapped file_ : for a shared memory segment, the system maintains a _backing file_ with corresponding contents. Shared memory under POSIX can be configured without a backing file, but this may impact portability. My example uses the POSIX API with a backing file, which combines the benefits of memory access (speed) and file storage (persistence).
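As an aside, one way to get shared memory without a named backing file, at least between related processes, is an anonymous mapping. This is a sketch of mine rather than the article's approach, and note that **MAP_ANONYMOUS** is a widely supported extension rather than part of the original POSIX standard:

```
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
  /* Anonymous shared mapping: no backing file; visible to children after fork. */
  char* shared = mmap(NULL, 512, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  if (shared == MAP_FAILED) return -1;
  if (fork() == 0) {                   /* child writes to the segment */
    strcpy(shared, "hello from the child");
    return 0;
  }
  wait(NULL);                          /* parent waits for the child, then reads */
  printf("%s\n", shared);
  munmap(shared, 512);
  return 0;
}
```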
|
||||
|
||||
The shared-memory example has two programs, named _memwriter_ and _memreader_ , and uses a _semaphore_ to coordinate their access to the shared memory. Whenever shared memory comes into the picture with a _writer_ , whether in multi-processing or multi-threading, so does the risk of a memory-based race condition; hence, the semaphore is used to coordinate (synchronize) access to the shared memory.
|
||||
|
||||
The _memwriter_ program should be started first in its own terminal. The _memreader_ program then can be started (within a dozen seconds) in its own terminal. The output from the _memreader_ is:
|
||||
|
||||
|
||||
```
|
||||
`This is the way the world ends...`
|
||||
```
|
||||
|
||||
Each source file has documentation at the top explaining the link flags to be included during compilation.
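The header _shmem.h_ itself is not reproduced in the article. A minimal version consistent with the names used in both programs might look like the following; the byte size, permissions, backing-file name, and message text are taken from the article's code and output, while the semaphore name is an assumption:

```
/* shmem.h -- a minimal sketch of the shared header; SemaphoreName is assumed. */
#ifndef SHMEM_H
#define SHMEM_H

#define BackingFile   "shMemEx"      /* appears as /dev/shm/shMemEx on Linux */
#define ByteSize      512            /* "a modest 512 bytes" */
#define AccessPerms   0644           /* owner read/write, group/others read */
#define SemaphoreName "mySemaphore"  /* assumption: any unique non-empty name */
#define MemContents   "This is the way the world ends...\n"

#endif
```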
|
||||
|
||||
Let's start with a review of how semaphores work as a synchronization mechanism. A general semaphore also is called a _counting semaphore_ , as it has a value (typically initialized to zero) that can be incremented. Consider a shop that rents bicycles, with a hundred of them in stock, with a program that clerks use to do the rentals. Every time a bike is rented, the semaphore is incremented by one; when a bike is returned, the semaphore is decremented by one. Rentals can continue until the value hits 100 but then must halt until at least one bike is returned, thereby decrementing the semaphore to 99.
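A toy sketch of that rental counter with a POSIX semaphore (my illustration; POSIX semaphores naturally count availability _down_, so this version initializes to 100 and decrements on each rental, expressing the same constraint the other way around):

```
/* Compile with: gcc -o bikes bikes.c -lpthread (sketch, not from the article) */
#include <semaphore.h>
#include <stdio.h>

int main() {
  sem_t bikes;
  sem_init(&bikes, 0, 100); /* 100 bikes in stock; 0 = not shared across processes */
  sem_wait(&bikes);         /* rent a bike: decrement, blocks when none are left */
  /* ... the customer rides around ... */
  sem_post(&bikes);         /* return the bike: increment */
  int in_stock;
  sem_getvalue(&bikes, &in_stock);
  printf("bikes available: %d\n", in_stock);
  sem_destroy(&bikes);
  return 0;
}
```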
|
||||
|
||||
A _binary semaphore_ is a special case requiring only two values: 0 and 1. In this situation, a semaphore acts as a _mutex_ : a mutual exclusion construct. The shared-memory example uses a semaphore as a mutex. When the semaphore's value is 0, the _memwriter_ alone can access the shared memory. After writing, this process increments the semaphore's value, thereby allowing the _memreader_ to read the shared memory.
|
||||
|
||||
#### Example 3. Source code for the _memwriter_ process
|
||||
|
||||
|
||||
```
|
||||
/** Compilation: gcc -o memwriter memwriter.c -lrt -lpthread **/
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <sys/mman.h>
|
||||
#include <sys/stat.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
#include <semaphore.h>
|
||||
#include <string.h>
|
||||
#include "shmem.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
perror(msg);
exit(-1);
|
||||
}
|
||||
|
||||
int main() {
|
||||
int fd = shm_open(BackingFile, /* name from shmem.h */
|
||||
O_RDWR | O_CREAT, /* read/write, create if needed */
|
||||
AccessPerms); /* access permissions (0644) */
|
||||
if (fd < 0) report_and_exit("Can't open shared mem segment...");
|
||||
|
||||
ftruncate(fd, ByteSize); /* get the bytes */
|
||||
|
||||
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
|
||||
ByteSize, /* how many bytes */
|
||||
PROT_READ | PROT_WRITE, /* access protections */
|
||||
MAP_SHARED, /* mapping visible to other processes */
|
||||
fd, /* file descriptor */
|
||||
0); /* offset: start at 1st byte */
|
||||
if ((caddr_t) -1 == memptr) report_and_exit("Can't get segment...");
|
||||
|
||||
[fprintf][7](stderr, "shared mem address: %p [0..%d]\n", memptr, ByteSize - 1);
|
||||
[fprintf][7](stderr, "backing file: /dev/shm%s\n", BackingFile );
|
||||
|
||||
/* semaphore code to lock the shared mem */
|
||||
sem_t* semptr = sem_open(SemaphoreName, /* name */
|
||||
O_CREAT, /* create the semaphore */
|
||||
AccessPerms, /* protection perms */
|
||||
0); /* initial value */
|
||||
if (semptr == (void*) -1) report_and_exit("sem_open");
|
||||
|
||||
strcpy(memptr, MemContents); /* copy some ASCII bytes to the segment */
|
||||
|
||||
/* increment the semaphore so that memreader can read */
|
||||
if (sem_post(semptr) < 0) report_and_exit("sem_post");
|
||||
|
||||
sleep(12); /* give reader a chance */
|
||||
|
||||
/* clean up */
|
||||
munmap(memptr, ByteSize); /* unmap the storage */
|
||||
close(fd);
|
||||
sem_close(semptr);
|
||||
shm_unlink(BackingFile); /* unlink from the backing file */
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
Here's an overview of how the _memwriter_ and _memreader_ programs communicate through shared memory:
|
||||
|
||||
  * The _memwriter_ program, shown above, calls the **shm_open** function to get a file descriptor for the backing file that the system coordinates with the shared memory. At this point, no memory has been allocated. The subsequent call to the misleadingly named function **ftruncate**: `ftruncate(fd, ByteSize); /* get the bytes */` allocates **ByteSize** bytes, in this case, a modest 512 bytes. The _memwriter_ and _memreader_ programs access the shared memory only, not the backing file. The system is responsible for synchronizing the shared memory and the backing file.
|
||||
  * The _memwriter_ then calls the **mmap** function: `caddr_t memptr = mmap(NULL, ByteSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);` (the same call shown with per-argument comments in the source above) to get a pointer to the shared memory. (The _memreader_ makes a similar call.) The pointer type **caddr_t** starts with a **c** for **calloc**, a system function that initializes dynamically allocated storage to zeroes. The _memwriter_ uses the **memptr** for the later _write_ operation, using the library **strcpy** (string copy) function.
|
||||
  * At this point, the _memwriter_ is ready for writing, but it first creates a semaphore to ensure exclusive access to the shared memory. A race condition would occur if the _memwriter_ were writing while the _memreader_ was reading. If the call to **sem_open** succeeds: `sem_t* semptr = sem_open(SemaphoreName, O_CREAT, AccessPerms, 0);` then the writing can proceed. The **SemaphoreName** (any unique non-empty name will do) identifies the semaphore in both the _memwriter_ and the _memreader_. The initial value of zero gives the semaphore's creator, in this case the _memwriter_, the right to proceed, in this case to the _write_ operation.
|
||||
  * After writing, the _memwriter_ increments the semaphore value to 1: `if (sem_post(semptr) < 0) ..` with a call to the **sem_post** function. Incrementing the semaphore releases the mutex lock and enables the _memreader_ to perform its _read_ operation. For good measure, the _memwriter_ also unmaps the shared memory from the _memwriter_ address space: `munmap(memptr, ByteSize); /* unmap the storage */` This bars the _memwriter_ from further access to the shared memory.
|
||||
|
||||
|
||||
|
||||
#### Example 4. Source code for the _memreader_ process
|
||||
|
||||
|
||||
```
|
||||
/** Compilation: gcc -o memreader memreader.c -lrt -lpthread **/
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <sys/mman.h>
|
||||
#include <sys/stat.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
#include <semaphore.h>
|
||||
#include <string.h>
|
||||
#include "shmem.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
perror(msg);
exit(-1);
|
||||
}
|
||||
|
||||
int main() {
|
||||
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* empty to begin */
|
||||
if (fd < 0) report_and_exit("Can't get file descriptor...");
|
||||
|
||||
/* get a pointer to memory */
|
||||
caddr_t memptr = mmap(NULL, /* let system pick where to put segment */
|
||||
ByteSize, /* how many bytes */
|
||||
PROT_READ | PROT_WRITE, /* access protections */
|
||||
MAP_SHARED, /* mapping visible to other processes */
|
||||
fd, /* file descriptor */
|
||||
0); /* offset: start at 1st byte */
|
||||
if ((caddr_t) -1 == memptr) report_and_exit("Can't access segment...");
|
||||
|
||||
/* create a semaphore for mutual exclusion */
|
||||
sem_t* semptr = sem_open(SemaphoreName, /* name */
|
||||
O_CREAT, /* create the semaphore */
|
||||
AccessPerms, /* protection perms */
|
||||
0); /* initial value */
|
||||
if (semptr == (void*) -1) report_and_exit("sem_open");
|
||||
|
||||
/* use semaphore as a mutex (lock) by waiting for writer to increment it */
|
||||
if (!sem_wait(semptr)) { /* wait until semaphore != 0 */
|
||||
int i;
|
||||
for (i = 0; i < strlen(MemContents); i++)
|
||||
write(STDOUT_FILENO, memptr + i, 1); /* one byte at a time */
|
||||
sem_post(semptr);
|
||||
}
|
||||
|
||||
/* cleanup */
|
||||
munmap(memptr, ByteSize);
|
||||
close(fd);
|
||||
sem_close(semptr);
|
||||
shm_unlink(BackingFile); /* unlink from the backing file */
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
In both the _memwriter_ and _memreader_ programs, the shared-memory functions of main interest are **shm_open** and **mmap** : on success, the first call returns a file descriptor for the backing file, which the second call then uses to get a pointer to the shared memory segment. The calls to **shm_open** are similar in the two programs except that the _memwriter_ program creates the shared memory, whereas the _memreader_ only accesses this already created memory:
|
||||
|
||||
|
||||
```
|
||||
int fd = shm_open(BackingFile, O_RDWR | O_CREAT, AccessPerms); /* memwriter */
|
||||
int fd = shm_open(BackingFile, O_RDWR, AccessPerms); /* memreader */
|
||||
```
|
||||
|
||||
With a file descriptor in hand, the calls to **mmap** are the same:
|
||||
|
||||
|
||||
```
|
||||
`caddr_t memptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);`
|
||||
```
|
||||
|
||||
The first argument to **mmap** is **NULL** , which means that the system determines where to allocate the memory in virtual address space. It's possible (but tricky) to specify an address instead. The **MAP_SHARED** flag indicates that the allocated memory is shareable among processes, and the last argument (in this case, zero) means that the offset for the shared memory should be the first byte. The **size** argument specifies the number of bytes to be allocated (in this case, 512), and the protection argument indicates that the shared memory can be written and read.
|
||||
|
||||
When the _memwriter_ program executes successfully, the system creates and maintains the backing file; on my system, the file is _/dev/shm/shMemEx_ , with _shMemEx_ as my name (given in the header file _shmem.h_ ) for the shared storage. In the current version of the _memwriter_ and _memreader_ programs, the statement:
|
||||
|
||||
|
||||
```
|
||||
`shm_unlink(BackingFile); /* removes backing file */`
|
||||
```
|
||||
|
||||
removes the backing file. If the **shm_unlink** statement is omitted, then the backing file persists after the program terminates.
|
||||
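As a quick way to see this persistence in action, here is a minimal shell check, assuming a typical Linux system where POSIX shared memory is backed by a tmpfs mounted at _/dev/shm_ (as on my system, described above):

```
# Run the memwriter with the shm_unlink call commented out, then:
ls -l /dev/shm/shMemEx    # the backing file is still there
rm /dev/shm/shMemEx       # manual cleanup, equivalent to shm_unlink
```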
|
||||
The _memreader_ , like the _memwriter_ , accesses the semaphore through its name in a call to **sem_open**. But the _memreader_ then goes into a wait state until the _memwriter_ increments the semaphore, whose initial value is 0:
|
||||
|
||||
|
||||
```
|
||||
`if (!sem_wait(semptr)) { /* wait until semaphore != 0 */`
|
||||
```
|
||||
|
||||
Once the wait is over, the _memreader_ reads the ASCII bytes from the shared memory, cleans up, and terminates.
|
||||
|
||||
The shared-memory API includes operations to explicitly synchronize the shared memory segment with the backing file. These operations have been omitted from the example to reduce clutter and keep the focus on the memory-sharing and semaphore code.
|
||||
|
||||
The _memwriter_ and _memreader_ programs are likely to execute without inducing a race condition even if the semaphore code is removed: the _memwriter_ creates the shared memory segment and writes immediately to it; the _memreader_ cannot even access the shared memory until this has been created. However, best practice requires that shared-memory access be synchronized whenever a _write_ operation is in the mix, and the semaphore API is important enough to be highlighted in a code example.
|
||||
|
||||
### Wrapping up
|
||||
|
||||
The shared-file and shared-memory examples show how processes can communicate through _shared storage_, files in one case and memory segments in the other. The APIs for both approaches are relatively straightforward. Do these approaches have a common downside? Modern applications often deal with streaming data, indeed with massively large streams of it. Neither the shared-file nor the shared-memory approach is well suited to massive data streams. Channels of one type or another are better suited, so Part 2 introduces channels and message queues, again with code examples in C.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/interprocess-communication-linux-storage
|
||||
|
||||
作者:[Marty Kalin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mkalindepauledu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ (Filing papers and documents)
|
||||
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
|
||||
[3]: http://condor.depaul.edu/mkalin
|
||||
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
|
||||
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
|
||||
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
|
||||
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
|
||||
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
|
211
sources/tech/20190415 Kubernetes on Fedora IoT with k3s.md
Normal file
@ -0,0 +1,211 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Kubernetes on Fedora IoT with k3s)
|
||||
[#]: via: (https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/)
|
||||
[#]: author: (Lennart Jern https://fedoramagazine.org/author/lennartj/)
|
||||
|
||||
Kubernetes on Fedora IoT with k3s
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article [How to turn on an LED with Fedora IoT][2]. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.
|
||||
|
||||
Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.
|
||||
|
||||
### Why Kubernetes?
|
||||
|
||||
While Kubernetes is all the rage in the cloud, it may not be immediately obvious why you would run it on a small single-board computer. But there are certainly reasons for doing it. First of all, it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are [tons of applications][3] that come pre-packaged for running in Kubernetes clusters. Not to mention the large community that can provide help if you ever get stuck.
|
||||
|
||||
Last but not least, container orchestration may actually make things easier, even at the small scale of a home lab. This may not be apparent while tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn’t matter if it’s a single node Raspberry Pi cluster or a large scale machine learning farm.
|
||||
|
||||
#### K3s – a lightweight Kubernetes
|
||||
|
||||
A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is [k3s][4] – a lightweight Kubernetes distribution.
|
||||
|
||||
K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, k3s should be able to run with just 512 MB of RAM, perfect for a small single-board computer!
|
||||
|
||||
### What you will need
|
||||
|
||||
1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide [here][5]. One machine is enough but two will allow you to test adding more nodes to the cluster.
|
||||
2. [Configure the firewall][6] to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
|
||||
|
||||
|
||||
|
||||
### Install k3s
|
||||
|
||||
Installing k3s is very easy. Simply run the installation script:
|
||||
|
||||
```
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:
|
||||
|
||||
```
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
Note that there are several options that can be passed to the installation script through environment variables. These can be found in the [documentation][7]. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.
|
||||
|
||||
While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and avoid running the server part of k3s:
|
||||
|
||||
```
|
||||
curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
|
||||
K3S_TOKEN=XXX sh -
|
||||
```
|
||||
|
||||
The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.
|
||||
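For example, on the first node, a command like the following should print the token to use as K3S_TOKEN (the path comes from the description above):

```
# Run on the first node to obtain the join token
sudo cat /var/lib/rancher/k3s/server/node-token
```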
|
||||
### Deploy some containers
|
||||
|
||||
Now that we have a Kubernetes cluster, what can we actually do with it? Let’s start by deploying a simple web server.
|
||||
|
||||
```
|
||||
kubectl create deployment my-server --image nginx
|
||||
```
|
||||
|
||||
This will create a [Deployment][8] named “my-server” from the container image “nginx” (defaulting to Docker Hub as the registry and the latest tag). You can see the Pod created by running the following command.
|
||||
|
||||
```
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
In order to access the nginx server running in the pod, first expose the Deployment through a [Service][9]. The following command will create a Service with the same name as the deployment.
|
||||
|
||||
```
|
||||
kubectl expose deployment my-server --port 80
|
||||
```
|
||||
|
||||
The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to _curl_ the nginx server just by specifying _my-server_ (the name of the Service). See the example below for how to do this.
|
||||
|
||||
```
|
||||
# Start a pod and run bash interactively in it
|
||||
kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
|
||||
# Wait for the bash prompt to appear
|
||||
curl my-server
|
||||
# You should get the "Welcome to nginx!" page as output
|
||||
```
|
||||
|
||||
### Ingress controller and external IP
|
||||
|
||||
By default, a Service only gets a ClusterIP (accessible only inside the cluster), but you can also request an external IP for the service by setting its type to [LoadBalancer][10]. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an [Ingress][11], and this is what we will do. Ingresses also provide additional features, such as TLS encryption of the traffic, without having to modify your application.
|
||||
|
||||
Kubernetes needs an ingress controller to make the Ingress resources work, and k3s includes [Traefik][12] for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The [documentation][13] describes the service like this:
|
||||
|
||||
> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
|
||||
>
|
||||
> k3s README
|
||||
|
||||
The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.
|
||||
|
||||
```
|
||||
$ kubectl get svc --all-namespaces
|
||||
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
default kubernetes ClusterIP 10.43.0.1 443/TCP 33d
|
||||
default my-server ClusterIP 10.43.174.38 80/TCP 30m
|
||||
kube-system kube-dns ClusterIP 10.43.0.10 53/UDP,53/TCP,9153/TCP 33d
|
||||
kube-system traefik LoadBalancer 10.43.145.104 10.0.0.8 80:31596/TCP,443:31539/TCP 33d
|
||||
```
|
||||
|
||||
Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.
|
||||
|
||||
### Route incoming requests
|
||||
|
||||
Let’s create an Ingress that routes requests to our web server based on the host header. This example uses [xip.io][14] to avoid having to set up DNS records. It works by including the IP address in the domain name: any subdomain of 10.0.0.8.xip.io resolves to the IP 10.0.0.8. In other words, my-server.10.0.0.8.xip.io is used to reach the ingress controller in the cluster. You can try this right now (with your own IP instead of 10.0.0.8). Without an ingress in place, you should reach the “default backend”, which is just a page showing “404 page not found”.
|
||||
|
||||
We can tell the ingress controller to route requests to our web server Service with the following Ingress.
|
||||
|
||||
```
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: my-server
|
||||
spec:
|
||||
rules:
|
||||
- host: my-server.10.0.0.8.xip.io
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: my-server
|
||||
servicePort: 80
|
||||
```
|
||||
|
||||
Save the above snippet in a file named _my-ingress.yaml_ and add it to the cluster by running this command:
|
||||
|
||||
```
|
||||
kubectl apply -f my-ingress.yaml
|
||||
```
|
||||
|
||||
You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to the Service and port defined as backend in the Ingress (my-server and 80 in this case).
|
||||
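A quick check from any machine that can reach the cluster might look like this (the hostname assumes the example IP above; substitute your own):

```
# Request the page through the ingress controller
curl http://my-server.10.0.0.8.xip.io
# Expected: the "Welcome to nginx!" HTML
```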
|
||||
### What about IoT then?
|
||||
|
||||
Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors, and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights, or blinds, or simply to blink LEDs.
|
||||
|
||||
In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what’s going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?
|
||||
|
||||
The simple answer is labels. You can label the nodes according to capabilities, like this:
|
||||
|
||||
```
|
||||
kubectl label nodes <node-name> <label-key>=<label-value>
|
||||
# Example
|
||||
kubectl label nodes node2 camera=available
|
||||
```
|
||||
|
||||
Once they are labeled, it is easy to select suitable nodes for your workload with [nodeSelectors][15]. The final piece of the puzzle, if you want to run your Pods on _all_ suitable nodes, is to use [DaemonSets][16] instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor, and use nodeSelectors to make sure they only run on nodes with the proper hardware.
|
||||
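To make this concrete, here is a minimal sketch of such a DaemonSet. It assumes the camera=available label from the example above; the name and image are hypothetical placeholders, not a real application:

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: camera-collector            # hypothetical name
spec:
  selector:
    matchLabels:
      app: camera-collector
  template:
    metadata:
      labels:
        app: camera-collector
    spec:
      nodeSelector:
        camera: available           # only run on nodes labeled earlier
      containers:
      - name: collector
        image: example/camera-collector:latest   # hypothetical image
```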
|
||||
The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You don’t need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.
|
||||
|
||||
#### Utilize spare resources
|
||||
|
||||
With the cluster up and running, collecting data, and controlling your lights and climate, you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.
|
||||
|
||||
You shouldn’t have to worry about where exactly those resources are or calculate if there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.
|
||||
|
||||
Why not run your own [NextCloud][17] instance? Or maybe [gitea][18]? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross compile them on your main computer if you can do it natively in the cluster?
|
||||
|
||||
The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don’t have to. However, in order to help Kubernetes make reasonable decisions you should definitely add [resource requests][19] to your workloads.
|
||||
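As a sketch, a resource request is just a small fragment of a container spec; the values here are illustrative only and should be tuned to your workload:

```
# Part of a Pod/Deployment container spec
resources:
  requests:
    memory: "64Mi"   # illustrative values only
    cpu: "100m"
```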
|
||||
### Summary
|
||||
|
||||
While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.
|
||||
|
||||
Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/kubernetes-on-fedora-iot-with-k3s/
|
||||
|
||||
作者:[Lennart Jern][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/lennartj/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/k3s-1-816x345.png
|
||||
[2]: https://fedoramagazine.org/turnon-led-fedora-iot/
|
||||
[3]: https://hub.helm.sh/
|
||||
[4]: https://k3s.io
|
||||
[5]: https://docs.fedoraproject.org/en-US/iot/getting-started/
|
||||
[6]: https://github.com/rancher/k3s#open-ports--network-security
|
||||
[7]: https://github.com/rancher/k3s#systemd
|
||||
[8]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
|
||||
[9]: https://kubernetes.io/docs/concepts/services-networking/service/
|
||||
[10]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
|
||||
[11]: https://kubernetes.io/docs/concepts/services-networking/ingress/
|
||||
[12]: https://traefik.io/
|
||||
[13]: https://github.com/rancher/k3s/blob/master/README.md#service-load-balancer
|
||||
[14]: http://xip.io/
|
||||
[15]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
|
||||
[16]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
|
||||
[17]: https://nextcloud.com/
|
||||
[18]: https://gitea.io/en-us/
|
||||
[19]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
|
39
sources/tech/20190415 Troubleshooting slow WiFi on Linux.md
Normal file
@ -0,0 +1,39 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Troubleshooting slow WiFi on Linux)
|
||||
[#]: via: (https://www.linux.com/blog/troubleshooting-slow-wifi-linux)
|
||||
[#]: author: (OpenSource.com https://www.linux.com/USERS/OPENSOURCECOM)
|
||||
|
||||
Troubleshooting slow WiFi on Linux
|
||||
======
|
||||
|
||||
I'm no stranger to diagnosing hardware problems on [Linux systems][1]. Even though most of my professional work over the past few years has involved virtualization, I still enjoy crouching under desks and fumbling around with devices and memory modules. Well, except for the "crouching under desks" part. But none of that means that persistent and mysterious bugs aren't frustrating. I recently faced off against one of those bugs on my Ubuntu 18.04 workstation, which remained unsolved for months.
|
||||
|
||||
Here, I'll share my problem and my many attempts to resolve it. Even though you'll probably never encounter my specific issue, the troubleshooting process might be helpful. And besides, you'll get to enjoy feeling smug at how much time and effort I wasted following useless leads.
|
||||
|
||||
Read more at: [OpenSource.com][2]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/troubleshooting-slow-wifi-linux
|
||||
|
||||
作者:[OpenSource.com][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/USERS/OPENSOURCECOM
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.manning.com/books/linux-in-action?a_aid=bootstrap-it&a_bid=4ca15fc9&chan=opensource
|
||||
[2]: https://opensource.com/article/19/4/troubleshooting-wifi-linux
|
@ -0,0 +1,263 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Building a DNS-as-a-service with OpenStack Designate)
|
||||
[#]: via: (https://opensource.com/article/19/4/getting-started-openstack-designate)
|
||||
[#]: author: (Amjad Yaseen https://opensource.com/users/ayaseen)
|
||||
|
||||
Building a DNS-as-a-service with OpenStack Designate
|
||||
======
|
||||
Learn how to install and configure Designate, a multi-tenant
|
||||
DNS-as-a-service (DNSaaS) for OpenStack.
|
||||
![Command line prompt][1]
|
||||
|
||||
[Designate][2] is a multi-tenant DNS-as-a-service that includes a REST API for domain and record management, a framework for integration with [Neutron][3], and integration support for Bind9.
|
||||
|
||||
You would want to consider a DNSaaS for the following:
|
||||
|
||||
* A clean REST API for managing zones and records
|
||||
* Automatically generated records (with OpenStack integration)
|
||||
* Support for multiple authoritative name servers
|
||||
* Hosting multiple projects/organizations
|
||||
|
||||
|
||||
|
||||
![Designate's architecture][4]
|
||||
|
||||
This article explains how to manually install and configure the latest release of Designate service on CentOS or Red Hat Enterprise Linux 7 (RHEL 7), but you can use the same configuration on other distributions.
|
||||
|
||||
### Install Designate on OpenStack
|
||||
|
||||
I have Ansible roles for bind and Designate that demonstrate the setup in my [GitHub repository][5].
|
||||
|
||||
This setup presumes the bind service is external to the OpenStack controller node (even though you can install bind locally).
|
||||
|
||||
1. Install Designate's packages and bind (on OpenStack controller): [code]`# yum install openstack-designate-* bind bind-utils -y`
|
||||
```
|
||||
2. Create the Designate database and user: [code] MariaDB [(none)]> CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci;
|
||||
|
||||
MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO \
|
||||
'designate'@'localhost' IDENTIFIED BY 'rhlab123';
|
||||
|
||||
MariaDB [(none)]> GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'%' \
|
||||
IDENTIFIED BY 'rhlab123';
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
Note: Bind packages must be installed on the controller side for Remote Name Daemon Control (RNDC) to function properly.
|
||||
|
||||
### Configure bind (DNS server)
|
||||
|
||||
1. Generate RNDC files: [code] rndc-confgen -a -k designate -c /etc/rndc.key -r /dev/urandom
|
||||
|
||||
cat <<EOF > /etc/rndc.conf
|
||||
include "/etc/rndc.key";
|
||||
options {
|
||||
default-key "designate";
|
||||
default-server {{ DNS_SERVER_IP }};
|
||||
default-port 953;
|
||||
};
|
||||
EOF
|
||||
```
|
||||
2. Add the following into **named.conf**: [code]`include "/etc/rndc.key"; controls { inet {{ DNS_SERVER_IP }} allow { localhost; {{ CONTROLLER_SERVER_IP }}; } keys { "designate"; }; };`[/code] In the **options** section, add: [code] options {
|
||||
...
|
||||
allow-new-zones yes;
|
||||
request-ixfr no;
|
||||
listen-on port 53 { any; };
|
||||
recursion no;
|
||||
allow-query { 127.0.0.1; {{ CONTROLLER_SERVER_IP }}; };
|
||||
}; [/code] Set the right permissions: [code] chown named:named /etc/rndc.key
|
||||
chown named:named /etc/rndc.conf
|
||||
chmod 600 /etc/rndc.key
|
||||
chown -v root:named /etc/named.conf
|
||||
chmod g+w /var/named
|
||||
|
||||
# systemctl restart named
|
||||
# setsebool named_write_master_zones 1
|
||||
```
|
||||
|
||||
3. Push **rndc.key** and **rndc.conf** into the OpenStack controller: [code]`# scp -r /etc/rndc* {{ CONTROLLER_SERVER_IP }}:/etc/`
|
||||
```
|
||||
### Create OpenStack Designate service and endpoints
|
||||
|
||||
Enter:
|
||||
```
|
||||
|
||||
|
||||
# openstack user create --domain default --password-prompt designate
|
||||
# openstack role add --project services --user designate admin
|
||||
# openstack service create --name designate --description "DNS" dns
|
||||
|
||||
# openstack endpoint create --region RegionOne dns public http://{{ CONTROLLER_SERVER_IP }}:9001/
|
||||
# openstack endpoint create --region RegionOne dns internal http://{{ CONTROLLER_SERVER_IP }}:9001/
|
||||
# openstack endpoint create --region RegionOne dns admin http://{{ CONTROLLER_SERVER_IP }}:9001/
|
||||
|
||||
```
|
||||
### Configure Designate service
|
||||
|
||||
1. Edit **/etc/designate/designate.conf** :
|
||||
* In the **[service:api]** section, configure **auth_strategy** : [code] [service:api]
|
||||
listen = 0.0.0.0:9001
|
||||
auth_strategy = keystone
|
||||
api_base_uri = http://{{ CONTROLLER_SERVER_IP }}:9001/
|
||||
enable_api_v2 = True
|
||||
enabled_extensions_v2 = quotas, reports
|
||||
```
|
||||
* In the **[keystone_authtoken]** section, configure the following options: [code] [keystone_authtoken]
|
||||
auth_type = password
|
||||
username = designate
|
||||
password = rhlab123
|
||||
project_name = service
|
||||
project_domain_name = Default
|
||||
user_domain_name = Default
|
||||
www_authenticate_uri = http://{{ CONTROLLER_SERVER_IP }}:5000/
|
||||
auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000/
|
||||
```
|
||||
* In the **[service:worker]** section, enable the worker model: [code] enabled = True
|
||||
notify = True
|
||||
```
|
||||
* In the **[storage:sqlalchemy]** section, configure database access: [code] [storage:sqlalchemy]
|
||||
connection = mysql+pymysql://designate:rhlab123@{{ CONTROLLER_SERVER_IP }}/designate
|
||||
```
|
||||
* Populate the Designate database: [code]`# su -s /bin/sh -c "designate-manage database sync" designate`
|
||||
```
|
||||
|
||||
|
||||
2. Create Designate's **pools.yaml** file (has target and bind details):
|
||||
* Edit **/etc/designate/pools.yaml** : [code] - name: default
|
||||
# The name is immutable. There will be no option to change the name after
|
||||
# creation; the only way to change it will be to delete it
|
||||
# (and all zones associated with it) and recreate it.
|
||||
description: Default Pool
|
||||
|
||||
attributes: {}
|
||||
|
||||
# List out the NS records for zones hosted within this pool
|
||||
# This should be a record that is created outside of designate, that
|
||||
# points to the public IP of the controller node.
|
||||
ns_records:
|
||||
- hostname: {{ Controller_FQDN }}. # This is mDNS
|
||||
priority: 1
|
||||
|
||||
# List out the nameservers for this pool. These are the actual BIND servers.
|
||||
# We use these to verify changes have propagated to all nameservers.
|
||||
nameservers:
|
||||
- host: {{ DNS_SERVER_IP }}
|
||||
port: 53
|
||||
|
||||
# List out the targets for this pool. For BIND there will be one
|
||||
# entry for each BIND server, as we have to run rndc command on each server
|
||||
targets:
|
||||
- type: bind9
|
||||
description: BIND9 Server 1
|
||||
|
||||
# List out the designate-mdns servers from which BIND servers should
|
||||
# request zone transfers (AXFRs).
|
||||
# This should be the IP of the controller node.
|
||||
# If you have multiple controllers you can add multiple masters
|
||||
# by running designate-mdns on them, and adding them here.
|
||||
masters:
|
||||
- host: {{ CONTROLLER_SERVER_IP }}
|
||||
port: 5354
|
||||
|
||||
# BIND Configuration options
|
||||
options:
|
||||
host: {{ DNS_SERVER_IP }}
|
||||
port: 53
|
||||
rndc_host: {{ DNS_SERVER_IP }}
|
||||
rndc_port: 953
|
||||
rndc_key_file: /etc/rndc.key
|
||||
rndc_config_file: /etc/rndc.conf
|
||||
```
|
||||
* Populate Designate's pools: [code]`su -s /bin/sh -c "designate-manage pool update" designate`
|
||||
```
|
||||
|
||||
|
||||
|
||||
3. Start Designate central and API services: [code]`systemctl enable --now designate-central designate-api`
|
||||
```
|
||||
4. Verify Designate's services are up: [code] # openstack dns service list
|
||||
|
||||
+--------------+--------+-------+--------------+
|
||||
| service_name | status | stats | capabilities |
|
||||
+--------------+--------+-------+--------------+
|
||||
| central | UP | - | - |
|
||||
| api | UP | - | - |
|
||||
| mdns | UP | - | - |
|
||||
| worker | UP | - | - |
|
||||
| producer | UP | - | - |
|
||||
+--------------+--------+-------+--------------+
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
### Configure OpenStack Neutron with external DNS
|
||||
|
||||
1. Configure iptables for Designate services: [code] # iptables -I INPUT -p tcp -m multiport --dports 9001 -m comment --comment "designate incoming" -j ACCEPT
|
||||
|
||||
# iptables -I INPUT -p tcp -m multiport --dports 5354 -m comment --comment "Designate mdns incoming" -j ACCEPT
|
||||
|
||||
# iptables -I INPUT -p tcp -m multiport --dports 53 -m comment --comment "bind incoming" -j ACCEPT
|
||||
|
||||
|
||||
# iptables -I INPUT -p udp -m multiport --dports 53 -m comment --comment "bind/powerdns incoming" -j ACCEPT
|
||||
|
||||
# iptables -I INPUT -p tcp -m multiport --dports 953 -m comment --comment "rndc incoming - bind only" -j ACCEPT
|
||||
|
||||
# service iptables save; service iptables restart
|
||||
# setsebool named_write_master_zones 1
|
||||
```
|
||||
2. Edit the **[default]** section of **/etc/neutron/neutron.conf** : [code]`external_dns_driver = designate`
|
||||
```
|
||||
|
||||
3. Add the **[designate]** section in **/etc/neutron/neutron.conf**: [code] [designate]
|
||||
url = http://{{ CONTROLLER_SERVER_IP }}:9001/v2 ## This is the Designate endpoint
|
||||
auth_type = password
|
||||
auth_url = http://{{ CONTROLLER_SERVER_IP }}:5000
|
||||
username = designate
|
||||
password = rhlab123
|
||||
project_name = services
|
||||
project_domain_name = Default
|
||||
user_domain_name = Default
|
||||
allow_reverse_dns_lookup = True
|
||||
ipv4_ptr_zone_prefix_size = 24
|
||||
ipv6_ptr_zone_prefix_size = 116
|
||||
```
|
||||
4. Edit **dns_domain** in **neutron.conf** : [code] dns_domain = rhlab.dev.
|
||||
|
||||
# systemctl restart neutron-*
|
||||
```
|
||||
|
||||
5. Add **dns** to the list of Modular Layer 2 (ML2) drivers in **/etc/neutron/plugins/ml2/ml2_conf.ini** : [code]`extension_drivers=port_security,qos,dns`
|
||||
```
|
||||
6. Add a **zone** in Designate: [code]`# openstack zone create --email=admin@rhlab.dev rhlab.dev.`[/code] Add a new record in **zone rhlab.dev**: [code]`# openstack recordset create --record '192.168.1.230' --type A rhlab.dev. Test`
|
||||
```
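As an optional sanity check (assuming bind is answering on {{ DNS_SERVER_IP }}), you can query the new record directly with dig:

```
# Query the authoritative bind server for the record created above
dig @{{ DNS_SERVER_IP }} test.rhlab.dev. A +short
# Expected output: 192.168.1.230
```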
|
||||
|
||||
|
||||
|
||||
|
||||
Designate should now be installed and configured.
|
||||
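You can also list the zones to confirm they have propagated, using the standard Designate CLI:

```
# The zone should move from PENDING to ACTIVE once propagated
openstack zone list
```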
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/getting-started-openstack-designate
|
||||
|
||||
作者:[Amjad Yaseen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ayaseen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
|
||||
[2]: https://docs.openstack.org/designate/latest/
|
||||
[3]: /article/19/3/openstack-neutron
|
||||
[4]: https://opensource.com/sites/default/files/uploads/openstack_designate_architecture.png (Designate's architecture)
|
||||
[5]: https://github.com/ayaseen/designate
|
79
sources/tech/20190416 Can schools be agile.md
Normal file
@ -0,0 +1,79 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Can schools be agile?)
|
||||
[#]: via: (https://opensource.com/open-organization/19/4/education-culture-agile)
|
||||
[#]: author: (Ben Owens https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins)
|
||||
|
||||
Can schools be agile?
|
||||
======
|
||||
We certainly don't need to run our schools like businesses—but we could
|
||||
benefit from educational organizations more focused on continuous
|
||||
improvement.
|
||||
![][1]
|
||||
|
||||
We've all had those _deja vu_ moments that make us think "I've seen this before!" I experienced them often in the late 1980s, when I first began my career in industry. I was caught up in a wave of organizational change, where the U.S. manufacturing sector was experimenting with various models that asked leaders, managers, and engineers like me to rethink how we approached things like quality, cost, innovation, and shareholder value. It seems as if every year (sometimes, more frequently) we'd study yet another book to identify the "best practices" necessary for making us leaner, flatter, more nimble, and more responsive to the needs of the customer.
|
||||
|
||||
Many of the approaches were so transformational that their core principles still resonate with me today. Specific ideas and methods from thought leaders such as John Kotter, Peter Drucker, W. Edwards Deming, and Peter Senge were truly pivotal for our ability to rethink our work, as was the adoption of process improvement methods such as Six Sigma and those embodied in the "Toyota Way."
|
||||
|
||||
But others seemed to simply repackage these same ideas with a sexy new twist—hence my _deja vu_.
|
||||
|
||||
And yet when I began my career as a teacher, I encountered a context that _didn't_ give me that feeling: education. In fact, I was surprised to find that "getting better all the time" was _not_ the same high priority in my new profession that it was in my old one (particularly at the level of my role as a classroom teacher).
|
||||
|
||||
Why aren't more educational organizations working to create cultures of continuous improvement? I can think of several reasons, but let me address two.
|
||||
|
||||
### Widgets no more
|
||||
|
||||
The first barrier to a culture of continuous improvement is education's general reticence to look at other professions for ideas it can adapt and adopt—especially ideas from the business community. The second is education's predominant leadership model, which remains top-down and rooted in hierarchy. Conversations about systemic, continuous improvement tend to be the purview of a relatively small group of school or district leaders: principals, assistant principals, superintendents, and the like. But widespread organizational culture change can't occur if only one small group is involved in it.
|
||||
|
||||
Before unpacking these points a bit further, I'd like to emphasize that there are certainly exceptions to the above generalization (many I have seen first hand) and that there are two basic assumptions that I think any education stakeholder should be able to agree with:
|
||||
|
||||
1. Continuous improvement must be an essential mindset for _anyone_ involved in the work of providing high-quality and equitable teaching and learning systems for students, and
|
||||
2. Decisions by leaders of our schools will more greatly benefit students and the communities in which they live when those decisions are informed and influenced by those who work closest with students.
|
||||
|
||||
|
||||
|
||||
So why a tendency to ignore (or be outright hostile toward) ideas that come from outside the education space?
|
||||
|
||||
I, for example, have certainly faced criticism in the past for suggesting that we look to other professions for ideas and inspiration that can help us better meet the needs of students. A common refrain is something like: "You're trying to treat our students like widgets!" But how could our students be treated any more like widgets than they already are? They matriculate through school in age-based cohorts, going from siloed class to class each day by the sound of a shrill bell, and receive grades based on arbitrary tests that emphasize sameness over individuality.
|
||||
|
||||
|
||||
It may be news to many inside of education, but widgets—abstract units of production that evoke the idea of assembly line standardization—are not a significant part of the modern manufacturing sector. Thanks to the culture of continuous improvement described above, modern, advanced manufacturing delivers just what the individual customer wants, at a competitive price, exactly when she wants it. If we adapted this model to our schools, teachers would be more likely to collaborate and constantly refine their unique paths of growth for all students based on just-in-time needs and desires—regardless of the time, subject, or any other traditional norm.
|
||||
|
||||
What I'm advocating is a clear-eyed and objective look at any idea from any sector with potential to help us better meet the needs of individual students, not that we somehow run our schools like businesses. In order for this to happen effectively, however, we need to scrutinize a leadership structure that has frankly remained stagnant for over 100 years.
|
||||
|
||||
### Toward continuous improvement
|
||||
|
||||
While I certainly appreciate the argument that education is an animal significantly different from other professions, I also believe that rethinking an organizational and leadership structure is an applicable exercise for any entity wanting to remain responsible (and responsive) to the needs of its stakeholders. Most other professions have taken a hard look at their traditional, closed, hierarchical structures and moved to ones that encourage collective autonomy per shared goals of excellence—organizational elements essential for continuous improvement. It's time our schools and districts do the same by expanding their horizon beyond sources that, while well intended, are developed from a lens of the current paradigm.
|
||||
|
||||
Not surprisingly, a go-to resource I recommend to any school wanting to begin or accelerate this process is _The Open Organization_ by Jim Whitehurst. Not only does the book provide a window into how educators can create more open, inclusive leadership structures—where mutual respect enables nimble decisions to be made per real-time data—but it does so in language easily adaptable to the rather strange lexicon that's second nature to educators. Open organization thinking provides pragmatic ways any organization can empower members to be more open: sharing ideas and resources, embracing a culture of collaborative participation as a top priority, developing an innovation mindset through rapid prototyping, valuing ideas based on merit rather than the rank of the person proposing them, and building a strong sense of community that's baked into the organization's DNA. Such an open organization crowd-sources ideas from both inside and outside its formal structure and creates the type of environment that enables localized, student-centered innovations to thrive.
|
||||
|
||||
|
||||
Here's the bottom line: Essential to a culture of continuous improvement is recognizing that what we've done in the past may not be suitable in a rapidly changing future. For educators, that means we simply can't rely on solutions and practices we developed in a factory-model paradigm. We must acknowledge countless examples of best practices from other sectors—such as non-profits, the military, the medical profession, and yes, even business—that can at least _inform_ how we rethink what we do in the best interest of students. By moving beyond the traditionally sanctioned "eduspeak" world, we create opportunities for considering new perspectives. We can step back and see the forest as well as the trees, taking a more objective look at the problems we face while acknowledging what we do very well.
|
||||
|
||||
Intentionally considering ideas from all sources—from first year classroom teachers to the latest NYT Business & Management Leadership bestseller—offers us a powerful way to engage existing talent within our schools to help overcome the institutionalized inertia that has prevented more positive change from taking hold in our schools and districts.
|
||||
|
||||
Relentlessly pursuing methods of continuous improvement should not be a behavior confined to organizations fighting to remain competitive in a global, innovation economy, nor should it be left to a select few charged with the operation of our schools. When everyone in an organization is always thinking about what they can do differently _today_ to improve what they did _yesterday_ , then you have an organization living a culture of excellence. That's the kind of radically collaborative and innovative culture we should especially expect for organizations focused on changing the lives of young people.
|
||||
|
||||
I'm eagerly awaiting the day when I enter a school, recognize that spirit, and smile to myself as I say, "I've seen this before."
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/19/4/education-culture-agile
|
||||
|
||||
作者:[Ben Owens][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/engineerteacher/users/ke4qqq/users/n8chz/users/don-watkins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_network.png?itok=ySEHuAQ8
|
792
sources/tech/20190416 Detecting malaria with deep learning.md
Normal file
@ -0,0 +1,792 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Detecting malaria with deep learning)
|
||||
[#]: via: (https://opensource.com/article/19/4/detecting-malaria-deep-learning)
|
||||
[#]: author: (Dipanjan (DJ) Sarkar (Red Hat) https://opensource.com/users/djsarkar)
|
||||
|
||||
Detecting malaria with deep learning
|
||||
======
|
||||
Artificial intelligence combined with open source tools can improve
|
||||
diagnosis of the fatal disease malaria.
|
||||
![][1]
|
||||
|
||||
Artificial intelligence (AI) and open source tools, technologies, and frameworks are a powerful combination for improving society. _"Health is wealth"_ is perhaps a cliche, yet it's very accurate! In this article, we will examine how AI can be leveraged for detecting the deadly disease malaria with a low-cost, effective, and accurate open source deep learning solution.
|
||||
|
||||
While I am neither a doctor nor a healthcare researcher and I'm nowhere near as qualified as they are, I am interested in applying AI to healthcare research. My intent in this article is to showcase how AI and open source solutions can help malaria detection and reduce manual labor.
|
||||
|
||||
![Python and TensorFlow][2]
|
||||
|
||||
Python and TensorFlow: A great combo to build open source deep learning solutions
|
||||
|
||||
Thanks to the power of Python and deep learning frameworks like TensorFlow, we can build robust, scalable, and effective deep learning solutions. Because these tools are free and open source, we can build solutions that are very cost-effective and easily adopted and used by anyone. Let's get started!
|
||||
|
||||
### Motivation for the project
|
||||
|
||||
Malaria is a deadly, infectious, mosquito-borne disease caused by _Plasmodium_ parasites that are transmitted by the bites of infected female _Anopheles_ mosquitoes. There are five parasites that cause malaria, but two types— _P. falciparum_ and _P. vivax_ —cause the majority of the cases.
|
||||
|
||||
![Malaria heat map][3]
|
||||
|
||||
This map shows that malaria is prevalent around the globe, especially in tropical regions, but the nature and fatality of the disease is the primary motivation for this project.
|
||||
|
||||
If an infected mosquito bites you, parasites carried by the mosquito enter your blood and start destroying oxygen-carrying red blood cells (RBC). Typically, the first symptoms of malaria are similar to those of a virus like the flu, and they usually begin within a few days or weeks after the mosquito bite. However, these deadly parasites can live in your body for over a year without causing symptoms, and a delay in treatment can lead to complications and even death. Therefore, early detection can save lives.
|
||||
|
||||
The World Health Organization's (WHO) [malaria facts][4] indicate that nearly half the world's population is at risk from malaria, and there are over 200 million malaria cases and approximately 400,000 deaths due to malaria every year. This is a motivation to make malaria detection and diagnosis fast, easy, and effective.
|
||||
|
||||
### Methods of malaria detection
|
||||
|
||||
There are several methods that can be used for malaria detection and diagnosis. The paper on which our project is based, "[Pre-trained convolutional neural networks as feature extractors toward improved Malaria parasite detection in thin blood smear images][5]," by Rajaraman, et al., introduces some of the methods, including polymerase chain reaction (PCR) and rapid diagnostic tests (RDT). These two tests are typically used where high-quality microscopy services are not readily available.
|
||||
|
||||
The standard malaria diagnosis is typically based on a blood-smear workflow, according to Carlos Ariza's article "[Malaria Hero: A web app for faster malaria diagnosis][6]," which I learned about in Adrian Rosebrock's "[Deep learning and medical image analysis with Keras][7]." I appreciate the authors of these excellent resources for giving me more perspective on malaria prevalence, diagnosis, and treatment.
|
||||
|
||||
![Blood smear workflow for Malaria detection][8]
|
||||
|
||||
A blood smear workflow for Malaria detection
|
||||
|
||||
According to WHO protocol, diagnosis typically involves intensive examination of the blood smear at 100X magnification. Trained people manually count how many red blood cells contain parasites out of 5,000 cells. As the Rajaraman, et al., paper cited above explains:
|
||||
|
||||
> Thick blood smears assist in detecting the presence of parasites while thin blood smears assist in identifying the species of the parasite causing the infection (Centers for Disease Control and Prevention, 2012). The diagnostic accuracy heavily depends on human expertise and can be adversely impacted by the inter-observer variability and the liability imposed by large-scale diagnoses in disease-endemic/resource-constrained regions (Mitiku, Mengistu, and Gelaw, 2003). Alternative techniques such as polymerase chain reaction (PCR) and rapid diagnostic tests (RDT) are used; however, PCR analysis is limited in its performance (Hommelsheim, et al., 2014) and RDTs are less cost-effective in disease-endemic regions (Hawkes, Katsuva, and Masumbuko, 2009).
|
||||
|
||||
Thus, malaria detection could benefit from automation using deep learning.
|
||||
|
||||
### Deep learning for malaria detection
|
||||
|
||||
Manual diagnosis of blood smears is an intensive process that requires expertise in classifying and counting parasitized and uninfected cells. This process may not scale well, especially in regions where the right expertise is hard to find. Some advancements have been made in leveraging state-of-the-art image processing and analysis techniques to extract hand-engineered features and build machine learning-based classification models. However, these models do not scale with the increasing amounts of data available for training, and hand-engineering features takes a lot of time.
|
||||
|
||||
Deep learning models, or more specifically convolutional neural networks (CNNs), have proven very effective in a wide variety of computer vision tasks. (If you would like additional background knowledge on CNNs, I recommend reading [CS231n Convolutional Neural Networks for Visual Recognition][9].) Briefly, the key layers in a CNN model include convolution and pooling layers, as shown in the following figure.
|
||||
|
||||
![A typical CNN architecture][10]
|
||||
|
||||
A typical CNN architecture
|
||||
|
||||
Convolution layers learn spatial hierarchical patterns from data, which are also translation-invariant, so they are able to learn different aspects of images. For example, the first convolution layer will learn small and local patterns, such as edges and corners, a second convolution layer will learn larger patterns based on the features from the first layers, and so on. This allows CNNs to automate feature engineering and learn effective features that generalize well on new data points. Pooling layers help with downsampling and dimensionality reduction.
|
||||
|
||||
Thus, CNNs help with automated and scalable feature engineering. Also, plugging in dense layers at the end of the model enables us to perform tasks like image classification. Automated malaria detection using deep learning models like CNNs could be very effective, cheap, and scalable, especially with the advent of transfer learning and pre-trained models that work quite well, even with constraints like less data.
|
||||
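As a minimal sketch of this conv/pool/dense pattern, here is what such a model might look like with TensorFlow's Keras API; the layer sizes are illustrative only, not the article's final architecture:

```
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(125, 125, 3)),      # local patterns (edges, corners)
    tf.keras.layers.MaxPooling2D((2, 2)),                   # downsample
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),  # larger composite patterns
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),          # dense classification head
    tf.keras.layers.Dense(1, activation='sigmoid')          # malaria vs. healthy
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```

We will build on this basic pattern with simple CNNs trained from scratch and with transfer learning on pre-trained models.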
|
||||
The Rajaraman, et al., paper leverages six pre-trained models on a dataset to obtain an impressive accuracy of 95.9% in detecting malaria vs. non-infected samples. Our focus is to try some simple CNN models from scratch and a couple of pre-trained models using transfer learning to see the results we can get on the same dataset. We will use open source tools and frameworks, including Python and TensorFlow, to build our models.
|
||||
|
||||
### The dataset
|
||||
|
||||
The data for our analysis comes from researchers at the Lister Hill National Center for Biomedical Communications (LHNCBC), part of the National Library of Medicine (NLM), who have carefully collected and annotated the [publicly available dataset][11] of healthy and infected blood smear images. These researchers have developed a mobile [application for malaria detection][12] that runs on a standard Android smartphone attached to a conventional light microscope. They used Giemsa-stained thin blood smear slides from 150 _P. falciparum_ -infected and 50 healthy patients, collected and photographed at Chittagong Medical College Hospital, Bangladesh. The smartphone's built-in camera acquired images of slides for each microscopic field of view. The images were manually annotated by an expert slide reader at the Mahidol-Oxford Tropical Medicine Research Unit in Bangkok, Thailand.
|
||||
|
||||
Let's briefly check out the dataset's structure. First, I will install some basic dependencies (based on the operating system being used).
|
||||
|
||||
![Installing dependencies][13]
|
||||
|
||||
I am using a Debian-based system on the cloud with a GPU so I can run my models faster. To view the directory structure, we must install the tree dependency (if we don't have it) using **sudo apt install tree**.
|
||||
|
||||
![Installing the tree dependency][14]
|
||||
|
||||
We have two folders that contain images of cells, infected and healthy. We can get further details about the total number of images by entering:
|
||||
|
||||
|
||||
```
|
||||
import os
|
||||
import glob
|
||||
|
||||
base_dir = os.path.join('./cell_images')
|
||||
infected_dir = os.path.join(base_dir,'Parasitized')
|
||||
healthy_dir = os.path.join(base_dir,'Uninfected')
|
||||
|
||||
infected_files = glob.glob(infected_dir+'/*.png')
|
||||
healthy_files = glob.glob(healthy_dir+'/*.png')
|
||||
len(infected_files), len(healthy_files)
|
||||
|
||||
# Output
|
||||
(13779, 13779)
|
||||
```
|
||||
|
||||
It looks like we have a balanced dataset with 13,779 malaria and 13,779 non-malaria (uninfected) cell images. Let's build a data frame from this, which we will use when we start building our datasets.
|
||||
|
||||
|
||||
```
|
||||
import numpy as np
|
||||
import pandas as pd
|
||||
|
||||
np.random.seed(42)
|
||||
|
||||
files_df = pd.DataFrame({
|
||||
'filename': infected_files + healthy_files,
|
||||
'label': ['malaria'] * len(infected_files) + ['healthy'] * len(healthy_files)
|
||||
}).sample(frac=1, random_state=42).reset_index(drop=True)
|
||||
|
||||
files_df.head()
|
||||
```
|
||||
|
||||
![Datasets][15]
|
||||
|
||||
### Build and explore image datasets
|
||||
|
||||
To build deep learning models, we need training data, but we also need to test the model's performance on unseen data. We will use a 60:10:30 split for train, validation, and test datasets, respectively. We will leverage the train and validation datasets during training and check the performance of the model on the test dataset.
|
||||
|
||||
|
||||
```
from sklearn.model_selection import train_test_split
from collections import Counter

train_files, test_files, train_labels, test_labels = train_test_split(files_df['filename'].values,
                                                                      files_df['label'].values,
                                                                      test_size=0.3, random_state=42)
train_files, val_files, train_labels, val_labels = train_test_split(train_files,
                                                                    train_labels,
                                                                    test_size=0.1, random_state=42)

print(train_files.shape, val_files.shape, test_files.shape)
print('Train:', Counter(train_labels), '\nVal:', Counter(val_labels), '\nTest:', Counter(test_labels))

# Output
(17361,) (1929,) (8268,)
Train: Counter({'healthy': 8734, 'malaria': 8627})
Val: Counter({'healthy': 970, 'malaria': 959})
Test: Counter({'malaria': 4193, 'healthy': 4075})
```
|
||||
|
||||
The images will not be of equal dimensions because blood smears and cell images vary based on the human, the test method, and the orientation of the photo. Let's get some summary statistics of our training dataset to determine the optimal image dimensions (remember, we don't touch the test dataset at all!).
|
||||
|
||||
|
||||
```
import cv2
from concurrent import futures
import threading

def get_img_shape_parallel(idx, img, total_imgs):
    if idx % 5000 == 0 or idx == (total_imgs - 1):
        print('{}: working on img num: {}'.format(threading.current_thread().name,
                                                  idx))
    return cv2.imread(img).shape

ex = futures.ThreadPoolExecutor(max_workers=None)
data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
print('Starting Img shape computation:')
train_img_dims_map = ex.map(get_img_shape_parallel,
                            [record[0] for record in data_inp],
                            [record[1] for record in data_inp],
                            [record[2] for record in data_inp])
train_img_dims = list(train_img_dims_map)
print('Min Dimensions:', np.min(train_img_dims, axis=0))
print('Avg Dimensions:', np.mean(train_img_dims, axis=0))
print('Median Dimensions:', np.median(train_img_dims, axis=0))
print('Max Dimensions:', np.max(train_img_dims, axis=0))

# Output
Starting Img shape computation:
ThreadPoolExecutor-0_0: working on img num: 0
ThreadPoolExecutor-0_17: working on img num: 5000
ThreadPoolExecutor-0_15: working on img num: 10000
ThreadPoolExecutor-0_1: working on img num: 15000
ThreadPoolExecutor-0_7: working on img num: 17360
Min Dimensions: [46 46 3]
Avg Dimensions: [132.77311215 132.45757733 3.]
Median Dimensions: [130. 130. 3.]
Max Dimensions: [385 394 3]
```
|
||||
|
||||
We apply parallel processing to speed up the image-read operations and, based on the summary statistics, we will resize each image to 125x125 pixels. Let's load up all of our images and resize them to these fixed dimensions.
|
||||
|
||||
|
||||
```
IMG_DIMS = (125, 125)

def get_img_data_parallel(idx, img, total_imgs):
    if idx % 5000 == 0 or idx == (total_imgs - 1):
        print('{}: working on img num: {}'.format(threading.current_thread().name,
                                                  idx))
    img = cv2.imread(img)
    img = cv2.resize(img, dsize=IMG_DIMS,
                     interpolation=cv2.INTER_CUBIC)
    img = np.array(img, dtype=np.float32)
    return img

ex = futures.ThreadPoolExecutor(max_workers=None)
train_data_inp = [(idx, img, len(train_files)) for idx, img in enumerate(train_files)]
val_data_inp = [(idx, img, len(val_files)) for idx, img in enumerate(val_files)]
test_data_inp = [(idx, img, len(test_files)) for idx, img in enumerate(test_files)]

print('Loading Train Images:')
train_data_map = ex.map(get_img_data_parallel,
                        [record[0] for record in train_data_inp],
                        [record[1] for record in train_data_inp],
                        [record[2] for record in train_data_inp])
train_data = np.array(list(train_data_map))

print('\nLoading Validation Images:')
val_data_map = ex.map(get_img_data_parallel,
                      [record[0] for record in val_data_inp],
                      [record[1] for record in val_data_inp],
                      [record[2] for record in val_data_inp])
val_data = np.array(list(val_data_map))

print('\nLoading Test Images:')
test_data_map = ex.map(get_img_data_parallel,
                       [record[0] for record in test_data_inp],
                       [record[1] for record in test_data_inp],
                       [record[2] for record in test_data_inp])
test_data = np.array(list(test_data_map))

train_data.shape, val_data.shape, test_data.shape

# Output
Loading Train Images:
ThreadPoolExecutor-1_0: working on img num: 0
ThreadPoolExecutor-1_12: working on img num: 5000
ThreadPoolExecutor-1_6: working on img num: 10000
ThreadPoolExecutor-1_10: working on img num: 15000
ThreadPoolExecutor-1_3: working on img num: 17360

Loading Validation Images:
ThreadPoolExecutor-1_13: working on img num: 0
ThreadPoolExecutor-1_18: working on img num: 1928

Loading Test Images:
ThreadPoolExecutor-1_5: working on img num: 0
ThreadPoolExecutor-1_19: working on img num: 5000
ThreadPoolExecutor-1_8: working on img num: 8267
((17361, 125, 125, 3), (1929, 125, 125, 3), (8268, 125, 125, 3))
```
|
||||
|
||||
We leverage parallel processing again to speed up computations pertaining to image load and resizing. Finally, we get our image tensors of the desired dimensions, as depicted in the preceding output. We can now view some sample cell images to get an idea of how our data looks.
|
||||
|
||||
|
||||
```
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(1, figsize=(8, 8))
n = 0
for i in range(16):
    n += 1
    r = np.random.randint(0, train_data.shape[0], 1)
    plt.subplot(4, 4, n)
    plt.subplots_adjust(hspace=0.5, wspace=0.5)
    plt.imshow(train_data[r[0]]/255.)
    plt.title('{}'.format(train_labels[r[0]]))
    plt.xticks([]), plt.yticks([])
```
|
||||
|
||||
![Malaria cell samples][16]
|
||||
|
||||
Based on these sample images, we can see some subtle differences between malaria and healthy cell images. We will make our deep learning models try to learn these patterns during model training.
|
||||
|
||||
Before we can start training our models, we must set up some basic configuration settings.
|
||||
|
||||
|
||||
```
BATCH_SIZE = 64
NUM_CLASSES = 2
EPOCHS = 25
INPUT_SHAPE = (125, 125, 3)

train_imgs_scaled = train_data / 255.
val_imgs_scaled = val_data / 255.

# encode text category labels
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
val_labels_enc = le.transform(val_labels)

print(train_labels[:6], train_labels_enc[:6])

# Output
['malaria' 'malaria' 'malaria' 'healthy' 'healthy' 'malaria'] [1 1 1 0 0 1]
```
|
||||
|
||||
We fix our image dimensions, batch size, and epochs and encode our categorical class labels. The alpha version of TensorFlow 2.0 was released in March 2019, and this exercise is the perfect excuse to try it out.
|
||||
|
||||
|
||||
```
import tensorflow as tf

# Load the TensorBoard notebook extension (optional)
%load_ext tensorboard.notebook

tf.random.set_seed(42)
tf.__version__

# Output
'2.0.0-alpha0'
```
|
||||
|
||||
### Deep learning model training
|
||||
|
||||
In the model training phase, we will build three deep learning models, train them with our training data, and compare their performance using the validation data. We will then save these models and use them later in the model evaluation phase.
|
||||
|
||||
#### Model 1: CNN from scratch
|
||||
|
||||
Our first malaria detection model will build and train a basic CNN from scratch. First, let's define our model architecture.
|
||||
|
||||
|
||||
```
inp = tf.keras.layers.Input(shape=INPUT_SHAPE)

conv1 = tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
                               activation='relu', padding='same')(inp)
pool1 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
                               activation='relu', padding='same')(pool1)
pool2 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = tf.keras.layers.Conv2D(128, kernel_size=(3, 3),
                               activation='relu', padding='same')(pool2)
pool3 = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(conv3)

flat = tf.keras.layers.Flatten()(pool3)

hidden1 = tf.keras.layers.Dense(512, activation='relu')(flat)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)

out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)

model = tf.keras.Model(inputs=inp, outputs=out)
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()

# Output
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 125, 125, 3)]     0
_________________________________________________________________
conv2d (Conv2D)              (None, 125, 125, 32)      896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 62, 62, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 62, 62, 64)        18496
_________________________________________________________________
...
...
_________________________________________________________________
dense_1 (Dense)              (None, 512)               262656
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 513
=================================================================
Total params: 15,102,529
Trainable params: 15,102,529
Non-trainable params: 0
_________________________________________________________________
```
|
||||
|
||||
Based on the architecture in this code, our CNN model has three convolution and pooling layers, followed by two dense layers, and dropouts for regularization. Let's train our model.
|
||||
|
||||
|
||||
```
import datetime

logdir = os.path.join('/home/dipanzan_sarkar/projects/tensorboard_logs',
                      datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                                 patience=2, min_lr=0.000001)
callbacks = [reduce_lr, tensorboard_callback]

history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    validation_data=(val_imgs_scaled, val_labels_enc),
                    callbacks=callbacks,
                    verbose=1)

# Output
Train on 17361 samples, validate on 1929 samples
Epoch 1/25
17361/17361 [====] - 32s 2ms/sample - loss: 0.4373 - accuracy: 0.7814 - val_loss: 0.1834 - val_accuracy: 0.9393
Epoch 2/25
17361/17361 [====] - 30s 2ms/sample - loss: 0.1725 - accuracy: 0.9434 - val_loss: 0.1567 - val_accuracy: 0.9513
...
...
Epoch 24/25
17361/17361 [====] - 30s 2ms/sample - loss: 0.0036 - accuracy: 0.9993 - val_loss: 0.3693 - val_accuracy: 0.9565
Epoch 25/25
17361/17361 [====] - 30s 2ms/sample - loss: 0.0034 - accuracy: 0.9994 - val_loss: 0.3699 - val_accuracy: 0.9559
```
|
||||
|
||||
We get a validation accuracy of 95.6%, which is pretty good, although our model looks to be overfitting slightly (based on looking at our training accuracy, which is 99.9%). We can get a clear perspective on this by plotting the training and validation accuracy and loss curves.
|
||||
|
||||
|
||||
```
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Basic CNN Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)

max_epoch = len(history.history['accuracy'])+1
epoch_list = list(range(1,max_epoch))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(1, max_epoch, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")

ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(1, max_epoch, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
```
|
||||
|
||||
![Learning curves for basic CNN][17]
|
||||
|
||||
Learning curves for basic CNN
|
||||
|
||||
We can see after the fifth epoch that things don't seem to improve a whole lot overall. Let's save this model for future evaluation.
|
||||
|
||||
|
||||
```
model.save('basic_cnn.h5')
```
|
||||
|
||||
#### Deep transfer learning
|
||||
|
||||
Just like humans have an inherent capability to transfer knowledge across tasks, transfer learning enables us to utilize knowledge from previously learned tasks and apply it to newer, related ones, even in the context of machine learning or deep learning. If you are interested in doing a deep-dive on transfer learning, you can read my article "[A comprehensive hands-on guide to transfer learning with real-world applications in deep learning][18]" and my book [_Hands-On Transfer Learning with Python_][19].
|
||||
|
||||
![Ideas for deep transfer learning][20]
|
||||
|
||||
The idea we want to explore in this exercise is:
|
||||
|
||||
> Can we leverage a pre-trained deep learning model (which was trained on a large dataset, like ImageNet) to solve the problem of malaria detection by applying and transferring its knowledge in the context of our problem?
|
||||
|
||||
We will apply the two most popular strategies for deep transfer learning.
|
||||
|
||||
* Pre-trained model as a feature extractor
|
||||
* Pre-trained model with fine-tuning
|
||||
|
||||
|
||||
|
||||
We will be using the pre-trained VGG-19 deep learning model, developed by the Visual Geometry Group (VGG) at the University of Oxford, for our experiments. A pre-trained model like VGG-19 is trained on a huge dataset ([ImageNet][21]) with a lot of diverse image categories. Therefore, the model should have learned a robust hierarchy of features that are spatially, rotationally, and translationally invariant, as is typical of features learned by CNN models. Hence, the model, having learned a good representation of features for over a million images, can act as a good feature extractor for new images in computer vision problems like malaria detection. Let's discuss the VGG-19 model architecture before unleashing the power of transfer learning on our problem.
|
||||
|
||||
##### Understanding the VGG-19 model
|
||||
|
||||
The VGG-19 model is a 19-layer (convolution and fully connected) deep learning network built on the ImageNet database, which was developed for the purpose of image recognition and classification. This model was built by Karen Simonyan and Andrew Zisserman and is described in their paper "[Very deep convolutional networks for large-scale image recognition][22]." The architecture of the VGG-19 model is:
|
||||
|
||||
![VGG-19 Model Architecture][23]
|
||||
|
||||
You can see that we have a total of 16 convolution layers using 3x3 convolution filters, along with max pooling layers for downsampling, two fully connected hidden layers of 4,096 units each, and a final dense layer of 1,000 units, where each unit represents one of the image categories in the ImageNet database. We do not need the last three layers, since we will be using our own fully connected dense layers to predict malaria. We are more concerned with the first five blocks, so we can leverage the VGG model as an effective feature extractor.
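
If you want to see this block structure for yourself, you can load the convolutional base with **include_top=False** and list its layers (a small inspection sketch, not part of the original walkthrough):

```
import tensorflow as tf

# List the layers of the VGG-19 convolutional base we will freeze or fine-tune
vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
                                        input_shape=(125, 125, 3))
for layer in vgg.layers:
    print(layer.name, layer.output_shape)
```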
|
||||
|
||||
For our second model, we will use the VGG-19 network as a simple feature extractor by freezing all five convolution blocks to make sure their weights aren't updated after each epoch. For the last model, we will apply fine-tuning to the VGG model, where we will unfreeze the last two blocks (Block 4 and Block 5) so that their weights will be updated in each epoch (per batch of data) as we train our own model.
|
||||
|
||||
#### Model 2: Pre-trained model as a feature extractor
|
||||
|
||||
For building this model, we will leverage TensorFlow to load up the VGG-19 model and freeze the convolution blocks so we can use them as an image feature extractor. We will plug in our own dense layers at the end to perform the classification task.
|
||||
|
||||
|
||||
```
vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
                                        input_shape=INPUT_SHAPE)
vgg.trainable = False
# Freeze the layers
for layer in vgg.layers:
    layer.trainable = False

base_vgg = vgg
base_out = base_vgg.output
pool_out = tf.keras.layers.Flatten()(base_out)
hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)

out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)

model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()

# Output
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 125, 125, 3)]     0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 125, 125, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 125, 125, 64)      36928
_________________________________________________________________
...
...
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 3, 3, 512)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0
_________________________________________________________________
dense_3 (Dense)              (None, 512)               2359808
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_4 (Dense)              (None, 512)               262656
_________________________________________________________________
dropout_3 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 513
=================================================================
Total params: 22,647,361
Trainable params: 2,622,977
Non-trainable params: 20,024,384
_________________________________________________________________
```
|
||||
|
||||
It is evident from this output that we have a lot of layers in our model and we will be using the frozen layers of the VGG-19 model as feature extractors only. You can use the following code to verify how many layers in our model are indeed trainable and how many total layers are present in our network.
|
||||
|
||||
|
||||
```
print("Total Layers:", len(model.layers))
print("Total trainable layers:",
      sum([1 for l in model.layers if l.trainable]))

# Output
Total Layers: 28
Total trainable layers: 6
```
|
||||
|
||||
We will now train our model using similar configurations and callbacks to the ones we used in our previous model. Refer to [my GitHub repository][24] for the complete code to train the model; a minimal sketch of the training call is shown below. After training, we observe the following plots showing the model's accuracy and loss.
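
For reference, the training call mirrors the one used for the basic CNN (a minimal sketch of what the repository code does; it assumes the callbacks, scaled image arrays, and encoded labels defined earlier in this article):

```
# Sketch: train the frozen VGG-19 feature-extractor model with the same setup as before
history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    validation_data=(val_imgs_scaled, val_labels_enc),
                    callbacks=callbacks,
                    verbose=1)
```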
|
||||
|
||||
![Learning curves for frozen pre-trained CNN][25]
|
||||
|
||||
Learning curves for frozen pre-trained CNN
|
||||
|
||||
This shows that our model is not overfitting as much as our basic CNN model, but the performance is slightly less than our basic CNN model. Let's save this model for future evaluation.
|
||||
|
||||
|
||||
```
model.save('vgg_frozen.h5')
```
|
||||
|
||||
#### Model 3: Fine-tuned pre-trained model with image augmentation
|
||||
|
||||
In our final model, we will fine-tune the weights of the layers in the last two blocks of our pre-trained VGG-19 model. We will also introduce the concept of image augmentation. The idea behind image augmentation is exactly as the name sounds. We load in existing images from our training dataset and apply some image transformation operations to them, such as rotation, shearing, translation, zooming, and so on, to produce new, altered versions of existing images. Due to these random transformations, we don't get the same images each time. We will leverage an excellent utility called **ImageDataGenerator** in **tf.keras** that can help build image augmentors.
|
||||
|
||||
|
||||
```
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255,
                                                                zoom_range=0.05,
                                                                rotation_range=25,
                                                                width_shift_range=0.05,
                                                                height_shift_range=0.05,
                                                                shear_range=0.05, horizontal_flip=True,
                                                                fill_mode='nearest')

val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

# build image augmentation generators
train_generator = train_datagen.flow(train_data, train_labels_enc, batch_size=BATCH_SIZE, shuffle=True)
val_generator = val_datagen.flow(val_data, val_labels_enc, batch_size=BATCH_SIZE, shuffle=False)
```
|
||||
|
||||
We will not apply any transformations on our validation dataset (except for scaling the images, which is mandatory) since we will be using it to evaluate our model performance per epoch. For a detailed explanation of image augmentation in the context of transfer learning, feel free to check out my [article][18] cited above. Let's look at some sample results from a batch of image augmentation transforms.
|
||||
|
||||
|
||||
```
img_id = 0
sample_generator = train_datagen.flow(train_data[img_id:img_id+1], train_labels[img_id:img_id+1],
                                      batch_size=1)
sample = [next(sample_generator) for i in range(0, 5)]
fig, ax = plt.subplots(1, 5, figsize=(16, 6))
print('Labels:', [item[1][0] for item in sample])
l = [ax[i].imshow(sample[i][0][0]) for i in range(0, 5)]
```
|
||||
|
||||
![Sample augmented images][26]
|
||||
|
||||
You can clearly see the slight variations of our images in the preceding output. We will now build our deep learning model, making sure the last two blocks of the VGG-19 model are trainable.
|
||||
|
||||
|
||||
```
vgg = tf.keras.applications.vgg19.VGG19(include_top=False, weights='imagenet',
                                        input_shape=INPUT_SHAPE)
# Unfreeze the last two blocks; keep the earlier blocks frozen
vgg.trainable = True

set_trainable = False
for layer in vgg.layers:
    if layer.name in ['block5_conv1', 'block4_conv1']:
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False

base_vgg = vgg
base_out = base_vgg.output
pool_out = tf.keras.layers.Flatten()(base_out)
hidden1 = tf.keras.layers.Dense(512, activation='relu')(pool_out)
drop1 = tf.keras.layers.Dropout(rate=0.3)(hidden1)
hidden2 = tf.keras.layers.Dense(512, activation='relu')(drop1)
drop2 = tf.keras.layers.Dropout(rate=0.3)(hidden2)

out = tf.keras.layers.Dense(1, activation='sigmoid')(drop2)

model = tf.keras.Model(inputs=base_vgg.input, outputs=out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])

print("Total Layers:", len(model.layers))
print("Total trainable layers:", sum([1 for l in model.layers if l.trainable]))

# Output
Total Layers: 28
Total trainable layers: 16
```
|
||||
|
||||
We reduce the learning rate in our model since we don't want to make too-large weight updates to the pre-trained layers when fine-tuning. The model's training process will be slightly different since we are using data generators, so we will be leveraging the **fit_generator(…)** function.
|
||||
|
||||
|
||||
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                                 patience=2, min_lr=0.000001)

callbacks = [reduce_lr, tensorboard_callback]
train_steps_per_epoch = train_generator.n // train_generator.batch_size
val_steps_per_epoch = val_generator.n // val_generator.batch_size
history = model.fit_generator(train_generator, steps_per_epoch=train_steps_per_epoch, epochs=EPOCHS,
                              validation_data=val_generator, validation_steps=val_steps_per_epoch,
                              verbose=1)

# Output
Epoch 1/25
271/271 [====] - 133s 489ms/step - loss: 0.2267 - accuracy: 0.9117 - val_loss: 0.1414 - val_accuracy: 0.9531
Epoch 2/25
271/271 [====] - 129s 475ms/step - loss: 0.1399 - accuracy: 0.9552 - val_loss: 0.1292 - val_accuracy: 0.9589
...
...
Epoch 24/25
271/271 [====] - 128s 473ms/step - loss: 0.0815 - accuracy: 0.9727 - val_loss: 0.1466 - val_accuracy: 0.9682
Epoch 25/25
271/271 [====] - 128s 473ms/step - loss: 0.0792 - accuracy: 0.9729 - val_loss: 0.1127 - val_accuracy: 0.9641
```
|
||||
|
||||
This looks to be our best model yet. It gives us a validation accuracy of almost 96.5% and, based on the training accuracy, it doesn't look like our model is overfitting as much as our first model. This can be verified with the following learning curves.
|
||||
|
||||
![Learning curves for fine-tuned pre-trained CNN][27]
|
||||
|
||||
Learning curves for fine-tuned pre-trained CNN
|
||||
|
||||
Let's save this model so we can use it for model evaluation on our test dataset.
|
||||
|
||||
|
||||
```
model.save('vgg_finetuned.h5')
```
|
||||
|
||||
This completes our model training phase. We are now ready to test the performance of our models on the actual test dataset!
|
||||
|
||||
### Deep learning model performance evaluation
|
||||
|
||||
We will evaluate the three models we built in the training phase by making predictions with them on the data from our test dataset—because just validation is not enough! We have also built a nifty utility module called **model_evaluation_utils**, which we can use to evaluate the performance of our deep learning models with relevant classification metrics. The first step is to scale our test data.
|
||||
|
||||
|
||||
```
test_imgs_scaled = test_data / 255.
test_imgs_scaled.shape, test_labels.shape

# Output
((8268, 125, 125, 3), (8268,))
```
|
||||
|
||||
The next step involves loading our saved deep learning models and making predictions on the test data.
|
||||
|
||||
|
||||
```
# Load Saved Deep Learning Models
basic_cnn = tf.keras.models.load_model('./basic_cnn.h5')
vgg_frz = tf.keras.models.load_model('./vgg_frozen.h5')
vgg_ft = tf.keras.models.load_model('./vgg_finetuned.h5')

# Make Predictions on Test Data
basic_cnn_preds = basic_cnn.predict(test_imgs_scaled, batch_size=512)
vgg_frz_preds = vgg_frz.predict(test_imgs_scaled, batch_size=512)
vgg_ft_preds = vgg_ft.predict(test_imgs_scaled, batch_size=512)

basic_cnn_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
                                              for pred in basic_cnn_preds.ravel()])
vgg_frz_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
                                            for pred in vgg_frz_preds.ravel()])
vgg_ft_pred_labels = le.inverse_transform([1 if pred > 0.5 else 0
                                           for pred in vgg_ft_preds.ravel()])
```
|
||||
|
||||
The final step is to leverage our **model_evaluation_utils** module and check the performance of each model with relevant classification metrics.
|
||||
|
||||
|
||||
```
import model_evaluation_utils as meu
import pandas as pd

basic_cnn_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=basic_cnn_pred_labels)
vgg_frz_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_frz_pred_labels)
vgg_ft_metrics = meu.get_metrics(true_labels=test_labels, predicted_labels=vgg_ft_pred_labels)

pd.DataFrame([basic_cnn_metrics, vgg_frz_metrics, vgg_ft_metrics],
             index=['Basic CNN', 'VGG-19 Frozen', 'VGG-19 Fine-tuned'])
```
|
||||
|
||||
![Model accuracy][28]
|
||||
|
||||
It looks like our third model performs best on the test dataset, giving a model accuracy and an F1-score of 96%, which is pretty good and quite comparable to the more complex models from the research paper and articles cited earlier.
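
If you don't have the **model_evaluation_utils** module handy, a rough stand-in for its **get_metrics** function can be put together with scikit-learn (a sketch under that assumption, not the actual module from the repository):

```
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def get_metrics(true_labels, predicted_labels):
    # Headline classification metrics, treating 'malaria' as the positive class
    return {
        'Accuracy': round(accuracy_score(true_labels, predicted_labels), 4),
        'Precision': round(precision_score(true_labels, predicted_labels, pos_label='malaria'), 4),
        'Recall': round(recall_score(true_labels, predicted_labels, pos_label='malaria'), 4),
        'F1 Score': round(f1_score(true_labels, predicted_labels, pos_label='malaria'), 4)
    }
```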
|
||||
|
||||
### Conclusion
|
||||
|
||||
Malaria detection is not an easy procedure, and the availability of qualified personnel around the globe is a serious concern in the diagnosis and treatment of cases. We looked at an interesting real-world medical imaging case study of malaria detection. Easy-to-build, open source techniques leveraging AI can give us state-of-the-art accuracy in detecting malaria, thus enabling AI for social good.
|
||||
|
||||
I encourage you to check out the articles and research papers mentioned in this article, without which it would have been impossible for me to conceptualize and write it. If you are interested in running or adopting these techniques, all the code used in this article is available on [my GitHub repository][24]. Remember to download the data from the [official website][11].
|
||||
|
||||
Let's hope for more adoption of open source AI capabilities in healthcare to make it less expensive and more accessible for everyone around the world!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/detecting-malaria-deep-learning
|
||||
|
||||
Author: [Dipanjan (DJ) Sarkar (Red Hat)][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
||||
[a]: https://opensource.com/users/djsarkar
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourcedoctor.png?itok=fk79NwpC
|
||||
[2]: https://opensource.com/sites/default/files/uploads/malaria1_python-tensorflow.png (Python and TensorFlow)
|
||||
[3]: https://opensource.com/sites/default/files/uploads/malaria2_malaria-heat-map.png (Malaria heat map)
|
||||
[4]: https://www.who.int/features/factfiles/malaria/en/
|
||||
[5]: https://peerj.com/articles/4568/
|
||||
[6]: https://blog.insightdatascience.com/https-blog-insightdatascience-com-malaria-hero-a47d3d5fc4bb
|
||||
[7]: https://www.pyimagesearch.com/2018/12/03/deep-learning-and-medical-image-analysis-with-keras/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/malaria3_blood-smear.png (Blood smear workflow for Malaria detection)
|
||||
[9]: http://cs231n.github.io/convolutional-networks/
|
||||
[10]: https://opensource.com/sites/default/files/uploads/malaria4_cnn-architecture.png (A typical CNN architecture)
|
||||
[11]: https://ceb.nlm.nih.gov/repositories/malaria-datasets/
|
||||
[12]: https://www.ncbi.nlm.nih.gov/pubmed/29360430
|
||||
[13]: https://opensource.com/sites/default/files/uploads/malaria5_dependencies.png (Installing dependencies)
|
||||
[14]: https://opensource.com/sites/default/files/uploads/malaria6_tree-dependency.png (Installing the tree dependency)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/malaria7_dataset.png (Datasets)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/malaria8_cell-samples.png (Malaria cell samples)
|
||||
[17]: https://opensource.com/sites/default/files/uploads/malaria9_learningcurves.png (Learning curves for basic CNN)
|
||||
[18]: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a
|
||||
[19]: https://github.com/dipanjanS/hands-on-transfer-learning-with-python
|
||||
[20]: https://opensource.com/sites/default/files/uploads/malaria10_transferideas.png (Ideas for deep transfer learning)
|
||||
[21]: http://image-net.org/index
|
||||
[22]: https://arxiv.org/pdf/1409.1556.pdf
|
||||
[23]: https://opensource.com/sites/default/files/uploads/malaria11_vgg-19-model-architecture.png (VGG-19 Model Architecture)
|
||||
[24]: https://nbviewer.jupyter.org/github/dipanjanS/data_science_for_all/tree/master/os_malaria_detection/
|
||||
[25]: https://opensource.com/sites/default/files/uploads/malaria12_learningcurves.png (Learning curves for frozen pre-trained CNN)
|
||||
[26]: https://opensource.com/sites/default/files/uploads/malaria13_sampleimages.png (Sample augmented images)
|
||||
[27]: https://opensource.com/sites/default/files/uploads/malaria14_learningcurves.png (Learning curves for fine-tuned pre-trained CNN)
|
||||
[28]: https://opensource.com/sites/default/files/uploads/malaria15_modelaccuracy.png (Model accuracy)
|
238
sources/tech/20190416 How to Install MySQL in Ubuntu Linux.md
Normal file
@ -0,0 +1,238 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (arrowfeng)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Install MySQL in Ubuntu Linux)
|
||||
[#]: via: (https://itsfoss.com/install-mysql-ubuntu/)
|
||||
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
|
||||
|
||||
How to Install MySQL in Ubuntu Linux
|
||||
======
|
||||
|
||||
_**Brief: This tutorial teaches you to install MySQL in Ubuntu-based Linux distributions. You’ll also learn how to verify your install and how to connect to MySQL for the first time.**_
|
||||
|
||||
**[MySQL][1]** is the quintessential database management system. It is used in many tech stacks, including the popular **[LAMP][2]** (Linux, Apache, MySQL, PHP) stack. It has proven its stability. Another thing that makes **MySQL** so great is that it is **open-source**.
|
||||
|
||||
**MySQL** stores data in **relational databases** (basically **tabular data**). It is really easy to store, organize, and access data this way. For managing data, **SQL** (**Structured Query Language**) is used.
|
||||
|
||||
In this article I’ll show you how to **install** and **use** MySQL 8.0 in Ubuntu 18.04. Let’s get to it!
|
||||
|
||||
### Installing MySQL in Ubuntu
|
||||
|
||||
![][3]
|
||||
|
||||
I’ll be covering two ways you can install **MySQL** in Ubuntu 18.04:
|
||||
|
||||
1. Install MySQL from the Ubuntu repositories. Very basic, not the latest version (5.7)
|
||||
2. Install MySQL using the official repository. There are a couple of extra steps you’ll have to add to the process, but nothing to worry about. Also, you’ll have the latest version (8.0)
|
||||
|
||||
|
||||
|
||||
When needed, I’ll provide screenshots to guide you. For most of this guide, I’ll be entering commands in the **terminal** (**default hotkey**: CTRL+ALT+T). Don’t be scared of it!
|
||||
|
||||
#### Method 1. Installing MySQL from the Ubuntu repositories
|
||||
|
||||
First of all, make sure your repositories are updated by entering:
|
||||
|
||||
```
sudo apt update
```
|
||||
|
||||
Now, to install **MySQL 5.7**, simply type:
|
||||
|
||||
```
sudo apt install mysql-server -y
```
|
||||
|
||||
That’s it! Simple and efficient.
|
||||
|
||||
#### Method 2. Installing MySQL using the official repository
|
||||
|
||||
Although this method has a few more steps, I’ll go through them one by one and I’ll try writing down clear notes.
|
||||
|
||||
The first step is browsing to the [download page][4] of the official MySQL website.
|
||||
|
||||
![][5]
|
||||
|
||||
Here, go down to the **download link** for the **DEB Package**.
|
||||
|
||||
![][6]
|
||||
|
||||
Scroll down past the info about Oracle Web and right-click on **No thanks, just start my download.** Select **Copy link location**.
|
||||
|
||||
Now go back to the terminal. We’ll use the [curl command][7] to download the package:
|
||||
|
||||
```
curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
```
|
||||
|
||||
**<https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb>** is the link I copied from the website. It might be different based on the current version of MySQL. Let’s use **dpkg** to start installing MySQL:
|
||||
|
||||
```
sudo dpkg -i mysql-apt-config*
```
|
||||
|
||||
Update your repositories:
|
||||
|
||||
```
sudo apt update
```
|
||||
|
||||
To actually install MySQL, we’ll use the same command as in the first method:
|
||||
|
||||
```
sudo apt install mysql-server -y
```
|
||||
|
||||
Doing so will open a prompt in your terminal for **package configuration**. Use the **down arrow** to select the **Ok** option.
|
||||
|
||||
![][8]
|
||||
|
||||
Press **Enter**. This should prompt you to enter a **password**: you are basically setting the root password for MySQL. Don’t confuse it with the [root password of Ubuntu][9] itself.
|
||||
|
||||
![][10]
|
||||
|
||||
Type in a password and press **Tab** to select **<Ok>**. Press **Enter**. You’ll now have to **re-enter** the **password**. After doing so, press **Tab** again to select **<Ok>**. Press **Enter**.
|
||||
|
||||
![][11]
|
||||
|
||||
Some **information** on configuring MySQL Server will be presented. Press **Tab** to select **<Ok>** and **Enter** again:
|
||||
|
||||
![][12]
|
||||
|
||||
Here you need to choose a **default authentication plugin**. Make sure **Use Strong Password Encryption** is selected. Press **Tab** and then **Enter**.
|
||||
|
||||
That’s it! You have successfully installed MySQL.
|
||||
|
||||
#### Verify your MySQL installation
|
||||
|
||||
To **verify** that MySQL installed correctly, use:
|
||||
|
||||
```
sudo systemctl status mysql.service
```
|
||||
|
||||
This will show some information about the service:
|
||||
|
||||
![][13]
|
||||
|
||||
You should see **Active: active (running)** in there somewhere. If you don’t, use the following command to start the **service**:
|
||||
|
||||
```
sudo systemctl start mysql.service
```
|
||||
|
||||
#### Configuring/Securing MySQL
|
||||
|
||||
For a new install, you should run the provided command to apply some security-related settings. That is:
|
||||
|
||||
```
sudo mysql_secure_installation
```
|
||||
|
||||
Doing so will first of all ask you if you want to use the **VALIDATE PASSWORD COMPONENT**. If you want to use it, you’ll have to select a minimum password strength (**0 – Low, 1 – Medium, 2 – High**). You won’t be able to input any password that doesn’t respect the selected rules. If you don’t have the habit of using strong passwords (you should!), this could come in handy. If you think it might help, type in **y** or **Y** and press **Enter**, then choose a **strength level** for your password and input the one you want to use. If successful, you’ll continue the **securing** process; otherwise, you’ll have to re-enter a password.
|
||||
|
||||
If, however, you do not want this feature (I won’t), just press **Enter** or **any other key** to skip using it.
|
||||
|
||||
For the other options, I suggest **enabling** them (typing in **y** or **Y** and pressing **Enter** for each of them). They are (in this order): **remove anonymous user, disallow root login remotely, remove test database and access to it, reload privilege tables now**.
|
||||
|
||||
#### Connecting to & Disconnecting from the MySQL Server
|
||||
|
||||
To be able to run SQL queries, you’ll first have to connect to the server using MySQL and use the MySQL prompt. The command for doing this is:
|
||||
|
||||
```
mysql -h host_name -u user -p
```
|
||||
|
||||
* **-h** is used to specify a **host name** (if the server is located on another machine; if it isn’t, just omit it)
|
||||
* **-u** mentions the **user**
|
||||
* **-p** specifies that you want to input a **password**.
|
||||
|
||||
|
||||
|
||||
Although not recommended (for safety reasons), you can enter the password directly in the command by typing it in right after **-p**. For example, if the password for **test_user** is **1234** and you are trying to connect on the machine you are using, you could use:
|
||||
|
||||
```
mysql -u test_user -p1234
```
|
||||
|
||||
If you successfully entered the required parameters, you’ll be greeted by the **MySQL shell prompt** (**mysql>**):
|
||||
|
||||
![][14]
|
||||
|
||||
To **disconnect** from the server and **leave** the mysql prompt, type:
|
||||
|
||||
```
QUIT
```
|
||||
|
||||
Typing **quit** (MySQL is case insensitive) or **\q** will also work. Press **Enter** to exit.
|
||||
|
||||
You can also output info about the **version** with a simple command:
|
||||
|
||||
```
sudo mysqladmin -u root version -p
```
|
||||
|
||||
If you want to see a **list of options** , use:
|
||||
|
||||
```
mysql --help
```
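
If you plan to talk to the server from code rather than from the interactive shell, the same connection parameters apply. Here is a minimal sketch using the official Python connector (it assumes you have run **pip install mysql-connector-python**; **test_user** and **1234** are the illustrative credentials from above):

```
# Sketch: connect to a local MySQL server from Python
import mysql.connector

conn = mysql.connector.connect(user='test_user', password='1234')  # host defaults to 127.0.0.1
cur = conn.cursor()
cur.execute('SELECT VERSION()')
print(cur.fetchone())
cur.close()
conn.close()
```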
|
||||
|
||||
#### Uninstalling MySQL
|
||||
|
||||
If you decide that you want to use a newer release or just want to stop using MySQL, here’s how to remove it.
|
||||
|
||||
First, disable the service:
|
||||
|
||||
```
sudo systemctl stop mysql.service && sudo systemctl disable mysql.service
```
|
||||
|
||||
Make sure you backed up your databases, in case you want to use them later on. You can uninstall MySQL by running:
|
||||
|
||||
```
sudo apt purge mysql*
```
|
||||
|
||||
To clean up dependencies:
|
||||
|
||||
```
sudo apt autoremove
```
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
In this article, I’ve covered **installing MySQL** in Ubuntu Linux. I’d be glad if this guide helps struggling users and beginners.
|
||||
|
||||
Tell us in the comments if you found this post to be a useful resource. What do you use MySQL for? We’re eager to receive any feedback, impressions, or suggestions. Thanks for reading, and don’t hesitate to experiment with this incredible tool!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/install-mysql-ubuntu/
|
||||
|
||||
Author: [Sergiu][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
|
||||
|
||||
[a]: https://itsfoss.com/author/sergiu/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.mysql.com/
|
||||
[2]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
|
||||
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/install-mysql-ubuntu.png?resize=800%2C450&ssl=1
|
||||
[4]: https://dev.mysql.com/downloads/repo/apt/
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_apt_download_page.jpg?fit=800%2C280&ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_deb_download_link.jpg?fit=800%2C507&ssl=1
|
||||
[7]: https://linuxhandbook.com/curl-command-examples/
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_package_configuration_ok.jpg?fit=800%2C587&ssl=1
|
||||
[9]: https://itsfoss.com/change-password-ubuntu/
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_enter_password.jpg?fit=800%2C583&ssl=1
|
||||
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_information_on_configuring.jpg?fit=800%2C581&ssl=1
|
||||
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_default_authentication_plugin.jpg?fit=800%2C586&ssl=1
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_service_information.jpg?fit=800%2C402&ssl=1
|
||||
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/mysql_shell_prompt-2.jpg?fit=800%2C423&ssl=1
|
@ -0,0 +1,531 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Inter-process communication in Linux: Using pipes and message queues)
|
||||
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-channels)
|
||||
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
|
||||
|
||||
Inter-process communication in Linux: Using pipes and message queues
|
||||
======
|
||||
Learn how processes synchronize with each other in Linux.
|
||||
![Chat bubbles][1]
|
||||
|
||||
This is the second article in a series about [interprocess communication][2] (IPC) in Linux. The [first article][3] focused on IPC through shared storage: shared files and shared memory segments. This article turns to pipes, which are channels that connect processes for communication. A channel has a _write end_ for writing bytes, and a _read end_ for reading these bytes in FIFO (first in, first out) order. In typical use, one process writes to the channel, and a different process reads from this same channel. The bytes themselves might represent anything: numbers, employee records, digital movies, and so on.
|
||||
|
||||
Pipes come in two flavors, named and unnamed, and can be used either interactively from the command line or within programs; examples are forthcoming. This article also looks at memory queues, which have fallen out of fashion—but undeservedly so.
|
||||
|
||||
The code examples in the first article acknowledged the threat of race conditions (either file-based or memory-based) in IPC that uses shared storage. The question naturally arises about safe concurrency for the channel-based IPC, which will be covered in this article. The code examples for pipes and memory queues use APIs with the POSIX stamp of approval, and a core goal of the POSIX standards is thread-safety.
|
||||
|
||||
Consider the [man pages for the **mq_open**][4] function, which belongs to the memory queue API. These pages include a section on [Attributes][5] with this small table:
|
||||
|
||||
Interface | Attribute | Value
---|---|---
mq_open() | Thread safety | MT-Safe
|
||||
|
||||
The value **MT-Safe** (with **MT** for multi-threaded) means that the **mq_open** function is thread-safe, which in turn implies process-safe: A process executes in precisely the sense that one of its threads executes, and if a race condition cannot arise among threads in the _same_ process, such a condition cannot arise among threads in different processes. The **MT-Safe** attribute assures that a race condition does not arise in invocations of **mq_open**. In general, channel-based IPC is concurrent-safe, although a cautionary note is raised in the examples that follow.
|
||||
|
||||
### Unnamed pipes
|
||||
|
||||
Let's start with a contrived command line example that shows how unnamed pipes work. On all modern systems, the vertical bar **|** represents an unnamed pipe at the command line. Assume **%** is the command line prompt, and consider this command:
|
||||
|
||||
|
||||
```
% sleep 5 | echo "Hello, world!" ## writer to the left of |, reader to the right
```
|
||||
|
||||
The _sleep_ and _echo_ utilities execute as separate processes, and the unnamed pipe allows them to communicate. However, the example is contrived in that no communication occurs. The greeting _Hello, world!_ appears on the screen; then, after about five seconds, the command line prompt returns, indicating that both the _sleep_ and _echo_ processes have exited. What's going on?
|
||||
|
||||
In the vertical-bar syntax from the command line, the process to the left (_sleep_) is the writer, and the process to the right (_echo_) is the reader. By default, the reader blocks until there are bytes to read from the channel, and the writer—after writing its bytes—finishes up by sending an end-of-stream marker. (Even if the writer terminates prematurely, an end-of-stream marker is sent to the reader.) The unnamed pipe persists until both the writer and the reader terminate.
|
||||
|
||||
In the contrived example, the _sleep_ process does not write any bytes to the channel but does terminate after about five seconds, which sends an end-of-stream marker to the channel. In the meantime, the _echo_ process immediately writes the greeting to the standard output (the screen) because this process does not read any bytes from the channel, so it does no waiting. Once the _sleep_ and _echo_ processes terminate, the unnamed pipe—not used at all for communication—goes away and the command line prompt returns.
|
||||
|
||||
Here is a more useful example using two unnamed pipes. Suppose that the file _test.dat_ looks like this:
|
||||
|
||||
|
||||
```
this
is
the
way
the
world
ends
```
|
||||
|
||||
The command:
|
||||
|
||||
|
||||
```
% cat test.dat | sort | uniq
```
|
||||
|
||||
pipes the output from the _cat_ (concatenate) process into the _sort_ process to produce sorted output, and then pipes the sorted output into the _uniq_ process to eliminate duplicate records (in this case, the two occurrences of **the** reduce to one):
|
||||
|
||||
|
||||
```
ends
is
the
this
way
world
```
|
||||
|
||||
The scene now is set for a program with two processes that communicate through an unnamed pipe.
|
||||
|
||||
#### Example 1. Two processes communicating through an unnamed pipe.
|
||||
|
||||
|
||||
```
#include <sys/wait.h> /* wait */
#include <stdio.h>
#include <stdlib.h> /* exit functions */
#include <unistd.h> /* read, write, pipe, _exit */
#include <string.h>

#define ReadEnd 0
#define WriteEnd 1

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /** failure **/
}

int main() {
  int pipeFDs[2]; /* two file descriptors */
  char buf; /* 1-byte buffer */
  const char* msg = "Nature's first green is gold\n"; /* bytes to write */

  if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
  pid_t cpid = fork(); /* fork a child process */
  if (cpid < 0) report_and_exit("fork"); /* check for failure */

  if (0 == cpid) { /*** child ***/ /* child process */
    close(pipeFDs[WriteEnd]); /* child reads, doesn't write */

    while (read(pipeFDs[ReadEnd], &buf, 1) > 0) /* read until end of byte stream */
      write(STDOUT_FILENO, &buf, sizeof(buf)); /* echo to the standard output */

    close(pipeFDs[ReadEnd]); /* close the ReadEnd: all done */
    _exit(0); /* exit and notify parent at once */
  }
  else { /*** parent ***/
    close(pipeFDs[ReadEnd]); /* parent writes, doesn't read */

    write(pipeFDs[WriteEnd], msg, strlen(msg)); /* write the bytes to the pipe */
    close(pipeFDs[WriteEnd]); /* done writing: generate eof */

    wait(NULL); /* wait for child to exit */
    exit(0); /* exit normally */
  }
  return 0;
}
```
|
||||
|
||||
The _pipeUN_ program above uses the system function **fork** to create a process. Although the program has but a single source file, multi-processing occurs during (successful) execution. Here are the particulars in a quick review of how the library function **fork** works:
|
||||
|
||||
* The **fork** function, called in the _parent_ process, returns **-1** to the parent in case of failure. In the _pipeUN_ example, the call is `pid_t cpid = fork(); /* called in parent */`. The returned value is stored, in this example, in the variable **cpid** of integer type **pid_t**. (Every process has its own _process ID_, a non-negative integer that identifies the process.) Forking a new process could fail for several reasons, including a full _process table_, a structure that the system maintains to track processes. Zombie processes, clarified shortly, can cause a process table to fill if these are not harvested.
* If the **fork** call succeeds, it thereby spawns (creates) a new child process, returning one value to the parent but a different value to the child. Both the parent and the child process execute the _same_ code that follows the call to **fork**. (The child inherits copies of all the variables declared so far in the parent.) In particular, a successful call to **fork** returns:
  * Zero to the child process
  * The child's process ID to the parent
* An _if/else_ or equivalent construct typically is used after a successful **fork** call to segregate code meant for the parent from code meant for the child. In this example, the construct is:

```
if (0 == cpid) { /*** child ***/
  ...
}
else { /*** parent ***/
  ...
}
```
|
||||
If forking a child succeeds, the _pipeUN_ program proceeds as follows. There is an integer array:
|
||||
```
int pipeFDs[2]; /* two file descriptors */
```
|
||||
to hold two file descriptors, one for writing to the pipe and another for reading from the pipe. (The array element **pipeFDs[0]** is the file descriptor for the read end, and the array element **pipeFDs[1]** is the file descriptor for the write end.) A successful call to the system **pipe** function, made immediately before the call to **fork** , populates the array with the two file descriptors:
|
||||
```
if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
```
|
||||
The parent and the child now have copies of both file descriptors, but the _separation of concerns_ pattern means that each process requires exactly one of the descriptors. In this example, the parent does the writing and the child does the reading, although the roles could be reversed. The first statement in the child _if_-clause code, therefore, closes the pipe's write end:
|
||||
```
close(pipeFDs[WriteEnd]); /* called in child code */
```
|
||||
and the first statement in the parent _else_-clause code closes the pipe's read end:
|
||||
```
close(pipeFDs[ReadEnd]); /* called in parent code */
```
|
||||
The parent then writes some bytes (ASCII codes) to the unnamed pipe, and the child reads these and echoes them to the standard output.
|
||||
|
||||
One more aspect of the program needs clarification: the call to the **wait** function in the parent code. Once spawned, a child process is largely independent of its parent, as even the short _pipeUN_ program illustrates. The child can execute arbitrary code that may have nothing to do with the parent. However, the system does notify the parent through a signal—if and when the child terminates.
|
||||
|
||||
What if the parent terminates before the child? In this case, unless precautions are taken, the child becomes and remains a _zombie_ process with an entry in the process table. The precautions are of two broad types. One precaution is to have the parent notify the system that the parent has no interest in the child's termination:
|
||||
```
signal(SIGCHLD, SIG_IGN); /* in parent: ignore notification */
```
|
||||
A second approach is to have the parent execute a **wait** on the child's termination, thereby ensuring that the parent outlives the child. This second approach is used in the _pipeUN_ program, where the parent code has this call:
|
||||
```
|
||||
`wait(NULL); /* called in parent */`
|
||||
```
|
||||
This call to **wait** means _wait until the termination of any child occurs_ , and in the _pipeUN_ program, there is only one child process. (The **NULL** argument could be replaced with the address of an integer variable to hold the child's exit status.) There is a more flexible **waitpid** function for fine-grained control, e.g., for specifying a particular child process among several.
|
||||
|
||||
The _pipeUN_ program takes another precaution. When the parent is done waiting, the parent terminates with the call to the regular **exit** function. By contrast, the child terminates with a call to the **_exit** variant, which fast-tracks notification of termination. In effect, the child is telling the system to notify the parent ASAP that the child has terminated.
|
||||
|
||||
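For that finer control, a minimal sketch (mine, not part of _pipeUN_) using **waitpid** together with the standard status macros might look like this; it harvests one particular child and reports that child's exit status:

```
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
  pid_t cpid = fork();
  if (cpid < 0) return -1; /* fork failed */
  if (0 == cpid) _exit(7); /* child: terminate at once with status 7 */

  int status;
  pid_t done = waitpid(cpid, &status, 0); /* wait for this particular child */
  if (done == cpid && WIFEXITED(status))
    printf("child %i exited with status %i\n", (int) done, WEXITSTATUS(status));
  return 0;
}
```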
If two processes write to the same unnamed pipe, can the bytes be interleaved? For example, if process P1 writes:

```
foo bar
```

to a pipe and process P2 concurrently writes:

```
baz baz
```

to the same pipe, it seems that the pipe contents might be something arbitrary, such as:

```
baz foo baz bar
```

The POSIX standard ensures that writes are not interleaved so long as no write exceeds **PIPE_BUF** bytes. On Linux systems, **PIPE_BUF** is 4,096 bytes in size. My preference with pipes is to have a single writer and a single reader, thereby sidestepping the issue.
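A program need not hard-code that limit, by the way. As a small aside (my sketch, not one of the article's examples), the standard **fpathconf** function reports the atomic-write limit for a given pipe at runtime:

```
#include <stdio.h>
#include <unistd.h>

int main() {
  int fds[2];
  if (pipe(fds) < 0) return -1;                 /* create an unnamed pipe */
  long limit = fpathconf(fds[0], _PC_PIPE_BUF); /* atomic-write limit for this pipe */
  printf("PIPE_BUF limit: %ld bytes\n", limit);
  close(fds[0]);
  close(fds[1]);
  return 0;
}
```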
### Named pipes

An unnamed pipe has no backing file: the system maintains an in-memory buffer to transfer bytes from the writer to the reader. Once the writer and reader terminate, the buffer is reclaimed, so the unnamed pipe goes away. By contrast, a named pipe has a backing file and a distinct API.

Let's look at another command line example to get the gist of named pipes. Here are the steps:

* Open two terminals. The working directory should be the same for both.

* In one of the terminals, enter these two commands (the prompt again is **%**, and my comments start with **##**):

  ```
  % mkfifo tester  ## creates a backing file named tester
  % cat tester     ## type the pipe's contents to stdout
  ```

  At the beginning, nothing should appear in the terminal because nothing has been written yet to the named pipe.

* In the second terminal, enter the command:

  ```
  % cat > tester  ## redirect keyboard input to the pipe
  hello, world!   ## then hit Return key
  bye, bye        ## ditto
  <Control-C>     ## terminate session with a Control-C
  ```

  Whatever is typed into this terminal is echoed in the other. Once **Ctrl+C** is entered, the regular command line prompt returns in both terminals: the pipe has been closed.

* Clean up by removing the file that implements the named pipe:

  ```
  % unlink tester
  ```
As the utility's name _mkfifo_ implies, a named pipe also is called a FIFO because the first byte in is the first byte out, and so on. There is a library function named **mkfifo** that creates a named pipe in programs and is used in the next example, which consists of two processes: one writes to the named pipe and the other reads from this pipe.

#### Example 2. The _fifoWriter_ program

```
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>

#define MaxLoops 12000   /* outer loop */
#define ChunkSize 16     /* how many written at a time */
#define IntsPerChunk 4   /* four 4-byte ints per chunk */
#define MaxZs 250        /* max microseconds to sleep */

int main() {
  const char* pipeName = "./fifoChannel";
  mkfifo(pipeName, 0666);                      /* read/write for user/group/others */
  int fd = open(pipeName, O_CREAT | O_WRONLY); /* open as write-only */
  if (fd < 0) return -1;                       /* can't go on */

  int i;
  for (i = 0; i < MaxLoops; i++) {        /* write MaxLoops times */
    int j;
    for (j = 0; j < ChunkSize; j++) {     /* each time, write ChunkSize chunks */
      int k;
      int chunk[IntsPerChunk];
      for (k = 0; k < IntsPerChunk; k++)
        chunk[k] = rand();
      write(fd, chunk, sizeof(chunk));
    }
    usleep((rand() % MaxZs) + 1); /* pause a bit for realism */
  }

  close(fd);        /* close pipe: generates an end-of-stream marker */
  unlink(pipeName); /* unlink from the implementing file */
  printf("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);

  return 0;
}
```
The _fifoWriter_ program above can be summarized as follows:

* The program creates a named pipe for writing:

  ```
  mkfifo(pipeName, 0666); /* read/write perms for user/group/others */
  int fd = open(pipeName, O_CREAT | O_WRONLY);
  ```

  where **pipeName** is the name of the backing file passed to **mkfifo** as the first argument. The named pipe then is opened with the by-now familiar call to the **open** function, which returns a file descriptor.

* For a touch of realism, the _fifoWriter_ does not write all the data at once, but instead writes a chunk, sleeps a random number of microseconds, and so on. In total, 768,000 4-byte integer values are written to the named pipe.

* After closing the named pipe, the _fifoWriter_ also unlinks the file:

  ```
  close(fd);        /* close pipe: generates end-of-stream marker */
  unlink(pipeName); /* unlink from the implementing file */
  ```

  The system reclaims the backing file once every process connected to the pipe has performed the unlink operation. In this example, there are only two such processes: the _fifoWriter_ and the _fifoReader_, both of which do an _unlink_ operation.
The two programs should be executed in different terminals with the same working directory. However, the _fifoWriter_ should be started before the _fifoReader_, as the former creates the pipe. The _fifoReader_ then accesses the already created named pipe.

#### Example 3. The _fifoReader_ program

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

unsigned is_prime(unsigned n) { /* not pretty, but efficient */
  if (n <= 3) return n > 1;
  if (0 == (n % 2) || 0 == (n % 3)) return 0;

  unsigned i;
  for (i = 5; (i * i) <= n; i += 6)
    if (0 == (n % i) || 0 == (n % (i + 2))) return 0;

  return 1; /* found a prime! */
}

int main() {
  const char* file = "./fifoChannel";
  int fd = open(file, O_RDONLY);
  if (fd < 0) return -1; /* no point in continuing */
  unsigned total = 0, primes_count = 0;

  while (1) {
    int next;

    ssize_t count = read(fd, &next, sizeof(int));
    if (0 == count) break;           /* end of stream */
    else if (count == sizeof(int)) { /* read a 4-byte int value */
      total++;
      if (is_prime(next)) primes_count++;
    }
  }

  close(fd);    /* close pipe from read end */
  unlink(file); /* unlink from the underlying file */
  printf("Received ints: %u, primes: %u\n", total, primes_count);

  return 0;
}
```
The _fifoReader_ program above can be summarized as follows:

* Because the _fifoWriter_ creates the named pipe, the _fifoReader_ needs only the standard call **open** to access the pipe through the backing file:

  ```
  const char* file = "./fifoChannel";
  int fd = open(file, O_RDONLY);
  ```

  The file opens as read-only.

* The program then goes into a potentially infinite loop, trying to read a 4-byte chunk on each iteration. The **read** call:

  ```
  ssize_t count = read(fd, &next, sizeof(int));
  ```

  returns 0 to indicate end-of-stream, in which case the _fifoReader_ breaks out of the loop, closes the named pipe, and unlinks the backing file before terminating.

* After reading a 4-byte integer, the _fifoReader_ checks whether the number is a prime. This represents the business logic that a production-grade reader might perform on the received bytes. On a sample run, there were 37,682 primes among the 768,000 integers received.
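One caution about the read loop: in general, **read** on a pipe may return fewer bytes than requested. The example simply skips short counts, which works here because the writer emits whole 4-byte integers. A more defensive reader might collect bytes until a full integer arrives; here is a hedged sketch of such a helper (my own addition, not part of the article's listings):

```
#include <unistd.h>

/* Read exactly sz bytes from fd, looping over any short reads.
   Returns sz on success, 0 on end-of-stream, -1 on error. */
ssize_t read_exactly(int fd, void* buf, size_t sz) {
  size_t got = 0;
  while (got < sz) {
    ssize_t n = read(fd, (char*) buf + got, sz - got);
    if (n <= 0) return n; /* 0 == end-of-stream, -1 == error */
    got += n;
  }
  return (ssize_t) got;
}
```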
On repeated sample runs, the _fifoReader_ successfully read all of the bytes that the _fifoWriter_ wrote. This is not surprising: the two processes execute on the same host, taking network issues out of the equation. Named pipes are a highly reliable and efficient IPC mechanism and are, therefore, in wide use.

Here is the output from the two programs, each launched from a separate terminal but with the same working directory:

```
% ./fifoWriter
768000 ints sent to the pipe.
###
% ./fifoReader
Received ints: 768000, primes: 37682
```
### Message queues

Pipes have strict FIFO behavior: the first byte written is the first byte read, the second byte written is the second byte read, and so forth. Message queues can behave in the same way but are flexible enough that byte chunks can be retrieved out of FIFO order.

As the name suggests, a message queue is a sequence of messages, each of which has two parts:

* The payload, which is an array of bytes (**char** in C)
* A type, given as a positive integer value; types categorize messages for flexible retrieval
Consider the following depiction of a message queue, with each message labeled with an integer type:

```
          +-+    +-+    +-+    +-+
sender--->|3|--->|2|--->|2|--->|1|--->receiver
          +-+    +-+    +-+    +-+
```

Of the four messages shown, the one labeled 1 is at the front, i.e., closest to the receiver. Next come two messages with label 2, and finally, a message labeled 3 at the back. If strict FIFO behavior were in play, then the messages would be received in the order 1-2-2-3. However, the message queue allows other retrieval orders. For example, the messages could be retrieved by the receiver in the order 3-2-1-2.
The _mqueue_ example consists of two programs, the _sender_ that writes to the message queue and the _receiver_ that reads from this queue. Both programs include the header file _queue.h_ shown below:

#### Example 4. The header file _queue.h_

```
#define ProjectId 123
#define PathName  "queue.h" /* any existing, accessible file would do */
#define MsgLen    4
#define MsgCount  6

typedef struct {
  long type;                /* must be of type long */
  char payload[MsgLen + 1]; /* bytes in the message */
} queuedMessage;
```

The header file defines a structure type named **queuedMessage**, with **payload** (byte array) and **type** (integer) fields. This file also defines symbolic constants (the **#define** statements), the first two of which are used to generate a key that, in turn, is used to get a message queue ID. The **ProjectId** can be any positive integer value, and the **PathName** must be an existing, accessible file—in this case, the file _queue.h_. The setup statements in both the _sender_ and the _receiver_ programs are:

```
key_t key = ftok(PathName, ProjectId);   /* generate key */
int qid = msgget(key, 0666 | IPC_CREAT); /* use key to get queue id */
```

The ID **qid** is, in effect, the counterpart of a file descriptor for message queues.
#### Example 5. The message _sender_ program

```
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdlib.h>
#include <string.h>
#include "queue.h"

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /* EXIT_FAILURE */
}

int main() {
  key_t key = ftok(PathName, ProjectId);
  if (key < 0) report_and_exit("couldn't get key...");

  int qid = msgget(key, 0666 | IPC_CREAT);
  if (qid < 0) report_and_exit("couldn't get queue id...");

  char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
  int types[] = {1, 1, 2, 2, 3, 3}; /* each must be > 0 */
  int i;
  for (i = 0; i < MsgCount; i++) {
    /* build the message */
    queuedMessage msg;
    msg.type = types[i];
    strcpy(msg.payload, payloads[i]);

    /* send the message */
    msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT); /* don't block */
    printf("%s sent as type %i\n", msg.payload, (int) msg.type);
  }
  return 0;
}
```
The _sender_ program above sends out six messages, two each of a specified type: the first two messages are of type 1, the next two of type 2, and the last two of type 3. The sending statement:

```
msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT);
```

is configured to be non-blocking (the flag **IPC_NOWAIT**) because the messages are so small. The only danger is that a full queue, unlikely in this example, would result in a sending failure. The _receiver_ program below also receives messages using the **IPC_NOWAIT** flag.
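Before moving on, note that a production-grade sender would check the return value of **msgsnd**: a non-blocking send against a full queue fails, with **errno** set to **EAGAIN**. A minimal sketch of that check (my addition; it assumes the **queuedMessage** type and a valid **qid** from the programs above):

```
#include <errno.h>
#include <stdio.h>
#include <sys/msg.h>
#include "queue.h" /* defines queuedMessage */

void send_or_report(int qid, queuedMessage* msg) {
  if (msgsnd(qid, msg, sizeof(*msg), IPC_NOWAIT) < 0) {
    if (EAGAIN == errno)
      fprintf(stderr, "queue full: message not sent\n");
    else
      perror("msgsnd");
  }
}
```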
#### Example 6. The message _receiver_ program

```
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdlib.h>
#include "queue.h"

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /* EXIT_FAILURE */
}

int main() {
  key_t key = ftok(PathName, ProjectId); /* key to identify the queue */
  if (key < 0) report_and_exit("key not gotten...");

  int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
  if (qid < 0) report_and_exit("no access to queue...");

  int types[] = {3, 1, 2, 1, 3, 2}; /* different than in sender */
  int i;
  for (i = 0; i < MsgCount; i++) {
    queuedMessage msg; /* defined in queue.h */
    if (msgrcv(qid, &msg, sizeof(msg), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
      puts("msgrcv trouble...");
    printf("%s received as type %i\n", msg.payload, (int) msg.type);
  }

  /** remove the queue **/
  if (msgctl(qid, IPC_RMID, NULL) < 0) /* NULL = 'no flags' */
    report_and_exit("trouble removing queue...");

  return 0;
}
```
The _receiver_ program does not create the message queue, although the API suggests as much. In the _receiver_, the call:

```
int qid = msgget(key, 0666 | IPC_CREAT);
```

is misleading because of the **IPC_CREAT** flag, but this flag really means _create if needed, otherwise access_. The _sender_ program calls **msgsnd** to send messages, whereas the _receiver_ calls **msgrcv** to retrieve them. In this example, the _sender_ sends the messages in the order 1-1-2-2-3-3, but the _receiver_ then retrieves them in the order 3-1-2-1-3-2, showing that message queues are not bound to strict FIFO behavior:
```
% ./sender
msg1 sent as type 1
msg2 sent as type 1
msg3 sent as type 2
msg4 sent as type 2
msg5 sent as type 3
msg6 sent as type 3

% ./receiver
msg5 received as type 3
msg1 received as type 1
msg3 received as type 2
msg2 received as type 1
msg6 received as type 3
msg4 received as type 2
```
The output above shows that the _sender_ and the _receiver_ can be launched from the same terminal. The output also shows that the message queue persists even after the _sender_ process creates the queue, writes to it, and exits. The queue goes away only after the _receiver_ process explicitly removes it with the call to **msgctl**:

```
if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */
```
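How does the fourth argument to **msgrcv** steer retrieval? As a summary sketch (my own, using only standard System V semantics): a positive type requests the first message of exactly that type, zero requests the message at the front of the queue regardless of type, and a negative type requests the first message whose type is less than or equal to the absolute value, lowest such type first:

```
#include <sys/msg.h>
#include "queue.h" /* defines queuedMessage */

/* assumes qid identifies an existing queue, as in the receiver program */
void retrieval_modes(int qid, queuedMessage* msg) {
  msgrcv(qid, msg, sizeof(*msg), 0, IPC_NOWAIT);  /* front of queue: FIFO order */
  msgrcv(qid, msg, sizeof(*msg), 2, IPC_NOWAIT);  /* first message of type 2 */
  msgrcv(qid, msg, sizeof(*msg), -2, IPC_NOWAIT); /* lowest type <= 2 */
}
```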
### Wrapping up

The pipes and message queue APIs are fundamentally _unidirectional_: one process writes and another reads. There are implementations of bidirectional named pipes, but my two cents is that this IPC mechanism is at its best when it is simplest. As noted earlier, message queues have fallen in popularity—but without good reason; these queues are yet another tool in the IPC toolbox. Part 3 completes this quick tour of the IPC toolbox with code examples of IPC through sockets and signals.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/interprocess-communication-linux-channels

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
[4]: http://man7.org/linux/man-pages/man2/mq_open.2.html
[5]: http://man7.org/linux/man-pages/man2/mq_open.2.html#ATTRIBUTES
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
@ -0,0 +1,68 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Foundation Training Courses Sale & Discount Coupon)
|
||||
[#]: via: (https://itsfoss.com/linux-foundation-discount-coupon/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Linux Foundation Training Courses Sale & Discount Coupon
|
||||
======
|
||||
|
||||
Linux Foundation is the non-profit organization that employs Linux creator Linus Torvalds and manages the development of the Linux kernel. Linux Foundation aims to promote the adoption of Linux and Open Source in the industry and it is doing a great job in this regard.
|
||||
|
||||
Open Source jobs are in demand, and no one knows this better than the Linux Foundation, the official Linux organization. This is why the Linux Foundation provides a number of training and certification courses on Linux-related technology. You can browse the [entire course offering on the Linux Foundation's training webpage][1].
|
||||
|
||||
### Linux Foundation Latest Offer: 40% off on all courses [Limited Time]
|
||||
|
||||
At present, the Linux Foundation is running some great offers for sysadmin, devops and cloud professionals.
|
||||
|
||||
The Linux Foundation is offering a massive discount of 40% on the entire range of their e-learning courses and certification bundles, including the growing catalog of cloud and devops e-learning courses like Kubernetes!
|
||||
|
||||
Just use coupon code **APRIL40** at checkout to get your discount.
|
||||
|
||||
[Linux Foundation 40% Off (Coupon Code APRIL40)][2]
|
||||
|
||||
_Do note that this offer is valid till 22nd April 2019 only._
|
||||
|
||||
### Linux Foundation Discount Coupon [Valid all the time]
|
||||
|
||||
You can get a 16% off on any training or certification course provided by The Linux Foundation at any given time. All you have to do is to use the coupon code **FOSS16** at the checkout page.
|
||||
|
||||
Note that it might not be possible to combine it with the sysadmin day offer.
|
||||
|
||||
[Get 16% off on Linux Foundation Courses with FOSS16 Code][1]
|
||||
|
||||
This article contains affiliate links. Please read our [affiliate policy][3].
|
||||
|
||||
#### Should you get certified?
|
||||
|
||||
![][4]
|
||||
|
||||
This is the question I have been asked regularly. Are Linux certifications worth it? The short answer is yes.
|
||||
|
||||
As per the [open source jobs report in 2018][5], over 80% of open source professionals said that certifications helped with their careers. Certifications enable you to demonstrate technical knowledge to potential employers and thus certifications make you more employable in general.
|
||||
|
||||
Almost half of the hiring managers said that employing certified open source professionals is a priority for them.
|
||||
|
||||
Certifications from a reputed authority like the Linux Foundation, Red Hat, or LPIC are particularly helpful when you are a fresh graduate or if you want to switch to a new domain in your career.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/linux-foundation-discount-coupon/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://shareasale.com/u.cfm?d=507759&m=59485&u=747593&afftrack=
|
||||
[2]: http://shrsl.com/1k5ug
|
||||
[3]: https://itsfoss.com/affiliate-policy/
|
||||
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/07/linux-foundation-training-certification-discount.png?ssl=1
|
||||
[5]: https://www.linuxfoundation.org/publications/open-source-jobs-report-2018/
|
@ -0,0 +1,118 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 3)
|
||||
[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3)
|
||||
[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie)
|
||||
|
||||
Linux Server Hardening Using Idempotency with Ansible: Part 3
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
[Creative Commons Zero][2]
|
||||
|
||||
In the previous articles, we introduced idempotency as a way to approach your server’s security posture and looked at some specific Ansible examples, including the kernel, system accounts, and IPtables. In this final article of the series, we’ll look at a few more server-hardening examples and talk a little more about how the idempotency playbook might be used.
|
||||
|
||||
#### **Time**
|
||||
|
||||
Due to its reduced functionality, and therefore attack surface, the preference amongst a number of OSs has been to introduce “chronyd” over “ntpd”. If you’re new to “chrony” then fret not. It’s still using the NTP (Network Time Protocol) that we all know and love but in a more secure fashion.
|
||||
|
||||
The first thing I do with Ansible within the “chrony.conf” file is alter the “bind address” and if my memory serves there’s also a “command port” option. These config options allow Chrony to only listen on the localhost. In other words you are still syncing as usual with other upstream time servers (just as NTP does) but no remote servers can query your time services; only your local machine has access.
|
||||
|
||||
There’s more information on the “bindcmdaddress 127.0.0.1” and “cmdport 0” on this Chrony page (<https://chrony.tuxfamily.org/faq.html>) under “2.5. How can I make chronyd more secure?” which you should read for clarity. This premise behind the comment on that page is a good idea: “you can disable the internet command sockets completely by adding cmdport 0 to the configuration file”.
|
||||
|
||||
Additionally I would also focus on securing the file permissions for Chrony and insist that the service starts as expected just like the syslog config above. Otherwise make sure that your time sources are sane, have a degree of redundancy with multiple sources set up and then copy the whole config file over using Ansible.
|
||||
|
||||
#### **Logging**
|
||||
|
||||
You can clearly affect the level of detail included in the logs from a number of pieces of software on a server. Thinking back to what we've looked at in relation to syslog already, you can also tweak that application's config to your needs using Ansible and then apply the example Ansible above in addition.
|
||||
|
||||
#### **PAM**
|
||||
|
||||
Apparently PAM (Pluggable Authentication Modules) has been a part of Linux since 1997. It is undeniably useful (a common use is that you can force SSH to use it for password logins, as per the SSH YAML file above). It is extensible, sophisticated and can perform useful functions such as preventing brute force attacks on password logins using a clever rate limiting system. The syntax varies a little between OSes but if you have the time then getting PAM working well (even if you’re only using SSH keys and not passwords for your logins) is a worthwhile effort. Attackers like their own users on a system with lots of usernames, something innocuous such as “webadmin” or similar might be easy to miss on a server, and PAM can help you out in this respect.
|
||||
|
||||
#### **Auditd**
|
||||
|
||||
We’ve looked at logging a little already but what about capturing every “system call” that a kernel makes. The Linux kernel is a super-busy component of any system and logging almost every single thing that a system does is an excellent way of providing post-event forensics. This article will hopefully shed some light on where to begin: <http://www.admin-magazine.com/Archive/2018/43/Auditing-Docker-Containers-in-a-DevOps-Environment>. Note the comments in that article about performance, there’s little point in paying extra for compute and disk IO resource because you’ve misconfigured your logging so spend some time getting it correct would be my advice.
|
||||
|
||||
For concerns over disk space, I will usually change a few lines in the file "/etc/audit/auditd.conf" to prevent, firstly, too many log files being created and, secondly, logs growing very large without being rotated. This is also on the proviso that logs are being ingested upstream via another mechanism too. Clearly, file permissions and the service starting up are basics you need to cover here as well. File permissions for auditd are generally tight, as it's a "root"-oriented service, so fewer changes are needed here.
|
||||
|
||||
#### **Filesystems**
|
||||
|
||||
With a little reading, you can discover which filesystems are made available to your OS by default. You should disable these (at the "modprobe.d" file level) with Ansible to prevent weird and wonderful things being attached unwittingly to your servers. You are reducing the attack surface with this approach. The Ansible might look something like the example below.
|
||||
|
||||
```
|
||||
- name: Make sure filesystems which are not needed are forced off
  lineinfile: dest="/etc/modprobe.d/harden.conf" line='install squashfs /bin/true' state=present
|
||||
```
|
||||
|
||||
#### **SELinux**
|
||||
|
||||
The old security favourite, SELinux, sometimes avoided due to its complexity, should be set to "enforcing" mode. Or, at the very least, set it to log sensibly using "permissive" mode. Permissive mode will at least fill your auditd logs with any correct rule matches. In terms of what the Ansible looks like, it's simple and along these lines:
|
||||
|
||||
```
|
||||
- name: Configure SELinux to be running in permissive mode
  replace: path='/etc/selinux/config' regexp='SELINUX=disabled' replace='SELINUX=permissive'
|
||||
```
|
||||
|
||||
#### **Packages**
|
||||
|
||||
Needless to say the compliance hardening playbook is also a good place to upgrade all the packages (with some selective exclusions) on the system. Pay attention to the section relating to reboots and idempotency in a moment however. With other mechanisms in place you might not want to update packages here but instead as per the Automation Documents article mentioned in a moment.
|
||||
|
||||
### **Idempotency**
|
||||
|
||||
Now we’ve run through some of the aspects you would want to look at when hardening on a server, let’s think a little more about how the playbook might be used.
|
||||
|
||||
When it comes to cloud platforms most of my professional work has been on AWS and therefore, more often than not, a fresh AMI is launched and then a playbook is run over the top of it. There’s a mountain of detail in one way of doing that in this article (<http://www.admin-magazine.com/Archive/2018/45/AWS-Automation-Documents>) which you may be pleased to discover accommodates a mechanism to spawn a script or playbook.
|
||||
|
||||
It is important to note, when it comes to idempotency, that it may take a little more effort initially to get your head around the logic involved in being able to re-run Ansible repeatedly without disturbing the required status quo of your server estate.
|
||||
|
||||
One thing to be absolutely certain of however (barring rare edge cases) is that after you apply your hardening for the very first time, on a new AMI or server build, you will require a reboot. This is an important element due to a number of system facets not being altered correctly without a reboot. These include applying kernel changes so alterations become live, writing auditd rules as immutable config and also starting or stopping services to improve the security posture.
|
||||
|
||||
Note though that you’re probably not going to want to execute all plays in a playbook every twenty or thirty minutes, such as updating all packages and stopping and restarting key customer-facing services. As a result you should factor the logic into your Ansible so that some tasks only run once initially and then maybe write a “completed” placeholder file to the filesystem afterwards for referencing. There’s a million different ways of achieving a status checker.
|
||||
|
||||
The nice thing about Ansible is that the logic for rerunning playbooks is implicit, unlike shell scripts, where coding that logic in for this type of task can be arduous. Sometimes, such as when updating the GRUB bootloader, trying to guess the many permutations of a system change can be painful.
|
||||
|
||||
### **Bedtime Reading**
|
||||
|
||||
I still think that you can’t beat trial and error when it comes to computing. Experience is valued for good reason.
|
||||
|
||||
Be warned that you’ll find contradictory advice sometimes from the vast array of online resources in this area. Advice differs probably because of the different use cases. The only way to harden the varying flavours of OS to my mind is via a bespoke approach. This is thanks to the environments that servers are used within and the requirements of the security framework or standard that an organisation needs to meet.
|
||||
|
||||
For OS hardening details you can check with resources such as the NSA ([https://www.nsa.gov][3]), the Cloud Security Alliance (<https://cloudsecurityalliance.org/working-groups/security-guidance/#_overview>), proprietary training organisations such as GIAC ([https://www.giac.org][4]) who offer resources (<https://www.giac.org/paper/gcux/97/red-hat-linux-71-installation-hardening-checklist/102167>), the diverse CIS Benchmarks ([https://www.cisecurity.org][5]) for industry consensus-based benchmarking, the SANS Institute (<https://uk.sans.org/score/checklists>), NIST’s Computer Security Research ([https://csrc.nist.gov][6]) and of course print media too.
|
||||
|
||||
### **Conclusion**
|
||||
|
||||
Hopefully, you can see how powerful an idempotent server infrastructure is and are tempted to try it for yourself.
|
||||
|
||||
The ever-present threat of APT (Advanced Persistent Threat) attacks on infrastructure, where a successful attacker will sit silently monitoring events and then when it’s opportune infiltrate deeper into an estate, makes this type of configuration highly valuable.
|
||||
|
||||
The amount of detail that goes into the tests and configuration changes is key to the value that such an approach will bring to an organisation. Like the tests in a CI/CD pipeline they’re only as ever as good as their coverage.
|
||||
|
||||
Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][7]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3
|
||||
|
||||
作者:[Chris Binnie][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/chrisbinnie
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tech-1495181_1280.jpg?itok=5WcwApNN
|
||||
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
|
||||
[3]: https://www.nsa.gov/
|
||||
[4]: https://www.giac.org/
|
||||
[5]: https://www.cisecurity.org/
|
||||
[6]: https://csrc.nist.gov/
|
||||
[7]: https://www.devsecops.cc/
|
@ -0,0 +1,47 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Which programming languages should you learn?)
|
||||
[#]: via: (https://opensource.com/article/19/2/which-programming-languages-should-you-learn)
|
||||
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
|
||||
|
||||
应该学习哪种编程语言?
|
||||
======
|
||||
学习一门新的编程语言是在你的职业生涯中继续前进的好方法,但是应该学习哪一门呢?
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0)
|
||||
|
||||
如果你想要在编程生涯中起步或继续前进,那么学习一门新语言是一个聪明的主意。但是,大量活跃使用的语言引发了一个问题:哪种编程语言是最好学习的?要回答这个问题,让我们从一个简单的问题开始:你想做什么样的程序?
|
||||
|
||||
如果你想在客户端进行网络编程,那么 HTML、CSS 和 JavaScript(及其看似无穷无尽的方言)这些特定语言是必须要学习的。
|
||||
|
||||
如果你想在服务器端进行 Web 编程,那么选项包括常见的通用语言:C++, Golang, Java, C#, Node.js, Perl, Python, Ruby 等等。当然,服务器程序与数据存储(例如关系数据库和其他数据库)打交道,这意味着 SQL 等查询语言可能会发挥作用。
|
||||
|
||||
如果你正在为移动设备编写原生应用程序,那么了解目标平台非常重要。对于 Apple 设备,Swift 已经取代 Objective C 成为首选语言。对于 Android 设备,Java(带有专用库和工具集)仍然是主要语言。还有一些特殊语言,如与 C# 一起使用的 Xamarin,可以为 Apple、Android 和 Windows 设备生成特定于平台的代码。
|
||||
|
||||
那么通用语言呢?通常有各种各样的选择。在*动态*或*脚本*语言(如 Perl、Python 和 Ruby)中,有一些新东西,如 Node.js。Java 和 C#(两者的相似之处比它们的粉丝愿意承认的要多)仍然是针对虚拟机(分别是 JVM 和 CLR)的主要*静态编译*语言。在编译为*原生可执行文件*的语言中,C++ 仍然占有一席之地,此外还有较新的 Golang 和 Rust 等。通用*函数式*语言比比皆是(如 Clojure、Haskell、Erlang、F#、Lisp 和 Scala),它们通常都有热情投入的社区。值得注意的是,面向对象语言(如 Java 和 C#)已经添加了函数式构造(特别是 lambda),而动态语言从一开始就有函数式构造。
|
||||
|
||||
让我以 C 语言结尾,它是一种小巧,优雅,可扩展的语言,不要与 C++ 混淆。现代操作系统主要用 C 语言编写,其余的用汇编语言编写。任何平台上的标准库大多数都是用 C 语言编写的。例如,任何打印 `Hello, world!` 这种问候都是通过调用名为 **write** 的 C 库函数来实现的。
|
||||
|
||||
C 作为一种可移植的汇编语言,公开了其他高级语言有意隐藏的底层系统的详细信息。因此,理解 C 可以更好地掌握程序如何竞争执行所需的共享系统资源(如处理器,内存和 I/O 设备)。C 语言既高级又接近硬件,因此在性能方面无与伦比,当然,汇编语言除外。最后,C 是编程语言中的通用语言,几乎所有通用语言都支持一种或另一种形式的 C 调用。
|
||||
|
||||
有关现代 C 语言的介绍,参考我的书籍 [C Programming: Introducing Portable Assembler][1]。无论你怎么做,学习 C 语言,你会学到比另一种编程语言多得多的东西。
|
||||
|
||||
你认为学习哪些编程语言很重要?你是否同意这些建议?在评论告知我们!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/2/which-programming-languages-should-you-learn
|
||||
|
||||
作者:[Marty Kalin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[MjSeven](https://github.com/MjSeven)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mkalindepauledu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.amazon.com/dp/1977056954?ref_=pe_870760_150889320
|
@ -0,0 +1,129 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "zgj1024 "
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "Why DevOps is the most important tech strategy today"
|
||||
[#]: via: "https://opensource.com/article/19/3/devops-most-important-tech-strategy"
|
||||
[#]: author: "Kelly AlbrechtWilly-Peter Schaub https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht"
|
||||
|
||||
为何 DevOps 是如今最重要的技术策略
|
||||
======
|
||||
消除一些关于 DevOps 的疑惑
|
||||
![CICD with gears][1]
|
||||
|
||||
很多人初学 [DevOps][2] 时,一看到它的某个成果就会问:这个成果是怎么实现的?其实,理解这部分成果是如何实现的并不重要,重要的是理解为什么要采用 DevOps 策略——这是行业领导者与追随者的差别。
|
||||
|
||||
你可能会听过些 Devops 的难以置信的成果,例如生产环境非常有弹性,“混世猴子”([Chaos Monkey][3])程序运行时,将周围的连接随机切断,每天仍可以处理数千个版本。这是令人印象深刻的,但就其本身而言,这是一个 DevOps 的无力案例,本质上会被一个[反例][4]困扰:DevOps 环境有弹性是因为没有观察到严重的故障。。。还没有。
|
||||
|
||||
有很多关于 DevOps 的疑惑,并且许多人还在尝试弄清楚它的意义。下面是来自我 LinkedIn Feed 中的某个人的一个案例:
|
||||
|
||||
> 最近我参加一些 #DevOps 的交流会,那里一些演讲人好像在倡导 #敏捷开发是 DevOps 的子集。不知为何,我的理解洽洽相反。
|
||||
>
|
||||
> 能听一下你们的想法吗?你认为敏捷开发和 DevOps 之间是什么关系呢?
|
||||
>
|
||||
> 1. DevOps 是敏捷开发的子集
|
||||
> 2. 敏捷开发 是 DevOps 的子集
|
||||
> 3. DevOps 是敏捷开发的扩展,从敏捷开发结束的地方开始
|
||||
> 4. DevOps 是敏捷开发的新版本
|
||||
>
|
||||
|
||||
科技行业的专业人士在那篇 LinkedIn 的帖子上发表了各种各样的答案,你会怎样回复呢?
|
||||
|
||||
### DevOps源于精益和敏捷
|
||||
|
||||
如果我们从亨利福特的战略和丰田生产系统对福特车型的改进(的历史)开始, DevOps 就更有意义了。精益制造就诞生在那段历史中,人们对精益制作进行了良好的研究。James P. Womack 和 Daniel T. Jones 将精益思维( [Lean Thinking][5])提炼为五个原则:
|
||||
1. 指明客户所需的价值
|
||||
2. 确定提供该价值的每个产品的价值流,并对当前提供该价值所需的所有浪费步骤提起挑战
|
||||
3. 使产品通过剩余的增值步骤持续流动
|
||||
4. 在可以连续流动的所有步骤之间引入拉力
|
||||
5. 管理要尽善尽美,以便为客户服务所需的步骤数量和时间以及信息量持续下降
|
||||
|
||||
|
||||
精益致力于持续消除浪费并增加流向客户的价值。通过精益的一个核心原则——单件流,这一点很容易识别和理解。我们可以通过一些游戏来了解为什么一次移动单件要比批量移动快得多,[硬币游戏][6]和[飞机游戏][7]就是其中的两个。在硬币游戏中,如果一批 20 个硬币要用 2 分钟才能到达顾客手中,顾客要等 2 分钟后才能拿到整批硬币。如果一次只移动一个硬币,顾客会在大约 5 秒内得到第一枚硬币,并持续得到硬币,直到大约 25 秒后第 20 个硬币到达。(译者注:有相关视频)
|
||||
|
||||
这是巨大的不同,但是不是生活中的所有事都像硬币游戏那样简单并可预测的。这就是敏捷的出现的原因。我们当然看到了高效绩敏捷团队的精益原则,但这些团队需要的不仅仅是精益去做他们要做的事。
|
||||
|
||||
为了能够处理典型的软件开发任务的不可预见性和变化,敏捷开发的方法论会将重点放在意识、审议、决策和行动上,以便在不断变化的现实中调整。例如,敏捷框架(如 srcum)通过每日站立会议和冲刺评审会议等仪式提高意识。如果 scrum 团队意识到新的事实,框架允许并鼓励他们在必要时及时调整路线。
|
||||
|
||||
要使团队做出这些类型的决策,他们需要高度信任的环境中的自我组织能力。以这种方式工作的高效绩敏捷团队在不断调整的同时实现快速的价值流,消除错误方向上的浪费。
|
||||
|
||||
### 最佳批量大小
|
||||
|
||||
要了解 DevOps 在软件开发中的强大功能,先理解批量大小的经济学会很有帮助。请考虑以下来自 Donald Reinertsen 的《[产品开发流程原则][8]》的 U 型曲线优化示例:
|
||||
|
||||
![U-curve optimization illustration of optimal batch size][9]
|
||||
|
||||
这可以类比杂货店购物来解释。假设你需要买一些鸡蛋,而你住的地方离商店有 30 分钟的路程。每次只买一个鸡蛋(图中最左边)意味着每次都要花 30 分钟的路程,这就是你的_交易成本_。_持有成本_则可能是鸡蛋变质以及在冰箱中持续占用的空间。_总成本_是_交易成本_加上_持有成本_。这条 U 型曲线解释了为什么对大部分人来说,一次买一打鸡蛋是他们的_最佳批量大小_。如果你就住在商店旁边,步行到那里几乎不花时间,你可能每次只买一小盒鸡蛋,以节省冰箱空间并享用新鲜鸡蛋。
|
||||
|
||||
这条 U 型优化曲线可以说明为什么成功的敏捷转换能显著提高生产力。考虑敏捷转换对组织决策的影响:在传统的分级组织中,决策权是集中的,这导致由较少的人做出更大的决策。敏捷方法论将决策分散到认识最充分、信息最齐全的地方——高度信任、自组织的敏捷团队,从而有效降低了组织决策的交易成本。
|
||||
|
||||
下面的动画演示了交易成本降低后,最佳批量大小是如何向左移动的。更频繁、更快速地做出决策,对组织的价值不可低估。
|
||||
|
||||
![U-curve optimization illustration][10]
|
||||
|
||||
### DevOps 适合哪些地方
|
||||
|
||||
自动化是 DevOps 最知名的事情之一。前面的插图非常详细地展示了自动化的价值。通过自动化,我们将交易成本降低到接近零,实质上是免费进行测试和部署。这使我们可以利用越来越小的批量工作。较小批量的工作更容易理解、提交、测试、审查和知道何时能完成。这些较小的批量大小也包含较少的差异和风险,使其更易于部署,如果出现问题,可以进行故障排除和恢复。通过自动化与扎实的敏捷实践相结合,我们可以使我们的功能开发非常接近单件流程,从而快速,持续地为客户提供价值。
|
||||
|
||||
更传统地说,DevOps 被理解为一种打破开发团队和运营团队之间混乱局面的方法。在这个模型中,开发团队开发新的功能,而运营团队则保持系统的稳定和平稳运行。摩擦的发生是因为开发过程中的新功能将更改引入到系统中,从而增加了停机的风险,运营团队并不认为要对此负责,但无论如何都必须处理这一问题。DevOps 不仅仅尝试让人们一起工作,更重要的是尝试在复杂的环境中安全地进行更频繁的更改。
|
||||
|
||||
我们可以参考 [Ron Westrum][11] 关于在复杂组织中实现安全性的研究。在研究为什么有些组织比其他组织更安全时,他发现组织的文化可以预测其安全性。他确定了三种文化:病态的、官僚主义的和生产式的。他发现病态文化预示着较低的安全性,而生产式文化预示着更高的安全性(例如,在他的主要研究领域中,飞机坠毁或意外住院死亡的数量要少得多)。
|
||||
|
||||
![Three types of culture identified by Ron Westrum][12]
|
||||
|
||||
高效的 DevOps 团队通过精益和敏捷的实践实现了一种生成性文化,这表明速度和安全性是互补的,或者说是同一个问题的两个方面。通过将决策和功能的最佳批量大小减少到非常小,DevOps 实现了更快的信息流和价值,同时消除了浪费并降低了风险。
|
||||
|
||||
与 Westrum 的研究一致,在提高安全性和可靠性的同时,变更也可以轻松进行。当一个敏捷的 DevOps 团队被信任做出自己的决定时,我们就得到了 DevOps 目前最为人所知的工具和技术:自动化和持续交付。通过这种自动化,交易成本比以往任何时候都更低,并实现了近乎单件流的精益流程,正如我们在高效绩的 DevOps 组织中看到的那样,每天可能产生数千个决策和发布。
|
||||
|
||||
### 流动、反馈、学习
|
||||
|
||||
DevOps 并不止于此。我们主要讨论了 DevOps 实现的革命性流程,但通过类似的努力,精益和敏捷实践还能得到进一步放大,从而实现更快的反馈循环和更快的学习。在《[DevOps 手册][13]》中,作者们除了详细解释快速流程外,还说明了 DevOps 如何在整个价值流中实现遥测,从而获得快速且持续的反馈。此外,利用精益的[改善(Kaizen)][14]突破和 Scrum 的[回顾][15],高效的 DevOps 团队将把学习和持续改进深深植入组织的基础,实现软件产品开发行业的精益制造革命。
|
||||
|
||||
|
||||
### 从 DevOps 评估开始
|
||||
|
||||
利用 DevOps 的第一步是,经过大量研究或在 DevOps 顾问和教练的帮助下,对高效绩 DevOps 团队中始终存在的一系列维度进行评估。评估应确定需要改进的薄弱或不存在的团队规范。对评估的结果进行评估,以找到具有高成功机会的快速获胜焦点领域,从而产生高影响力的改进。快速获胜非常重要,能让团队获取解决更具挑战性领域所需的动力。团队应该产生可以快速尝试的想法,并开始关注 DevOps 转型。
|
||||
|
||||
一段时间后,团队应重新评估相同的维度,以衡量改进并确立新的高影响力重点领域,并再次采纳团队的新想法。一位好的教练将根据需要进行咨询、培训、指导和支持,直到团队拥有自己的持续改进方案,并通过不断地重新评估、试验和学习,在所有维度上实现近乎一致。
|
||||
|
||||
在本文的[第二部分][16]中,我们将查看 Drupal 社区中 DevOps 调查的结果,并了解最有可能找到快速获胜的位置。
|
||||
|
||||
* * *
|
||||
|
||||
_Rob Bayliss 和 Kelly Albrecht 将于 4 月 8 日至 12 日在西雅图举行的 [DrupalCon 2019][19] 上演讲 [DevOps: Why, How, and What][17],并主持后续的 [BoF 讨论][18]。_
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/devops-most-important-tech-strategy
|
||||
|
||||
作者:[Kelly AlbrechtWilly-Peter Schaub][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/zgj1024)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ksalbrecht/users/brentaaronreed/users/wpschaub/users/wpschaub/users/ksalbrecht
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc "CICD with gears"
|
||||
[2]: https://opensource.com/resources/devops
|
||||
[3]: https://github.com/Netflix/chaosmonkey
|
||||
[4]: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)#Proving_a_negative
|
||||
[5]: https://www.amazon.com/dp/B0048WQDIO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
|
||||
[6]: https://youtu.be/5t6GhcvKB8o?t=54
|
||||
[7]: https://www.shmula.com/paper-airplane-game-pull-systems-push-systems/8280/
|
||||
[8]: https://www.amazon.com/dp/B00K7OWG7O/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
|
||||
[9]: https://opensource.com/sites/default/files/uploads/batch_size_optimal_650.gif "U-curve optimization illustration of optimal batch size"
|
||||
[10]: https://opensource.com/sites/default/files/uploads/batch_size_650.gif "U-curve optimization illustration"
|
||||
[11]: https://en.wikipedia.org/wiki/Ron_Westrum
|
||||
[12]: https://opensource.com/sites/default/files/uploads/information_flow.png "Three types of culture identified by Ron Westrum"
|
||||
[13]: https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/ref=sr_1_3?keywords=DevOps+handbook&qid=1553197361&s=books&sr=1-3
|
||||
[14]: https://en.wikipedia.org/wiki/Kaizen
|
||||
[15]: https://www.scrum.org/resources/what-is-a-sprint-retrospective
|
||||
[16]: https://opensource.com/article/19/3/where-drupal-community-stands-devops-adoption
|
||||
[17]: https://events.drupal.org/seattle2019/sessions/devops-why-how-and-what
|
||||
[18]: https://events.drupal.org/seattle2019/bofs/devops-getting-started
|
||||
[19]: https://events.drupal.org/seattle2019
|
@ -1,87 +0,0 @@
|
||||
5 款适合程序员的开源字体
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documentation-type-keys-yearbook.png?itok=Q-ELM2rn)
|
||||
|
||||
什么是最好的编程字体呢?首先,你需要考虑到字体被设计出来的初衷可能并不相同。当选择一款用于休闲阅读的字体时,读者希望该字体的字母能够顺滑地衔接,提供一种轻松愉悦的体验。一款标准字体的每个字符,类似于拼图的一块,它需要被仔细的设计,从而与整个字体的其他部分融合在一起。
|
||||
|
||||
然而,在书写代码时,通常来说对字体的要求自然会更具功能性。这也是为什么当给定选择时,程序员更偏爱使用固定宽度的等宽字体。选择一款带有容易分辨的数字和标点的字体在美学上令人愉悦;但它是否拥有满足你需求的版权许可也是非常重要的。
|
||||
|
||||
使得一个字体更适合编程需要某些特定的需求。首先使得一款等宽字体看上去有条不紊需要一个非常详细的设定。这里,请考虑一会儿字母 `w` 和字母 `i` 之间的对比吧。当选择一款字体时,正如字母一样,在字母周围的空白也同样值得考虑。在物理书和新闻排版中,高效地使用空格也极为重要,相比于相对宽大的字母 `w`,字体将赋予相对瘦小的 `i` 更少的宽度。
|
||||
|
||||
然而在终端中,很幸运你将没有这些限制。每个字符享有相等的空白将非常有用。这么做的首要好处是你可以高效地通过简单的扫一下代码块,推测出你的代码有多长。第二个好处是能够很容易地对齐字符和标点,字符的高亮在视觉上更加明显。另外打印纸张上的等宽字体比均衡字体更加容易识别。
|
||||
|
||||
在本篇文章中,我们将探索 5 款卓越的开源字体,使用它们来编程和写代码都非常理想。
|
||||
|
||||
### 1. Firacode:最佳整套编程字体
|
||||
![FiraCode 示例][1]
|
||||
FiraCode, Andrew Lekashman
|
||||
|
||||
在我们列表上的首款字体是 [FiraCode][3],一款真正符合甚至超越了其职责的编程字体。FiraCode 是 Fira 的扩展,而后者是由 Mozilla 委托设计的开源字体族。使得 FiraCode 如此特别的原因是它修改了在代码中常使用的一些符号的组合或连字,使得它看上去非常具有可读性。这款字体有几种不同的风格,特别是还包含 Retina 选项。你可以在它的 [GitHub][3] 主页中找到它被使用到多种编程语言中的例子。
|
||||
|
||||
![FiraCode compared to Fira Mono][2]
|
||||
FiraCode 与 Fira Mono 的对比,[Nikita Prokopov][3],源自 GitHub
|
||||
|
||||
### 2. Inconsolata:优雅且由卓越设计者创造
|
||||
![Inconsolata 示例][4]
|
||||
Inconsolata, Andrew Lekashman
|
||||
|
||||
[Inconsolata][5] 是最为漂亮的等宽字体之一。从 2006 年开始它便一直是一款开源和可免费获取的字体。它的创造者 Raph Levien 在设计 Inconsolata 时秉承的一个基本原则是:等宽字体并不应该那么糟糕。使得 Inconsolata 如此优秀的两个原因是:对于 `0` 和 `o` 这两个字符它们有很大的不同,另外它还特别地设计了标点符号。
|
||||
|
||||
### 3. DejaVu Sans Mono:许多 Linux 发行版的标准配置,庞大的字形覆盖率
|
||||
![DejaVu Sans Mono example][6]
|
||||
DejaVu Sans Mono, Andrew Lekashman
|
||||
|
||||
受使用在 GNOME 中的带有版权和闭源的 Vera 字体,[DejaVu Sans Mono][7] 是一个非常受欢迎的编程字体,几乎在每个现代的 Linux 发行版中都带有它。在 Book 变种风格下 DejaVu 拥有惊人的 3310 个字形,相比于一般的字体,它们含有 100 个左右的字形。在工作中你将不会出现缺少某些字符的情况,它覆盖了 Unicode 的绝大部分,并且一直在活跃地增长着。
|
||||
|
||||
### 4. Source Code Pro:优雅、可读性强,由 Adobe 中一个小巧但天才的团队打造
|
||||
![Source Code Pro example][8]
|
||||
Source Code Pro, Andrew Lekashman
|
||||
|
||||
由 Paul Hunt 和 Teo Tuominen 设计,[Source Code Pro][9] 是[由 Adobe 创造的][10]它的首款开源字体
|
||||
。Source Code Pro 值得注意的地方在于它非常具有可读性,且对于容易混淆的字符和标点,它有着非常好的区分度。Source Code Pro 也是一个字体族,有 7 中不同的风格: Extralight, Light, Regular, Medium, Semibold, Bold 和 Black,每种风格都还有斜体变体。
|
||||
|
||||
![Differentiating potentially confusable characters][11]
|
||||
|
||||
潜在易混淆的字符之间的区别,[Paul D. Hunt][10] 源自 Adobe Typekit 博客。
|
||||
|
||||
![Metacharacters with special meaning in computer languages][12]
|
||||
在计算机领域中有特别含义的特殊元字符, [Paul D. Hunt][10] 源自 Adobe Typekit 博客。
|
||||
|
||||
### 5. Noto Mono:巨量的语言覆盖率,由 Google 中的一个大团队打造
|
||||
|
||||
![Noto Mono example][13]
|
||||
Noto Mono, Andrew Lekashman
|
||||
|
||||
在我们列表上的最后一款字体是 [Noto Mono][14], Google 打造的庞大 Note 字体族中的等宽版本。尽管它并不是专为编程所设计,但它在 209 种语言(包括 emoji 颜文字!)中都可以获取到,并且一直在维护和更新。该项目非常庞大,是 Google 宣言“组织全世界信息” 的一个扩展。假如你想更多地了解它,可以查看这个绝妙的[关于这些字体的视频][15]。
|
||||
|
||||
### 选择合适的字体
|
||||
|
||||
无论你选择那个字体,你都有可能在每天中花费数小时面对它,所以请确保它在审美和哲学层面上与你产生共鸣。选择正确的开源字体是确保你拥有最佳生产环境的一个重要部分。这些字体都是奇妙的可选项,每个都具有让它脱颖而出的功能强大的特性。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/11/how-select-open-source-programming-font
|
||||
|
||||
作者:[Andrew Lekashman][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com
|
||||
[1]:https://opensource.com/sites/default/files/u128651/firacode.png (FiraCode example)
|
||||
[2]:https://opensource.com/sites/default/files/u128651/firacode2.png (FiraCode compared to Fira Mono)
|
||||
[3]:https://github.com/tonsky/FiraCode
|
||||
[4]:https://opensource.com/sites/default/files/u128651/inconsolata.png (Inconsolata example)
|
||||
[5]:http://www.levien.com/type/myfonts/inconsolata.html
|
||||
[6]:https://opensource.com/sites/default/files/u128651/dejavu_sans_mono.png (DejaVu Sans Mono example)
|
||||
[7]:https://dejavu-fonts.github.io/
|
||||
[8]:https://opensource.com/sites/default/files/u128651/source_code_pro.png (Source Code Pro example)
|
||||
[9]:https://github.com/adobe-fonts/source-code-pro
|
||||
[10]:https://blog.typekit.com/2012/09/24/source-code-pro/
|
||||
[11]:https://opensource.com/sites/default/files/u128651/source_code_pro2.png (Differentiating potentially confusable characters)
|
||||
[12]:https://opensource.com/sites/default/files/u128651/source_code_pro3.png (Metacharacters with special meaning in computer languages)
|
||||
[13]:https://opensource.com/sites/default/files/u128651/noto.png (Noto Mono example)
|
||||
[14]:https://www.google.com/get/noto/#mono-mono
|
||||
[15]:https://www.youtube.com/watch?v=AAzvk9HSi84
|
@ -0,0 +1,279 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Build a game framework with Python using the module Pygame)
|
||||
[#]: via: (https://opensource.com/article/17/12/game-framework-python)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
使用 Python 和 Pygame 模块构建一个游戏框架
|
||||
======
|
||||
本系列的第一篇文章通过创建一个简单的骰子游戏探究了 Python。现在,是时候从零开始制作你自己的游戏了。
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python2-header.png?itok=tEvOVo4A)
|
||||
|
||||
在我 [这个系列的第一篇文章][1] 中,我讲解了如何使用 Python 创建一个简单的、基于文本的骰子游戏。这次,我将展示如何使用 Python 和 Pygame 模块来创建一个图形化游戏。要做出一个真正有内容的游戏,还需要接下来的好几篇文章;但在本系列结束时,你将更好地理解如何查找和学习新的 Python 模块,以及如何在其基础上构建应用程序。
|
||||
|
||||
在开始前,你必须安装 [Pygame][2]。
|
||||
|
||||
### 安装新的 Python 模块
|
||||
|
||||
这里有一些方法来安装 Python 模块,但是最通用的两个是:
|
||||
|
||||
* 从你的发行版的软件存储库
|
||||
* 使用 Python 的软件包管理器,pip
|
||||
|
||||
两种方法都很好用,并且各有优势。如果你是在 Linux 或 BSD 上开发,使用发行版的软件存储库可以确保自动、及时地更新。
|
||||
|
||||
然而,使用 Python 内置的软件包管理器可以让你自己控制更新模块的时间。而且,它与具体操作系统无关,这意味着即使不在常用的开发机器上,你也可以使用它。pip 的另一个优势是允许把模块安装到用户目录下,当你没有所用计算机的管理权限时,这很有用。
|
||||
|
||||
### 使用 pip
|
||||
|
||||
如果你的系统上同时安装了 Python 和 Python3,你想使用的命令很可能是 `pip3`,以区别于 Python 2.x 的 `pip` 命令。如果不确定,先尝试 `pip3`。
|
||||
|
||||
`pip` 命令的工作方式与大多数 Linux 软件包管理器类似。你可以使用 `search` 搜索 Python 模块,然后使用 `install` 安装它们。如果你没有所用计算机的管理权限来安装软件,可以使用 `--user` 选项把模块安装到你的 home 目录。
|
||||
|
||||
```
|
||||
$ pip3 search pygame
|
||||
[...]
|
||||
Pygame (1.9.3) - Python Game Development
|
||||
sge-pygame (1.5) - A 2-D game engine for Python
|
||||
pygame_camera (0.1.1) - A Camera lib for PyGame
|
||||
pygame_cffi (0.2.1) - A cffi-based SDL wrapper that copies the pygame API.
|
||||
[...]
|
||||
$ pip3 install Pygame --user
|
||||
```
|
||||
|
||||
Pygame 是一个 Python 模块,这意味着它只是一套可以在你的 Python 程序中使用的库。换句话说,它不是一个像 [IDLE][3] 或 [Ninja-IDE][4] 那样可以直接启动的程序。
|
||||
|
||||
### Pygame 新手入门
|
||||
|
||||
一个电子游戏需要一个故事背景;一个发生的地点。在 Python 中,有两种不同的方法来创建你的故事背景:
|
||||
|
||||
* 设置一种背景颜色
|
||||
* 设置一张背景图片
|
||||
|
||||
你的背景仅是一张图片或一种颜色。你的电子游戏人物不能与背景中的东西相互作用,因此,不要在背景里放置太重要的东西。它仅仅是舞台布景。
|
||||
|
||||
### 设置你的 Pygame 脚本
|
||||
|
||||
为了开始一个新的 Pygame 脚本,先在计算机上创建一个文件夹,游戏的全部文件都放在这个目录中。把运行游戏所需的所有文件保存在工程文件夹内,这一点极其重要。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/project.jpg)
|
||||
|
||||
一个 Python 脚本以文件类型声明、你的姓名以及你想使用的许可证开始。使用开源许可证,以便你的朋友可以改进你的游戏并与你分享他们的更改:
|
||||
|
||||
```
|
||||
#!/usr/bin/env python3
|
||||
# Seth Kenlon 编写
|
||||
|
||||
## GPLv3
|
||||
# This program is free software: you can redistribute it and/or
|
||||
# modify it under the terms of the GNU General Public License as
|
||||
# published by the Free Software Foundation, either version 3 of the
|
||||
# License, or (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful, but
|
||||
# WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
# General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
```
|
||||
|
||||
然后,你告诉 Python 你想使用的模块。一些模块是常见的 Python 库,当然,你想包括一个你刚刚安装的,Pygame 。
|
||||
|
||||
```
|
||||
import pygame # 加载 pygame 关键字
|
||||
import sys # 让 python 使用你的文件系统
|
||||
import os # 帮助 python 识别你的操作系统
|
||||
```
|
||||
|
||||
由于你会频繁使用这个脚本文件,把文件划分成几个部分会很有帮助,这样你就知道该把内容放在哪里。可以使用块注释来实现,这种注释只有在查看源代码时才可见。在你的代码中创建三个块。
|
||||
|
||||
```
|
||||
'''
|
||||
Objects
|
||||
'''
|
||||
|
||||
# 在这里放置 Python 类和函数
|
||||
|
||||
'''
|
||||
Setup
|
||||
'''
|
||||
|
||||
# 在这里放置一次性的运行代码
|
||||
|
||||
'''
|
||||
Main Loop
|
||||
'''
|
||||
|
||||
# 在这里放置游戏的循环代码指令
|
||||
```
|
||||
|
||||
接下来,为你的游戏设置窗口大小。注意,不是每一个人都有大计算机屏幕,所以,最好使用一个适合大多数人的计算机的屏幕大小。
|
||||
|
||||
有一种方法可以切换全屏模式,很多现代电子游戏都是这么做的,但由于你刚刚开始,还是保持简单,只设置一个固定大小吧。
|
||||
|
||||
```
|
||||
'''
|
||||
Setup
|
||||
'''
|
||||
worldx = 960
|
||||
worldy = 720
|
||||
```
|
||||
|
||||
在脚本中使用 Pygame 引擎前,你需要一些基本的设置。必须设置帧频,启动它的内部时钟,然后启动(`init`)Pygame。
|
||||
|
||||
```
|
||||
fps = 40 # 帧频
|
||||
ani = 4 # 动画循环
|
||||
clock = pygame.time.Clock()
|
||||
pygame.init()
|
||||
```
|
||||
|
||||
现在你可以设置你的背景。
|
||||
|
||||
### 设置背景
|
||||
|
||||
在你继续前,打开一个图形应用程序,并为你的游戏世界创建一个背景。在你的工程目录中的 `images` 文件夹内部保存它为 `stage.png` 。
|
||||
|
||||
这里有一些你可以使用的自由图形应用程序。
|
||||
|
||||
* [Krita][5] 是一个专业级绘图原料模拟器,它可以被用于创建漂亮的图片。如果你对电子游戏创建艺术作品非常感兴趣,你甚至可以购买一系列的[游戏艺术作品教程][6].
|
||||
* [Pinta][7] 是一个基本的,易于学习的绘图应用程序。
|
||||
* [Inkscape][8] 是一个矢量图形应用程序。使用它来绘制形状,线,样条曲线,和 Bézier 曲线。
|
||||
|
||||
|
||||
|
||||
你的图像不必很复杂,你可以以后回去更改它。一旦你有它,在你文件的 setup 部分添加这些代码:
|
||||
|
||||
```
|
||||
world = pygame.display.set_mode([worldx,worldy])
|
||||
backdrop = pygame.image.load(os.path.join('images','stage.png')).convert()
|
||||
backdropbox = world.get_rect()
|
||||
```
|
||||
|
||||
如果你仅仅用一种颜色来填充你的游戏的背景,你需要做的全部是:
|
||||
|
||||
```
|
||||
world = pygame.display.set_mode([worldx,worldy])
|
||||
```
|
||||
|
||||
你还需要定义一些要使用的颜色。在你的 setup 部分,使用红、绿、蓝(RGB)的值来创建一些颜色定义。
|
||||
|
||||
```
|
||||
'''
|
||||
Setup
|
||||
'''
|
||||
|
||||
BLUE = (25,25,200)
|
||||
BLACK = (23,23,23 )
|
||||
WHITE = (254,254,254)
|
||||
```
|
||||
|
||||
此时,理论上你已经可以启动游戏了。问题是,它可能只会持续一毫秒。
|
||||
|
||||
为证明这一点,保存你的文件为 `your-name_game.py` (用你真实的名称替换 `your-name` )。然后启动你的游戏。
|
||||
|
||||
如果你正在使用 IDLE,从 Run 菜单中选择 `Run Module` 来运行你的游戏。
|
||||
|
||||
如果你正在使用 Ninja ,在左侧按钮条中单击 `Run file` 按钮。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/ninja_run_0.png)
|
||||
|
||||
你也可以直接从一个 Unix 终端或一个 Windows 命令提示符中运行一个 Python 脚本。
|
||||
|
||||
```
|
||||
$ python3 ./your-name_game.py
|
||||
```
|
||||
|
||||
如果你正在使用 Windows ,使用这命令:
|
||||
|
||||
```
|
||||
py.exe your-name_game.py
|
||||
```
|
||||
|
||||
你启动它,不过不要期望很多,因为你的游戏现在仅仅持续几毫秒。你可以在下一部分中修复它。
|
||||
|
||||
### 循环
|
||||
|
||||
除非另有说明,一个 Python 脚本只会运行一次。如今计算机的运行速度非常快,所以你的 Python 脚本在 1 秒之内就会运行结束。
|
||||
|
||||
要强制让你的游戏打开并保持活跃足够长的时间,让人能够看到它(更不用说玩它),请使用 `while` 循环。要让游戏保持打开,你可以把一个变量设置为某个值,然后告诉 `while` 循环:只要该变量保持不变,就一直循环下去。
|
||||
|
||||
这经常被称为一个"主循环",你可以使用术语 `main` 作为你的变量。在你的 setup 部分的任意位置添加这些代码:
|
||||
|
||||
```
|
||||
main = True
|
||||
```
|
||||
|
||||
在主循环期间,使用 Pygame 关键字来检查是否在键盘上的按键已经被按下或释放。添加这些代码到你的主循环部分:
|
||||
|
||||
```
|
||||
'''
|
||||
Main loop
|
||||
'''
|
||||
while main == True:
|
||||
for event in pygame.event.get():
|
||||
if event.type == pygame.QUIT:
|
||||
pygame.quit(); sys.exit()
|
||||
main = False
|
||||
|
||||
if event.type == pygame.KEYDOWN:
|
||||
if event.key == ord('q'):
|
||||
pygame.quit()
|
||||
sys.exit()
|
||||
main = False
|
||||
```
|
||||
|
||||
同样在循环中,刷新你的游戏世界的背景。
|
||||
|
||||
如果你使用一个图片作为背景:
|
||||
|
||||
```
|
||||
world.blit(backdrop, backdropbox)
|
||||
```
|
||||
|
||||
如果你使用一种颜色作为背景:
|
||||
|
||||
```
|
||||
world.fill(BLUE)
|
||||
```
|
||||
|
||||
最后,告诉 Pygame 重绘屏幕上的所有内容,并推进游戏的内部时钟。
|
||||
|
||||
```
|
||||
pygame.display.flip()
|
||||
clock.tick(fps)
|
||||
```
|
||||
|
||||
保存你的文件,再次运行它,来欣赏这个有史以来最无趣的游戏。
|
||||
|
||||
要退出游戏,按下键盘上的 `q` 键。
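
作为参考,下面把本文的各个片段整理成一份最小的可运行脚本。这里为了不依赖图片素材,采用了纯色背景的写法;如果你想使用 `stage.png`,把 `world.fill(BLUE)` 换回前文加载背景图的那两行即可:

```
import pygame  # 加载 pygame 关键字
import sys     # 让 python 使用你的文件系统

'''
Setup
'''
worldx = 960
worldy = 720
fps = 40                     # 帧频
clock = pygame.time.Clock()
pygame.init()

BLUE = (25, 25, 200)
world = pygame.display.set_mode([worldx, worldy])

'''
Main Loop
'''
main = True
while main:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            main = False     # 点击窗口的关闭按钮时退出
        elif event.type == pygame.KEYDOWN and event.key == ord('q'):
            main = False     # 按 q 键退出

    world.fill(BLUE)         # 用纯色刷新背景
    pygame.display.flip()    # 把这一帧绘制到屏幕上
    clock.tick(fps)          # 按帧频推进内部时钟

pygame.quit()                # 循环结束后干净地退出
sys.exit()
```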
|
||||
|
||||
在本系列的[下一篇文章][9]中,我将演示如何充实你目前还空荡荡的游戏世界,那么,继续前进,创建一些后面会用到的图形吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/17/12/game-framework-python
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/17/10/python-101
|
||||
[2]: http://www.pygame.org/wiki/about
|
||||
[3]: https://en.wikipedia.org/wiki/IDLE
|
||||
[4]: http://ninja-ide.org/
|
||||
[5]: http://krita.org
|
||||
[6]: https://gumroad.com/l/krita-game-art-tutorial-1
|
||||
[7]: https://pinta-project.com/pintaproject/pinta/releases
|
||||
[8]: http://inkscape.org
|
||||
[9]: https://opensource.com/article/17/12/program-game-python-part-3-spawning-player
|
@ -0,0 +1,287 @@
|
||||
Sensu 监控入门
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)
|
||||
|
||||
Sensu 是一个开源基础设施和应用程序监控解决方案,它监控服务器、相关服务和应用程序健康状况,并通过第三方集成发送警报和通知。Sensu 用 Ruby 编写,可以使用 [RabbitMQ][1] 或 [Redis][2] 来处理消息,它使用 Redis 来存储数据。
|
||||
|
||||
如果你想以一种简单而有效的方式监控云基础设施,Sensu 是一个不错的选择。它可以与你组织已经使用的许多现代 DevOps 堆栈集成,比如 [Slack][3]、[HipChat][4] 或 [IRC][5],它甚至可以用 [PagerDuty][6] 发送移动设备或寻呼机警报。
|
||||
|
||||
Sensu 的[模块化架构][7]意味着每个组件都可以安装在同一台服务器上或者在完全独立的机器上。
|
||||
|
||||
### 结构
|
||||
|
||||
Sensu 的主要通信机制是 `Transport`。每个 Sensu 组件必须连接到 `Transport` 才能相互发送消息。`Transport` 可以使用 RabbitMQ(在生产中推荐使用)或 Redis。
|
||||
|
||||
Sensu 服务器处理事件数据并采取行动。它注册客户端并使用过滤器、增变器和处理程序检查结果和监视事件。服务器向客户端发布检查说明,Sensu API 提供 RESTful API,提供对监控数据和核心功能的访问。
|
||||
|
||||
[Sensu 客户端][8]执行 Sensu 服务器安排的检查或本地检查定义。Sensu 使用数据存储(Redis)来保存所有的持久数据。最后,[Uchiwa][9] 是与 Sensu API 进行通信的 Web 界面。
|
||||
|
||||
![sensu_system.png][11]
|
||||
|
||||
### 安装 Sensu
|
||||
|
||||
#### 条件
|
||||
|
||||
* 一个 Linux 系统作为服务器节点(本文使用了 CentOS 7)
|
||||
* 要监控的一台或多台 Linux 机器(客户机)
|
||||
|
||||
#### 服务器侧
|
||||
|
||||
Sensu 需要安装 Redis。要安装 Redis,启用 EPEL 仓库:
|
||||
```
|
||||
$ sudo yum install epel-release -y
|
||||
|
||||
```
|
||||
|
||||
然后安装 Redis:
|
||||
```
|
||||
$ sudo yum install redis -y
|
||||
|
||||
```
|
||||
|
||||
修改 `/etc/redis.conf` 来禁用保护模式,监听每个地址并设置密码:
|
||||
```
|
||||
$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
|
||||
|
||||
$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
|
||||
|
||||
$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
|
||||
|
||||
```
|
||||
|
||||
启用并启动 Redis 服务:
|
||||
```
|
||||
$ sudo systemctl enable redis
|
||||
$ sudo systemctl start redis
|
||||
```
|
||||
|
||||
Redis 现在已经安装并准备好被 Sensu 使用。
|
||||
|
||||
现在让我们来安装 Sensu。
|
||||
|
||||
首先,配置 Sensu 仓库并安装软件包:
|
||||
```
|
||||
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
|
||||
[sensu]
|
||||
name=sensu
|
||||
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
|
||||
gpgcheck=0
|
||||
enabled=1
|
||||
EOF
|
||||
|
||||
$ sudo yum install sensu uchiwa -y
|
||||
```
|
||||
|
||||
让我们为 Sensu 创建最简单的配置文件:
|
||||
```
|
||||
$ sudo tee /etc/sensu/conf.d/api.json << EOF
|
||||
{
|
||||
"api": {
|
||||
"host": "127.0.0.1",
|
||||
"port": 4567
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
这个文件配置 `sensu-api` 在本地主机上监听 4567 端口。接下来,配置 Redis 连接和传输机制:
|
||||
```
|
||||
$ sudo tee /etc/sensu/conf.d/redis.json << EOF
|
||||
{
|
||||
"redis": {
|
||||
"host": "<IP of server>",
|
||||
"port": 6379,
|
||||
"password": "password123"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
|
||||
$ sudo tee /etc/sensu/conf.d/transport.json << EOF
|
||||
{
|
||||
"transport": {
|
||||
"name": "redis"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
在这两个文件中,我们将 Sensu 配置为使用 Redis 作为传输机制,并指定了 Redis 监听的地址。客户端需要直接连接到传输机制,因此每台客户机都需要这两个文件。
|
||||
```
|
||||
$ sudo tee /etc/sensu/uchiwa.json << EOF
|
||||
{
|
||||
"sensu": [
|
||||
{
|
||||
"name": "sensu",
|
||||
"host": "127.0.0.1",
|
||||
"port": 4567
|
||||
}
|
||||
],
|
||||
"uchiwa": {
|
||||
"host": "0.0.0.0",
|
||||
"port": 3000
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
在这个文件中,我们配置 `Uchiwa` 在所有地址(0.0.0.0)上监听 3000 端口。我们还配置 `Uchiwa` 使用前面已配置好的 `sensu-api`。
|
||||
|
||||
出于安全原因,更改刚刚创建的配置文件的所有者:
|
||||
```
|
||||
$ sudo chown -R sensu:sensu /etc/sensu
|
||||
```
|
||||
|
||||
启用并启动 Sensu 服务:
|
||||
```
|
||||
$ sudo systemctl enable sensu-server sensu-api sensu-client
|
||||
$ sudo systemctl start sensu-server sensu-api sensu-client
|
||||
$ sudo systemctl enable uchiwa
|
||||
$ sudo systemctl start uchiwa
|
||||
```
|
||||
|
||||
尝试访问 `Uchiwa` 网站:http://<服务器的 IP 地址>:3000
|
||||
|
||||
对于生产环境,建议运行 RabbitMQ 集群作为 Transport 而不是 Redis(虽然 Redis 集群也可以用于生产),运行多个 Sensu 服务器实例和 API 实例,以实现负载均衡和高可用性。
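
如果你想试用 RabbitMQ 作为传输机制,配置大致如下。这只是一个示意性的草稿,其中的主机、虚拟主机和账号密码都是假设值,请以 Sensu 官方文档为准:

```
$ sudo tee /etc/sensu/conf.d/rabbitmq.json << EOF
{
  "rabbitmq": {
    "host": "<RabbitMQ 服务器的 IP>",
    "port": 5672,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "password123"
  }
}
EOF
```

同时把 `transport.json` 中的 `"name"` 改为 `"rabbitmq"`。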
|
||||
|
||||
Sensu 现在安装完成,让我们来配置客户端。
|
||||
|
||||
#### 客户端侧
|
||||
|
||||
要添加一个新客户端,你需要通过创建 `/etc/yum.repos.d/sensu.repo` 文件在客户机上启用 Sensu 仓库。
|
||||
```
|
||||
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
|
||||
[sensu]
|
||||
name=sensu
|
||||
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
|
||||
gpgcheck=0
|
||||
enabled=1
|
||||
EOF
|
||||
```
|
||||
|
||||
启用仓库后,安装 Sensu:
|
||||
```
|
||||
$ sudo yum install sensu -y
|
||||
```
|
||||
|
||||
要配置 `sensu-client`,需要创建与服务器上相同的 `redis.json` 和 `transport.json`,以及 `client.json` 配置文件:
|
||||
```
|
||||
$ sudo tee /etc/sensu/conf.d/client.json << EOF
|
||||
{
|
||||
"client": {
|
||||
"name": "rhel-client",
|
||||
"environment": "development",
|
||||
"subscriptions": [
|
||||
"frontend"
|
||||
]
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
在 `name` 字段中,指定一个用于标识此客户机的名称(通常是主机名)。`environment` 字段可以帮助你按环境过滤,而 `subscriptions`(订阅)定义了客户机将执行哪些监控检查。
|
||||
|
||||
最后,启用并启动服务并检查 `Uchiwa`,因为客户机会自动注册:
|
||||
```
|
||||
$ sudo systemctl enable sensu-client
|
||||
$ sudo systemctl start sensu-client
|
||||
```
|
||||
|
||||
### Sensu 检查
|
||||
|
||||
Sensu 检查有两个组件:一个插件和一个定义。
|
||||
|
||||
Sensu 与 [Nagios 检查插件规范][12]兼容,因此无需修改即可使用针对 Nagios 的任何检查。检查是可执行文件,由 Sensu 客户机运行。
|
||||
|
||||
检查定义让 Sensu 知道如何、在哪以及何时运行插件。
|
||||
|
||||
#### 客户端侧
|
||||
|
||||
让我们在客户机上安装一个检查插件。请记住,此插件将在客户机上执行。
|
||||
|
||||
启用 EPEL 并安装 `nagios-plugins-http` :
|
||||
```
|
||||
$ sudo yum install -y epel-release
|
||||
$ sudo yum install -y nagios-plugins-http
|
||||
```
|
||||
|
||||
现在让我们通过手动执行它来研究这个插件。尝试检查客户机上运行的 Web 服务器的状态。它应该会失败,因为我们并没有运行 Web 服务器:
|
||||
```
|
||||
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
|
||||
connect to address 127.0.0.1 and port 80: Connection refused
|
||||
HTTP CRITICAL - Unable to open TCP socket
|
||||
```
|
||||
|
||||
不出所料,它失败了。检查执行的返回值:
|
||||
```
|
||||
$ echo $?
|
||||
2
|
||||
|
||||
```
|
||||
|
||||
Nagios 检查插件规范定义了插件执行的四个返回值:
|
||||
|
||||
| **插件返回码** | **状态** |
|----------------|----------|
| 0 | OK |
| 1 | WARNING |
| 2 | CRITICAL |
| 3 | UNKNOWN |
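
如果想亲手试验这些返回码,可以写一个最小的演示插件。下面是一个示意性的例子,文件名 `check_demo.sh` 是假设的;任何按此规范输出一行信息并以相应状态码退出的可执行文件,都可以充当检查插件:

```
#!/bin/bash
# check_demo.sh:输出一行状态信息,并用退出码表明状态
# 0=OK,1=WARNING,2=CRITICAL,3=UNKNOWN
echo "DEMO WARNING - 仅作演示"
exit 1
```

手动执行它之后再运行 `echo $?`,应当得到 1(WARNING)。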
|
||||
|
||||
有了这些信息,我们现在可以在服务器上创建检查定义。
|
||||
|
||||
#### 服务器侧
|
||||
|
||||
在服务器机器上,创建 `/etc/sensu/conf.d/check_http.json` 文件:
|
||||
```
|
||||
{
|
||||
"checks": {
|
||||
"check_http": {
|
||||
"command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
|
||||
"interval": 10,
|
||||
"subscribers": [
|
||||
"frontend"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
在 `command` 字段中,使用我们之前测试过的命令。`interval` 告诉 Sensu 这个检查的执行频率,以秒为单位。最后,`subscribers` 定义了由哪些客户机来执行该检查。
|
||||
|
||||
重新启动 `sensu-api` 和 `sensu-server`,并确认新的检查已经在 Uchiwa 中可用。
|
||||
|
||||
```
|
||||
$ sudo systemctl restart sensu-api sensu-server
|
||||
```
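
你也可以直接向 `sensu-api` 查询(以下端点基于 Sensu Core 1.x 的 API,如有出入请以文档为准):

```
$ curl -s http://127.0.0.1:4567/clients   # 列出已注册的客户端
$ curl -s http://127.0.0.1:4567/results   # 查看各项检查的最新结果
```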
|
||||
|
||||
### 接下来
|
||||
|
||||
Sensu 是一个功能强大的工具,本文只简要介绍它可以干什么。参阅[文档][13]了解更多信息,访问 Sensu 网站了解有关 [Sensu 社区][14]的更多信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/mzamot
|
||||
[1]:https://www.rabbitmq.com/
|
||||
[2]:https://redis.io/topics/config
|
||||
[3]:https://slack.com/
|
||||
[4]:https://en.wikipedia.org/wiki/HipChat
|
||||
[5]:http://www.irc.org/
|
||||
[6]:https://www.pagerduty.com/
|
||||
[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
|
||||
[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
|
||||
[9]:https://uchiwa.io/#/
|
||||
[10]:/file/406576
|
||||
[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
|
||||
[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
|
||||
[13]:https://docs.sensu.io/
|
||||
[14]:https://sensu.io/community
|
@ -0,0 +1,91 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Enjoy Netflix? You Should Thank FreeBSD)
|
||||
[#]: via: (https://itsfoss.com/netflix-freebsd-cdn/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
享受 Netflix 么?你应该感谢 FreeBSD
|
||||
======
|
||||
|
||||
Netflix 是世界上最受欢迎的流媒体服务之一。
|
||||
|
||||
但你已经知道了。不是吗?
|
||||
|
||||
你可能不知道的是 Netflix 使用 [FreeBSD][1] 向你提供内容。
|
||||
|
||||
是的。Netflix 依靠 FreeBSD 来构建其内部内容交付网络 (CDN)。
|
||||
|
||||
[CDN][2] 是一组位于世界各地的服务器。它主要用于向终端用户分发像图像和视频这样的“大文件”。
|
||||
|
||||
Netflix 没有选择商业 CDN 服务,而是建立了自己的内部 CDN,名为 [Open Connect][3]。
|
||||
|
||||
Open Connect 使用[自定义硬件][4],Open Connect Appliance。你可以在下面的图片中看到它。它可以处理 40Gb/s 的数据,存储容量为 248 TB。
|
||||
|
||||
![Netflix’s Open Connect Appliance runs FreeBSD][5]
|
||||
|
||||
Netflix 免费为合格的互联网服务提供商 (ISP) 提供 Open Connect Appliance。通过这种方式,大量的 Netflix 流量得到了本地化,ISP 可以更高效地提供 Netflix 内容。
|
||||
|
||||
Open Connect Appliance 运行在 FreeBSD 操作系统上,并且[几乎完全运行开源软件][6]。
|
||||
|
||||
### Open Connect 使用 FreeBSD “头”
|
||||
|
||||
![][7]
|
||||
|
||||
你或许会期望 Netflix 在这样一个关键基础设施上使用 FreeBSD 的稳定版本,但 Netflix 会跟踪 [FreeBSD 头/当前版本][8]。Netflix 表示,跟踪“头”让他们“保持前瞻性,专注于创新”。
|
||||
|
||||
以下是 Netflix 跟踪 FreeBSD 的好处:
|
||||
|
||||
* 更快的功能迭代
|
||||
* 更快地使用 FreeBSD 的新功能
|
||||
* 更快的 bug 修复
|
||||
* 实现协作
|
||||
* 尽量减少合并冲突
|
||||
* 摊销合并“成本”
|
||||
|
||||
|
||||
|
||||
> 运行 FreeBSD “head” 可以让我们非常高效地向用户分发大量数据,同时保持高速的功能开发。
|
||||
>
|
||||
> Netflix
|
||||
|
||||
请记得,甚至[谷歌也使用 Debian][9] 测试版而不是 Debian 稳定版。也许这些企业更喜欢最先进的功能。
|
||||
|
||||
与谷歌一样,Netflix 也计划向上游提供代码。这应该有助于 FreeBSD 和其他基于 FreeBSD 的 BSD 发行版。
|
||||
|
||||
那么 Netflix 用 FreeBSD 实现了什么?以下是一些统计数据:
|
||||
|
||||
> 使用 FreeBSD 和商用硬件,我们在 16 核 2.6 GHz 的 CPU 上只占用了约 55% 的 CPU,就实现了 90 Gb/s 的 TLS 加密连接。
|
||||
>
|
||||
> Netflix
|
||||
|
||||
如果你想了解更多关于 Netflix 和 FreeBSD 的信息,可以参考 [FOSDEM 的这个演示文稿][10]。你还可以在[这里][11]观看演示文稿的视频。
|
||||
|
||||
目前,大型企业的服务器基础架构主要依靠 Linux,但 Netflix 把信任投给了 BSD。这对 BSD 社区是一件好事:如果像 Netflix 这样的行业领导者重视 BSD,其他公司也会跟上。你怎么看?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/netflix-freebsd-cdn/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.freebsd.org/
|
||||
[2]: https://www.cloudflare.com/learning/cdn/what-is-a-cdn/
|
||||
[3]: https://openconnect.netflix.com/en/
|
||||
[4]: https://openconnect.netflix.com/en/hardware/
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-open-connect-appliance.jpeg?fit=800%2C533&ssl=1
|
||||
[6]: https://openconnect.netflix.com/en/software/
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/netflix-freebsd.png?resize=800%2C450&ssl=1
|
||||
[8]: https://www.bsdnow.tv/tutorials/stable-current
|
||||
[9]: https://itsfoss.com/goobuntu-glinux-google/
|
||||
[10]: https://fosdem.org/2019/schedule/event/netflix_freebsd/attachments/slides/3103/export/events/attachments/netflix_freebsd/slides/3103/FOSDEM_2019_Netflix_and_FreeBSD.pdf
|
||||
[11]: http://mirror.onet.pl/pub/mirrors/video.fosdem.org/2019/Janson/netflix_freebsd.webm
|
@ -1,77 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (14 days of celebrating the Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/19/3/happy-pi-day)
|
||||
[#]: author: (Anderson Silva (Red Hat) https://opensource.com/users/ansilva)
|
||||
|
||||
庆祝 Raspberry Pi 的 14 天
|
||||
======
|
||||
|
||||
在我们关于树莓派入门系列的第 14 篇也是最后一篇文章中,回顾一下我们学到的所有东西。
|
||||
|
||||
![][1]
|
||||
|
||||
**派节快乐!**
|
||||
|
||||
每年的 3 月 14 日,我们这些极客都会庆祝派节。我们以“月/日”(MM/DD)的方式缩写日期,3 月 14 日于是写成 03/14,这些数字让人联想到 3.14,也就是 [π][2] 的前三位数字。许多美国人没有意识到的是,世界上几乎没有其他国家使用这种[日期格式][3],因此派节虽然在全球范围内都有人庆祝,却几乎只在美国才说得通。
|
||||
|
||||
无论你身在何处,让我们一起庆祝树莓派,并通过回顾过去两周我们所涉及的主题来结束本系列:
|
||||
|
||||
* 第 1 天:[你应该选择哪种树莓派?][4]
|
||||
* 第 2 天:[如何购买树莓派][5]
|
||||
* 第 3 天:[如何启动新的树莓派][6]
|
||||
* 第 4 天:[用树莓派学习 Linux][7]
|
||||
* 第 5 天:[5 种教孩子用树莓派编程的方法][8]
|
||||
* 第 6 天:[你可以用树莓派学习的 3 种流行编程语言][9]
|
||||
* 第 7 天:[如何更新树莓派][10]
|
||||
* 第 8 天:[如何使用树莓派娱乐][11]
|
||||
* 第 9 天:[在树莓派上玩游戏][12]
|
||||
* 第 10 天:[让我们实物化:如何在树莓派上使用 GPIO 引脚][13]
|
||||
* 第 11 天:[通过树莓派了解计算机安全][14]
|
||||
* 第 12 天:[在树莓派上使用 Mathematica 进行高级数学运算][15]
|
||||
* 第 13 天:[为树莓派社区做出贡献][16]
|
||||
|
||||
|
||||
|
||||
![Pi Day illustration][18]
|
||||
|
||||
本系列到此结束。感谢所有关注本系列的人,尤其是那些在过去 14 天里真正学到了东西的人!我还想鼓励大家不断扩展自己对树莓派,以及围绕它构建的所有开源(和闭源)技术的了解。
|
||||
|
||||
我还鼓励你去了解其他文化、哲学、宗教和世界观。使我们成为人类的,正是这种惊人的(有时也很有趣的)适应能力:我们不仅能适应外部环境,也能适应智识环境。
|
||||
|
||||
不管你做什么,保持学习!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/happy-pi-day
|
||||
|
||||
作者:[Anderson Silva (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ansilva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA
|
||||
[2]: https://www.piday.org/million/
|
||||
[3]: https://en.wikipedia.org/wiki/Date_format_by_country
|
||||
[4]: https://opensource.com/article/19/3/which-raspberry-pi-choose
|
||||
[5]: https://opensource.com/article/19/3/how-buy-raspberry-pi
|
||||
[6]: https://opensource.com/article/19/3/how-boot-new-raspberry-pi
|
||||
[7]: https://opensource.com/article/19/3/learn-linux-raspberry-pi
|
||||
[8]: https://opensource.com/article/19/3/teach-kids-program-raspberry-pi
|
||||
[9]: https://opensource.com/article/19/3/programming-languages-raspberry-pi
|
||||
[10]: https://opensource.com/article/19/3/how-raspberry-pi-update
|
||||
[11]: https://opensource.com/article/19/3/raspberry-pi-entertainment
|
||||
[12]: https://opensource.com/article/19/3/play-games-raspberry-pi
|
||||
[13]: https://opensource.com/article/19/3/gpio-pins-raspberry-pi
|
||||
[14]: https://opensource.com/article/19/3/learn-about-computer-security-raspberry-pi
|
||||
[15]: https://opensource.com/article/19/3/do-math-raspberry-pi
|
||||
[16]: https://opensource.com/article/19/3/contribute-raspberry-pi-community
|
||||
[17]: /file/426561
|
||||
[18]: https://opensource.com/sites/default/files/uploads/raspberrypi_14_piday.jpg (Pi Day illustration)
|
221
translated/tech/20190325 Getting started with Vim- The basics.md
Normal file
@ -0,0 +1,221 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Modrisco)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Vim: The basics)
|
||||
[#]: via: (https://opensource.com/article/19/3/getting-started-vim)
|
||||
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)
|
||||
|
||||
Vim 入门:基础
|
||||
======
|
||||
|
||||
为工作或者新项目学习足够的 Vim 知识。
|
||||
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
我还清晰地记得我第一次接触 Vim 的时候。那时我还是一名大学生,计算机学院的机房里都装着 Ubuntu 系统。尽管我在上大学前也曾接触过不同的 Linux 发行版(比如 RHEL,Red Hat 在百思买出售它的 CD),但这却是我第一次要在日常中频繁使用 Linux 系统,因为我的课程要求我这样做。当我开始使用 Linux 时,正如我的前辈和将来的后继者们一样,我感觉自己像是一名“真正的程序员”了。
|
||||
|
||||
![Real Programmers comic][2]
|
||||
|
||||
真正的程序员,来自 [xkcd][3]
|
||||
|
||||
学生们可以使用像 [Kate][4] 一样的图形文本编辑器,这也安装在学校的电脑上了。对于那些可以使用 shell 但不习惯使用控制台编辑器的学生,最流行的选择是 [Nano][5],它提供了很好的交互式菜单和类似于 Windows 图形文本编辑器的体验。
|
||||
|
||||
我有时会用 Nano,但当我听说 [Vi/Vim][6] 和 [Emacs][7] 能做一些很棒的事情时我决定试一试它们(主要是因为它们看起来很酷,而且我也很好奇它们有什么特别之处)。第一次使用 Vim 时吓到我了 —— 我不想搞砸任何事情!但是,一旦我掌握了它的诀窍,事情就变得容易得多,我可以欣赏编辑器的强大功能。至于 Emacs,呃,我有点放弃了,但我很高兴我坚持和 Vim 在一起。
|
||||
|
||||
在本文中,我将介绍一下 Vim(基于我的个人经验),这样你就可以在 Linux 系统上用它来作为编辑器使用了。这篇文章不会让你变成 Vim 的专家,甚至不会触及 Vim 许多强大功能的皮毛。但是起点总是很重要的,我想让开始的经历尽可能简单,剩下的则由你自己去探索。
|
||||
|
||||
### 第 0 步:打开一个控制台窗口
|
||||
|
||||
在使用 Vim 前,你需要做一些准备工作。在 Linux 操作系统打开控制台终端。(因为 Vim 也可以在 MacOS 上使用,Mac 用户也可以使用这些说明)。
|
||||
|
||||
打开终端窗口后,输入 `ls` 命令列出当前目录下的内容。然后,输入 `mkdir Tutorial` 命令创建一个名为 `Tutorial` 的新目录。通过输入 `cd Tutorial` 来进入该目录。
|
||||
|
||||
![Create a folder][8]
|
||||
|
||||
这就是全部的准备工作。现在是时候转到有趣的部分了——开始使用 Vim。
|
||||
|
||||
### 第 1 步:创建一个 Vim 文件和不保存退出
|
||||
|
||||
还记得我一开始说过我不敢使用 Vim 吗?我当时在害怕“如果我改变了一个现有的文件,把事情搞砸了怎么办?”毕竟,一些计算机科学作业要求我修改现有的文件。我想知道:_如何在不保存更改的情况下打开和关闭文件?_
|
||||
|
||||
好消息是你可以使用相同的命令在 Vim 中创建或打开文件:`vim <FILE_NAME>`,其中 **<FILE_NAME>** 表示要创建或修改的目标文件名。让我们通过输入 `vim HelloWorld.java` 来创建一个名为 `HelloWorld.java` 的文件。
|
||||
|
||||
你好,Vim!现在,讲一下 Vim 中一个非常重要的概念,可能也是最需要记住的:Vim 有多种模式,下面是 Vim 基础中需要知道的三种:
|
||||
|
||||
| 模式 | 描述 |
|---|---|
| 正常模式 | 默认模式,用于导航和简单编辑 |
| 插入模式 | 用于插入和修改文本 |
| 命令模式 | 用于执行如保存、退出等命令 |
|
||||
|
||||
Vim 还有其他模式,例如可视模式、选择模式和 Ex 模式。不过上面的三种模式对我们来说已经足够了。
|
||||
|
||||
你现在正处于正常模式,如果有文本,你可以用箭头键移动或使用其他导航键(将在稍后看到)。要确定你正处于正常模式,只需按下 `esc` (Escape)键即可。
|
||||
|
||||
> **提示:** 按 **Esc** 可切换到正常模式。即使你已经处于正常模式,也不妨多按几次 **Esc**,就当练习。
|
||||
|
||||
现在,有趣的事情发生了。输入 `:` (冒号键)并接着 `q!` (完整命令:`:q!`)。你的屏幕将显示如下:
|
||||
|
||||
![Editing Vim][9]
|
||||
|
||||
在正常模式下输入冒号会将 Vim 切换到命令行模式,执行 `:q!` 命令将退出 Vim 编辑器而不进行保存。换句话说,你放弃了所有的更改。你也可以使用 `ZQ` 命令;选择你认为更方便的选项。
|
||||
|
||||
一旦你按下 `Enter`(回车),你就退出了 Vim。重复练习几次来掌握这条命令。熟悉了这部分内容之后,请转到下一节,了解如何对文件进行更改。
|
||||
|
||||
### 第 2 步:在 Vim 中修改并保存
|
||||
|
||||
通过输入 `vim HelloWorld.java` 和回车键来再次打开这个文件。你可以在插入模式中修改文件。首先,通过 `Esc` 键来确定你正处于正常模式。接着输入 `i` 来进入插入模式(没错,就是字母 **i**)。
|
||||
|
||||
在左下角,你将看到 `-- INSERT --`,这标志着你正处于插入模式。
|
||||
|
||||
![Vim insert mode][10]
|
||||
|
||||
写一些 Java 代码。你可以写任何你想写的,不过这也有一份你可以参照的例子。你的屏幕将显示如下:
|
||||
|
||||
```
|
||||
public class HelloWorld {
  public static void main(String[] args) {
  }
}
|
||||
```
|
||||
非常漂亮!注意文本是如何在 Java 语法中高亮显示的。因为这是个 Java 文件,所以 Vim 将自动检测语法并高亮颜色。
|
||||
|
||||
保存文件:按下 `Esc` 来退出插入模式并进入命令模式。输入 `:` 并接着 `x!`(完整命令:`:x!`),按回车键来保存文件。你也可以输入 `:wq` 来执行相同的操作。
|
||||
|
||||
现在,你知道了如何使用插入模式输入文本并使用以下命令保存文件:`:x!` 或者 `:wq`。
|
||||
|
||||
### 第 3 步:Vim 中的基本导航
|
||||
|
||||
虽然你总是可以使用上箭头、下箭头、左箭头和右箭头在文件中移动,但在一个几乎有数不清行数的大文件中,这将是非常困难的。能够在一行中跳跃光标将会是很有用的。虽然 Vim 提供了不少很棒的导航功能,不过在一开始,我想向你展示如何在 Vim 中到达某一特定的行。
|
||||
|
||||
按下 `Esc` 来确定你处于正常模式,接着输入 `:set number` 并按回车。
|
||||
|
||||
瞧!你现在可以在每一行的左侧看到行号。
|
||||
|
||||
![Showing Line Numbers][12]
|
||||
|
||||
好,你也许会说,“这确实很酷,不过我该怎么跳到某一行呢?”再说一次,先确认你正处于正常模式,接着输入 `:<LINE_NUMBER>`,这里 **<LINE_NUMBER>** 是你想去的那一行的行号。按下回车键,试着移动到第二行。
|
||||
|
||||
```
|
||||
:2
|
||||
```
|
||||
|
||||
现在,跳到第三行。
|
||||
|
||||
![Jump to line 3][13]
|
||||
|
||||
但是,假如你正在处理一个一千多行的文件,而你正想到文件底部。这该怎么办呢?确认你正处于正常模式,接着输入 `:$` 并按下回车。
|
||||
|
||||
你将来到最后一行!
|
||||
|
||||
现在,你知道如何在行间跳跃了,作为补充,我们来学一下如何移动到一行的行尾。确认你正处于有文本内容的一行,如第三行,接着输入 `$`。
|
||||
|
||||
![Go to the last character][14]
|
||||
|
||||
你现在来到这行的最后一个字符了。在此示例中,高亮左大括号以显示光标移动到的位置,右大括号被高亮是因为它是该左大括号的匹配字符。
|
||||
|
||||
这就是 Vim 中的基本导航功能。等等,别急着退出文件。让我们转到 Vim 中的基本编辑。不过,你可以暂时随便喝杯咖啡或茶休息一下。
|
||||
|
||||
### 第 4 步:Vim 中的基本编辑
|
||||
|
||||
现在,你已经知道如何通过跳到想要的一行来在文件中导航,你可以使用这个技能在 Vim 中进行一些基本编辑。切换到插入模式。(还记得怎么做吗?是不是输入 `i` ?)当然,你可以使用键盘逐一删除或插入字符来进行编辑,但是 Vim 提供了更快捷的方法来编辑文件。
|
||||
|
||||
来到第三行,这里的代码是 **public static void main(String[] args) {**。连按两次 `d` 键,没错,就是 `dd`。如果你成功做到了,你将会看到,第三行消失了,剩下的所有行都向上移动了一行(例如,第四行变成了第三行)。
|
||||
|
||||
![Deleting A Line][15]
|
||||
|
||||
这就是 _删除_(delete) 命令。不要担心,键入 `u`,你会发现这一行又回来了。喔,这就是 _撤销_(undo) 命令。
|
||||
|
||||
![Undoing a change in Vim][16]
|
||||
|
||||
下一课是学习如何复制和粘贴文本,但首先,你需要学习如何在 Vim 中突出显示文本。按下 `v` 并向左右移动光标来选择或反选文本。当你向其他人展示代码并希望标识你想让他们注意到的代码时,这个功能也非常有用。
|
||||
|
||||
![Highlighting text in Vim][17]
|
||||
|
||||
来到第四行,这里的代码是 **System.out.println("Hello, Opensource");**。高亮这一行的所有内容。好了吗?当第四行的内容处于高亮时,按下 `y`。这就叫做 _复制_(yank)模式,文本将会被复制到剪贴板上。接下来,输入 `o` 来创建新的一行。注意,这将让你进入插入模式。通过按 `Esc` 退出插入模式,然后按下 `p`,代表 _粘贴_。这将把复制的文本从第三行粘贴到第四行。
|
||||
|
||||
![Pasting in Vim][18]
|
||||
|
||||
作为练习,请重复这些步骤,但也要修改新创建的行中的文字。此外,请确保这些行对齐工整。
|
||||
|
||||
> **提示:** 你需要在插入模式和命令行模式之间来回切换才能完成此任务。
|
||||
|
||||
当你完成之后,用 `:x!` 命令保存文件。以上就是 Vim 基本编辑的全部内容。
|
||||
|
||||
### 第 5 步:Vim 中的基本搜索
|
||||
|
||||
假设你的团队领导希望你更改项目中的文本字符串。你该如何快速完成任务?你可能希望使用某个关键字来搜索该行。
|
||||
|
||||
Vim 的搜索功能非常有用。先按 `Esc` 键回到正常模式,然后输入 `/<SEARCH_KEYWORD>` 来搜索关键词,其中 **<SEARCH_KEYWORD>** 指你希望搜索的字符串。在这里,我们搜索关键字符串 “Hello”。
|
||||
|
||||
![Searching in Vim][19]
|
||||
|
||||
但是,一个关键字可以出现不止一次,而这可能不是你想要的那一个。那么,如何找到下一个匹配项呢?只需按 `n` 键即可,这代表 _下一个_(next)。执行此操作时,请确保你没有处于插入模式!
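
举个例子,在正常模式下依次输入以下按键,就能先定位到第一个 “Hello”,再跳到下一处匹配:

```
/Hello
n
```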
|
||||
|
||||
### 附加步骤:Vim中的分割模式
|
||||
|
||||
以上几乎涵盖了所有的 Vim 基础知识。但是,作为一个额外奖励,我想给你展示 Vim 一个很酷的特性,叫做 _分割_(split)模式。
|
||||
|
||||
退出 _HelloWorld.java_ 并创建一个新文件。在控制台窗口中,输入 `vim GoodBye.java` 并按回车键来创建一个名为 _GoodBye.java_ 的新文件。
|
||||
|
||||
输入任何你想输入的让内容,我选择输入“Goodbye”。保存文件(记住你可以在命令模式中使用 `:x!` 或者 `:wq`)。
|
||||
|
||||
在命令模式中,输入 `:split HelloWorld.java`,来看看发生了什么。
|
||||
|
||||
![Split mode in Vim][20]
|
||||
|
||||
Wow!快看!**split** 命令将控制台窗口水平分割成了两个部分,上面是 _HelloWorld.java_,下面是 _GoodBye.java_。要在窗口之间切换,按住 `Control` 键(在 Mac 上)或 `Ctrl` 键(在 PC 上),然后连按两次 `w` 键。
|
||||
|
||||
作为最后一个练习,尝试通过复制和粘贴 _HelloWorld.java_ 来编辑 _GoodBye.java_ 以匹配下面屏幕上的内容。
|
||||
|
||||
![Modify GoodBye.java file in Split Mode][21]
|
||||
|
||||
保存两份文件,成功!
|
||||
|
||||
> **提示 1:** 如果你想将两个文件窗口垂直分割,使用 `:vsplit <FILE_NAME>` 命令(代替 `:split <FILE_NAME>` 命令,**<FILE_NAME>** 指你想要以分割模式打开的文件名)。
|
||||
>
|
||||
> **提示 2:** 你可以通过调用任意数量的 **split** 或者 **vsplit** 命令来打开两个以上的文件。试一试,看看它效果如何。
|
||||
|
||||
### Vim 速查表
|
||||
|
||||
在本文中,你学会了如何使用 Vim 来完成工作或项目,但这只是你开启 Vim 强大功能之旅的开始。请务必在 Opensource.com 上查看其他很棒的教程和技巧。
|
||||
|
||||
为了让一切变得简单些,我已经把你学到的一切总结到了[一份方便的速查表][22]中。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/getting-started-vim
|
||||
|
||||
作者:[Bryant Son (Red Hat, Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Modrisco](https://github.com/Modrisco)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brson
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/1_xkcdcartoon.jpg (Real Programmers comic)
|
||||
[3]: https://xkcd.com/378/
|
||||
[4]: https://kate-editor.org
|
||||
[5]: https://www.nano-editor.org
|
||||
[6]: https://www.vim.org
|
||||
[7]: https://www.gnu.org/software/emacs
|
||||
[8]: https://opensource.com/sites/default/files/uploads/2_createtestfolder.jpg (Create a folder)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/4_existingvim.jpg (Editing Vim)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/6_insertionmode.jpg (Vim insert mode)
|
||||
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
|
||||
[12]: https://opensource.com/sites/default/files/uploads/10_setnumberresult_0.jpg (Showing Line Numbers)
|
||||
[13]: https://opensource.com/sites/default/files/uploads/12_jumpintoline3.jpg (Jump to line 3)
|
||||
[14]: https://opensource.com/sites/default/files/uploads/14_gotolastcharacter.jpg (Go to the last character)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/15_deletinglines.jpg (Deleting A Line)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/16_undoingtheline.jpg (Undoing a change in Vim)
|
||||
[17]: https://opensource.com/sites/default/files/uploads/17_highlighting.jpg (Highlighting text in Vim)
|
||||
[18]: https://opensource.com/sites/default/files/uploads/19_pasting.jpg (Pasting in Vim)
|
||||
[19]: https://opensource.com/sites/default/files/uploads/22_searchmode.jpg (Searching in Vim)
|
||||
[20]: https://opensource.com/sites/default/files/uploads/26_copytonewfiles.jpg (Split mode in Vim)
|
||||
[21]: https://opensource.com/sites/default/files/uploads/27_exercise.jpg (Modify GoodBye.java file in Split Mode)
|
||||
[22]: https://opensource.com/downloads/cheat-sheet-vim
|
102
translated/tech/20190328 How to run PostgreSQL on Kubernetes.md
Normal file
@ -0,0 +1,102 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (arrowfeng)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to run PostgreSQL on Kubernetes)
|
||||
[#]: via: (https://opensource.com/article/19/3/how-run-postgresql-kubernetes)
|
||||
[#]: author: (Jonathan S. Katz https://opensource.com/users/jkatz05)
|
||||
|
||||
怎样在 Kubernetes 上运行 PostgreSQL
|
||||
======
|
||||
创建统一管理的、具备灵活性的云原生生产部署,来部署一个定制化的数据库即服务。
|
||||
![cubes coming together to create a larger cube][1]
|
||||
|
||||
通过在 [Kubernetes][3] 上运行 [PostgreSQL][2] 数据库,你能创建统一管理的、具备灵活性的云原生生产部署,来部署一个为你的特定需求量身定制的数据库即服务。
|
||||
|
||||
在 Kubernetes 上,Operator 可以让你提供额外的上下文去[管理有状态应用][4]。当使用像 PostgreSQL 这样的开源数据库执行配置、扩容、高可用和用户管理等操作时,Operator 也很有帮助。
|
||||
|
||||
让我们来探索如何在 Kubernetes 上启动并运行 PostgreSQL。
|
||||
|
||||
### 安装 PostgreSQL Operator
|
||||
|
||||
将 PostgreSQL 和 Kubernetes 结合使用的第一步是安装一个 Operator。借助 Crunchy 针对 Linux 系统的[快速启动脚本][6],你可以在任意基于 Kubernetes 的环境中启动并运行开源的 [Crunchy PostgreSQL Operator][5]。
|
||||
|
||||
快速启动脚本有一些必要前提:
|
||||
|
||||
  * [Wget][7] 工具已安装。
  * [kubectl][8] 工具已安装。
  * 你的 Kubernetes 中已经定义了一个 [StorageClass][9]。
  * 一个拥有集群权限、可访问 Kubernetes 的用户账号。安装 Operator 需要相应的 [RBAC][10] 规则。
  * 一个用于管理 PostgreSQL Operator 的[命名空间][11]。
|
||||
|
||||
|
||||
|
||||
执行这个脚本会为你部署一个默认的 PostgreSQL Operator,它假设你使用[动态存储][12],且存储类的名字为 **standard**。脚本也允许用自定义的值覆盖这些默认值。
|
||||
|
||||
通过下列命令,你能下载这个快速启动脚本并把它的权限设置为可执行:
|
||||
```
|
||||
wget https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/examples/quickstart.sh
|
||||
chmod +x ./quickstart.sh
|
||||
```
|
||||
|
||||
然后你运行快速启动脚本:
|
||||
```
|
||||
./examples/quickstart.sh
|
||||
```
|
||||
|
||||
在脚本提示你输入 Kubernetes 集群的基本信息后,它将执行下列操作:
|
||||
  * 下载 Operator 配置文件
  * 将 **$HOME/.pgouser** 文件设置为默认设置
  * 以 Kubernetes [Deployment][13] 的形式部署 Operator
  * 设置你的 **.bashrc** 文件,使其包含 Operator 环境变量
  * 将你的 **$HOME/.bash_completion** 文件设置为 **pgo bash_completion** 文件
|
||||
|
||||
在快速启动脚本的执行期间,它会提示你在 Kubernetes 集群中设置 RBAC 规则。在另一个终端里,执行快速启动脚本所提示的命令。
|
||||
|
||||
脚本执行完成后,你会看到关于如何建立一个到 PostgreSQL Operator pod 的端口转发的信息。在另一个终端执行该端口转发;之后你就可以开始对 PostgreSQL Operator 执行命令了!尝试输入下列命令创建集群:
|
||||
|
||||
```
|
||||
pgo create cluster mynewcluster
|
||||
```
|
||||
|
||||
你能输入下列命令测试你的集群运行状况:
|
||||
|
||||
```
|
||||
pgo test mynewcluster
|
||||
```
|
||||
|
||||
现在,你能在 Kubernetes 环境下管理你的 PostgreSQL 数据库了!你可以在[官方文档][14]中找到非常全面的命令列表,包括扩容、高可用、备份等等。
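
例如,官方文档中描述的备份和扩容功能大致可以这样调用。注意,下面的子命令写法是根据文档的功能列表做出的假设,具体参数请以[官方文档][14]为准:

```
pgo backup mynewcluster   # 为集群创建一次备份
pgo scale mynewcluster    # 为集群增加一个副本
```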
|
||||
|
||||
* * *
|
||||
这篇文章部分基于该作者为 Crunchy 博客撰写的[在 Kubernetes 上开始运行 PostgreSQL][15]一文。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/how-run-postgresql-kubernetes
|
||||
|
||||
作者:[Jonathan S. Katz][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[arrowfeng](https://github.com/arrowfeng)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jkatz05
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
|
||||
[2]: https://www.postgresql.org/
|
||||
[3]: https://kubernetes.io/
|
||||
[4]: https://opensource.com/article/19/2/scaling-postgresql-kubernetes-operators
|
||||
[5]: https://github.com/CrunchyData/postgres-operator
|
||||
[6]: https://crunchydata.github.io/postgres-operator/stable/installation/#quickstart-script
|
||||
[7]: https://www.gnu.org/software/wget/
|
||||
[8]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
|
||||
[9]: https://kubernetes.io/docs/concepts/storage/storage-classes/
|
||||
[10]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
|
||||
[11]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
|
||||
[12]: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
|
||||
[13]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
|
||||
[14]: https://crunchydata.github.io/postgres-operator/stable/#documentation
|
||||
[15]: https://info.crunchydata.com/blog/get-started-runnning-postgresql-on-kubernetes
|
@ -0,0 +1,158 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Using Square Brackets in Bash: Part 2)
|
||||
[#]: via: (https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2)
|
||||
[#]: author: (Paul Brown https://www.linux.com/users/bro66)
|
||||
|
||||
在 Bash 中使用[方括号](二)
|
||||
======
|
||||
|
||||
![square brackets][1]
|
||||
|
||||
> 我们继续来看方括号的用法,它们甚至还可以在 Bash 当中作为一个命令使用。
|
||||
|
||||
[Creative Commons Zero][2]
|
||||
|
||||
欢迎回到我们的方括号专题。在[前一篇文章][3]当中,我们介绍了方括号在命令行中可以用于通配操作,如果你已经读过前一篇文章,就可以从这里继续了。
|
||||
|
||||
方括号还可以以一个命令的形式使用,就像这样:
|
||||
|
||||
```
|
||||
[ "a" = "a" ]
|
||||
```
|
||||
|
||||
上面这种 `[ ... ]` 的形式就可以看成是一个可执行的命令。要注意,方括号内部的内容 `"a" = "a"` 和方括号 `[`、`]` 之间是有空格隔开的。因为这里的方括号被视作一个命令,因此要用空格将命令和它的参数隔开。
|
||||
|
||||
上面这个命令的含义是“判断字符串 `"a"` 和字符串 `"a"` 是否相同”,如果判断结果为真,那么 `[ ... ]` 就会以<ruby>状态码<rt>status code</rt></ruby> 0 退出,否则以状态码 1 退出。在之前的文章中,我们也有介绍过状态码的概念,可以通过 `$?` 变量获取到最近一个命令的状态码。
|
||||
|
||||
分别执行
|
||||
|
||||
```
|
||||
[ "a" = "a" ]
|
||||
echo $?
|
||||
```
|
||||
|
||||
以及
|
||||
|
||||
```
|
||||
[ "a" = "b" ]
|
||||
echo $?
|
||||
```
|
||||
|
||||
这两段命令中,前者会输出 0(判断结果为真),后者则会输出 1(判断结果为假)。在 Bash 当中,如果一个命令的状态码是 0,表示这个命令正常执行完成并退出,而且其中没有出现错误,对应布尔值 `true`;如果在命令执行过程中出现错误,就会返回一个非零的状态码,对应布尔值 `false`。而 `[ ... ]`也同样遵循这样的规则。
|
||||
|
||||
因此,`[ ... ]` 很适合用在 `if ... then`、`while` 或 `until` 这类需要在代码块开始或结束前判断某个条件是否满足的结构中。
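
下面是一个把 `[ ... ]` 放在 `while` 循环条件里的小例子:

```
count=0
while [ $count -lt 3 ]
do
    echo "count is $count"
    count=$(( count + 1 ))
done
```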
|
||||
|
||||
对应使用的逻辑判断运算符也相当直观:
|
||||
|
||||
```
|
||||
[ STRING1 = STRING2 ] => checks to see if the strings are equal
|
||||
[ STRING1 != STRING2 ] => checks to see if the strings are not equal
|
||||
[ INTEGER1 -eq INTEGER2 ] => checks to see if INTEGER1 is equal to INTEGER2
|
||||
[ INTEGER1 -ge INTEGER2 ] => checks to see if INTEGER1 is greater than or equal to INTEGER2
|
||||
[ INTEGER1 -gt INTEGER2 ] => checks to see if INTEGER1 is greater than INTEGER2
|
||||
[ INTEGER1 -le INTEGER2 ] => checks to see if INTEGER1 is less than or equal to INTEGER2
|
||||
[ INTEGER1 -lt INTEGER2 ] => checks to see if INTEGER1 is less than INTEGER2
|
||||
[ INTEGER1 -ne INTEGER2 ] => checks to see if INTEGER1 is not equal to INTEGER2
|
||||
etc...
|
||||
```
|
||||
|
||||
方括号的这种用法也可以很有 shell 风格,例如通过带上 `-f` 参数可以判断某个文件是否存在:
|
||||
|
||||
```
|
||||
for i in {000..099}; \
|
||||
do \
|
||||
if [ -f file$i ]; \
|
||||
then \
|
||||
echo file$i exists; \
|
||||
else \
|
||||
touch file$i; \
|
||||
echo I made file$i; \
|
||||
fi; \
|
||||
done
|
||||
```
|
||||
|
||||
如果你在上一篇文章使用到的测试目录中运行以上这串命令,其中的第 3 行会判断那几十个文件当中的某个文件是否存在。如果文件存在,会输出一条提示信息;如果文件不存在,就会把对应的文件创建出来。最终,这个目录中会完整存在从 `file000` 到 `file099` 这一百个文件。
|
||||
|
||||
上面这段命令还可以写得更加简洁:
|
||||
|
||||
```
|
||||
for i in {000..099};\
|
||||
do\
|
||||
if [ ! -f file$i ];\
|
||||
then\
|
||||
touch file$i;\
|
||||
echo I made file$i;\
|
||||
fi;\
|
||||
done
|
||||
```
|
||||
|
||||
其中 `!` 运算符表示将判断结果取反,因此第 3 行的含义就是“如果文件 `file$i` 不存在”。
|
||||
|
||||
可以尝试一下将测试目录中那几十个文件随意删除几个,然后运行上面的命令,你就可以看到它是如何把被删除的文件重新创建出来的。
|
||||
|
||||
除了 `-f` 之外,还有很多有用的参数。`-d` 参数可以判断某个目录是否存在,`-h` 参数可以判断某个文件是不是一个符号链接。可以用 `-G` 参数判断某个文件是否属于某个用户组,用 `-ot` 参数判断某个文件的最后更新时间是否早于另一个文件,甚至还可以判断某个文件是否为空文件。
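
例如,下面两行分别检查目录是否存在,以及文件是否为非空(`-s`,以前文测试目录中的文件为例):

```
[ -d /tmp ] && echo "/tmp 是一个目录"
[ -s file023 ] && echo "file023 不是空文件"
```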
|
||||
|
||||
运行下面的几条命令,可以向几个文件中写入一些内容:
|
||||
|
||||
```
|
||||
echo "Hello World" >> file023
|
||||
echo "This is a message" >> file065
|
||||
echo "To humanity" >> file010
|
||||
```
|
||||
|
||||
然后运行:
|
||||
|
||||
```
|
||||
for i in {000..099};\
|
||||
do\
|
||||
if [ ! -s file$i ];\
|
||||
then\
|
||||
rm file$i;\
|
||||
echo I removed file$i;\
|
||||
fi;\
|
||||
done
|
||||
```
|
||||
|
||||
你就会发现所有空文件都被删除了,只剩下少数几个非空的文件。
|
||||
|
||||
如果你还想了解更多别的参数,可以执行 `man test` 来查看 `test` 命令的 man 手册(`test` 是 `[ ... ]` 的命令别名)。
|
||||
|
||||
有时候你还会看到 `[[ ... ]]` 这种双方括号的形式,使用起来和单方括号差别不大。但双方括号支持的比较运算符更加丰富:例如可以使用 `==` 来判断某个字符串是否符合某个<ruby>模式<rt>pattern</rt></ruby>,也可以使用 `<`、`>` 按字典序比较两个字符串的先后。
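
例如:

```
name="file023"
if [[ $name == file* ]]; then
    echo "$name 符合模式 file*"
fi

if [[ "apple" < "banana" ]]; then
    echo "apple 按字典序排在 banana 之前"
fi
```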
|
||||
|
||||
可以在 [Bash 表达式文档][5]中了解到双方括号支持的更多运算符。
|
||||
|
||||
### 下一集
|
||||
|
||||
在下一篇文章中,我们会开始介绍圆括号 `()` 在 Linux 命令行中的用法,敬请关注!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2019/4/using-square-brackets-bash-part-2
|
||||
|
||||
作者:[Paul Brown][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[HankChow](https://github.com/HankChow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/bro66
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/square-brackets-3734552_1920.jpg?itok=hv9D6TBy "square brackets"
|
||||
[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO
|
||||
[3]: https://www.linux.com/blog/2019/3/using-square-brackets-bash-part-1
|
||||
[4]: https://www.linux.com/blog/learn/2019/2/logical-ampersand-bash
|
||||
[5]: https://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions
|
||||
[6]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
|
||||
[7]: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
|
||||
[8]: https://www.linux.com/blog/learn/2019/1/more-about-angle-brackets-bash
|
||||
[9]: https://www.linux.com/blog/learn/2019/2/and-ampersand-and-linux
|
||||
[10]: https://www.linux.com/blog/learn/2019/2/ampersands-and-file-descriptors-bash
|
||||
[11]: https://www.linux.com/blog/learn/2019/2/all-about-curly-braces-bash
|
||||
|
@ -1,86 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tomjlw)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Streaming internet radio with RadioDroid)
|
||||
[#]: via: (https://opensource.com/article/19/4/radiodroid-internet-radio-player)
|
||||
[#]: author: (Chris Hermansen (Community Moderator) https://opensource.com/users/clhermansen)
|
||||
|
||||
使用 RadioDroid 流传输网络广播
|
||||
======
|
||||
通过简单的设置使用你家中的音响收听你最爱的网络电台
|
||||
![编程中的女士][1]
|
||||
|
||||
最近,我关注的在线新闻媒体都在对 [Google 停售 Chromecast 音频设备][2]的决定表示惋惜。该设备在音频媒体界受到[好评][3],因此我早就在考虑入手一个。鉴于 Chromecast 即将消亡的消息,我决定趁它们还没有全部被打包扔进垃圾堆之前,以合理的价位买上一个。
|
||||
|
||||
我在 [MobileFun][4] 上找到了一台并下了订单。设备最终到货了,包装是 Google 一贯的简约风格,上面印着非常简短的入门指南。
|
||||
|
||||
![Google Chromecast 音频][5]
|
||||
|
||||
我通过光学 S/PDIF 线把它连接到我的数模转换器,再接入家庭音响,希望以此获得最佳音质。
|
||||
|
||||
安装过程并无纰漏,在五分钟后我就准备好播放一些音乐了。我知道一些安卓应用支持 Chromecast,因此我决定用 Google Play 音乐测试它。意料之中,它工作得不错,音乐效果听上去也相当好。然而作为一个具有开源精神的人,我决定看看我能找到什么开源播放器能兼容 Chromecast。
|
||||
|
||||
### RadioDroid 的救赎
|
||||
[RadioDroid 安卓应用][6]满足条件。它是开源的,并且可以从 [GitHub][7]、Google Play 以及 [F-Droid][8] 上获取。根据帮助文档,RadioDroid 从 [Community Radio Browser][9] 网页获取播放流。因此我决定在手机上装一个试试。
|
||||
|
||||
![RadioDroid][10]
|
||||
|
||||
安装过程快速顺利,RadioDroid 打开后很快就展示出了当地电台。你可以在这张屏幕截图的右上方附近看到 Chromecast 按钮(一个带波纹的长方形图标)。
|
||||
|
||||
我尝试了几个当地电台,应用都能稳定地在手机扬声器上播放音乐。要把音乐流式传输到 Chromecast 上,我得摆弄一下那个 Chromecast 按钮,不过它确实能完成流传输。
|
||||
|
||||
我决定找找我最爱的网络广播电台之一:法国马赛的[格雷诺耶广播电台][11]。在 RadioDroid 上有许多找到电台的方法。其中一种是使用电台列表上方的标签,比如“当地”、“最流行”等。其中一个标签是国家,我找到法国,在大约 1500 个电台中滑动翻找格雷诺耶广播电台。另一种办法是使用屏幕上方的查询按钮;查询很快就找到了那家美妙的电台。我又试了几次查询,它们都返回了合理的结果。
|
||||
|
||||
回到当地标签,我在列表中翻了翻,发现“当地”的定义似乎是“在同一个国家”。因此,尽管西雅图、波特兰、旧金山、洛杉矶和朱诺都比多伦多更靠近我家,我并没有在当地标签中看到它们。不过,通过查询功能,我可以找到所有名字中带有“西雅图”的电台。
|
||||
|
||||
语言标签使我找到所有用葡语(及葡语方言)播报的电台。我很快发现了另一个最爱的电台 [91 Rock Curitiba][12]。
|
||||
|
||||
接着我灵光一闪:虽然现在是春天,但那又如何?让我们听点圣诞音乐。不出所料,搜索“圣诞”把我引到了 [181.FM – Christmas Blender][13]。不错,听上一两分钟对我来说就够了。
|
||||
|
||||
因此总的来说,我推荐 RadioDroid 和 Chromecast 的组合,作为一种以合理价位在家庭音响上播放网络电台的好方式。
|
||||
|
||||
### 关于音乐……
|
||||
|
||||
最近,我在 [Blue Coast Music][16] 商店里选购了一张由 [Qua Continuum][15] 创作、名为 [Continuum One][14] 的有趣的氛围音乐(甚至没有节拍)专辑。
|
||||
|
||||
Blue Coast 有许多能吸引开源音乐爱好者的地方:音乐可以直接下载(有时还提供实体介质),无需经过那些奇怪的平台专用下载管理器;通常提供多种格式,包括 WAV、FLAC 和 DSD,其中 WAV 和 FLAC 有不同的位深和采样率可选,包括 16/44.1、24/96 和 24/192,DSD 则有 2.8、5.6 和 11.2 MHz。音乐都是用优秀的设备精心录制的。遗憾的是,尽管我喜欢 Blue Coast 上的几位艺术家,包括 Qua Continuum、[Art Lande][17] 以及 [Alex De Grassi][18],但我没有找到太多符合我口味的音乐。
|
||||
|
||||
在 [Bandcamp][19] 上,我买了 [Emancipator 的 Baralku][20] 和 [Framework 的 Tides][21],两张都是我喜欢的专辑。两位艺术家的音乐很符合我的口味:电子但(总体来说)又不是舞曲,旋律优美,和声动听。Bandcamp 上有许多能让开源音乐发烧友动心的东西:购买前可以试听完整曲目;没有垃圾下载器;与音乐人的深度互动;以及对[知识共享音乐][22]的支持。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/radiodroid-internet-radio-player
|
||||
|
||||
作者:[Chris Hermansen (Community Moderator)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/clhermansen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy (woman programming)
|
||||
[2]: https://www.theverge.com/2019/1/11/18178751/google-chromecast-audio-discontinued-sale
|
||||
[3]: https://www.whathifi.com/google/chromecast-audio/review
|
||||
[4]: https://www.mobilefun.com/google-chromecast-audio-black-70476
|
||||
[5]: https://opensource.com/sites/default/files/uploads/internet-radio_chromecast.png (Google Chromecast Audio)
|
||||
[6]: https://play.google.com/store/apps/details?id=net.programmierecke.radiodroid2
|
||||
[7]: https://github.com/segler-alex/RadioDroid
|
||||
[8]: https://f-droid.org/en/packages/net.programmierecke.radiodroid2/
|
||||
[9]: http://www.radio-browser.info/gui/#!/
|
||||
[10]: https://opensource.com/sites/default/files/uploads/internet-radio_radiodroid.png (RadioDroid)
|
||||
[11]: http://www.radiogrenouille.com/
|
||||
[12]: https://91rock.com.br/
|
||||
[13]: http://player.181fm.com/?station=181-xblender
|
||||
[14]: https://www.youtube.com/watch?v=PqLCQXPS8iQ
|
||||
[15]: https://bluecoastmusic.com/artists/qua-continuum
|
||||
[16]: https://bluecoastmusic.com/store
|
||||
[17]: https://bluecoastmusic.com/store?f%5B0%5D=search_api_multi_aggregation_1%3Aart%20lande
|
||||
[18]: https://bluecoastmusic.com/store?f%5B0%5D=search_api_multi_aggregation_1%3Aalex%20de%20grassi
|
||||
[19]: https://bandcamp.com/
|
||||
[20]: https://emancipator.bandcamp.com/album/baralku
|
||||
[21]: https://frameworksuk.bandcamp.com/album/tides
|
||||
[22]: https://bandcamp.com/tag/creative-commons
|
64
translated/tech/20190409 Enhanced security at the edge.md
Normal file
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (hopefully2333)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Enhanced security at the edge)
|
||||
[#]: via: (https://www.networkworld.com/article/3388130/enhanced-security-at-the-edge.html#tk.rss_all)
|
||||
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)
|
||||
|
||||
增强边缘计算的安全性
|
||||
======
|
||||
边缘计算环境带来的安全风险迫使公司必须特别关注它的安全措施。
|
||||
|
||||
说数据安全是高管们和董事会最关注的问题已经是陈词滥调了。但问题是:数据安全问题不会自己消失。
|
||||
|
||||
骇客和攻击者一直在寻找利用漏洞的新方法。就像公司开始使用人工智能和机器学习等新兴技术来自动化地保护他们的组织一样,攻击者们也在使用这些技术来达成他们的目的。
|
||||
|
||||
简而言之,安全问题是一定不能忽视的。现在,随着越来越多的公司开始使用边缘计算,如何保护这些边缘计算环境,需要有新的安全考量。
|
||||
|
||||
**边缘计算的风险更高**
|
||||
|
||||
正如 Network World 的一篇文章所建议的,边缘计算的安全架构应该将重点放在物理安全上。这并不是说要忽视对传输中数据的保护,而是说,实际部署中的物理环境和物理设备更值得关注。
|
||||
|
||||
例如,边缘计算的硬件设备通常位于大公司或者广阔空间中,有时候是在很容易进入的共享办公室和公共区域里。从表面上看,这节省了成本,能更快地访问到相关的数据,而不必在后端的数据中心和前端的设备之间往返。
|
||||
|
||||
但是,如果没有任何级别的访问控制,这台设备就会暴露在恶意操作和简单人为错误的双重风险之下。想象一下办公室的清洁工意外地关掉了设备,以及随之而来的设置停机所造成的后果。
|
||||
|
||||
另一个风险是“影子边缘 IT”(shadow edge IT)。有时候非 IT 的工作人员会部署一个边缘站点来快速启动项目,却没有及时通知 IT 部门这个站点正在连接到网络。例如,零售商店可能会自行安装他们自己的数字标牌;又或者,销售团队把物联网传感器装进电视,在销售演示中实时使用它们。
|
||||
|
||||
在这种情况下,IT 部门很少甚至完全看不到这些设备和边缘站点,这就使得网络可能暴露在外。
|
||||
|
||||
**保护边缘计算环境**
|
||||
|
||||
部署微型数据中心(MDC)是规避上述风险的一个简单方法。
|
||||
|
||||
“在历史上,大多数这些[边缘]环境都是不受控制的,”施耐德电气安全能源部门的首席技术官和创新高级副总裁 Kevin Brown 说,“它们可能是第一级,但更可能是第 0 级类型的设计,就像开放的配线柜。它们现在需要被当作微型数据中心来对待,你需要像管理关键任务数据中心一样管理它们。”
|
||||
|
||||
顾名思义,这种解决方案是一个安全的独立机柜,包含了在室内或室外运行应用所需的全部存储、处理和网络资源,同样也包含必要的电源、冷却、安全和管理工具。
|
||||
|
||||
最重要的部分是高级别的安全性。这个装置是封闭的,有上锁的门,以防止非法入侵。通过合适的供应商,MDC 可以进行定制,包括用于远程数字化管理的监控摄像头、传感器和监控技术。
|
||||
|
||||
随着越来越多的公司开始利用边缘计算的优势,他们必须利用安全解决方案的优势来保护他们的数据和边缘环境。
|
||||
|
||||
在 APC.com 上了解保护你的边缘计算环境的最佳方案。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3388130/enhanced-security-at-the-edge.html#tk.rss_all
|
||||
|
||||
作者:[Anne Taylor][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[hopefully2333](https://github.com/hopefully2333)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Anne-Taylor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/04/istock-1091707448-100793312-large.jpg
|
||||
[2]: https://www.csoonline.com/article/3250144/6-ways-hackers-will-use-machine-learning-to-launch-attacks.html
|
||||
[3]: https://www.marketwatch.com/press-release/edge-computing-market-2018-global-analysis-opportunities-and-forecast-to-2023-2018-08-20
|
||||
[4]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
|
||||
[5]: https://www.youtube.com/watch?v=1NLk1cXEukQ
|
||||
[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
|
@ -0,0 +1,350 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( NeverKnowsTomorrow )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Four Methods To Add A User To Group In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
在 Linux 中添加用户到组的四个方法
|
||||
======
|
||||
|
||||
Linux 组是用于管理 Linux 中用户帐户的组织单位。
|
||||
|
||||
Linux 系统中的每一个用户和组,都有唯一的数字标识号。
|
||||
|
||||
它们分别被称为用户 ID(UID)和组 ID(GID)。组的主要目的是为组的成员定义一组特权。
|
||||
|
||||
它们都可以执行特定的操作,但不能执行其他操作。
|
||||
|
||||
Linux 中有两种类型的默认组可用。每个用户应该只有一个 <ruby>主要组<rt>primary group</rt></ruby> 和任意数量的 <ruby>次要组<rt>secondary group</rt></ruby>。
|
||||
|
||||
  * **主要组:** 创建用户帐户时,就会同时为用户设置主要组,组名通常与用户名相同。在创建新文件(或目录)、修改文件或执行命令等任何操作时,都会应用用户的主要组。用户的主要组信息存储在 `/etc/passwd` 文件中。
|
||||
  * **次要组:** 也称为附加组。它允许一组用户对属于该组的文件执行特定的操作。
|
||||
|
||||
例如,如果你希望允许少数用户运行 apache(httpd)服务命令,那么它将非常适合。
|
||||
|
||||
你可能对以下与用户管理相关的文章感兴趣。
|
||||
|
||||
  * [在 Linux 中创建用户帐户的三种方法?][1]
  * [如何在 Linux 中创建批量用户?][2]
  * [如何在 Linux 中使用不同的方法更新/更改用户密码?][3]
|
||||
|
||||
可以使用以下四种方法实现。
|
||||
|
||||
* **`usermod:`** usermod 命令修改系统帐户文件,以反映在命令行中指定的更改。
|
||||
* **`gpasswd:`** gpasswd 命令用于管理 /etc/group 和 /etc/gshadow。每个组都可以有管理员、成员和密码。
|
||||
* **`Shell Script:`** shell 脚本允许管理员自动执行所需的任务。
|
||||
* **`Manual Method:`** 我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
|
||||
|
||||
我假设你已经拥有本文操作所需的组和用户。在本例中,我们将使用以下用户和组:用户是 `user1`、`user2`、`user3`,组是 `mygroup` 和 `mygroup1`。
|
||||
|
||||
在进行更改之前,我想检查用户和组信息。详见下文。
|
||||
|
||||
我可以看到下面的用户与他们自己的组关联,而不是与其他组关联。
|
||||
|
||||
```
|
||||
# id user1
|
||||
uid=1008(user1) gid=1008(user1) groups=1008(user1)
|
||||
|
||||
# id user2
|
||||
uid=1009(user2) gid=1009(user2) groups=1009(user2)
|
||||
|
||||
# id user3
|
||||
uid=1010(user3) gid=1010(user3) groups=1010(user3)
|
||||
```
|
||||
|
||||
我可以看到这个组中没有关联的用户。
|
||||
|
||||
```
|
||||
# getent group mygroup
|
||||
mygroup:x:1012:
|
||||
|
||||
# getent group mygroup1
|
||||
mygroup1:x:1013:
|
||||
```
|
||||
|
||||
### 方法 1:什么是 usermod 命令?
|
||||
|
||||
usermod 命令修改系统帐户文件,以反映命令行上指定的更改。
|
||||
|
||||
### 如何使用 usermod 命令将现有的用户添加到次要组或附加组?
|
||||
|
||||
要将现有用户添加到次要组,请使用带有 `-G` 选项和组名称的 usermod 命令。
|
||||
|
||||
语法
|
||||
|
||||
```
|
||||
# usermod [-G] [GroupName] [UserName]
|
||||
```
|
||||
|
||||
如果系统中不存在给定的用户或组,你将收到一条错误消息。如果没有得到任何错误,那么用户已经被添加到相应的组中。
|
||||
|
||||
```
|
||||
# usermod -a -G mygroup user1
|
||||
```
|
||||
|
||||
让我使用 id 命令查看输出。是的,添加成功。
|
||||
|
||||
```
|
||||
# id user1
|
||||
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
|
||||
```
|
||||
|
||||
### 如何使用 usermod 命令将现有的用户添加到多个次要组或附加组?
|
||||
|
||||
要将现有用户添加到多个次要组中,请使用带有 `-G` 选项的 usermod 命令和带有逗号分隔的组名称。
|
||||
|
||||
语法
|
||||
|
||||
```
|
||||
# usermod [-G] [GroupName1,GroupName2] [UserName]
|
||||
```
|
||||
|
||||
在本例中,我们将把 `user2` 添加到 `mygroup` 和 `mygroup1` 中。
|
||||
|
||||
```
|
||||
# usermod -a -G mygroup,mygroup1 user2
|
||||
```
|
||||
|
||||
让我使用 `id` 命令查看输出。是的,`user2` 已成功添加到 `myGroup` 和 `myGroup1` 中。
|
||||
|
||||
```
|
||||
# id user2
|
||||
uid=1009(user2) gid=1009(user2) groups=1009(user2),1012(mygroup),1013(mygroup1)
|
||||
```
|
||||
|
||||
### 如何改变用户的主要组?
|
||||
|
||||
要更改用户的主要组,请使用带有 `-g` 选项和组名称的 usermod 命令。
|
||||
|
||||
语法
|
||||
|
||||
```
|
||||
# usermod [-g] [GroupName] [UserName]
|
||||
```
|
||||
|
||||
我们必须使用 `-g` 改变用户的主要组。
|
||||
|
||||
```
|
||||
# usermod -g mygroup user3
|
||||
```
|
||||
|
||||
让我们看看输出。是的,已成功更改。现在,`user3` 的主要组显示为 `mygroup`,而不再是 `user3`。
|
||||
|
||||
```
|
||||
# id user3
|
||||
uid=1010(user3) gid=1012(mygroup) groups=1012(mygroup)
|
||||
```
|
||||
|
||||
### 方法 2:什么是 gpasswd 命令?
|
||||
|
||||
`gpasswd` 命令用于管理 `/etc/group` 和 `/etc/gshadow`。每个组都可以有管理员、成员和密码。
|
||||
|
||||
### 如何使用 gpasswd 命令将现有用户添加到次要组或者附加组?
|
||||
|
||||
要将现有用户添加到次要组,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
|
||||
|
||||
语法
|
||||
|
||||
```
|
||||
# gpasswd [-M] [UserName] [GroupName]
|
||||
```
|
||||
|
||||
在本例中,我们将把 `user1` 添加到 `mygroup` 中。
|
||||
|
||||
```
|
||||
# gpasswd -M user1 mygroup
|
||||
```
|
||||
|
||||
让我使用 id 命令查看输出。是的,`user1` 已成功添加到 `mygroup` 中。
|
||||
|
||||
```
|
||||
# id user1
|
||||
uid=1008(user1) gid=1008(user1) groups=1008(user1),1012(mygroup)
|
||||
```
|
||||
|
||||
### 如何使用 gpasswd 命令添加多个用户到次要组或附加组中?
|
||||
|
||||
要将多个用户添加到辅助组中,请使用带有 `-M` 选项和组名称的 gpasswd 命令。
|
||||
|
||||
语法
|
||||
|
||||
```
|
||||
# gpasswd [-M] [UserName1,UserName2] [GroupName]
|
||||
```
|
||||
|
||||
在本例中,我们将把 `user2` 和 `user3` 添加到 `mygroup1` 中。
|
||||
|
||||
```
|
||||
# gpasswd -M user2,user3 mygroup1
|
||||
```
|
||||
|
||||
让我使用 getent 命令查看输出。是的,`user2` 和 `user3` 已成功添加到 `myGroup1` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup1
|
||||
mygroup1:x:1013:user2,user3
|
||||
```
|
||||
|
||||
### 如何使用 gpasswd 命令从组中删除一个用户?
|
||||
|
||||
要从组中删除用户,请使用带有 `-d` 选项的 gpasswd 命令以及用户和组的名称。
|
||||
|
||||
语法
|
||||
|
||||
```
|
||||
# gpasswd [-d] [UserName] [GroupName]
|
||||
```
|
||||
|
||||
在本例中,我们将从 `mygroup` 中删除 `user1` 。
|
||||
|
||||
```
|
||||
# gpasswd -d user1 mygroup
|
||||
Removing user user1 from group mygroup
|
||||
```
|
||||
|
||||
### 方法 3:使用 Shell 脚本?
|
||||
|
||||
基于上面的例子,我知道 `usermod` 命令没有能力将多个用户添加到组中,但是可以通过 `gpasswd` 命令完成。
|
||||
|
||||
但是,它将覆盖当前与组关联的现有用户。
|
||||
|
||||
例如,`user1` 已经与 `mygroup` 关联。如果用 `gpasswd -M` 命令把 `user2` 和 `user3` 添加到 `mygroup`,它并不会把他们追加进去,而是用新的成员列表覆盖该组,`user1` 会被移出该组。
|
||||
|
||||
如果要将多个用户添加到多个组中,解决方案是什么?
|
||||
|
||||
两个命令中都没有默认选项来实现这一点。
|
||||
|
||||
因此,我们需要编写一个小的 shell 脚本来实现这一点。
|
||||
|
||||
### 如何使用 shell 脚本将多个用户添加到次要组或附加组?
|
||||
|
||||
如果要将多个用户添加到同一个次要组或附加组,请创建以下小的 shell 脚本(它在循环中调用 `usermod`)。
|
||||
|
||||
创建用户列表。每个用户应该在单独的行中。
|
||||
|
||||
```bash
|
||||
$ cat user-lists.txt
|
||||
user1
|
||||
user2
|
||||
user3
|
||||
```
|
||||
|
||||
使用以下 shell 脚本将多个用户添加到单个次要组。
|
||||
|
||||
```bash
|
||||
vi group-update.sh
|
||||
|
||||
#!/bin/bash
|
||||
for user in `cat user-lists.txt`
|
||||
do
|
||||
usermod -a -G mygroup $user
|
||||
done
|
||||
```
|
||||
|
||||
设置 `group-update.sh` 文件的可执行权限。
|
||||
|
||||
```
|
||||
# chmod +x group-update.sh
|
||||
```
|
||||
|
||||
最后运行脚本来实现它。
|
||||
|
||||
```
|
||||
# sh group-update.sh
|
||||
```
|
||||
|
||||
让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup
|
||||
mygroup:x:1012:user1,user2,user3
|
||||
```
|
||||
|
||||
### 如何使用 shell 脚本将多个用户添加到多个次要组或附加组?
|
||||
|
||||
如果要将多个用户添加到多个次要组或附加组,请创建以下小的 shell 脚本(同样基于 `usermod`)。
|
||||
|
||||
创建用户列表。每个用户应该在单独的行中。
|
||||
|
||||
```bash
|
||||
$ cat user-lists.txt
|
||||
user1
|
||||
user2
|
||||
user3
|
||||
```
|
||||
|
||||
创建组列表。每组应在单独的行中。
|
||||
|
||||
```bash
|
||||
$ cat group-lists.txt
|
||||
mygroup
|
||||
mygroup1
|
||||
```
|
||||
|
||||
使用以下 shell 脚本将多个用户添加到多个次要组。
|
||||
|
||||
```bash
|
||||
vi group-update-1.sh

#!/bin/sh
for user in `cat user-lists.txt`
do
  for group in `cat group-lists.txt`
  do
    usermod -a -G $group $user
  done
done
|
||||
```
|
||||
|
||||
设置 `group-update-1.sh` 文件的可执行权限。
|
||||
|
||||
```
|
||||
# chmod +x group-update-1.sh
|
||||
```
|
||||
|
||||
最后运行脚本来实现它。
|
||||
|
||||
```
|
||||
# sh group-update-1.sh
|
||||
```
|
||||
|
||||
让我看看使用 getent 命令的输出。 是的,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup
|
||||
mygroup:x:1012:user1,user2,user3
|
||||
```
|
||||
|
||||
此外,`user1`,`user2` 和 `user3` 已成功添加到 `mygroup1` 中。
|
||||
|
||||
```
|
||||
# getent group mygroup1
|
||||
mygroup1:x:1013:user1,user2,user3
|
||||
```
|
||||
|
||||
### 方法 4:在 Linux 中手动将用户添加到组
|
||||
|
||||
我们可以通过编辑 `/etc/group` 文件手动将用户添加到任何组中。
|
||||
|
||||
打开 `/etc/group` 文件,搜索你要把用户加入的组名,然后把用户名追加到该行末尾的成员列表中。
|
||||
|
||||
```
|
||||
# vi /etc/group
|
||||
```
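
文件中每一行的格式为 `组名:密码占位符:GID:成员列表`,成员之间以逗号分隔。以本文的例子来说,修改后的那一行看起来像这样:

```
mygroup:x:1012:user1,user2,user3
```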
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-add-user-to-group-primary-secondary-group-usermod-gpasswd/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[NeverKnowsTomorrow](https://github.com/NeverKnowsTomorrow)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/linux-user-account-creation-useradd-adduser-newusers/
|
||||
[2]: https://www.2daygeek.com/how-to-create-the-bulk-users-in-linux/
|
||||
[3]: https://www.2daygeek.com/linux-passwd-chpasswd-command-set-update-change-users-password-in-linux-using-shell-script/
|
94
translated/tech/20190410 Managing Partitions with sgdisk.md
Normal file
94
translated/tech/20190410 Managing Partitions with sgdisk.md
Normal file
@ -0,0 +1,94 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Managing Partitions with sgdisk)
|
||||
[#]: via: (https://fedoramagazine.org/managing-partitions-with-sgdisk/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
使用 sgdisk 管理分区
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
[Roderick W. Smith][2] 的 _sgdisk_ 命令可在命令行中管理硬盘的分区。下面将介绍使用它所需的基础知识。
|
||||
|
||||
以下六个参数是你使用 sgdisk 大多数基本功能所需了解的:
|
||||
|
||||
1. **-p**:_打印_分区表:`# sgdisk -p /dev/sda`
2. **-d x**:_删除_分区 x:`# sgdisk -d 1 /dev/sda`
3. **-n x:y:z**:创建一个编号为 x 的_新_分区,起始于 y,结束于 z:`# sgdisk -n 1:1MiB:2MiB /dev/sda`
4. **-c x:y**:将分区 x 的名称_更改_为 y:`# sgdisk -c 1:grub /dev/sda`
5. **-t x:y**:将分区 x 的_类型_更改为 y:`# sgdisk -t 1:ef02 /dev/sda`
6. **--list-types**:列出分区类型代码:`# sgdisk --list-types`
|
||||
|
||||
|
||||
|
||||
![The SGDisk Command][3]
|
||||
|
||||
如你在上面的例子中所见,大多数命令都要求将要操作的硬盘的[设备文件名][4]指定为最后一个参数。
|
||||
|
||||
可以组合上面的参数,这样你可以一次定义所有分区:
|
||||
|
||||
`# sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda`
|
||||
|
||||
在值的前面加上 **+** 或 **-** 符号,可以为某些字段指定相对值。如果你使用相对值,sgdisk 会为你做数学运算。例如,上面的例子可以写成:
|
||||
|
||||
`# sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda`
|
||||
|
||||
**0** 值对于以下几个字段是特殊情况:
|
||||
|
||||
* 对于_分区号_字段,0 表示应使用下一个可用编号(编号从 1 开始)。
|
||||
* 对于_起始地址_字段,0 表示使用最大可用空闲块的头。硬盘开头的一些空间始终保留给分区表本身。
|
||||
* 对于_结束地址_字段,0 表示使用最大可用空闲块的末尾。
|
||||
|
||||
|
||||
|
||||
通过在适当的字段中使用 **0** 和相对值,你可以创建一系列分区,而无需预先计算任何绝对值。例如,如果在一块空白硬盘中,以下 sgdisk 命令序列将创建典型 Linux 安装所需的所有基本分区:
|
||||
|
||||
`# sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda`

`# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda`

`# sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda`

`# sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda`
|
||||
|
||||
上面的例子展示了如何为基于 BIOS 的计算机分区硬盘。基于 UEFI 的计算机上不需要 [grub 分区][5]。由于 sgdisk 在上面的示例中为你计算了所有绝对值,因此在基于 UEFI 的计算机上你可以跳过第一个命令,其余命令无需修改即可运行。同样,你也可以跳过创建交换分区,其余命令同样不需要修改。
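
如果是基于 UEFI 的计算机,通常会用一个 EFI 系统分区(类型代码 `ef00`)来取代上面的 grub 分区。下面是一个示意写法,其中 512MiB 的大小和 `efi` 这个名称只是常见的假设取值:

`# sgdisk -n 0:0:+512MiB -t 0:ef00 -c 0:efi /dev/sda`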
|
||||
|
||||
还有使用一个命令删除硬盘上所有分区的快捷方式:
|
||||
|
||||
`# sgdisk --zap-all /dev/sda`
|
||||
|
||||
关于最新和详细信息,请查看手册页:
|
||||
|
||||
`$ man sgdisk`
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/managing-partitions-with-sgdisk/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/glb/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/managing-partitions-816x345.png
|
||||
[2]: https://www.rodsbooks.com/
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/sgdisk.jpg
|
||||
[4]: https://en.wikipedia.org/wiki/Device_file
|
||||
[5]: https://en.wikipedia.org/wiki/BIOS_boot_partition
|
@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Zip Files and Folders in Linux [Beginner Tip])
[#]: via: (https://itsfoss.com/linux-zip-folder/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

How to Zip Files and Folders in Linux [Beginner Tip]
======

_**Brief: This article shows you how to create a zip folder in Ubuntu and other Linux distributions. Both terminal and GUI methods are covered.**_

Zip is one of the most popular archive file formats. With zip, you can compress multiple files into one file. This not only saves disk space, it also saves network bandwidth, which is why you'll encounter zip files almost everywhere.

As a normal user, you'll mostly unzip folders in Linux. But how do you zip a folder in Linux? This article answers that question.
**Prerequisite: Verify that zip is installed**

Usually [zip][1] is already installed, but it doesn't hurt to verify. You can run the following command to install zip and unzip; if they are not installed yet, they will be now:

```
sudo apt install zip unzip
```

Now that you know your system has zip support, you can read on to learn how to zip a directory in Linux.

![][2]
### Zip a folder on the Linux command line

The syntax of the zip command is quite simple:

```
zip [option] output_file_name input1 input2
```

There are several options, but I don't want you to get confused by them. If you just want to turn a bunch of files into a zip folder, use a command like this:

```
zip -r output_file.zip file1 folder1
```

The -r option recurses into directories and compresses their contents as well. The .zip extension on the output file name is optional, because .zip is added by default.
You should see the files being added to the compressed folder during the zip operation:

```
zip -r myzip abhi-1.txt abhi-2.txt sample_directory
adding: abhi-1.txt (stored 0%)
adding: abhi-2.txt (stored 0%)
adding: sample_directory/ (stored 0%)
adding: sample_directory/newfile.txt (stored 0%)
adding: sample_directory/agatha.txt (deflated 41%)
```
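If you want to double-check what ended up in the archive, unzip can list its contents without extracting anything (a quick aside, reusing the archive name from the example above):

```
unzip -l myzip.zip
```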
You can use the -e option to [create a password-protected zip folder in Linux][3], as sketched below.
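A minimal sketch (the archive and folder names here are made up): combining -r and -e encrypts the archive and makes zip prompt you for a password interactively:

```
zip -re secure.zip confidential_folder
```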
You are not restricted to the terminal for creating zip archive files. You can do it graphically as well. Here's how!

### Zip a folder in Ubuntu Linux using the GUI

_Though I am using Ubuntu here, the method should be pretty much the same in other distributions using GNOME or another desktop environment._

If you want to compress a file or folder on the Linux desktop, it's just a matter of a few clicks.

Go to the folder where you have the files (and folders) you want to compress into one zip folder.

Here, select the files and folders. Now, right-click and select "Compress". You can do the same for a single file as well.
![Select the files, right click and click compress][4]

Now you can create a compressed archive in zip, tar.xz, or 7z format. In case you are curious, all three are compression formats, and you can use whichever you like.

Enter the name you want for the archive file and click "Create".

![Create archive file][5]

It won't take long, and you'll see the archive file in the same directory.

![][6]

Well, that's it. You have successfully created a zip folder in Linux.

I hope this article helped you with zip files. Feel free to share your suggestions.
--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-zip-folder/

Author: [Abhishek Prakash][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Zip_(file_format)
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-folder-linux.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/password-protect-zip-file/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-file-ubuntu.jpg?resize=800%2C428&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/create-zip-folder-ubuntu-1.jpg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/zip-file-created-in-ubuntu.png?resize=800%2C277&ssl=1
@ -0,0 +1,124 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to identify duplicate files on Linux)
[#]: via: (https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How to identify duplicate files on Linux
======

Some files on a Linux system can appear in more than one location. Follow these instructions to find and identify these "identical twins", and learn why hard links can be so advantageous.

![Archana Jarajapu \(CC BY 2.0\)][1]
Identifying files that share disk space relies on the fact that the files share an inode: the data structure that stores all of a file's information except its name and contents. If two or more files have different names and locations in the file system, yet share an inode, they also share content, ownership, permissions, and so on.

These files are often referred to as "hard links", unlike symbolic links (i.e., soft links) that simply point to other files by containing their names. Symbolic links are easy to pick out in a file listing by the "l" in the first position and the **->** symbol that references the file being pointed to:
```
$ ls -l my*
-rw-r--r-- 4 shs shs 228 Apr 12 19:37 myfile
lrwxrwxrwx 1 shs shs 6 Apr 15 11:18 myref -> myfile
-rw-r--r-- 4 shs shs 228 Apr 12 19:37 mytwin
```
Identifying hard links in a single directory is not quite as obvious, but still very easy. If you list the files with the **ls -i** command and sort them by inode number, you can pick out the hard links fairly easily. In this type of ls output, the first column shows the inode numbers:
```
$ ls -i | sort -n | more
 ...
788000 myfile <==
788000 mytwin <==
801865 Name_Labels.pdf
786692 never leave home angry
920242 NFCU_Docs
800247 nmap-notes
```
Scan the output for identical inode numbers; any matches will tell you what you need to know.
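If the listing is long, you can let the shell do the scanning. The following one-liner is a rough sketch (not from the original article) that prints only the files in the current directory whose inode numbers occur more than once:

```
# collect inode numbers that appear more than once (uniq -d keeps duplicates),
# then print the directory entries that carry each duplicated inode
for inode in $(ls -i | awk '{print $1}' | sort -n | uniq -d); do
    ls -i | awk -v i="$inode" '$1 == i'
done
```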
**[Also see: [Invaluable tips and tricks for troubleshooting Linux][2]]**
On the other hand, if you simply want to check whether a particular file is a hard link to another file, there's an easier way than scanning through a list of hundreds of files: the find command's **-samefile** option will do the work for you:

```
$ find . -samefile myfile
./myfile
./save/mycopy
./mytwin
```
Note that the starting location provided to the find command determines how much of the file system is scanned for matches. In the example above, we're looking in the current directory and its subdirectories.

Adding details to the output with find's **-ls** option might be more convincing:
```
$ find . -samefile myfile -ls
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./myfile
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./save/mycopy
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 ./mytwin
```
The first column shows the inode number; then we see the file permissions, links, owner, file size, date information, and the names of the files that refer to the same disk content. Note that the link field in this case is "4", not the "3" we might expect. This tells us that there is another link to this same inode (one that is outside our search range).
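As a side note (not part of the original article), if you only need the link count and inode of a single file, GNU stat prints them directly:

```
# %h = number of hard links, %i = inode number, %n = file name
stat -c 'links=%h inode=%i name=%n' myfile
```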
If you want to look for all instances of hard links in a single directory, you can try a script like the following, which creates the list and looks up the duplicates for you:
```
#!/bin/bash

# searches for files sharing inodes

prev=""

# list files by inode
ls -i | sort -n > /tmp/$0

# search through the file for duplicate inode #s
while read line
do
    inode=`echo $line | awk '{print $1}'`
    if [ "$inode" == "$prev" ]; then
        grep $inode /tmp/$0
    fi
    prev=$inode
done < /tmp/$0

# clean up
rm /tmp/$0

$ ./findHardLinks
788000 myfile
788000 mytwin
```
You can also look up files by inode number with the find command, as shown below. However, this search could involve more than one file system, so you might get false results, since the same inode number could be in use on another file system, where it would represent a different file. If that's the case, the other details of the files will not be identical.
```
$ find / -inum 788000 -ls 2> /dev/null
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /tmp/mycopy
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/myfile
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/save/mycopy
788000 4 -rw-r--r-- 4 shs shs 228 Apr 12 19:37 /home/shs/mytwin
```

Note that the error output is redirected to /dev/null so that we don't have to look at all of the "Permission denied" errors that would otherwise be displayed for directories we're not allowed to search.
Also, note that scanning for files that contain the same content but don't share an inode (i.e., simple file copies) would take considerably more time and effort.

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html#tk.rss_all

Author: [Sandra Henry-Stocker][a]
Selected by: [lujun9972][b]
Translated by: [MjSeven](https://github.com/MjSeven)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/reflections-candles-100793651-large.jpg
[2]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@ -0,0 +1,309 @@
[#]: collector: (lujun9972)
[#]: translator: (zgj1024)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (HTTPie – A Modern Command Line HTTP Client For Curl And Wget Alternative)
[#]: via: (https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

HTTPie – A Modern Command-Line HTTP Client and an Alternative to curl and wget
======

Most of the time we use the curl or wget command to download files or to do other things.

We have written about the **[best command-line download managers][1]** in the past. You can navigate to those articles by clicking the corresponding URLs:

* **[aria2 – A multi-protocol command-line download tool for Linux][2]**
* **[Axel – A lightweight command-line download accelerator for Linux][3]**
* **[Wget – The standard command-line download utility for Linux][4]**
* **[curl – A practical command-line download tool for Linux][5]**

Today we will discuss the same kind of topic. The utility is called HTTPie: a modern command-line HTTP client and an excellent alternative to the curl and wget commands.
### What is HTTPie?

HTTPie (pronounced aitch-tee-tee-pie) is a command-line HTTP client.

The httpie tool is a modern command-line HTTP client that makes interacting with web services from the command line as friendly as possible.

It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and it displays colorized output.

HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.
### Main features

* Expressive and intuitive syntax
* Formatted and colorized terminal output
* Built-in JSON support (see the sketch after this list)
* Forms and file uploads
* HTTPS, proxies, and authentication
* Arbitrary request data
* Custom headers
* Persistent sessions
* wget-like downloads
* Python 2.7 and 3.x support
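For a quick taste of the built-in JSON support and custom headers mentioned in the list above, here is a sketch; the host, field names, and header are made up:

```
# '=' sends a JSON string field, ':=' sends raw JSON (numbers, booleans, lists),
# and 'Name:value' sets a custom request header
http PUT example.org/api/users name=John age:=29 X-API-Token:123abc
```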
### How to install HTTPie on Linux

Most Linux distributions provide it through the system package manager, which can be used to install it.

On **`Fedora`** systems, use the **[dnf command][6]** to install httpie:

```
$ sudo dnf install httpie
```

On **`Debian/Ubuntu`** systems, use the **[apt-get command][7]** or **[apt command][8]** to install httpie:

```
$ sudo apt install httpie
```

On **`Arch Linux`**-based systems, use the **[pacman command][9]** to install httpie:

```
$ sudo pacman -S httpie
```

On **`RHEL/CentOS`** systems, use the **[yum command][10]** to install httpie:

```
$ sudo yum install httpie
```

On **`openSUSE Leap`** systems, use the **[zypper command][11]** to install httpie:

```
$ sudo zypper install httpie
```
### 1) How to request a URL with HTTPie?

The basic usage of httpie is to take a website URL as an argument:
```
# http 2daygeek.com
HTTP/1.1 301 Moved Permanently
CF-RAY: 4c4a618d0c02ce6d-LHR
Cache-Control: max-age=3600
Connection: keep-alive
Date: Tue, 09 Apr 2019 06:21:28 GMT
Expires: Tue, 09 Apr 2019 07:21:28 GMT
Location: https://2daygeek.com/
Server: cloudflare
Transfer-Encoding: chunked
Vary: Accept-Encoding
```
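The 301 response above only shows the redirect itself. As an aside (not covered by the original article, and assuming a reasonably recent HTTPie version), HTTPie can chase redirects with the `--follow` option:

```
# request the final destination instead of stopping at the 301
http --follow 2daygeek.com
```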
### 2) How to download a file with HTTPie?

You can download a file with the `--download` option, similar to the wget command:
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png
HTTP/1.1 200 OK
Accept-Ranges: bytes
CF-Cache-Status: HIT
CF-RAY: 4c4a65d5ca360a66-LHR
Cache-Control: public, max-age=7200
Connection: keep-alive
Content-Length: 32066
Content-Type: image/png
Date: Tue, 09 Apr 2019 06:24:23 GMT
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Expires: Tue, 09 Apr 2019 08:24:23 GMT
Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
Server: cloudflare
Set-Cookie: __cfduid=dd2034b2f95ae42047e082f59f2b964f71554791063; expires=Wed, 08-Apr-20 06:24:23 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
Vary: Accept-Encoding

Downloading 31.31 kB to "Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png"
Done. 31.31 kB in 0.01187s (2.58 MB/s)
```
你还可以使用 `-o` 参数用不同的名称保存输出文件。
|
||||
|
||||
```
# http --download https://www.2daygeek.com/wp-content/uploads/2019/04/Anbox-Easy-Way-To-Run-Android-Apps-On-Linux.png -o Anbox-1.png
HTTP/1.1 200 OK
Accept-Ranges: bytes
CF-Cache-Status: HIT
CF-RAY: 4c4a68194daa0a66-LHR
Cache-Control: public, max-age=7200
Connection: keep-alive
Content-Length: 32066
Content-Type: image/png
Date: Tue, 09 Apr 2019 06:25:56 GMT
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Expires: Tue, 09 Apr 2019 08:25:56 GMT
Last-Modified: Mon, 08 Apr 2019 04:54:25 GMT
Server: cloudflare
Set-Cookie: __cfduid=d3eea753081690f9a2d36495a74407dd71554791156; expires=Wed, 08-Apr-20 06:25:56 GMT; path=/; domain=.2daygeek.com; HttpOnly; Secure
Vary: Accept-Encoding

Downloading 31.31 kB to "Anbox-1.png"
Done. 31.31 kB in 0.01551s (1.97 MB/s)
```
### 3) How to resume a partial download with HTTPie?

You can resume a partial download with the `-c` (`--continue`) option:
```
# http --download --continue https://speed.hetzner.de/100MB.bin -o 100MB.bin
HTTP/1.1 206 Partial Content
Connection: keep-alive
Content-Length: 100442112
Content-Range: bytes 4415488-104857599/104857600
Content-Type: application/octet-stream
Date: Tue, 09 Apr 2019 06:32:52 GMT
ETag: "5253f0fd-6400000"
Last-Modified: Tue, 08 Oct 2013 11:48:13 GMT
Server: nginx
Strict-Transport-Security: max-age=15768000; includeSubDomains

Downloading 100.00 MB to "100MB.bin"
 | 24.14 % 24.14 MB 1.12 MB/s 0:01:07 ETA^C
```
You can verify the partially downloaded file with the following output:
```
[email protected]:/var/log# ls -lhtr 100MB.bin
-rw-r--r-- 1 root root 25M Apr 9 01:33 100MB.bin
```
### 4) How to upload a file with HTTPie?

You can upload a file using HTTPie with the less-than symbol `<`:
```
$ http https://transfer.sh < Anbox-1.png
```
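Besides redirecting stdin, HTTPie also supports multipart form uploads with the `field@file` syntax (a sketch; the host and the form field name are made up):

```
# -f forces a multipart/form-data request; 'cv' is the form field name
http -f POST example.org/upload cv@Anbox-1.png
```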
### 5) How to download a file with the redirect symbol ">" using HTTPie?

You can download a file using HTTPie with the redirect symbol `>`:
```
# http https://www.2daygeek.com/wp-content/uploads/2019/03/How-To-Install-And-Enable-Flatpak-Support-On-Linux-1.png > Flatpak.png

# ls -ltrh Flatpak.png
-rw-r--r-- 1 root root 47K Apr 9 01:44 Flatpak.png
```
### 6) How to send an HTTP GET request?

You can send an HTTP GET method in the request. The GET method retrieves information from the given server using a given URI:
```
# http GET httpie.org
HTTP/1.1 301 Moved Permanently
CF-RAY: 4c4a83a3f90dcbe6-SIN
Cache-Control: max-age=3600
Connection: keep-alive
Date: Tue, 09 Apr 2019 06:44:44 GMT
Expires: Tue, 09 Apr 2019 07:44:44 GMT
Location: https://httpie.org/
Server: cloudflare
Transfer-Encoding: chunked
Vary: Accept-Encoding
```
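The method name works the same way for other HTTP verbs; for example, a DELETE request against a hypothetical resource:

```
http DELETE example.org/todos/7
```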
### 7) How to submit a form?

Submit a form using the following format. A POST request is used to send data to the server, for example customer information or file uploads, typically from an HTML form:
```
# http -f POST Ubuntu18.2daygeek.com hello='World'
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 3138
Content-Type: text/html
Date: Tue, 09 Apr 2019 06:48:12 GMT
ETag: "2aa6-5844bf1b047fc-gzip"
Keep-Alive: timeout=5, max=100
Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
Run the following command to see the request that is being sent:
```
# http -v Ubuntu18.2daygeek.com
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: ubuntu18.2daygeek.com
User-Agent: HTTPie/0.9.8

hello=World

HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Encoding: gzip
Content-Length: 3138
Content-Type: text/html
Date: Tue, 09 Apr 2019 06:48:30 GMT
ETag: "2aa6-5844bf1b047fc-gzip"
Keep-Alive: timeout=5, max=100
Last-Modified: Sun, 17 Mar 2019 15:29:55 GMT
Server: Apache/2.4.29 (Ubuntu)
Vary: Accept-Encoding
```
### 8) HTTP authentication?

The currently supported authentication schemes are Basic and Digest.

Basic authentication:
```
$ http -a username:password example.org
```
Digest authentication:
```
$ http -A digest -a username:password example.org
```
Password prompt:
```
$ http -a username example.org
```
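Persistent sessions are among the features listed earlier; as a sketch, they let you avoid retyping credentials on every request (the session name here is arbitrary):

```
# the first request stores the credentials (and any cookies) under the named session
http --session=mysession -a username:password example.org
# later requests replay them automatically
http --session=mysession example.org
```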
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/httpie-curl-wget-alternative-http-client-linux/

Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translated by: [zgj1024](https://github.com/zgj1024)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/best-4-command-line-download-managers-accelerators-for-linux/
[2]: https://www.2daygeek.com/aria2-linux-command-line-download-utility-tool/
[3]: https://www.2daygeek.com/axel-linux-command-line-download-accelerator/
[4]: https://www.2daygeek.com/wget-linux-command-line-download-utility-tool/
[5]: https://www.2daygeek.com/curl-linux-command-line-download-manager/
[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/