Merge pull request #12 from LCTT/master

Update 2018418
This commit is contained in:
zyk 2018-04-18 22:45:34 +08:00 committed by GitHub
commit bca07254f9
43 changed files with 4497 additions and 910 deletions

View File

@ -0,0 +1,81 @@
面向企业的最佳 Linux 发行版
====
在这篇文章中,我将分享企业环境下顶级的 Linux 发行版。其中一些发行版用于服务器和云环境以及桌面任务。所有这些可选的 Linux 具有的一个共同点是它们都是企业级 Linux 发行版 —— 所以你可以期待更高程度的功能性,当然还有支持程度。
### 什么是企业级的 Linux 发行版?
企业级的 Linux 发行版可以归结为以下内容 —— 稳定性和支持。在企业环境中,使用的 Linux 版本必须满足这两点。稳定性意味着所提供的软件包既稳定又可用,同时仍然保持预期的安全性。
企业级的支持因素意味着有一个可靠的支持机制。有时这是单一的(官方)来源,如公司。在其他情况下,它可能是一个非营利性的治理机构,向优秀的第三方支持供应商提供可靠的建议。很明显,前者是最好的选择,但两者都可以接受。
### Red Hat 企业级 Linux(RHEL)
[Red Hat][1] 有很多很棒的产品,都有企业级的支持来保证可用。其核心重点如下:
- Red Hat 企业级 Linux 服务器:这是一组服务器产品,包括从容器托管到 SAP 服务的所有内容,还有其他衍生的服务器。
- Red Hat 企业级 Linux 桌面:这些是严格控制的用户环境,运行 Red Hat Linux提供基本的桌面功能。这些功能包括访问最新的应用程序如 web 浏览器、电子邮件、LibreOffice 等。
- Red Hat 企业级 Linux 工作站:这基本上是 Red Hat 企业级 Linux 桌面,但针对高性能任务进行了优化。它也非常适合于大型部署和持续管理。
#### 为什么选择 Red Hat 企业级 Linux
Red Hat 是一家非常成功的大型公司,销售围绕 Linux 的服务。基本上Red Hat 从那些想要避免供应商锁定和其他相关问题的公司赚钱。这些公司认识到聘用开源软件专家和管理他们的服务器和其他计算需求的价值。一家公司只需要购买订阅来让 Red Hat 做支持工作就行。
Red Hat 也是一个可靠的社会公民。他们赞助开源项目以及像 OpenSource.com 这样的 FoSS 支持网站LCTT 译注FoSS 是 Free and Open Source Software 的缩写,意为自由及开源软件),并为 Fedora 项目提供支持。Fedora 不是由 Red Hat 所有的,而是由它赞助开发的。这使 Fedora 得以发展,同时也使 Red Hat 受益匪浅。Red Hat 可以从 Fedora 项目中获得他们想要的,并将其用于他们的企业级 Linux 产品中。 就目前来看Fedora 充当了红帽企业 Linux 的上游渠道。
### SUSE Linux 企业版本
[SUSE][2] 是一家非常棒的公司,为企业用户提供了可靠的 Linux 选择。SUSE 的产品类似于 Red Hat,桌面和服务器都是该公司所关注的。从我自己使用 SUSE 的经验来看,我相信 YaST 已经证明了,对于希望在工作场所使用 Linux 操作系统的非 Linux 管理员而言,它拥有巨大的优势。YaST 为那些原本需要一些基本的 Linux 命令行知识才能完成的任务提供了一个友好的 GUI。
SUSE 的核心重点如下:
- SUSE Linux 企业级服务器SLES包括任务特定的解决方案从云到 SAP以及任务关键计算和基于软件的数据存储。
- SUSE Linux 企业级桌面:对于那些希望为员工提供可靠的 Linux 工作站的公司来说SUSE Linux 企业级桌面是一个不错的选择。和 Red Hat 一样SUSE 通过订阅模式来对其提供支持。你可以选择三个不同级别的支持。
#### 为什么选择 SUSE Linux 企业版?
SUSE 是一家围绕 Linux 销售服务的公司,但他们仍然通过专注于简化操作来实现这一目标。从他们的网站到其提供的 Linux 发行版,重点是易用性,而不会牺牲安全性或可靠性。尽管在美国毫无疑问 Red Hat 是服务器的标准,但 SUSE 作为公司和开源社区的贡献成员都做得很好。
我还想说,SUSE 并不会把自己搞得太严肃,当你在 IT 领域建立联系的时候,这是一件很棒的事情。从他们关于 Linux 的有趣音乐视频,到 SUSE 展台上供人合影逗趣的 Gecko,SUSE 将自己塑造成简单易懂和平易近人的形象。
### Ubuntu LTS Linux
[Ubuntu Long Term Release][3] LTS Linux 是一个简单易用的企业级 Linux 发行版。Ubuntu 看起来比上面提到的其他发行版更新更频繁有时候也更不稳定。但请不要误解Ubuntu LTS 版本被认为是相当稳定的,不过,我认为一些专家可能不太同意它们是安全可靠的。
#### Ubuntu 的核心重点如下:
- Ubuntu 桌面版:毫无疑问,Ubuntu 桌面非常简单,可以快速地学习并运行。也许在高级安装选项中缺少一些东西,但这使得其更简单直白。作为额外的奖励,Ubuntu 相比其他版本有更多的软件包(除了它的父发行版 Debian 之外)。我认为 Ubuntu 真正的亮点在于,你可以在网上找到许多销售 Ubuntu 的厂商,包括服务器、台式机和笔记本电脑。
- Ubuntu 服务器版这包括服务器、云和容器产品。Ubuntu 还提供了 Juju 云“应用商店”这样一个有趣的概念。对于任何熟悉 Ubuntu 或 Debian 的人来说Ubuntu 服务器都很有意义。对于这些人来说,它就像手套一样,为你提供了你已经熟知并喜爱的命令行工具。
- Ubuntu IoT:最近,Ubuntu 的开发团队已经把目标瞄准了“物联网”(IoT)的创建解决方案。包括数字标牌、机器人技术和物联网网关。我的猜测是,我们将在 Ubuntu 中看到大量增长的物联网用户来自企业,而不是普通家庭用户。
#### 为什么选择 Ubuntu LTS
社区是 Ubuntu 最大的优点。除了在已经拥挤的服务器市场上的巨大增长之外,它在普通用户中也同样广受欢迎。Ubuntu 的开发和用户社区是坚如磐石的。因此,虽然它可能被认为比其他企业版更不稳定,但是我发现将 Ubuntu LTS 安装锁定到 “security updates only” 模式下提供了非常稳定的体验。
### CentOS 或者 Scientific Linux 怎么样呢?
首先,我们来谈谈把 [CentOS][4] 作为企业发行版的问题:如果你有自己的内部支持团队来维护它,那么安装 CentOS 是一个很好的选择。毕竟,它与 Red Hat 企业级 Linux 兼容,并提供了与 Red Hat 产品相同级别的稳定性。不幸的是,它不能完全取代 Red Hat 支持订阅。
那么 [Scientific Linux][5] 呢?这个发行版怎么样?好吧,它和 CentOS 一样,也是基于 Red Hat Linux 的。但与 CentOS 不同的是,它与 Red Hat 没有任何隶属关系。Scientific Linux 从一开始就有一个目标 —— 为世界各地的实验室提供一个通用的 Linux 发行版。今天,Scientific Linux 基本上是 Red Hat 减去所包含的商标资料。
这两种发行版都不能真正地与 Red Hat 互换,因为它们缺少 Red Hat 支持组件。
哪一个是顶级企业发行版?我认为这取决于你需要为自己确定的许多因素:订阅范围、可用性、成本、服务和提供的功能。这些是每个公司必须自己决定的因素。就我个人而言,我认为 Red Hat 在服务器上获胜,而 SUSE 在桌面环境中轻松获胜,但这只是我的意见 —— 你不同意?点击下面的评论部分,让我们来谈谈它。
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/best-linux-distros-for-the-enterprise.html
作者:[Matt Hartley][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.redhat.com/en
[2]:https://www.suse.com/
[3]:http://releases.ubuntu.com/16.04/
[4]:https://www.centos.org/
[5]:https://www.scientificlinux.org/

View File

@ -74,7 +74,6 @@ gunzip -c file1.gz > /home/himanshu/file1
> `gunzip` 在命令行接受一系列的文件,并且将每个文件内容以正确的魔法数开始,且后缀名为 `.gz`、`-gz`、`.z`、`-z` 或 `_z` (忽略大小写)的压缩文件,用未压缩的文件替换它,并删除其原扩展名。 `gunzip` 也可识别一些特殊扩展名的压缩文件,如 `.tgz` 和 `.taz` 分别是 `.tar.gz` 和 `.tar.Z` 的缩写。在压缩时,`gzip` 在必要情况下使用 `.tgz` 作为扩展名,而不是只截取掉 `.tar` 后缀。
> `gunzip` 目前可以解压 `gzip`、`zip`、`compress`、`compress -H`(`pack`)产生的文件。`gunzip` 自动检测输入文件格式。在使用前两种压缩格式时,`gunzip` 会检验 32 位循环冗余校验码(CRC)。对于 pack 包,`gunzip` 会检验压缩长度。标准压缩格式在设计上不允许相容性检测。不过 `gunzip` 有时可以检测出坏的 `.Z` 文件。如果你解压 `.Z` 文件时出错,不要因为标准解压没报错就认为 `.Z` 文件一定是正确的。这通常意味着标准解压过程不检测它的输入而是直接产生一个错误的输出。SCO 的 `compress -H` 格式(lzh 压缩方法)不包括 CRC 校验码,但也允许一些相容性检查。
```
### 结语

View File

@ -0,0 +1,106 @@
可怕的万圣节 Linux 命令
======
![](https://images.idgesg.net/images/article/2017/10/animal-skeleton-100739983-large.jpg)
虽然现在不是万圣节,也可以关注一下 Linux 可怕的一面。什么命令可能会显示鬼、巫婆和僵尸的图像?哪个会鼓励“不给糖果就捣蛋”的精神?
### crypt
好吧,我们一直都有 `crypt` 这个命令。尽管名字如此,crypt 并不是一个地窖,也不是垃圾文件的埋葬坑,而是一个加密文件内容的命令。现在,`crypt` 通常用一个脚本实现,它通过调用一个名为 `mcrypt` 的二进制文件来模拟以前的 `crypt` 命令,以此完成工作。直接使用 `mcrypt` 命令是更好的选择。
```
$ mcrypt x
Enter the passphrase (maximum of 512 characters)
Please use a combination of upper and lower case letters and numbers.
Enter passphrase:
Enter passphrase:
File x was encrypted.
```
请注意,`mcrypt` 命令会创建第二个扩展名为 `.nc` 的文件。它不会覆盖你正在加密的文件。
`mcrypt` 命令有密钥大小和加密算法的选项。你也可以在选项中指定密钥,但 `mcrypt` 命令不鼓励这样做。
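作为补充,下面给出一个小示例(假定你的 mcrypt 版本提供常见的 `--list`、`-a` 和 `-d` 选项,具体请以 `man mcrypt` 为准):
```
# 列出当前 mcrypt 支持的加密算法
mcrypt --list
# 用指定的算法加密文件(这里的算法名仅作演示)
mcrypt -a rijndael-256 x
# 解密生成的 .nc 文件,还原出原始文件
mcrypt -d x.nc
```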
### kill
还有 `kill` 命令 - 当然并不是指谋杀,而是用来强制和非强制地结束进程,这取决于正确终止它们的要求。当然,Linux 并不止于此。相反,它有各种 `kill` 命令来终止进程。我们有 `kill`、`pkill`、`killall`、`killpg`、`rfkill`、`skill`(读作 es-kill)、`tgkill`、`tkill` 和 `xkill`。
```
$ killall runme
[1] Terminated ./runme
[2] Terminated ./runme
[3]- Terminated ./runme
[4]+ Terminated ./runme
```
### shred
Linux 系统也支持一个名为 `shred` 的命令。`shred` 命令会覆盖文件以隐藏其以前的内容,并确保使用硬盘恢复工具无法恢复它们。请记住,`rm` 命令基本上只是删除文件在目录文件中的引用,但不一定会从磁盘上删除内容或覆盖它。`shred` 命令覆盖文件的内容。
```
$ shred dupes.txt
$ more dupes.txt
▒oΛ▒▒9▒lm▒▒▒▒▒o▒1־▒▒f▒f▒▒▒i▒▒h^}&▒▒▒{▒▒
```
### 僵尸
虽然不是命令,但僵尸在 Linux 系统上是很顽固的存在。僵尸基本上是没有完全清理掉的死亡进程的遗骸。进程_不应该_这样工作 —— 让死亡进程四处游荡,而不是简单地让它们死亡并进入数字天堂,所以僵尸的存在表明了让他们遗留于此的进程有一些缺陷。
一个简单的方法来检查你的系统是否有僵尸进程遗留,看看 `top` 命令的标题行。
```
$ top
top - 18:50:38 up 6 days, 6:36, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 171 total, 1 running, 167 sleeping, 0 stopped, 3 zombie `< ==`
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 99.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2003388 total, 250840 free, 545832 used, 1206716 buff/cache
KiB Swap: 9765884 total, 9765764 free, 120 used. 1156536 avail Mem
```
可怕!上面显示有三个僵尸进程。
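如果想进一步看看这些僵尸进程具体是哪些,可以用 `ps` 按进程状态过滤(下面只是一个通用的写法,STAT 列中含 `Z` 的即为僵尸进程):
```
# 列出状态为 Z(zombie)的进程及其父进程 PID
ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'
```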
### at midnight
有时会在万圣节这么说,死者的灵魂从日落开始游荡,直到午夜。Linux 可以通过 `at midnight` 命令跟踪它们的离开。`at` 用于安排作业在下一次到达指定时间时运行,它的作用类似于一次性的 cron。
```
$ at midnight
warning: commands will be executed using /bin/sh
at> echo 'the spirits of the dead have left'
at> <EOT>
job 3 at Thu Oct 31 00:00:00 2017
```
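顺带一提,排入队列的 `at` 作业可以用 `atq` 查看、用 `atrm` 删除(这两个命令随 at 软件包一起提供):
```
# 查看当前排队等待执行的 at 作业
atq
# 按编号删除某个作业,编号来自 atq 的输出
atrm 3
```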
### 守护进程
Linux 系统也高度依赖守护进程 —— 在后台运行的进程,它们提供了系统的许多功能。许多守护进程的名称以 “d” 结尾。这个 “d” 代表<ruby>守护进程<rt>daemon</rt></ruby>,表明这个进程一直运行并支持一些重要功能。有的守护进程则直接把单词 “daemon” 用在名字里。
```
$ ps -ef | grep sshd
root 1142 1 0 Oct19 ? 00:00:00 /usr/sbin/sshd -D
root 25342 1142 0 18:34 ? 00:00:00 sshd: shs [priv]
$ ps -ef | grep daemon | grep -v grep
message+ 790 1 0 Oct19 ? 00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 836 1 0 Oct19 ? 00:00:02 /usr/lib/accountsservice/accounts-daemon
```
### 万圣节快乐!
在 [Facebook][1] 和 [LinkedIn][2] 上加入 Network World 社区来对主题进行评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3235219/linux/scary-linux-commands-for-halloween.html
作者:[Sandra Henry-Stocker][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@ -1,14 +1,17 @@
CIO 真正需要 DevOps 团队做什么?
======
> DevOps 团队需要 IT 领导者关注三件事:沟通、技术债务和信任。
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_cloud_2.png?itok=wGdeAHVn)
IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所要求的文化挑战中学习。但是,当你的 DevOps 团队在面对长期或短期的团结协作挑战而进行调整时 —— 一个 CIO 真正需要他们做的是什么呢?
在我与 DevOps 团队成员的谈话中,我听到的其中一些内容可能会让你感到非常意外。DevOps 专家(无论是内部团队的还是外部团队的)都希望你将下列的事情放在 CIO 优先关注的级别。
### 1. 沟通
第一个也是最重要的一个,DevOps 专家需要面对面的沟通。一个经验丰富的 DevOps 团队非常了解当前 DevOps 的趋势,以及成功和失败的经验,并且他们非常乐意去分享这些信息。表达 DevOps 的概念是很困难的,因此,要在这种新的工作关系中保持开放,定期(不用担心,不必每周)讨论有关你的 IT 的当前状态,如何评价你的沟通环境,以及你的整体的 IT 产业。
**[想从领导 DevOps 的 CIO 们处学习更多的知识吗?查看我们的综合资源,[DevOps: IT 领导者指南][3]。 ]**
相反,你应该准备好与 DevOps 团队去共享当前的业务需求和目标。业务不再是独立于 IT 的东西:它们现在是驱动 IT 发展的重要因素,并且 IT 决定了你的业务需求和目标运行的效果如何。
@ -16,19 +19,17 @@ IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所
### 2. 降低技术债务
第二,力争更好地理解技术债务,并在 DevOps 中努力降低它。你的 DevOps 团队都工作于一线。这里,“技术债务”是指在一个庞大的、不可持续发展的环境之中,通过维护和增加新功能而占用的人力资源和基础设备资源(想想 Rube Goldberg 式的复杂装置)。
CIO 常见的问题包括:
* 为什么我们要用一种新方法去做这件事情?
* 为什么我们要在它上面花费时间和金钱?
* 如果这里没有新功能,只是现有组件实现了自动化,那么我们的收益是什么?
“如果没有坏,就不要去修理它”,这样的想法是可以理解的。但是,如果你正在路上好好地开车,而每个人都加速超过你,这时候,你的环境就已经出了问题 —— 你是在持续投入宝贵的资源去支撑或扩张一个拼凑起来的环境。
选择妥协,并且一个接一个地打补丁,以这种方式去处理每个独立的问题,结果将从一开始就变得很糟糕 —— 就像在一个不能支撑建筑物的地基上,一层摞一层地往上堆。事实上,这种方法就像不断地在电脑中插入坏磁盘一样。迟早有一天,面对出现的问题,你将会毫无办法。在外部持续增加的压力下,整个事情将变得一团糟,完全吞噬掉你的资源。
这种情况下,解决方案就是:自动化。使用自动化的结果是良好的可伸缩性 —— 每个维护人员在 IT 环境的维护和增长方面花费更少的努力。如果增加人力资源是实现业务增长的唯一办法,那么,可伸缩性就是白日做梦。
@ -36,7 +37,7 @@ IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所
### 3. 信任
最后,相信你的 DevOps 团队,并且一定要理解他们。DevOps 专家也知道这个要求很难,但是他们必须得到你的强大支持和你积极参与的意愿。因为 DevOps 团队持续改进你的 IT 环境,他们自身也在不断地适应这些变化的技术,而这些变化通常正是“你要去学习的经验”。
倾听,倾听,倾听他们,并且相信他们。DevOps 的改变是非常有价值的,而且也是值得去投入时间和金钱的。它可以提高效率、生产力和业务响应能力。信任你的 DevOps 团队,并且给予他们更多的自由,实现更高效率的 IT 改进。
@ -48,7 +49,7 @@ via: https://enterprisersproject.com/article/2017/12/what-devops-teams-really-ne
作者:[John Allessio][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,175 @@
使用 GitHub 和 Python 实现持续部署
======
![](https://fedoramagazine.org/wp-content/uploads/2018/03/cd-github-python-945x400.jpg)
借助 GitHub 的<ruby>网络钩子<rt>webhook</rt></ruby>,开发者可以创建很多有用的服务。从触发一个 Jenkins 实例上的 CI持续集成 任务到配置云中的机器,几乎有着无限的可能性。这篇教程将展示如何使用 Python 和 Flask 框架来搭建一个简单的持续部署CD服务。
在这个例子中的持续部署服务是一个简单的 Flask 应用,其带有接受 GitHub 的<ruby>网络钩子<rt>webhook</rt></ruby>请求的 REST <ruby>端点<rt>endpoint</rt></ruby>。在验证每个请求都来自正确的 GitHub 仓库后,服务器将<ruby>拉取<rt>pull</rt></ruby>更改到仓库的本地副本。这样每次一个新的<ruby>提交<rt>commit</rt></ruby>推送到远程 GitHub 仓库,本地仓库就会自动更新。
### Flask web 服务
用 Flask 搭建一个小的 web 服务非常简单。这里可以先看看项目的结构。
```
├── app
│ ├── __init__.py
│ └── webhooks.py
├── requirements.txt
└── wsgi.py
```
首先,创建应用。应用代码在 `app` 目录下。
两个文件(`__init__.py` 和 `webhooks.py`)构成了 Flask 应用。前者包含有创建 Flask 应用并为其添加配置的代码。后者有<ruby>端点<rt>endpoint</rt></ruby>逻辑。这是该应用接收 GitHub 请求数据的地方。
这里是 `app/__init__.py` 的内容:
```
import os
from flask import Flask
from .webhooks import webhook
def create_app():
""" Create, configure and return the Flask application """
app = Flask(__name__)
app.config['GITHUB_SECRET'] = os.environ.get('GITHUB_SECRET')
app.config['REPO_PATH'] = os.environ.get('REPO_PATH')
app.register_blueprint(webhook)
return(app)
```
该函数创建了两个配置变量:
* `GITHUB_SECRET` 保存一个密码,用来认证 GitHub 请求。
* `REPO_PATH` 保存了自动更新的仓库路径。
这份代码使用<ruby>[Flask 蓝图][1]<rt>Flask Blueprints</rt></ruby>来组织应用的<ruby>端点<rt>endpoint</rt></ruby>。使用蓝图可以对 API 进行逻辑分组,使应用程序更易于维护。通常认为这是一种好的做法。
这里是 `app/webhooks.py` 的内容:
```
import hmac
from flask import request, Blueprint, jsonify, current_app
from git import Repo
webhook = Blueprint('webhook', __name__, url_prefix='')
@webhook.route('/github', methods=['POST'])
def handle_github_hook():
""" Entry point for github webhook """
signature = request.headers.get('X-Hub-Signature')
sha, signature = signature.split('=')
secret = str.encode(current_app.config.get('GITHUB_SECRET'))
hashhex = hmac.new(secret, request.data, digestmod='sha1').hexdigest()
if hmac.compare_digest(hashhex, signature):
repo = Repo(current_app.config.get('REPO_PATH'))
origin = repo.remotes.origin
origin.pull('--rebase')
commit = request.json['after'][0:6]
print('Repository updated with commit {}'.format(commit))
return jsonify({}), 200
```
首先,代码创建了一个新的蓝图 `webhook`。然后,它使用 Flask 的 `route` 装饰器为蓝图添加了一个端点。任何发往 `/github` 端点的 POST 请求都会调用这个路由。
#### 验证请求
当服务在该端点上接到请求时,首先它必须验证该请求是否来自 GitHub 以及来自正确的仓库。GitHub 在请求头的 `X-Hub-Signature` 中提供了一个签名。该签名是用密码(`GITHUB_SECRET`)作为密钥、对请求体计算 HMAC 并使用 `sha1` 哈希得到的十六进制摘要。
为了验证请求,服务需要在本地计算签名并与请求头中收到的签名做比较。这可以由 `hmac.compare_digest` 函数完成。
#### 自定义钩子逻辑
在验证请求后,现在就可以处理了。这篇教程使用 [GitPython][3] 模块来与 git 仓库进行交互。GitPython 模块中的 `Repo` 对象用于访问远程仓库 `origin`。该服务在本地拉取 `origin` 仓库的最新更改,还用 `--rebase` 选项来避免合并的问题。
调试打印语句显示了从请求体收到的短提交哈希。这个例子展示了如何使用请求体。更多关于请求体的可用数据的信息,请查询 [GitHub 文档][4]。
最后该服务返回了一个空的 JSON 字符串和 200 的状态码。这用于告诉 GitHub 的网络钩子服务已经收到了请求。
### 部署服务
为了运行该服务,这个例子使用 [gunicorn][5] web 服务器。首先安装服务依赖。在支持的 Fedora 服务器上,以 [sudo][6] 运行这条命令:
```
sudo dnf install python3-gunicorn python3-flask python3-GitPython
```
现在编辑 gunicorn 使用的 `wsgi.py` 文件来运行该服务:
```
from app import create_app
application = create_app()
```
为了部署服务,使用以下命令克隆这个 git [仓库][7]或者使用你自己的 git 仓库:
```
git clone https://github.com/cverna/github_hook_deployment.git /opt/
```
下一步是配置服务所需的环境变量。运行这些命令:
```
export GITHUB_SECRET=asecretpassphraseusebygithubwebhook
export REPO_PATH=/opt/github_hook_deployment/
```
这篇教程使用网络钩子服务的 GitHub 仓库,但你可以使用你想要的不同仓库。最后,使用这些命令开启该 web 服务:
```
cd /opt/github_hook_deployment/
gunicorn --bind 0.0.0.0 wsgi:application --reload
```
这些选项中绑定了 web 服务的 IP 地址为 `0.0.0.0`,意味着它将接收来自任何主机的请求。选项 `--reload` 确保了当代码更改时重启 web 服务。这就是持续部署的魔力所在。每次接收到 GitHub 请求时将拉取仓库的最近更新,同时 gunicorn 检测这些更改并且自动重启服务。
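服务运行起来之后,你可以在本地手工构造一个带签名的请求来验证它是否工作。下面只是一个示意性的小测试,其中的负载内容和端口都是假设值,签名的计算方式对应前文描述的 HMAC-SHA1 校验逻辑:
```
# 使用与服务相同的密码,对示例负载计算 HMAC-SHA1 签名
PAYLOAD='{"after":"0123456789abcdef"}'
SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha1 -hmac "$GITHUB_SECRET" | awk '{print $NF}')
# 模拟 GitHub 的网络钩子请求,签名放在 X-Hub-Signature 请求头中
curl -X POST http://localhost:8000/github \
     -H "Content-Type: application/json" \
     -H "X-Hub-Signature: sha1=$SIG" \
     -d "$PAYLOAD"
```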
**注意:** 为了能接收到 GitHub 请求,web 服务必须部署到具有公有 IP 地址的服务器上。做到这点的简单方法就是使用你最喜欢的云提供商,比如 DigitalOcean、AWS、Linode 等。
### 配置 GitHub
这篇教程的最后一部分是配置 GitHub 来发送网络钩子请求到 web 服务上。这是持续部署的关键。
从你的 GitHub 仓库的设置中,选择 Webhook 菜单并且点击“Add Webhook”。输入以下信息
* “Payload URL” 服务的 URL,比如 `http://public_ip_address:8000/github`
* “Content type” 选择 “application/json”
* “Secret” 前面定义的 `GITHUB_SECRET` 环境变量
然后点击“Add Webhook” 按钮。
![][8]
现在每当该仓库发生推送事件时GitHub 将向服务发送请求。
### 总结
这篇教程向你展示了如何写一个基于 Flask 的 web 服务,它接收 GitHub 的网络钩子请求并实现持续部署。现在你应该能以本教程作为起点来搭建对自己有用的服务。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/continuous-deployment-github-python/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[kimii](https://github.com/kimii)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:http://flask.pocoo.org/docs/0.12/blueprints/
[2]:https://en.wikipedia.org/wiki/HMAC
[3]:https://gitpython.readthedocs.io/en/stable/index.html
[4]:https://developer.github.com/v3/activity/events/types/#webhook-payload-example-26
[5]:http://gunicorn.org/
[6]:https://fedoramagazine.org/howto-use-sudo/
[7]:https://github.com/cverna/github_hook_deployment.git
[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/Screenshot-2018-3-26-cverna-github_hook_deployment1.png

View File

@ -1,3 +1,5 @@
Translating by valoniakim
How to start an open source program in your company
======

View File

@ -1,3 +1,5 @@
translating by MZqk
What's next in IT automation: 6 trends to watch
======

View File

@ -0,0 +1,37 @@
Easily Run And Integrate AppImage Files With AppImageLauncher
======
Did you ever download an AppImage file and you didn't know how to use it? Or maybe you know how to use it but you have to navigate to the folder where you downloaded the .AppImage file every time you want to run it, or manually create a launcher for it.
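For comparison, the manual route usually looks something like this (a generic sketch; the file name below is only an example):
```
# make the downloaded AppImage executable, then run it from wherever it was saved
chmod +x ~/Downloads/SomeApp.AppImage
~/Downloads/SomeApp.AppImage
```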
With AppImageLauncher, these are problems of the past. The application lets you **easily run AppImage files, without having to make them executable**. But its most interesting feature is easily integrating AppImages with your system: **AppImageLauncher can automatically add an AppImage application shortcut to your desktop environment's application launcher / menu (including the app icon and proper description).**
Without making the downloaded Kdenlive AppImage executable manually, the first time I double click it (having AppImageLauncher installed), AppImageLauncher presents two options:
`Run once` or `Integrate and run`.
Clicking on Integrate and run, the AppImage is copied to the `~/.bin/` folder (hidden folder in the home directory) and is added to the menu, then the app is launched.
**Removing it is just as simple** , as long as the desktop environment you're using has support for desktop actions. For example, in Gnome Shell, simply **right click the application icon in the Activities Overview and select** `Remove from system` :
The AppImageLauncher GitHub page says that the application only supports Debian-based systems for now (this includes Ubuntu and Linux Mint) because it integrates deeply with the system. The application is currently in heavy development, and there are already issues opened by its developer to build RPM packages, so Fedora / openSUSE support might be added in the not too distant future.
### Download AppImageLauncher
The AppImageLauncher download page provides binaries for Debian, Ubuntu or Linux Mint (64bit), as well as a 64bit AppImage. The source is also available.
[Download AppImageLauncher][1]
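If you pick the .deb package, installing it is the usual one-liner (the file name below is illustrative; use the one you actually downloaded):
```
# install the downloaded package with dpkg
sudo dpkg -i appimagelauncher_*_amd64.deb
```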
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/04/easily-run-and-integrate-appimage-files.html
作者:[Logix][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/TheAssassin/AppImageLauncher/releases
[2]:https://kdenlive.org/download/

View File

@ -0,0 +1,71 @@
Management, from coordination to collaboration
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab2.png?itok=uMO9zn5U)
Any organization is fundamentally a pattern of interactions between people. The nature of those interactions—their quality, their frequency, their outcomes—is the most important product an organization can create. Perhaps counterintuitively, recognizing this fact has never been more important than it is today—a time when digital technologies are reshaping not only how we work but also what we do when we come together.
And yet many organizational leaders treat those interactions between people as obstacles or hindrances to avoid or eliminate, rather than as the powerful sources of innovation they really are.
That's why we're observing that some of the most successful organizations today are those capable of shifting the way they think about the value of the interactions in the workplace. And to do that, they've radically altered their approach to management and leadership.
### Moving beyond mechanical management
Simply put, traditionally managed organizations treat unanticipated interactions between stakeholders as potentially destructive forces—and therefore as costs to be mitigated.
This view has a long, storied history in the field of economics. But it's perhaps nowhere more clear than in the early writing of Nobel Prize-winning economist [Ronald Coase][1]. In 1937, Coase published "[The Nature of the Firm][2]," an essay about the reasons people organized into firms to work on large-scale projects—rather than tackle those projects alone. Coase argued that when the cost of coordinating workers together inside a firm is less than that of similar market transactions outside, people will tend to organize so they can reap the benefits of lower operating costs.
But at some point, Coase's theory goes, the work of coordinating interactions between so many people inside the firm actually outweighs the benefits of having an organization in the first place. The complexity of those interactions becomes too difficult to handle. Management, then, should serve the function of decreasing this complexity. Its primary goal is coordination, eliminating the costs associated with messy interpersonal interactions that could slow the firm and reduce its efficiency. As one Fortune 100 CEO recently told me, "Failures happen most often around organizational handoffs."
This makes sense to people practicing what I've called "[mechanical management][3]," where managing people is the act of keeping them focused on specific, repeatable, specialized tasks. Here, management's key function is optimizing coordination costs—ensuring that every specialized component of the finely-tuned organizational machine doesn't impinge on the others and slow them down. Managers work to avoid failures by coordinating different functions across the organization (accounts payable, research and development, engineering, human resources, sales, and so on) to get them to operate toward a common goal. And managers create value by controlling information flows, intervening only when functions become misaligned.
Today, when so many of these traditionally well-defined tasks have become automated, value creation is much more a result of novel innovation and problem solving—not finding new ways to drive efficiency from repeatable processes. But numerous studies demonstrate that innovative, problem-solving activity occurs much more regularly when people work in cross-functional teams—not as isolated individuals or groups constrained by single-functional silos. This kind of activity can lead to what some call "accidental integration": the serendipitous innovation that occurs when old elements combine in new and unforeseen ways.
That's why working collaboratively has now become a necessity that managers need to foster, not eliminate.
### From coordination to collaboration
Reframing the value of the firm—from something that coordinated individual transactions to something that produces novel innovations—means rethinking the value of the relations at the core of our organizations. And that begins with reimagining the task of management, which is no longer concerned primarily with minimizing coordination costs but maximizing cooperation opportunities.
Too few of our tried-and-true management practices have this goal. If they're seeking greater innovation, managers need to encourage more interactions between people in different functional areas, not fewer. A cross-functional team may not be as efficient as one composed of people with the same skill sets. But a cross-functional team is more likely to be the one connecting points between elements in your organization that no one had ever thought to connect (the one more likely, in other words, to achieve accidental integration).
I have three suggestions for leaders interested in making this shift:
First, define organizations around processes, not functions. We've seen this strategy work in enterprise IT, for example, in the case of [DevOps][4], where teams emerge around end goals (like a mobile application or a website), not singular functions (like developing, testing, and production). In DevOps environments, the same team that writes the code is responsible for maintaining it once it's in production. (We've found that when the same people who write the code are the ones woken up when it fails at 3 a.m., we get better code.)
Second, define work around the optimal organization rather than the organization around the work. Amazon is a good example of this strategy. Teams usually stick to the "[Two Pizza Rule][5]" when establishing optimal conditions for collaboration. In other words, Amazon leaders have determined that the best-sized team for maximum innovation is about 10 people, or a group they can feed with two pizzas. If the problem gets bigger than that two-pizza team can handle, they split the problem into two simpler problems, dividing the work between multiple teams rather than adding more people to the single team.
And third, to foster creative behavior and really get people cooperating with one another, do whatever you can to cultivate a culture of honest and direct feedback. Be straightforward and, as I wrote in The Open Organization, let the sparks fly; have frank conversations and let the best ideas win.
### Let it go
I realize that asking managers to significantly shift the way they think about their roles can lead to fear and skepticism. Some managers define their performance (and their very identities) by the control they exert over information and people. But the more you dictate the specific ways your organization should do something, the more static and brittle that activity becomes. Agility requires letting go—giving up a certain degree of control.
Front-line managers will see their roles morph from dictating and monitoring to enabling and supporting. Instead of setting individual-oriented goals, they'll need to set group-oriented goals. Instead of developing individual incentives, they'll need to consider group-oriented incentives.
Because ultimately, their goal should be to [create the context in which their teams can do their best work][6].
[Subscribe to our weekly newsletter][7] to learn more about open organizations.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/4/management-coordination-collaboration
作者:[Jim Whitehurst][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/remyd
[1]:https://news.uchicago.edu/article/2013/09/02/ronald-h-coase-founding-scholar-law-and-economics-1910-2013
[2]:http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.1937.tb00002.x/full
[3]:https://opensource.com/open-organization/18/2/try-learn-modify
[4]:https://enterprisersproject.com/devops
[5]:https://www.fastcompany.com/3037542/productivity-hack-of-the-week-the-two-pizza-approach-to-productive-teamwork
[6]:https://opensource.com/open-organization/16/3/what-it-means-be-open-source-leader
[7]:https://opensource.com/open-organization/resources/newsletter

View File

@ -0,0 +1,79 @@
For project safety back up your people, not just your data
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)
The [FSF][1] was founded in 1985, Perl in 1987 ([happy 30th birthday, Perl][2]!), and Linux in 1991. The [term open source][3] and the [Open Source Initiative][4] both came into being in 1998 (and [turn 20 years old][5] in 2018). Since then, free and open source software has grown to become the default choice for software development, enabling incredible innovation.
We, the greater open source community, have come of age. Millions of open source projects exist today, and each year the [GitHub Octoverse][6] reports millions of new public repositories. We rely on these projects every day, and many of us could not operate our services or our businesses without them.
So what happens when the leaders of these projects move on? How can we help ease those transitions while ensuring that the projects thrive? By teaching and encouraging **succession planning**.
### What is succession planning?
Succession planning is a popular topic among business executives, boards of directors, and human resources professionals, but it doesn't often come up with maintainers of free and open source projects. Because the concept is common in business contexts, that's where you'll find most resources and advice about establishing a succession plan. As you might expect, most of these articles aren't directly applicable to FOSS, but they do form a springboard from which we can launch our own ideas about succession planning.
According to [Wikipedia][7]:
> Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire, or die.
In my opinion, this definition doesn't apply very well to free and open source software projects. I primarily object to the use of the term leaders. For the collaborative projects of FOSS, everyone can be some form of leader. Roles other than "project founder" or "benevolent dictator for life" are just as important. Any project role that is measured by bus factor is one that can benefit from succession planning.
> A project's bus factor is the number of team members who, if hit by a bus, would endanger the smooth operation of the project. The smallest and worst bus factor is 1: when only a single person's loss would put the project in jeopardy. It's a somewhat grim but still very useful concept.
I propose that instead of viewing succession planning as a leadership pipeline, free and open source projects should view it as a skills pipeline. What sorts of skills does your project need to continue functioning well, and how can you make sure those skills always exist in your community?
### Benefits of succession planning
When I talk to project maintainers about succession planning, they often respond with something like, "We've been pretty successful so far without having to think about this. Why should we start now?"
Aside from the fact that the phrase, "We've always done it this way" is probably one of the most dangerous in the English language, and hearing (or saying) it should send up red flags in any community, succession planning provides plenty of very real benefits:
* **Continuity** : When someone leaves, what happens to the tasks they were performing? Succession planning helps ensure those tasks continue uninterrupted and no one is left hanging.
* **Avoiding a power vacuum** : When a person leaves a role with no replacement, it can lead to confusion, delays, and often most damaging, political woes. After all, it's much easier to fix delays than hurt feelings. A succession plan helps alleviate the insecure and unstable time when someone in a vital role moves on.
* **Increased project/organization longevity** : The thinking required for succession planning is the same sort of thinking that contributes to project longevity. Ensuring continuity in leadership, culture, and productivity also helps ensure the project will continue. It will evolve, but it will survive.
* **Reduced workload/pressure on current leaders** : When a single team member performs a critical role in the project, they often feel pressure to be constantly "on." This can lead to burnout and worse, resignations. A succession plan ensures that all important individuals have a backup or successor. The knowledge that someone can take over is often enough to reduce the pressure, but it also means that key players can take breaks or vacations without worrying that their role will be neglected in their absence.
* **Talent development** : Members of the FOSS community talk a lot about mentoring these days, and that's great. However, most of the conversation is around mentoring people to contribute code to a project. There are many different ways to contribute to free and open source software projects beyond programming. A robust succession plan recognizes these other forms of contribution and provides mentoring to prepare people to step into critical non-programming roles.
* **Inspiration for new members** : It can be very motivational for new or prospective community members to see that a project uses its succession plan. Not only does it show them that the project is well-organized and considers its own health and welfare as well as that of its members, but it also clearly shows new members how they can grow in the community. An obvious path to critical roles and leadership positions inspires new members to stick around to walk that path.
* **Diversity of thoughts/get out of a rut** : Succession plans provide excellent opportunities to bring in new people and ideas to the critical roles of a project. [Studies show][8] that diverse leadership teams are more effective and the projects they lead are more innovative. Using your project's succession plan to mentor people from different backgrounds and with different perspectives will help strengthen and evolve the project in a healthy way.
* **Enabling meritocracy** : Unfortunately, what often passes for meritocracy in many free and open source projects is thinly veiled hostility toward new contributors and diverse opinions—hostility that's delivered from within an echo chamber. Meritocracy without a mentoring program and healthy governance structure is simply an excuse to practice subjective discrimination while hiding behind unexpressed biases. A well-executed succession plan helps teams reach the goal of a true meritocracy. What counts as merit for any given role, and how to reach that level of merit, are openly, honestly, and completely documented. The entire community will be able to see and judge which members are on the path or deserve to take on a particular critical role.
### Why it doesn't happen
Succession planning isn't a panacea, and it won't solve all problems for all projects, but as described above, it offers a lot of worthwhile benefits to your project.
Despite that, very few free and open source projects or organizations put much thought into it. I was curious why that might be, so I asked around. I learned that the reasons for not having a succession plan fall into one of five different buckets:
* **Too busy** : Many people recognize succession planning (or lack thereof) as a problem for their project but just "hadn't ever gotten around to it" because there's "always something more important to work on." I understand and sympathize with this, but I suspect the problem may have more to do with prioritization than with time availability.
* **Don't think of it** : Some people are so busy and preoccupied that they haven't considered, "Hey, what would happen if Jen had to leave the project?" This never occurs to them. After all, Jen's always been there when they need her, right? And that will always be the case, right?
* **Don't want to think of it** : Succession planning shares a trait with estate planning: It's associated with negative feelings like loss and can make people address their own mortality. Some people are uncomfortable with this and would rather not consider it at all than take the time to make the inevitable easier for those they leave behind.
* **Attitude of current leaders** : A few of the people with whom I spoke didn't want to recognize that they're replaceable, or to consider that they may one day give up their power and influence on the project. While this was (thankfully) not a common response, it was alarming enough to deserve its own bucket. Failure of someone in a critical role to recognize or admit that they won't be around forever can set a project up for failure in the long run.
* **Don't know where to start** : Many people I interviewed realize that succession planning is something that their project should be doing. They were even willing to carve out the time to tackle this very large task. What they lacked was any guidance on how to start the process of creating a succession plan.
As you can imagine, something as important and people-focused as a succession plan isn't easy to create, and it doesn't happen overnight. Also, there are many different ways to do it. Each project has its own needs and critical roles. One size does not fit all where succession plans are concerned.
There are, however, some guidelines for how every project could proceed with the succession plan creation process. I'll cover these guidelines in my next article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/passing-baton-succession-planning-foss-leadership
作者:[VM(Vicky) Brasseur][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/vmbrasseur
[1]:http://www.fsf.org
[2]:https://opensource.com/article/17/10/perl-turns-30
[3]:https://opensource.com/article/18/2/coining-term-open-source-software
[4]:https://opensource.org
[5]:https://opensource.org/node/910
[6]:https://octoverse.github.com
[7]:https://en.wikipedia.org/wiki/Succession_planning
[8]:https://hbr.org/2016/11/why-diverse-teams-are-smarter

View File

@ -0,0 +1,93 @@
How to develop the FOSS leaders of the future
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
Do you hold a critical role in a free and open source software project? Would you like to make it easier for the next person to step into your shoes, while also giving yourself the freedom to take breaks and avoid burnout?
Of course you would! But how do you get started?
Before you do anything, remember that this is a free or open source project. As with all things in FOSS, your succession planning should happen in collaboration with others. The [Principle of Least Astonishment][1] also applies: Don't work on your plan in isolation, then spring it on the entire community. Work together and publicly, so no one is caught off guard when the cultural or governance changes start happening.
### Identify and analyse critical roles
As a project leader, your first step is to identify the critical roles in your community. While it can help to ask each community members what role they perform, it's important to realize that most people perform multiple roles. Make sure you consider every role that each community member plays in the project.
Once you've identified the roles and determined which ones are critical to your project, the next step is to list all of the duties and responsibilities for each of those critical roles. Be very honest here. List the duties and responsibilities you think each role has, then ask the person who performs that role to list the duties the role actually has. You'll almost certainly find that the second list is longer than the first.
### Refactor large roles
During this process, have you discovered any roles that encompass a large number of duties and responsibilities? Large roles are like large methods in your code: They're a sign of a problem, and they need to be refactored to make them easier to maintain. One of the easiest and most effective steps in succession planning for FOSS projects is to split up each large role into two or more smaller roles and distribute these to other community members. With that one step, you've greatly improved the [bus factor][2] for your project. Even better, you've made each one of those new, smaller roles much more accessible and less intimidating for new community members. People are much more likely to volunteer for a role if it's not a massive burden.
### Limit role tenure
Another way to make a role more enticing is to limit its tenure. Community members will be more willing to step into roles that aren't open-ended. They can look at their life and work plans and ask themselves, "Can I take on this role for the next eighteen months?" (or whatever term limit you set).
Setting term limits also helps those who are currently performing the role. They know when they can set aside those duties and move on to something else, which can help alleviate burnout. Also, setting a term limit creates a pool of people who have performed the role and are qualified to step in if needed, which can also mitigate burnout.
### Knowledge transfer
Once you've identified and defined the critical roles in your project, most of what remains is knowledge transfer. Even small projects involve a lot of moving parts and knowledge that needs to be where everyone can see, share, use, and contribute to it. What sort of knowledge should you be collecting? The answer will vary by project, needs, and role, but here are some of the most common (and commonly overlooked) types of information needed to implement a succession plan:
* **Roles and their duties** : You've spent a lot of time identifying, analyzing, and potentially refactoring roles and their duties. Make sure this information doesn't get lost.
* **Policies and procedures** : None of those duties occur in a vacuum. Each duty must be performed in a particular way (procedures) when particular conditions are met (policies). Take stock of these details for every duty of every role.
* **Resources** : What accounts are associated with the project, or are necessary for it to operate? Who helps you with meetup space, sponsorship, or in-kind services? Such information is vital to project operation but can be easily lost when the responsible community member moves on.
* **Credentials** : Ideally, every external service required by the project will use a login that goes to an email address designated for a specific role (`sre@project.org`) rather than to a personal address. Every role's address should include multiple people on the distribution list to ensure that important messages (such as downtime or bogus "forgot password" requests) aren't missed. The credentials for every service should be kept in a secure keystore, with access limited to the fewest number of people possible.
* **Project history** : All community members benefit greatly from learning the history of the project. Collecting project history information can clarify why decisions were made in the past, for example, and reveal otherwise unexpressed requirements and values of the community. Project histories can also help new community members understand "inside jokes," jargon, and other cultural factors.
* **Transition plans** : A succession plan doesn't do much good if project leaders haven't thought through how to transition a role from one person to another. How will you locate and prepare people to take over a critical role? Since the project has already done a lot of thinking and knowledge transfer, transition plans for each role may be easier to put together.
Doing a complete knowledge transfer for all roles in a project can be an enormous undertaking, but the effort is worth it. To avoid being overwhelmed by such a daunting task, approach it one role at a time, finishing each one before you move onto the next. Limiting the scope in this way makes both progress and success much more likely.
### Document, document, document!
Succession planning takes time. The community will be making a lot of decisions and collecting a lot of information, so make sure nothing gets lost. It's important to document everything (not just in email threads). Where knowledge is concerned, documentation scales and people do not. Include even the things that you think are obvious—what's obvious to a more seasoned community member may be less so to a newbie, so don't skip steps or information.
Gather these decisions, processes, policies, and other bits of information into a single place, even if it's just a collection of markdown files in the main project repository. The "how" and "where" of the documentation can be sorted out later. It's better to capture key information first and spend time [bike-shedding][3] a documentation system later.
Once you've collected all of this information, you should understand that it's unlikely that anyone will read it. I know, it seems unfair, but that's just how things usually work out. The reason? There is simply too much documentation and too little time. To address this, add an abstract, or summary, at the top of each item. Often that's all a person needs, and if not, the complete document is there for a deep dive. Recognizing and adapting to how most people use documentation increases the likelihood that they will use yours.
Above all, don't skip the documentation process. Without documentation, succession plans are impossible.
### New leaders
If you don't yet perform a critical role but would like to, you can contribute to the succession planning process while apprenticing your way into one of those roles.
For starters, actively look for opportunities to learn and contribute. Shadow people in critical roles. You'll learn how the role is done, and you can document it to help with the succession planning process. You'll also get the opportunity to see whether it's a role you're interested in pursuing further.
Asking for mentorship is a great way to get yourself closer to taking on a critical role in the project. Even if you haven't heard that mentoring is available, it's perfectly OK to ask about it. The people already in those roles are usually happy to mentor others, but often are too busy to think about offering mentorship. Asking is a helpful reminder to them that they should be helping to train people to take over their role when they need a break.
As you perform your own tasks, actively seek out feedback. This will not only improve your skills, but it shows that you're interested in doing a better job for the community. This commitment will pay off when your project needs people to step into critical roles.
Finally, as you communicate with more experienced community members, take note of anecdotes about the history of the project and how it operates. This history is very important, especially for new contributors or people stepping into critical roles. It provides the context necessary for new contributors to understand what things do or don't work and why. As you hear these stories, document them so they can be passed on to those who come after you.
### Succession planning examples
While too few FOSS projects are actively considering succession planning, some are doing a great job of trying to reduce their bus factor and prevent maintainer burnout.
[Exercism][4] isn't just an excellent tool for gaining fluency in programming languages. It's also an [open source project][5] that goes out of its way to help contributors [land their first patch][6]. In 2016, the project reviewed the health of each language track and [discovered that many were woefully maintained][7]. There simply weren't enough people covering each language, so maintainers were burning out. The Exercism community recognized the risk this created and pushed to find new maintainers for as many language tracks as possible. As a result, the project was able to revive several tracks from near-death and develop a structure for inviting people to become maintainers.
The purpose of the [Vox Pupuli][8] project is to serve as a sort of succession plan for the [Puppet module][9] community. When a maintainer no longer wishes or is able to work on their module, they can bequeath it to the Vox Pupuli community. This community of 30 collaborators shares responsibility for maintaining all the modules it accepts into the project. The large number of collaborators ensures that no single person bears the burden of maintenance while also providing a long and fruitful life for every module in the project.
These are just two examples of how some FOSS projects are tackling succession planning. Share your stories in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/succession-planning-how-develop-foss-leaders-future
作者:[VM (Vicky) Brasseur][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/vmbrasseur
[1]:https://en.wikipedia.org/wiki/Principle_of_least_astonishment
[2]:https://en.wikipedia.org/wiki/Bus_factor
[3]:https://en.wikipedia.org/wiki/Law_of_triviality
[4]:http://exercism.io
[5]:https://github.com/exercism/exercism.io
[6]:https://github.com/exercism/exercism.io/blob/master/CONTRIBUTING.md
[7]:https://tinyletter.com/exercism/letters/exercism-track-health-check-new-maintainers
[8]:https://voxpupuli.org
[9]:https://forge.puppet.com

View File

@ -0,0 +1,58 @@
What developers need to know about security
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/locks_keys_bridge_paris.png?itok=Bp0dsEc9)
DevOps doesn't mean that everyone needs to be an expert in both development and operations. This is especially true in larger organizations in which roles tend to be more specialized. Rather, DevOps thinking has evolved in a way that makes it more about the separation of concerns. To the degree that operations teams can deploy platforms for developers (whether on-premises or in a public cloud) and get out of the way, that's good news for both teams. Developers get a productive development environment and self-service. Operations can focus on keeping the underlying plumbing running and maintaining the platform.
It's a contract of sorts. Devs expect a stable and functional platform from ops. Ops expects that devs will be able to handle most of the tasks associated with developing apps on their own.
That said, DevOps is also about better communication, collaboration, and transparency. It works better if it's not about merely a new type of wall between dev and ops. Ops needs to be sensitive to the type of tools developers want and need and the visibility they require, through monitoring and logging, to write better applications. Conversely, developers need some awareness of how the underlying infrastructure can be used most effectively and what can keep operations up at night (literally).
The same principle applies more broadly to DevSecOps, a term that serves to explicitly remind us that security needs to be embedded throughout the entire DevOps pipeline from sourcing content to writing apps, building them, testing them, and running them in production. Developers (and operations) don't suddenly need to become security specialists in addition to the other hats they already wear. But they can often benefit from a greater awareness of security best practices (which may be different from what they've become accustomed to) and shifting away from a mindset that views security as some unfortunate obstacle.
Here are a few observations.
The Open Web Application Security Project ([OWASP][1]) [Top 10 list][2] provides a window into the top vulnerabilities in web applications. Many entries on the list will be familiar to web programmers. Cross-site scripting (XSS) and injection flaws are among the most common. What's striking though is that many of the flaws on the original 2007 list are still on 2017's list ([PDF][3]). Whether it's training or tooling that's most at fault, many of the same coding flaws keep popping up.
The situation is exacerbated by new platform technologies. For example, while containers don't necessarily require applications to be written differently, they dovetail with new patterns (such as [microservices][4]) and can amplify the effects of certain security practices. For example, as my colleague [Dan Walsh][5] ([@rhatdan][6]) writes, "The biggest misconception in computing [is] you need root to run applications. The problem is not so much that devs think they need root. It is that they build this assumption into the services that they build, i.e., the services cannot run without root, making us all less secure."
Was defaulting to root access ever a good practice? Not really. But it was arguably (maybe) a defensible one with applications and systems that were otherwise sufficiently isolated by other means. But with everything connected, no real perimeter, multi-tenant workloads, users with many different levels of access rights—to say nothing of an ever more dangerous threat environment—there's far less leeway for shortcuts.
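As a small, generic illustration of that principle, container runtimes make it easy to avoid defaulting to root; the image name and IDs below are placeholders:
```
# run the container as an unprivileged UID:GID instead of the default root user
docker run --user 1000:1000 registry.example.com/my-app:latest
```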
[Automation][7] should be an integral part of DevOps anyway. That automation needs to include security and compliance testing throughout the process. Where did the code come from? Are third-party technologies, products, or container images involved? Are there known security errata? Are there known common code flaws? Are secrets and personally identifiable information kept isolated? How do we authenticate? Who is authorized to deploy services and applications?
You're not writing your own crypto, are you?
Automate penetration testing where possible. Did I mention automation? It's an essential part of making security continuous rather than a checklist item that's done once in a while.
Does this sound hard? It probably is a bit. At least it may be different. But as a participant in a [DevOpsDays OpenSpaces][8] London said to me: "It's just technical testing. It's not magical or mysterious." He went on to say that it's not even that hard to get involved with security as a way to gain a broader understanding of the whole software lifecycle (which is not a bad skill to have). He also suggested taking part in incident response exercises or [capture the flag exercises][9]. You might even find they're fun.
This article is based on [a talk][10] the author will be giving at [Red Hat Summit 2018][11], which will be held May 8-10 in San Francisco. _[Register by May 7][11] to save US$ 500 off of registration. Use discount code **OPEN18** on the payment page to apply the discount._
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/what-developers-need-know-about-security
作者:[Gordon Haff][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ghaff
[1]:https://www.owasp.org/index.php/Main_Page
[2]:https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
[3]:https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf
[4]:https://opensource.com/tags/microservices
[5]:https://opensource.com/users/rhatdan
[6]:https://twitter.com/rhatdan
[7]:https://opensource.com/tags/automation
[8]:https://www.devopsdays.org/open-space-format/
[9]:https://dev.to/_theycallmetoni/capture-the-flag-its-a-game-for-hacki-mean-security-professionals
[10]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154677
[11]:https://www.redhat.com/en/summit/2018

View File

@ -0,0 +1,157 @@
Download YouTube Videos in Linux Command Line
======
**Brief: Easily download YouTube videos in Linux using youtube-dl command line tool. With this tool, you can also choose video format and video quality such as 1080p or 4K.**
![](https://itsfoss.com/wp-content/uploads/2015/10/Download-YouTube-Videos.jpeg)
I know you have already seen [how to download YouTube videos][1]. But those were mostly GUI tools. I am going to show you how to download YouTube videos in the Linux terminal using youtube-dl.
### Install youtube-dl to download YouTube videos in Linux terminal
[youtube-dl][2] is a small Python-based command-line tool that allows downloading videos from [YouTube][3], [Dailymotion][4], Photobucket, Facebook, Yahoo, Metacafe, Depositfiles and a few more similar sites. It is written in Python and requires a Python interpreter to run, so it is not restricted to any one platform. It should run on any Unix, Windows or macOS system.
The youtube-dl tool supports resuming interrupted downloads. If youtube-dl is killed (for example, by Ctrl-C or due to loss of Internet connectivity) in the middle of a download, you can simply re-run it with the same YouTube video URL. It will automatically resume the unfinished download, as long as a partial download is present in the current directory. This means you don't need [download managers in Linux][5] just for resuming downloads.
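For example, if a download gets interrupted, running the same command again should pick up the partial file; you can also pass the `-c` (`--continue`) flag to make that explicit:
```
youtube-dl -c <video_url>
```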
#### youtube-dl features
This tiny tool has so many features that it won't be an exaggeration to call it the best YouTube downloader for Linux.
* Download videos not only from YouTube but also from other popular video websites like Dailymotion, Facebook, etc.
* Allows downloading videos in several available video formats such as MP4, WebM, etc.
* You can also choose the quality of the video being downloaded. If the video is available in 4K, you can download it in 4K, 1080p, 720p, etc.
* Automatic pause and resume of video downloads.
* Allows bypassing YouTube geo-restrictions.
Downloading videos from websites could be against their policies. It's up to you if you choose to download them.
#### How to install youtube-dl
youtube-dl is a popular program and is available in the default repositories of most Linux distributions, if not all. You can use your distribution's standard way of installing packages to install youtube-dl. I'll still show some commands for the sake of completeness.
If you are running an Ubuntu-based Linux distribution, you can install it using this command:
```
sudo apt install youtube-dl
```
For other Linux distributions, you can quickly install youtube-dl on your system through the command line with:
```
sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
```
After fetching the file, you need to set executable permission on the script so that it can run properly.
```
sudo chmod a+rx /usr/local/bin/youtube-dl
```
#### Using youtube-dl for downloading videos
To download a video file, simply run the following command, where `video_url` is the URL of the video that you want to download:
```
youtube-dl <video_url>
```
#### Download YouTube videos in various formats and qualities
These days, YouTube videos are available in different resolutions and formats. You first need to check the available formats for a given YouTube video. For that, run youtube-dl with the “-F” option. It will show you a list of available formats.
```
youtube-dl -F <video_url>
```
Its output will look like this:
```
 Setting language
BlXaGWbFVKY: Downloading video webpage
BlXaGWbFVKY: Downloading video info webpage
BlXaGWbFVKY: Extracting video information
Available formats:
37      :       mp4     [1080x1920]
46      :       webm    [1080x1920]
22      :       mp4     [720x1280]
45      :       webm    [720x1280]
35      :       flv     [480x854]
44      :       webm    [480x854]
34      :       flv     [360x640]
18      :       mp4     [360x640]
43      :       webm    [360x640]
5       :       flv     [240x400]
17      :       mp4     [144x176]
```
Now, from the available video formats, choose one that you like. For example, if you want to download it in MP4 format at 1080p, you should use:
```
youtube-dl -f 37 <video_url>
```
#### Download subtitles of videos using youtube-dl
First, check if there are subtitles available for the video. To list all subs for a video, use the command below:
```
youtube-dl --list-subs <video_url>
```
To download all subs, but not the video:
```
youtube-dl --all-subs --skip-download <video_url>
```
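If you only want subtitles in a specific language (assuming they are available for the video), you can combine the `--write-sub` and `--sub-lang` options:
```
youtube-dl --write-sub --sub-lang en --skip-download <video_url>
```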
#### Download entire YouTube playlist
To download a playlist, simply run the following command, where `playlist_url` is the URL of the playlist that you want to download:
```
youtube-dl -cit <playlist_url>
```
#### Download only audio from YouTube videos
If you just want to download the audio from a YouTube video, you can use the -x option to simply extract the audio file from the video.
```
youtube-dl -x <video_url>
```
The default file format is Ogg, which you may not like. You can specify the file format of the audio file in the following manner:
```
youtube-dl -x --audio-format mp3 <video_url>
```
#### And a lot more can be done with youtube-dl
youtube-dl is a versatile command-line tool and provides a number of functionalities. No wonder it is such a popular tool.
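For example, you can control how downloaded files are named with the `-o` option, which accepts an output template; a simple template that saves each video under its title looks like this:
```
youtube-dl -o '%(title)s.%(ext)s' <video_url>
```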
I have only shown some of the most common usages of this tool. But if you want to explore its capabilities further, please check its [manual][6].
I hope this article helped you to download YouTube videos on Linux. If you have questions or suggestions, please drop a comment below.
Article updated with inputs from Abhishek Prakash.
--------------------------------------------------------------------------------
via: https://itsfoss.com/download-youtube-linux/
作者:[alimiracle][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ali/
[1]:https://itsfoss.com/download-youtube-videos-ubuntu/
[2]:https://rg3.github.io/youtube-dl/
[3]:https://www.youtube.com/c/itsfoss/
[4]:https://www.dailymotion.com/
[5]:https://itsfoss.com/4-best-download-managers-for-linux/
[6]:https://github.com/rg3/youtube-dl/blob/master/README.md#readme

View File

@ -0,0 +1,202 @@
How to Install Docker CE on Your Desktop
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/containers-volumes_0.jpg?itok=gv0_MXiZ)
[In the previous article,][1] we learned some of the basic terminologies of the container world. That background information will come in handy when we run commands and use some of those terms in follow-up articles, including this one. This article covers the installation of Docker on desktop Linux, macOS, and Windows, and it is intended for beginners who want to get started with Docker containers. The only prerequisite is that you are comfortable with the command-line interface.
### Why do I need Docker CE on my local machine?
As a new user, you may wonder why you need containers on your local system. Aren't they meant to run in the cloud and on servers as microservices? While containers have been part of the Linux world for a very long time, it was Docker that made them really consumable with its tools and technologies.
The greatest thing about Docker containers is that you can use your local machine for development and testing. The container images that you create on your local system can then run “anywhere.” There is no conflict between developers and operators about apps running fine on development systems but not in production.
The point is that in order to create containerized applications, you must be able to run and create containers on your local systems.
You can use any of the three platforms (desktop Linux, Windows, or macOS) as the development platform for containers. Once Docker is successfully running on these systems, you will be using the same commands across platforms, so it really doesn't matter which OS you are running underneath.
That's the beauty of Docker.
### Let's get started
There are two editions of Docker: Docker Enterprise Edition (EE) and Docker Community Edition (CE). We will be using the Docker Community Edition, which is a free version of Docker intended for developers and enthusiasts who want to get started with Docker.
There are two channels of Docker CE: stable and edge. As the name implies, the stable version gives you well-tested quarterly updates, whereas the edge version offers new updates every month. After further testing, these edge features are added to the stable release. I recommend the stable version for new users.
Docker CE is supported on macOS; Windows 10; Ubuntu 14.04, 16.04, 17.04 and 17.10; Debian 7.7, 8, 9 and 10; Fedora 25, 26 and 27; and CentOS. While you can download Docker CE binaries and install them on your desktop Linux system, I recommend adding the repositories so you continue to receive patches and updates.
### Install Docker CE on Desktop Linux
You don't need a full-blown desktop Linux to run Docker; you can install it on a bare minimal Linux server as well, which you can run in a VM. In this tutorial, I am running it on Fedora 27 and Ubuntu 17.04 on my main systems.
### Ubuntu Installation
First things first. Run a system update so your Ubuntu packages are fully updated:
```
$ sudo apt-get update
```
Now run system upgrade:
```
$ sudo apt-get dist-upgrade
```
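Docker's apt repository is signed, so you will typically need to add Docker's GPG key before adding the repository; the command from Docker's documentation at the time looked roughly like this:
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```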
Then add the Docker repository:
```
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
Update the repository info again:
```
$ sudo apt-get update
```
Now install Docker CE:
```
$ sudo apt-get install docker-ce
```
Once it's installed, Docker CE runs automatically on Ubuntu-based systems. Let's check if it's running:
```
$ sudo systemctl status docker
```
You should get the following output:
```
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2017-12-28 15:06:35 EST; 19min ago
Docs: https://docs.docker.com
Main PID: 30539 (dockerd)
```
Since Docker is installed on your system, you can now use the Docker CLI (Command Line Interface) to run Docker commands. Living up to tradition, let's run the Hello World command:
```
$ sudo docker run hello-world
```
![YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8][2]
Congrats! You have Docker running on your Ubuntu system.
### Installing Docker CE on Fedora
Things are a bit different on Fedora 27. On Fedora, you first need to install the dnf-plugins-core package, which will allow you to manage your DNF repositories from the CLI.
```
$ sudo dnf -y install dnf-plugins-core
```
Now install the Docker repo on your system:
```
$ sudo dnf config-manager \
--add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo
```
It's time to install Docker CE:
```
$ sudo dnf install docker-ce
```
Unlike Ubuntu, Docker doesn't start automatically on Fedora. So let's start it:
```
$ sudo systemctl start docker
```
You will have to start Docker manually after each reboot, so let's configure it to start automatically after reboots with `sudo systemctl enable docker`. Now, it's time to run the Hello World command:
```
$ sudo docker run hello-world
```
Congrats, Docker is running on your Fedora 27 system.
### Cutting your roots
You may have noticed that you have to use sudo to run Docker commands. That's because the Docker daemon binds to a UNIX socket instead of a TCP port, and that socket is owned by the root user. So, you need sudo privileges to run the docker command. You can add your system user to the docker group so it won't require sudo:
```
$ sudo groupadd docker
```
In most cases, the docker user group is automatically created when you install Docker CE, so all you need to do is add your user to that group:
```
$ sudo usermod -aG docker $USER
```
To test whether the user has been added to the group successfully, run the groups command with the username:
```
$ groups swapnil
```
(Here, swapnil is the user.)
This is the output on my system:
```
swapnil : swapnil adm cdrom sudo dip plugdev lpadmin sambashare docker
```
You can see that the user also belongs to the docker group. Log out of your system so that the group changes take effect. Once you log back in, try the Hello World command without sudo:
```
$ docker run hello-world
```
You can check system-wide info about the installed version of Docker and more by running this command:
```
$ docker info
```
### Install Docker CE on macOS and Windows
You can easily install Docker CE (and EE) on macOS and Windows. Download the official Docker for Mac and install it the way you install applications on macOS, by simply dragging it into the Applications directory. Once the file is copied, open Docker from Spotlight to start the installation process. Once installed, Docker will start automatically and you can see it in the top bar of macOS.
![IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEA][3]
macOS is a UNIX system, so you can simply open the Terminal app and start using Docker commands natively. Test the hello world app:
```
$ docker run hello-world
```
Congrats, you have Docker running on your macOS.
### Docker on Windows 10
You need the latest version of Windows 10 Pro or Server in order to run/install Docker on it. If you are not fully updated, Windows won't install Docker. I got an error on my Windows 10 system and had to run system updates. My version was still behind, and I hit [this][4] bug. So, if you fail to install Docker on Windows, just know you are not alone. Keep an eye on that bug to find a solution.
Once you install Docker on Windows, you can either use the bash shell via WSL or use PowerShell to run docker commands. Let's test the “Hello World” command in PowerShell:
```
PS C:\Users\swapnil> docker run hello-world
```
Congrats, you have Docker running on Windows.
In the next article, we will talk about pulling images from DockerHub and running containers on our systems. We will also talk about pushing our own containers to Docker Hub.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
[2]:https://lh5.googleusercontent.com/YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8Z8C0-BlynDcL5B5pG-zzH0fKU0Qvnzd89v0KDEbZiO0gTfGNGfDtO-FkTt0bmzIQ-TKbNmv18S9RXdkSeXqgKDFRewnaHPj2
[3]:https://lh3.googleusercontent.com/IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEAfwv9oFpMfcAqkgEk7K5o58iDAAfGozSpIvY_qEsTOHRlSbesMKwTnG9rRkWba1KPSmnuH1LyoccDGNO3Clbz8du0gSByZxNj
[4]:https://github.com/docker/for-win/issues/1263

View File

@ -1,106 +0,0 @@
translating---geekpi
How to Use WSL Like a Linux Pro
============================================================
![WSL](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-pro.png?itok=e65wEEAw "WSL")
Learn how to perform tasks like mounting USB drives and manipulating files in this WSL tutorial. (Image courtesy: Microsoft)[Used with permission][1][Microsoft][2]
In the [previous tutorial][4], we learned about setting up WSL on your Windows 10 system. You can perform a lot of Linux command-line tasks in Windows 10 using WSL. Many sysadmin tasks are done inside a terminal, whether it's a Linux-based system or macOS. Windows 10, however, lacks such capabilities. You want to run a cron job? No. You want to ssh into your server and then rsync files? No way. How about managing your local files with powerful command-line utilities instead of slow and unreliable GUI utilities?
In this tutorial, you'll see how to perform additional tasks beyond managing your servers using WSL, things like mounting USB drives and manipulating files. You need to be running a fully updated Windows 10 and the Linux distro of your choice. I covered these steps in the [previous article][5], so begin there if you need to catch up. Let's get started.
### Keep your Linux system updated
The fact is there is no Linux kernel running under the hood when you run Ubuntu or openSUSE through WSL. Yet, you must keep your distros fully updated to protect your system from any new known vulnerabilities. Since only two free community distributions are officially available in the Windows Store, our tutorial will cover only those two: openSUSE and Ubuntu.
Update your Ubuntu system:
```
# sudo apt-get update
# sudo apt-get dist-upgrade
```
To run updates for openSUSE:
```
# zypper up
```
You can also upgrade openSUSE to the latest version with the  _dup_  command. But before running the system upgrade, please run updates using the previous command.
```
# zypper dup
```
**Note:** openSUSE defaults to the root user. If you want to perform any non-administrative tasks, please switch to a non-privileged user. You can learn how to create a user on openSUSE in this [article][6].
### Manage local files
If you want to use great Linux command-line utilities to manage your local files, you can easily do that with WSL. Unfortunately, WSL doesn't yet support things like _lsblk_ or _mount_ for mounting local drives. You can, however, _cd_ to the C drive and manage files:
`cd /mnt/c/Users/swapnil/Music`
I am now in the Music directory of the C drive.
To mount other drives, partitions, and external USB drives, you will need to create a mount point and then mount that drive.
Open File Explorer and check the mount point of that drive. Let's assume it's mounted in Windows as S:\
In the Ubuntu/openSUSE terminal, create a mount point for the drive.
```
sudo mkdir /mnt/s
```
Now mount the drive:
```
sudo mount -t drvfs S: /mnt/s
```
Once mounted, you can now access that drive from your distro. Just bear in mind that the distro running in WSL sees only what Windows can see. So you can't mount ext4 drives that can't be mounted natively on Windows.
You can now use all those magical Linux commands here. Want to copy or move files from one folder to another? Just run the _cp_ or _mv_ command.
```
cp /source-folder/source-file.txt /destination-folder/
cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/
```
If you want to move folders or large files, I would recommend  _rsync_ instead of the  _cp_  command:
```
rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
```
Yay!
Want to create new directories on Windows drives? Just use the awesome _mkdir_ command.
Want to set up a cron job to automate a task at a certain time? Go ahead and create a cron job with _crontab -e_. Easy peasy.
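For instance, a crontab entry that syncs a music folder to the mounted S: drive every night at 1 AM could look like this (the paths are just examples):
```
# m h  dom mon dow   command
0 1 * * * rsync -avz /mnt/c/Users/swapnil/Music/ /mnt/s/backup/music/
```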
You can also mount network/remote folders in Linux so you can manage them with better tools. All of my drives are plugged into either a Raspberry Pi-powered server or a live server, so I simply ssh into that machine and manage the drives. Transferring files between the local machine and a remote system can be done, once again, with the _rsync_ command.
WSL is now out of beta, and it will continue to get new features. Two features that I am excited about are the lsblk command and the dd command, which would let me natively manage my drives and create bootable Linux drives from within Windows. If you are new to the Linux command line, this [previous tutorial][7] will help you get started with some of the most basic commands.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/
[3]:https://www.linux.com/files/images/wsl-propng
[4]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[5]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[6]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[7]:https://www.linux.com/learn/how-use-linux-command-line-basics-cli

View File

@ -1,76 +0,0 @@
translating---geekpi
How to find files in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
If you're a Windows user or a non-power-user of OSX, you probably use a GUI to find files. You may also find the interface limited, frustrating, or both, and have learned to excel at organizing things and remembering the exact order of your files. You can do that in Linux, too—but you don't have to.
One of the best things about Linux is that it offers a variety of ways to do things. You can open any file manager and `ctrl`+`f`, you can use the program you are in to open files manually, or you can simply start typing letters and it'll filter the current directory listing.
![Screenshot of how to find files in Linux with Ctrl+F][2]
Screenshot of how to find files in Linux with Ctrl+F
But what if you don't know where your file is and don't want to search the entire disk? Linux is well-tooled for this and a variety of other use-cases.
### Finding program locations by command name
The Linux file system can seem daunting if you're used to putting things wherever you like. For me, one of the hardest things to get used to was finding where programs are supposed to live.
For example, `which bash` will usually return `/bin/bash`, but if you download a program and it doesn't appear in your menus, the `which` command can be a great tool.
A similar utility is the `locate` command, which I find useful for finding configuration files. I don't like typing in program names because simple ones like `locate php` often offer many results that need to be filtered further.
For more information about `locate` and `which`, see the `man` pages:
* `man which`
* `man locate`
### Find
The `find` utility offers much more advanced functionality. Below is an example from a script I've installed on a number of servers that I administer to ensure that a specific pattern of file (also known as a glob) exists for only five days and all files older than that are deleted. (Since its last modification, a decimal is used to account for up to 240 minutes difference.)
```
find ./backup/core-files*.tar.gz -mtime +4.9 -exec rm {} \;
```
The `find` utility has many advanced use-cases, but most common is executing commands on results without chaining and filtering files by type, creation, and modification date.
Another interesting use of `find` is to find all files with executable permissions. This can help ensure that nobody is installing bitcoin miners or botnets on your expensive servers.
```
find / -perm /+x
```
For more information on `find`, see the `man` page using `man find`.
### Grep
Want to find a file by its contents? Linux has it covered. You can use many Linux utilities to efficiently search for files that match a pattern, but `grep` is one that I use often.
Suppose you have an application that's delivering error messages with a code reference and stack trace. You find these in your logs. Grepping is not always the go-to, but I always `grep -R` if the issue is with a supplied value.
An increasing number of IDEs are implementing find functions, but if you're accessing a remote system, don't have a GUI for whatever reason, or want to iterate in place, then use `grep -R {searchterm}`; for extended regular expressions, use `egrep -r {regex-pattern}` (egrep is equivalent to `grep -E`) on systems that provide it.
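As a quick illustration, a recursive, line-numbered search of an application's log directory for an error code might look like this (both the path and the search term are placeholders):
```
grep -Rn "ERR_PAYMENT_DECLINED" /var/log/myapp/
```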
I used this technique when patching the `dhcpcd5` in [Raspbian][3] last year so I could continue to operate a network access point on newer Debian releases from the [Raspberry Pi Foundation][4].
What tips help you search for files more efficiently on Linux?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/how-find-files-linux
作者:[Lewis Cowles][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/lewiscowles1986
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/find-files-in-linux-ctrlf.png?itok=1gf9kIut (Screenshot of how to find files in Linux with Ctrl+F)
[3]:https://www.raspbian.org/
[4]:https://www.raspberrypi.org/

View File

@ -1,150 +0,0 @@
12 Git tips for Git's 12th birthday
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/party_anniversary_flag_birthday_celebrate.jpg?itok=KqfMENa7)
[Git][1], the distributed revision-control system that's become the default tool for source code control in the open source world, turns 12 on April 7. One of the more frustrating things about using Git is how much you need to know to use it effectively. This can also be one of the more awesome things about using Git, because there's nothing quite like discovering a new tip or trick that can streamline or improve your workflow.
In honor of Git's 12th birthday, here are 12 tips and tricks to make your Git experience more useful and powerful, starting with some basics you might have overlooked and scaling up to some real power-user tricks!
### 1\. Your ~/.gitconfig file
The first time you tried to use the `git` command to commit a change to a repository, you might have been greeted with something like this:
```
*** Please tell me who you are.
Run
  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"
to set your account's default identity.
```
What you might not have realized is that those commands are modifying the contents of `~/.gitconfig`, which is where Git stores global configuration options. There are a vast array of things you can do via your `~/.gitconfig` file, including defining aliases, turning particular command options on (or off!) on a permanent basis, and modifying aspects of how Git works (e.g., which diff algorithm `git diff` uses or what type of merge strategy is used by default). You can even conditionally include other config files based on the path to a repository! See `man git-config` for all the details.
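For example, a conditional include that applies a separate config file to every repository under a particular directory might look like this (the paths are placeholders):
```
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work
```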
### 2\. Your repo's .gitconfig file
In the previous tip, you may have wondered what that `--global` flag on the `git config` command was doing. It tells Git to update the "global" configuration, the one found in `~/.gitconfig`. Of course, having a global config also implies a local configuration, and sure enough, if you omit the `--global` flag, `git config` will instead update the repository-specific configuration, which is stored in `.git/config`.
Options that are set in the `.git/config` file will override any setting in the `~/.gitconfig` file. So, for example, if you need to use a different email address for a particular repository, you can run `git config user.email "also_you@example.com"`. Then, any commits in that repository will use your other email address. This can be super useful if you work on open source projects from a work laptop and want them to show up with a personal email address while still using your work email for your main Git configuration.
Pretty much anything you can set in `~/.gitconfig`, you can also set in `.git/config` to make it specific to the given repository. In any of the following tips, when I mention adding something to your `~/.gitconfig`, just remember you could also set that option for just one repository by adding it to `.git/config` instead.
### 3\. Aliases
Aliases are another thing you can put in your `~/.gitconfig`. These work just like aliases in the command shell—they set up a new command name that can invoke one or more other commands, often with a particular set of options or flags. They're super useful for longer, more complicated commands you use frequently.
You can define aliases using the `git config` command—for example, running `git config --global --add alias.st status` will make running `git st` do the same thing as running `git status`—but I find when defining aliases, it's frequently easier to just edit the `~/.gitconfig` file directly.
If you choose to go this route, you'll find that the `~/.gitconfig` file is an [INI file][2]. INI is basically a key-value file format with particular sections. When adding an alias, you'll be changing the `[alias]` section. For example, to define the same `git st` alias as above, add this to the file:
```
[alias]
st = status
```
(If there's already an `[alias]` section, just add the second line to that existing section.)
### 4\. Aliases to shell commands
Aliases aren't limited to just running other Git subcommands—you can also define aliases that run other shell commands. This is a fantastic way to deal with a recurring, infrequent, and complicated task: Once you've figured out how to do it once, preserve the command under an alias. For example, I have a few repositories where I've forked an open source project and made some local modifications that don't need to be contributed back to the project. I want to keep up-to-date with ongoing development work in the project but also maintain my local changes. To accomplish this, I need to periodically merge the changes from the upstream repo into my fork—which I do by using an alias I call `upstream-merge`. It's defined like this:
```
upstream-merge = !"git fetch origin -v && git fetch upstream -v && git merge upstream/master && git push"
```
The `!` at the beginning of the alias definition tells Git to run the command via the shell. This example involves running a number of `git` commands, but aliases defined in this way can run any shell command.
(Note that if you want to copy my `upstream-merge` alias, you'll need to make sure you have a Git remote named `upstream` pointed at the upstream repository you've forked from. You can add this by running `git remote add upstream <URL to repo>`.)
### 5\. Visualizing the commit graph
If you work on a project with a lot of branching activity, sometimes it can be difficult to get a handle on all the work that's happening and how it's all related. Various GUI tools allow you to get a picture of different branches and commits in what's called the "commit graph." For example, here's a section of one of my repositories visualized with the [GitLab][3] commit graph viewer:
![GitLab commit graph viewer][5]
John Anderson, CC BY
If you're a dedicated command-line user or somebody who finds switching tools to be distracting, it's nice to get a similar view of the commit graph from the command line. That's where the `--graph` argument to the `git log` command comes in:
![Repository visualized with --graph command][7]
John Anderson, CC BY
This is the same section of the same repo visualized with the following command:
```
git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
```
The `--graph` option adds the graph to the left side of the log, `--abbrev-commit` shortens the commit [SHAs][8], `--date=relative` expresses the dates in relative terms, and the `--pretty` bit handles all the other custom formatting. I have this aliased to `git lg`, and it is one of my top 10 most frequently run commands.
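If you want the same shortcut, the alias can be defined in the `[alias]` section of your `~/.gitconfig` like so:
```
[alias]
    lg = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
```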
### 6\. A nicer force-push
Sometimes, as hard as you try to avoid it, you'll find that you need to run `git push --force` to overwrite the history on a remote copy of your repository. You may have gotten some feedback that caused you to do an interactive rebase, or you may simply have messed up and want to hide the evidence.
One of the hazards with force pushes happens when somebody else has made changes on top of the same branch in the remote copy of the repository. When you force-push your rewritten history, those commits will be lost. This is where `git push --force-with-lease` comes in—it will not allow you to force-push if the remote branch has been updated, which will ensure you don't throw away someone else's work.
### 7\. git add -N
Have you ever used `git commit -a` to stage and commit all your outstanding changes in a single move, only to discover after you've pushed your commit that `git commit -a` ignores newly added files? You can work around this by using `git add -N` (think "notify") to tell Git about newly added files you'd like to be included in commits before you actually commit them for the first time.
### 8\. git add -p
A best practice when using Git is to make sure each commit consists of only a single logical change—whether that's a fix for a bug or a new feature. Sometimes when you're working, however, you'll end up with more than one commit's worth of change in your repository. How can you manage to divide things up so that each commit contains only the appropriate changes? `git add --patch` to the rescue!
This flag will cause the `git add` command to look at all the changes in your working copy and, for each one, ask if you'd like to stage it to be committed, skip over it, or defer the decision (as well as a few other more powerful options you can see by selecting `?` after running the command). `git add -p` is a fantastic tool for producing well-structured commits.
### 9\. git checkout -p
Similar to `git add -p`, the `git checkout` command will take a `--patch` or `-p` option, which will cause it to present each "hunk" of change in your local working copy and allow you to discard it—basically reverting your local working copy to what was there before your change.
This is fantastic when, for example, you've introduced a bunch of debug logging statements while chasing down a bug. After the bug is fixed, you can first use `git checkout -p` to remove all the new debug logging, then you `git add -p` to add the bug fix. Nothing is more satisfying than putting together a beautiful, well-structured commit!
### 10\. Rebase with command execution
Some projects have a rule that each commit in the repository must be in a working state—that is, at each commit, it should be possible to compile the code or the test suite should run without failure. This is not too difficult when you're working on a branch over time, but if you end up needing to rebase for whatever reason, it can be a little tedious to step through each rebased commit to make sure you haven't accidentally introduced a break.
Fortunately, `git rebase` has you covered with the `-x` or `--exec` option. `git rebase -x <cmd>` will run that command after each commit is applied in the rebase. So, for example, if you have a project where `npm run tests` runs your test suite, `git rebase -x "npm run tests"` (quoted so the whole command is passed to `-x`) would run the test suite after each commit is applied during the rebase. This lets you see if the test suite fails at any of the rebased commits, so you can confirm that the test suite is still passing at every commit.
### 11\. Time-based revision references
Many Git subcommands take a revision argument to specify what part of the repository to work on. This can be the SHA1 of a particular commit, a branch name, or even a symbolic name like `HEAD` (which refers to the most recent commit on the currently checked out branch). In addition to these simple forms, you can also append a specific date or time to mean "this reference, at this time."
This becomes very useful when you're dealing with a newly introduced bug and find yourself saying, "I know this worked yesterday! What changed?" Instead of staring at the output of `git log` trying to figure out what commit was changed when, you can simply run `git diff HEAD@{yesterday}`, and see all the changes that have happened since then. This also works with longer time periods (e.g., `git diff HEAD@{'2 months ago'}`) as well as exact dates (e.g., `git diff HEAD@{'2010-01-01 12:00:00'}`).
You can also use these date-based revision arguments with any Git subcommand that takes a revision argument. Find full details about which format to use in the man page for `gitrevisions`.
### 12\. The all-seeing reflog
Have you ever rebased away a commit, then discovered there was something in that commit you wanted to keep? You may have thought that information was lost forever and would need to be recreated. But if you committed it in your local working copy, it was added to the reference log (reflog), and you should still be able to access it.
Running `git reflog` will show you a list of all the activity for the current branch in your local working copy and also give you the SHA1 of each commit. Once you've found the commit you rebased away, you can run `git checkout <SHA1>` to check out that commit, copy any information you need, and then run `git checkout <branch name>` to return to the tip of the branch.
### That's all folks!
Hopefully at least one of these tips has taught you something new about Git, a 12-year-old project that's continuing to innovate and add new features. What's your favorite Git trick?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/12-git-tips-gits-12th-birthday
作者:[John SJ Anderson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/genehack
[1]:https://git-scm.com/
[2]:https://en.wikipedia.org/wiki/INI_file
[3]:https://gitlab.com/
[4]:/file/392941
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gui_graph.png?itok=3GovYfG1 (GitLab commit graph viewer)
[6]:/file/392936
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/console_graph.png?itok=XogY1P8M (Repository visualized with --graph command)
[8]:https://en.wikipedia.org/wiki/Secure_Hash_Algorithms

View File

@ -1,3 +1,5 @@
translating---geekpi
The Shuf Command Tutorial With Examples For Beginners
======

View File

@ -0,0 +1,192 @@
5 Best Feed Reader Apps for Linux
======
**Brief: Extensively use RSS feeds to stay updated with your favorite websites? Take a look at the best feed reader applications for Linux.**
[RSS][1] feeds were once the most widely used way to collect news and articles from different sources in one place. It is often perceived that [RSS usage is in decline][2]. However, there are still people (like me) who prefer opening an application that accumulates all their favorite websites' articles in one place, which they can read later even when they are not connected to the internet.
Feed readers make this easier by collecting all the published items from a website for anytime access. You don't need to open several browser tabs to visit your favorite websites and bookmark the posts you liked.
In this article, I'll share some of my favorite feed reader applications for the Linux desktop.
### Best Feed Readers for Linux
![Best Feed Readers for Linux][3]
As usual, Linux has multiple choices for feed readers, and in this article, we have compiled 5 good feed reader applications for you. The list is in no particular order.
#### 1\. Akregator Feed Reader
[Akregator][4] is a KDE product which is easy to use and powerful enough to provide the latest updates from news sites, blogs and RSS/Atom-enabled websites.
It comes with an internal browser for news reading and updates feeds in real time.
##### Features
* You can add a website's feed using the “Add Feed” option and define an interval for refreshing and updating subscribed feeds.
* It can store and archive content; the settings for this can be defined at a global level or per feed.
* Offers an option to import subscribed feeds from another browser or a past backup.
* Notifies you of unread feeds.
##### How to install Akregator
If you are running the KDE desktop, Akregator is most probably already installed on your system. If not, you can use the command below on Debian-based systems.
```
sudo apt install akregator
```
Once installed, you can directly add a website by clicking on the Feed menu, then **Add feed**, and giving the website name. This is how the It's FOSS feed looks when added.
![][5]
#### 2\. QuiteRSS
[QuiteRSS][6] is another free and open source RSS/Atom news feed reader with lots of features. There are additional features like proxy integration, an ad blocker, an integrated browser, and system tray integration. It's easy to keep feeds updated by setting up a refresh timer.
##### Features
* Automatic feed updates, either on startup or using a timer option.
* Searching for a feed URL using the website address and categorizing feeds into new, unread, starred and deleted sections.
* Embedded browser so that you don't have to leave the app.
* Hiding images, if you are only interested in text.
* Adblocker and better system tray integration.
* Multiple language support.
##### How to install QuiteRSS
You can install it from the QuiteRSS ppa.
```
sudo add-apt-repository ppa:quiterss/quiterss
sudo apt-get update
sudo apt-get install quiterss
```
![][7]
#### 3\. Liferea
Linux Feed Reader, aka [Liferea][8], is probably the most used feed aggregator on the Linux platform. It is fast and easy to use and supports RSS/Atom feeds. It has support for podcasts, and there is an option for adding custom scripts which can run depending on your actions.
There's browser integration, while you still have the option to open an item in a separate browser.
##### Features
* Liferea can download and save feeds from your favorite website to read offline.
* It can be synced with other RSS feed readers, making a transition easier.
* Support for Podcasts.
* Support for search folders, which allows users to save searches.
##### How to install Liferea
Liferea is available in the official repositories of almost all distributions. Ubuntu-based users can install it using the command below:
```
sudo apt-get install liferea
```
![][9]
#### 4\. FeedReader
[FeedReader][10] is a simple and elegant RSS desktop client for your web-based RSS accounts. It can work with Feedbin, Feedly, FreshRSS and local RSS, among others, and has options to share articles over email, tweet about them, etc.
##### Features
* There are multiple themes for formatting.
* You can customize it according to your preferences.
* Supports notifications and podcasts.
* Fast searches and various filters are present, along with several keyboard shortcuts to make your reading experience better.
##### How to install FeedReader
FeedReader is available as a Flatpak for almost every Linux distribution.
```
flatpak install http://feedreader.xarbit.net/feedreader-repo/feedreader.flatpakref
```
It is also available in Fedora repository:
```
sudo dnf install feedreader
```
And, in Arch User Repository.
```
yaourt -S feedreader
```
![][11]
#### 5\. Newsbeuter: RSS feed in terminal
[Newsbeuter][12] is an open source feed reader for terminal lovers. There are options to add and delete RSS feeds and to read the content in the terminal itself. Newsbeuter is loved by people who spend more time in the terminal and want their feeds to be free of the clutter of images and ads.
##### How to install Newsbeuter
```
sudo apt-get install newsbeuter
```
Once the installation completes, you can launch it using the command below:
```
newsbeuter
```
To add a feed in your list, edit the urls file and add the RSS feed.
```
vi ~/.newsbeuter/urls
>> http://feeds.feedburner.com/itsfoss
```
To read the feeds, launch newsbeuter and it will display all the posts.
![][13]
You can find useful commands at the bottom of the terminal, which will help you use newsbeuter. You can read its [manual page][14] for detailed information.
#### Final Words
To me, feed readers are still relevant, especially when you follow multiple websites and blogs. Offline access to your favorite websites' and blogs' content, with options to archive and search, is the biggest advantage of using a feed reader.
Do you use a feed reader on your Linux system? If yes, tell us your favorite one in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/feed-reader-apps-linux/
作者:[Ambarish Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ambarish/
[1]:https://en.wikipedia.org/wiki/RSS
[2]:http://andrewchen.co/the-death-of-rss-in-a-single-graph/
[3]:https://itsfoss.com/wp-content/uploads/2018/04/best-feed-reader-apps-linux.jpg
[4]:https://www.kde.org/applications/internet/akregator/
[5]:https://itsfoss.com/wp-content/uploads/2018/02/Akregator2-800x500.jpg
[6]:https://quiterss.org/
[7]:https://itsfoss.com/wp-content/uploads/2018/02/QuiteRSS2.jpg
[8]:https://itsfoss.com/liferea-rss-client/
[9]:https://itsfoss.com/wp-content/uploads/2018/02/Liferea-800x525.png
[10]:https://jangernert.github.io/FeedReader/
[11]:https://itsfoss.com/wp-content/uploads/2018/02/FeedReader2-800x465.jpg
[12]:https://newsbeuter.org/
[13]:https://itsfoss.com/wp-content/uploads/2018/02/newsbeuter.png
[14]:http://manpages.ubuntu.com/manpages/bionic/man1/newsbeuter.1.html

View File

@ -0,0 +1,171 @@
How To Setup Static File Server Instantly
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/serve-720x340.png)
Ever wanted to share your files or a project over the network, but didn't know how? No worries! Here is a simple utility named **“serve”** to share your files instantly over the network. This simple utility instantly turns your system into a static file server, allowing you to serve your files over the network. You can access the files from any device, regardless of its operating system. All you need is a web browser. This utility can also be used to serve static websites. It was formerly known as “list” and “micro-list”, but the name has been changed to “serve”, which is much more suitable for the purpose of this utility.
### Setup Static File Server Using Serve
To install “serve”, you need to install NodeJS and NPM first. Refer to the following link to install NodeJS and NPM on your Linux box.
Once NodeJS and NPM are installed, run the following command to install “serve”.
```
$ npm install -g serve
```
Done! Now it is time to serve files or folders.
The typical syntax for using “serve” is:
```
$ serve [options] <path-to-files-or-folders>
```
### Serve Specific files or folders
For example, let us share the contents of the **Documents** directory. To do so, run:
```
$ serve Documents/
```
Sample output would be:
![][2]
As you can see in the above screenshot, the contents of the given directory have been served over network via two URLs.
To access the contents from the local system itself, all you have to do is open your web browser and navigate to **<http://localhost:5000/>** URL.
![][3]
The serve utility displays the contents of the given directory in a simple layout. You can download files (right-click on them and choose “Save link as...”) or just view them in the browser.
If you want to open local address automatically in the browser, use **-o** flag.
```
$ serve -o Documents/
```
Once you run the above command, the serve utility will open your web browser automatically and display the contents of the shared item.
Similarly, to access the shared directory from a remote system over the network, type **<http://192.168.43.192:5000>** in the browser's address bar. Replace 192.168.43.192 with your system's IP address.
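If you are not sure of your system's IP address, you can usually find it on Linux with one of these commands:
```
$ hostname -I
$ ip addr show
```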
**Serve contents via different port**
As you may have noticed, the serve utility uses port **5000** by default. So, make sure port 5000 is allowed in your firewall or router. If it is blocked for some reason, you can serve the contents on a different port using the **-p** flag.
```
$ serve -p 1234 Documents/
```
The above command will serve the contents of Documents directory via port **1234**.
![][4]
To serve a file, instead of a folder, just give its full path like below.
```
$ serve Documents/Papers/notes.txt
```
The contents of the shared directory can be accessed by any user on the network as long as they know the path.
**Serve the entire $HOME directory**
Open your Terminal and type:
```
$ serve
```
This will share the contents of your entire $HOME directory over network.
To stop the sharing, press **CTRL+C**.
**Serve selective files or folders**
You may not want to share all files or directories, but only a few in a directory. You can exclude files or directories using the **-i** flag.
```
$ serve -i Downloads/
```
The above command will serve the contents of the current directory, excluding the **Downloads** directory.
**Serve contents only on localhost**
Sometimes, you may want to serve the contents only on the local system itself, not over the entire network. To do so, use the **-l** flag as shown below:
```
$ serve -l Documents/
```
This command will serve the **Documents** directory only on localhost.
![][5]
This can be useful when you're working on a shared server. All users on the system can access the share, but not remote users.
**Serve content using SSL**
Since we serve the contents over the local network, we don't really need SSL. However, the serve utility can share contents over SSL using the **--ssl** flag.
```
$ serve --ssl Documents/
```
![][6]
To access the shares via a web browser, use <https://localhost:5000> or <https://ip:5000>.
![][7]
**Serve contents with authentication**
In all the above examples, we served the contents without any authentication, so anyone on the network could access them. You might feel that some contents should only be accessible with a username and password.
To do so, use:
```
$ SERVE_USER=ostechnix SERVE_PASSWORD=123456 serve --auth
```
Now the users need to enter the username (i.e., **ostechnix** in our case) and the password (123456) to access the shares.
![][8]
The serve utility has some other features, such as disabling [**Gzip compression**][9], setting up CORS headers to allow requests from any origin, preventing the address from being automatically copied to the clipboard, etc. You can read the complete help section by running the following command:
```
$ serve help
```
And that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-setup-static-file-server-instantly/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-4.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-6.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-5-1.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-7-1.png
[9]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/

View File

@ -1,75 +0,0 @@
Translating by MjSeven
3 password managers for the Linux command line
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/password.jpg?itok=ec6z6YgZ)
We all want our passwords to be safe and secure. To do that, many people turn to password management applications like [KeePassX][1] or [Bitwarden][2].
If you spend a lot of time in a terminal window and are looking for a simpler solution, you'll want to check out one of the many password managers for the Linux command line. They're quick, easy to use, and secure.
Let's take a look at three of them.
### Titan
[Titan][3] is a password manager that doubles as a file-encryption tool. I'm not sure how well Titan works at encrypting files; I only looked at it as a password manager. In that capacity, it does a solid job.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/titan.png?itok=5QoQ1aY7)
Titan stores your passwords in an encrypted [SQLite database][4], which you create and add a master passphrase to when you first fire up the application. Tell Titan to add a password and it asks for a name to identify it, a username, the password itself, a URL, and a comment about the password.
You can get Titan to generate a password for you, and you can search your database by an entry's name or numeric ID, by the name or comment, or using regular expressions. Viewing a specific password, however, can be a bit clunky. You either have to list all passwords and scroll through them to find the one you want to use, or you can view the password by listing the details of an entry using its numeric ID (if you know it).
### Gopass
[Gopass][5] is billed as "the team password manager." Don't let that put you off. It's also great for personal use.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gopass.png?itok=1Uodlute)
Gopass is an update of the venerable Unix and Linux [Pass][6] password manager written in the Go programming language. In true Linux fashion, you can either [compile the source code][7] or [use an installer][8] to get gopass on your computer.
Before you start using gopass, make sure you have [GNU Privacy Guard (GPG)][9] and [Git][10] on your system. The former encrypts and decrypts your password store, and the latter signs commits to a [Git repository][11]. If gopass is for personal use, you still need Git. You just don't need to worry about signing commits. If you're interested, you can learn about those dependencies [in the documentation][12].
When you first start gopass, you need to create a password store and generate a [secret key][13] to secure that store. When you want to add a password (which gopass refers to as a secret), gopass asks you for information such as a URL, a username, and a note about the secret. You can have gopass generate the password for the secret you're adding, or you can enter one yourself.
As you need to, you can edit, view, or delete passwords. You can also view a specific password or copy it to your clipboard to paste it into a login form or window.
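To give a feel for the workflow, here is a rough sketch of a gopass session; the entry names are only examples, and the exact commands may vary slightly by version (gopass follows the pass command set, so check `gopass --help`):
```
# Create a new secret and let gopass generate a 24-character password for it
gopass generate websites/example.com 24

# Show the secret, or copy it straight to the clipboard with -c
gopass show websites/example.com
gopass show -c websites/example.com

# Edit or remove an entry
gopass edit websites/example.com
gopass rm websites/example.com
```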
### Kpcli
The open source password manager of choice for many people is either [KeePass][14] or [KeePassX][15]. [Kpcli][16] brings the features of KeePass and KeePassX to your nearest terminal window.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/kpcli.png?itok=kMmOHTJz)
Kpcli is a keyboard-driven shell that does most of what its graphical cousins can do. That includes opening a password database; adding and editing passwords and groups (which help you organize your passwords); or even renaming or deleting passwords and groups.
When you need to, you can copy a username and password to your clipboard to paste into a login form. To keep that information safe, kpcli also has a command to clear the clipboard. Not bad for a little terminal app.
Do you have a favorite command-line password manager? Why not share it by leaving a comment?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/3-password-managers-linux-command-line
作者:[Scott Nesbitt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
[1]:https://www.keepassx.org/
[2]:https://opensource.com/article/18/3/managing-passwords-bitwarden
[3]:https://www.titanpasswordmanager.org/
[4]:https://en.wikipedia.org/wiki/SQLite
[5]:https://www.justwatch.com/gopass/
[6]:https://www.passwordstore.org/
[7]:https://github.com/justwatchcom/gopass
[8]:https://justwatch.com/gopass/#install
[9]:https://www.gnupg.org
[10]:https://git-scm.com/
[11]:https://git-scm.com/book/en/v2/Git-Basics-Getting-a-Git-Repository
[12]:https://github.com/justwatchcom/gopass/blob/master/docs/setup.md
[13]:http://searchsecurity.techtarget.com/definition/private-key
[14]:https://keepass.info/
[15]:https://www.keepassx.org
[16]:http://kpcli.sourceforge.net/

View File

@ -0,0 +1,147 @@
A Desktop GUI Application For NPM
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png)
NPM, short for **N** ode **P** ackage **M** anager, is a command line package manager for installing NodeJS packages, or modules. We have already published a guide that describes how to [**manage NodeJS packages using NPM**][1]. As you may have noticed, managing NodeJS packages or modules with npm is not a big deal. However, if you're not comfortable with the CLI way of doing things, there is a desktop GUI application named **NDM** which can be used for managing NodeJS applications/modules. NDM, which stands for **N** PM **D** esktop **M** anager, is a free, open source graphical front-end for NPM that allows us to install, update, and remove NodeJS packages via a simple graphical window.
In this brief tutorial, we are going to learn about Ndm in Linux.
### Install NDM
NDM is available in AUR, so you can install it using any AUR helpers on Arch Linux and its derivatives like Antergos and Manjaro Linux.
Using [**Pacaur**][2]:
```
$ pacaur -S ndm
```
Using [**Packer**][3]:
```
$ packer -S ndm
```
Using [**Trizen**][4]:
```
$ trizen -S ndm
```
Using [**Yay**][5]:
```
$ yay -S ndm
```
Using [**Yaourt**][6]:
```
$ yaourt -S ndm
```
On RHEL based systems like CentOS, run the following command to install NDM.
```
$ echo "[fury] name=ndm repository baseurl=https://repo.fury.io/720kb/ enabled=1 gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update &&
```
On Debian, Ubuntu, Linux Mint:
```
$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm
```
NDM can also be installed using **Linuxbrew**. First, install Linuxbrew as described in the following link.
After installing Linuxbrew, you can install NDM using the following commands:
```
$ brew update
$ brew install ndm
```
On other Linux distributions, go to the [**NDM releases page**][7], download the latest version, compile and install it yourself.
### NDM Usage
Launch NDM either from the Menu or using the application launcher. This is how NDM's default interface looks.
![][9]
From here, you can install NodeJS packages/modules either locally or globally.
**Install NodeJS packages locally**
To install a package locally, first choose a project directory by clicking on the **“Add projects”** button on the Home screen and select the directory where you want to keep your project files. For example, I have chosen a directory named **“demo”** as my project directory.
Click on the project directory (i.e **demo** ) and then, click **Add packages** button.
![][10]
Type the package name you want to install and hit the **Install** button.
![][11]
Once installed, the packages will be listed under the projects directory. Simply click on the directory to view the list of installed packages locally.
![][12]
Similarly, you can create separate project directories and install NodeJS modules in them. To view the list of installed modules in a project, click on the project directory, and you will see the packages on the right side.
**Install NodeJS packages globally**
To install NodeJS packages globally, click on the **Globals** button on the left from the main interface. Then, click “Add packages” button, type the name of the package and hit “Install” button.
**Manage packages**
Click on any installed package and you will see various options at the top, such as
1. Version (to view the installed version),
2. Latest (to install latest available version),
3. Update (to update the currently selected package),
4. Uninstall (to remove the selected package) etc.
![][13]
NDM has two more options namely **“Update npm”** which is used to update the node package manager to latest available version, and **Doctor** that runs a set of checks to ensure that your npm installation has what it needs to manage your packages/modules.
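For reference, those two buttons correspond roughly to the following npm commands, which you could also run yourself in a terminal (assuming npm is already installed):
```
$ npm install -g npm@latest   # update the node package manager itself
$ npm doctor                  # run a set of checks on your npm environment
```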
### Conclusion
NDM makes the process of installing, updating, and removing NodeJS packages easier! You don't need to memorize the commands to perform those tasks. NDM lets us do them all with a few mouse clicks via a simple graphical window. For those who prefer not to type commands, NDM is a perfect companion for managing NodeJS packages.
Cheers!
**Resource:**
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:https://github.com/720kb/ndm/releases
[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png

View File

@ -0,0 +1,80 @@
A new approach to security instrumentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
How many of us have ever uttered the following phrase: “I hope this works!”?
Without a doubt, most of us have, likely more than once. Its not a phrase that inspires confidence, as it reveals doubts about our abilities or the functionality of whatever we are testing. Unfortunately, this very phrase defines our traditional security model all too well. We operate based on the assumption and the hope that the controls we put in place—from vulnerability scanning on web applications to anti-virus on endpoints—prevent malicious actors and software from entering our systems and damaging or stealing our information.
Penetration testing took a step toward combating that reliance on assumptions by actively trying to break into the network, inject malicious code into a web application, or spread “malware” by sending out phishing emails. But because pen testing consists of finding and poking holes in our different security layers, it fails to account for situations in which holes are actively opened. In security experimentation, we intentionally create chaos in the form of controlled, simulated incident behavior to objectively instrument our ability to detect and deter these types of activities.
> “Security experimentation provides a methodology for the experimentation of the security of distributed systems to build confidence in the ability to withstand malicious conditions.”
When it comes to security and complex distributed systems, a common adage in the chaos engineering community reiterates that “hope is not an effective strategy.” How often do we proactively instrument what we have designed or built to determine if the controls are failing? Most organizations do not discover that their security controls are failing until a security incident results from that failure. We believe that “Security incidents are not detective measures” and “Hope is not an effective strategy” should be the mantras of IT professionals operating effective security practices.
The industry has traditionally emphasized preventative security measures and defense-in-depth, whereas our mission is to drive new knowledge and insights into the security toolchain through detective experimentation. With so much focus on the preventative mechanisms, we rarely attempt beyond one-time or annual pen testing requirements to validate whether or not those controls are performing as designed.
With all of these constantly changing, stateless variables in modern distributed systems, it becomes next to impossible for humans to adequately understand how their systems behave, as this can change from moment to moment. One way to approach this problem is through robust systematic instrumentation and monitoring. For instrumentation in security, you can break down the domain into two primary buckets: **testing** , and what we call **experimentation**. Testing is the validation or assessment of a previously known outcome. In plain terms, we know what we are looking for before we go looking for it. On the other hand, experimentation seeks to derive new insights and information that was previously unknown. While testing is an important practice for mature security teams, the following example should help further illuminate the differences between the two, as well as provide a more tangible depiction of the added value of experimentation.
### Example scenario: Craft beer delivery
Consider a simple web service or web application that takes orders for craft beer deliveries.
This is a critical service for this craft beer delivery company, whose orders come in from its customers' mobile devices, the web, and via its API from restaurants that serve its craft beer. This critical service runs in the company's AWS EC2 environment and is considered by the company to be secure. The company passed its PCI compliance with flying colors last year and annually performs third-party penetration tests, so it assumes that its systems are secure.
This company also prides itself on its DevOps and continuous delivery practices by deploying sometimes twice in the same day.
After learning about chaos engineering and security experimentation, the company's development teams want to determine, on a continuous basis, how resilient and effective its security systems are to real-world events, and furthermore, to ensure that they are not introducing new problems into the system that the security controls are not able to detect.
The team wants to start small by evaluating port security and firewall configurations for their ability to detect, block, and alert on misconfigured changes to the port configurations on their EC2 security groups.
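To make the scenario concrete, here is a purely illustrative sketch of the kind of misconfigured change such an experiment injects. The article does not prescribe specific tooling, so the AWS CLI calls and the security group ID below are assumptions and placeholders, not the team's actual method:
```
# Hypothetical illustration only: open an unauthorized port on a placeholder
# security group, then revert it after the detection checks have been evaluated.
$ aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 8080 --cidr 0.0.0.0/0

# Revert the change once the experiment window closes.
$ aws ec2 revoke-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 8080 --cidr 0.0.0.0/0
```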
  * The team begins by summarizing its assumptions about the normal (steady) state.
  * It develops a hypothesis for port security in its EC2 instances.
  * It selects and configures the YAML file for the Unauthorized Port Change experiment.
  * That configuration designates the objects to randomly select from for targeting, as well as the port ranges and the number of ports that should be changed.
  * The team also configures when to run the experiment and shrinks the scope of its blast radius to ensure minimal business impact.
  * For this first test, the team has chosen to run the experiment in its staging environment as a single run.
  * In true Game Day style, the team has elected a Master of Disaster to run the experiment during a predefined two-hour window. During that window, the Master of Disaster will execute the experiment on one of the EC2 instance security groups.
* Once the Game Day has finished, the team begins to conduct a thorough, blameless post-mortem exercise where the focus is on the results of the experiment against the steady state and the original hypothesis. The questions would be something similar to the following:
### Post-mortem questions
* Did the firewall detect the unauthorized port change?
* If the change was detected, was it blocked?
  * Did the firewall log useful information to the log aggregation tool?
* Did the SIEM throw an alert on the unauthorized change?
* If the firewall did not detect the change, did the configuration management tool discover the change?
* Did the configuration management tool report good information to the log aggregation tool?
* Did the SIEM finally correlate an alert?
* If the SIEM threw an alert, did the Security Operations Center get the alert?
* Was the SOC analyst who got the alert able to take action on the alert, or was necessary information missing?
  * If the SOC analyst determined the alert to be credible, was Security Incident Response able to conduct triage activities easily from the data?
The acknowledgment and anticipation of failure in our systems have already begun unraveling our assumptions about how our systems work. Our mission is to take what we have learned and apply it more broadly to begin to truly address security weaknesses proactively, going beyond the reactive processes that currently dominate traditional security models.
As we continue to explore this new domain, we will be sure to post our findings. For those interested in learning more about the research or getting involved, please feel free to contact [Aaron Rinehart][1] or [Grayson Brewer][2].
Special thanks to Samuel Roden for the insights and thoughts provided in this article.
**[See our related story,[Is the term DevSecOps necessary?][3]]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/new-approach-security-instrumentation
作者:[Aaron Rinehart][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/aaronrinehart
[1]:https://twitter.com/aaronrinehart
[2]:https://twitter.com/BrewerSecurity
[3]:https://opensource.com/article/18/4/devsecops

View File

@ -0,0 +1,352 @@
Getting started with Jenkins Pipelines
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe)
Jenkins is a well-known open source continuous integration and continuous development automation tool. It has an excellent supporting community, and hundreds of plugins and developers have been using it for years.
This article will provide a brief guide on how to get started with Pipelines and multibranch pipelines.
Why pipelines?
* Developers can automate the integration, testing, and deployment of their code, going from source code to product consumers many times using one tool.
  * Pipelines "as code," known as Jenkinsfiles, can be saved in any source control system. In previous Jenkins versions, jobs were only configured using the UI. With Jenkinsfiles, pipelines are more maintainable and portable.
* Multi-branch pipelines integrate with Git so that different branches, features, and releases can have independent pipelines enabling each developer to customize their development/deployment process.
* Non-technical members of a team can trigger and customize builds using parameters, analyze test reports, receive email alerts and have a better understanding of the build and deployment process through the pipeline stage view (improved in latest versions with the Blue Ocean UI).
* Jenkins can also be [installed using Docker][1] and pipelines can interact with [Docker agents][2].
### Requirements:
  * [Jenkins 2.89.2][3] (WAR) with Java 8 is the version used in this how-to (a minimal way to launch the WAR is sketched right after this list)
* Plugins used (To install: `Manage Jenkins → Manage Plugins →Available`):
* Pipeline: [declarative][4]
* [Blue Ocean][5]
* [Cucumber reports][6]
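If you grabbed the WAR mentioned above, a minimal way to launch it is the following; this assumes Java 8 is on your PATH, and the port is just an example:
```
$ java -jar jenkins.war --httpPort=8080
# then browse to http://localhost:8080 and follow the setup wizard
```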
## Getting started with Jenkins Pipelines
If you have not used Jenkins Pipelines before, I recommend [reading the documentation][7] before getting started here, as it includes a complete description and introduction to the technology as well as the benefits of using it.
This is the Jenkinsfile I used (you can also access this [code][8] on GitHub):
```
pipeline {
    agent any
    stages {
        stage('testing pipeline') {
            steps {
                echo 'test1'
                sh 'mkdir from-jenkins'
                sh 'touch from-jenkins/test.txt'
            }
        }
    }
}
```
1\. Click on **New Item**.
2\. Name the project, select **Pipeline** , and click **OK**.
3\. The configuration page displays once the project is created. In the **Definition** segment, you must decide to either obtain the Jenkinsfile from source control management (SCM) or create the Pipeline script in Jenkins. Hosting the Jenkinsfile in SCM is recommended so that it is portable and maintainable.
The SCM I chose was Git with simple user and pass credentials (SSH can also be used). By default, Jenkins will look for a Jenkinsfile in that repository unless it's specified otherwise in the **Script Path** directory.
4\. Go back to the job page after saving the Jenkinsfile and select **Build Now**. Jenkins will trigger the job. Its first stage is to pull down the Jenkinsfile from SCM. It reports any changes from the previous run and executes it.
Clicking on **Stage View** provides console information:
### Using Blue Ocean
Jenkins' [Blue Ocean][9] provides a better UI for Pipelines. It is accessible from the job's main page (see image above).
This simple Pipeline has one stage (in addition to the default stage): **Checkout SCM** , which pulls the Jenkinsfile in three steps. The first step echoes a message, the second creates a directory named `from-jenkins` in the Jenkins workspace, and the third puts a file called `test.txt` inside that directory. The path for the Jenkins workspace is `$user/.jenkins/workspace`, located in the machine where the job was executed. In this example, the job is executed in any available node. If there is no other node connected then it is executed in the machine where Jenkins is installed—check Manage Jenkins > Manage nodes for information about the nodes.
Another way to create a Pipeline is with Blue Ocean's plugin. (The following screenshots show the same repo.)
1\. Click **Open Blue Ocean**.
2\. Click **New Pipeline**.
3\. Select **SCM** and enter the repository URL; an SSH key will be provided. This must be added to your Git SSH keys (in `Settings →SSH and GPG keys`).
4\. Jenkins will automatically detect the branch and the Jenkinsfile, if present. It will also trigger the job.
### Pipeline development:
The following Jenkinsfile triggers Cucumber tests from a GitHub repository, creates and archives a JAR, sends emails, and exposes different ways the job can execute with variables, parallel stages, etc. The Java project used in this demo was forked from [cucumber/cucumber-jvm][10] to [mluyo3414/cucumber-jvm][11]. You can also access the [Jenkinsfile][12] on GitHub. Since the Jenkinsfile is not in the repository's top directory, the configuration has to be changed to another path:
```
pipeline {
    // 1. runs in any agent, otherwise specify a slave node
    agent any
    parameters {
// 2.variables for the parametrized execution of the test: Text and options
        choice(choices: 'yes\nno', description: 'Are you sure you want to execute this test?', name: 'run_test_only')
        choice(choices: 'yes\nno', description: 'Archived war?', name: 'archive_war')
        string(defaultValue: "your.email@gmail.com", description: 'email for notifications', name: 'notification_email')
    }
//3. Environment variables
environment {
firstEnvVar= 'FIRST_VAR'
secondEnvVar= 'SECOND_VAR'
thirdEnvVar= 'THIRD_VAR'
}
//4. Stages
    stages {
        stage('Test'){
             //conditional for parameter
            when {
                environment name: 'run_test_only', value: 'yes'
            }
            steps{
                sh 'cd examples/java-calculator && mvn clean integration-test'
            }
        }
//5. demo parallel stage with script
        stage ('Run demo parallel stages') {
steps {
        parallel(
        "Parallel stage #1":
                  {
                  //running a script instead of DSL. In this case to run an if/else
                  script{
                    if (env.run_test_only =='yes')
                        {
                        echo env.firstEnvVar
                        }
                    else
                        {
                        echo env.secondEnvVar
                        }
                  }
         },
        "Parallel stage #2":{
                echo "${thirdEnvVar}"
                }
                )
             }
        }
    }
//6. post actions for success or failure of job. Commented out in the following code: Example on how to add a node where a stage is specifically executed. Also, PublishHTML is also a good plugin to expose Cucumber reports but we are using a plugin using Json.
   
post {
        success {
        //node('node1'){
echo "Test succeeded"
            script {
    // configured from using gmail smtp Manage Jenkins-> Configure System -> Email Notification
    // SMTP server: smtp.gmail.com
    // Advanced: Gmail user and pass, use SSL and SMTP Port 465
    // Capitalized variables are Jenkins variables see https://wiki.jenkins.io/display/JENKINS/Building+a+software+project
                mail(bcc: '',
                     body: "Run ${JOB_NAME}-#${BUILD_NUMBER} succeeded. To get more details, visit the build results page: ${BUILD_URL}.",
                     cc: '',
                     from: 'jenkins-admin@gmail.com',
                     replyTo: '',
                     subject: "${JOB_NAME} ${BUILD_NUMBER} succeeded",
                     to: env.notification_email)
                     if (env.archive_war =='yes')
                     {
             // ArchiveArtifact plugin
                        archiveArtifacts '**/java-calculator-*-SNAPSHOT.jar'
                      }
                       // Cucumber report plugin
                      cucumber fileIncludePattern: '**/java-calculator/target/cucumber-report.json', sortingMethod: 'ALPHABETICAL'
            //publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: true, reportDir: '/home/reports', reportFiles: 'reports.html', reportName: 'Performance Test Report', reportTitles: ''])
            }
        //}
        }
        failure {
            echo "Test failed"
            mail(bcc: '',
                body: "Run ${JOB_NAME}-#${BUILD_NUMBER} failed. To get more details, visit the build results page: ${BUILD_URL}.",
                 cc: '',
                 from: 'jenkins-admin@gmail.com',
                 replyTo: '',
                 subject: "${JOB_NAME} ${BUILD_NUMBER} failed",
                 to: env.notification_email)
                 cucumber fileIncludePattern: '**/java-calculator/target/cucumber-report.json', sortingMethod: 'ALPHABETICAL'
//publishHTML([allowMissing: true, alwaysLinkToLastBuild: false, keepAll: true, reportDir: '/home/tester/reports', reportFiles: 'reports.html', reportName: 'Performance Test Report', reportTitles: ''])
        }
    }
}
```
Always check **Pipeline Syntax** to see how to use the different plugins in the Jenkinsfile.
An email notification indicates the build was successful:
Archived JAR from a successful build:
You can access **Cucumber reports** on the same page.
## How to create a multibranch pipeline
If your project already has a Jenkinsfile, follow the [**Multibranch Pipeline** project][13] instructions in Jenkins' docs. It uses Git and assumes credentials are already configured. This is how the configuration looks in the traditional view:
### If this is your first time creating a Pipeline, follow these steps:
1\. Select **Open Blue Ocean**.
2\. Select **New Pipeline**.
3\. Select **Git** and insert the Git repository address. This repository does not currently have a Jenkinsfile. An SSH key will be generated; it will be used in the next step.
4\. Go to GitHub. Click on the profile avatar in the top-right corner and select Settings. Then select **SSH and GPG Keys** from the left-hand menu and insert the SSH key Jenkins provides.
5\. Go back to Jenkins and click **Create Pipeline**. If the project does not contain a Jenkinsfile, Jenkins will prompt you to create a new one.
6\. Once you click **Create Pipeline** , an interactive Pipeline diagram will prompt you to add stages by clicking **+**. You can add parallel or sequential stages and multiple steps to each stage. A list offers different options for the steps.
7\. The following diagram shows three stages (Stage 1, Stage 2a, and Stage 2b) with simple print messages indicating steps. You can also add environment variables and specify in which agent the Jenkinsfile will be executed.
Click **Save** , then commit the new Jenkinsfile by clicking **Save & Run**.
You can also add a new branch.
8\. The job will execute.
If a new branch was added, you can see it in GitHub.
9\. If another branch with a Jenkinsfile is created, you can discover it by clicking **Scan Multibranch Pipeline Now**. In this case, a new branch called `new-feature-2` is created in GitHub from Master (only branches with Jenkinsfiles are displayed in Jenkins).
After scanning, the new branch appears in Jenkins.
This new feature was created using GitHub directly; Jenkins will detect new branches when it performs a scan. If you don't want the newly discovered Pipelines to be executed when discovered, change the settings by clicking **Configure** on the job's Multibranch Pipeline main page and adding the property **Suppress automatic SCM triggering**. This way, Jenkins will discover new Pipelines but they will have to be manually triggered.
This article was originally published on the [ITNext channel][14] on Medium and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
作者:[Miguel Suarez][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mluyo3414
[1]:https://jenkins.io/doc/book/installing/#downloading-and-running-jenkins-in-docker
[2]:https://jenkins.io/doc/book/pipeline/docker/
[3]:https://jenkins.io/doc/pipeline/tour/getting-started/
[4]:https://plugins.jenkins.io/pipeline-model-definition
[5]:https://plugins.jenkins.io/blueocean
[6]:https://plugins.jenkins.io/cucumber-reports
[7]:https://jenkins.io/doc/book/pipeline/
[8]:https://github.com/mluyo3414/jenkins-test
[9]:https://jenkins.io/projects/blueocean/
[10]:https://github.com/cucumber/cucumber-jvm
[11]:https://github.com/mluyo3414/cucumber-jvm
[12]:https://github.com/mluyo3414/cucumber-jvm/blob/master/examples/java-calculator/Jenkinsfile
[13]:https://jenkins.io/doc/book/pipeline/multibranch/#creating-a-multibranch-pipeline
[14]:https://itnext.io/jenkins-pipelines-889420409510

View File

@ -0,0 +1,123 @@
translating---geekpi
How To Check User Created Date On Linux
======
Do you know how to check the date on which a user account was created on a Linux system? If so, what are the ways to do it?
Have you succeeded in finding it? If so, how?
The Linux operating system doesn't really track this information, so what are the alternative ways to get it?
You might ask why you would want to check this.
In some cases you may need this information, and these methods will be very helpful then.
It can be verified using the 7 methods below.
* Using /var/log/secure file
* Using aureport utility
* Using .bash_logout file
* Using chage Command
* Using useradd Command
* Using passwd Command
* Using last Command
### Method-1: Using /var/log/secure file
It stores all security-related messages, including authentication failures and authorization privileges. It also tracks sudo logins, SSH logins, and other errors logged by the system security services daemon.
```
# grep prakash /var/log/secure
Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new group: name=prakash, GID=501
Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new user: name=prakash, UID=501, GID=501, home=/home/prakash, shell=/bin/bash
Apr 12 04:07:34 centos.2daygeek.com passwd: pam_unix(passwd:chauthtok): password changed for prakash
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: Accepted password for prakash from 103.5.134.167 port 60554 ssh2
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: pam_unix(sshd:session): session opened for user prakash by (uid=0)
```
### Method-2: Using aureport utility
The aureport utility allows you to generate summary and columnar reports on the events recorded in Audit log files. By default, all audit.log files in the /var/log/audit/ directory are queried to create the report.
```
# aureport --auth | grep prakash
46. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 288
47. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 291
```
### Method-3: Using .bash_logout file
The .bash_logout file in a user's home directory has a special meaning to bash: it provides a way to execute commands when the user logs out of the system.
We can check the Change date of the .bash_logout file in the user's home directory. This file is placed in the home directory when the account is created, so its Change timestamp usually reflects the account creation date (as long as nothing has modified the file since).
```
# stat /home/prakash/.bash_logout
File: `/home/prakash/.bash_logout'
Size: 18 Blocks: 8 IO Block: 4096 regular file
Device: 801h/2049d Inode: 256153 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 501/ prakash) Gid: ( 501/ prakash)
Access: 2017-03-22 20:15:00.000000000 -0400
Modify: 2017-03-22 20:15:00.000000000 -0400
Change: 2018-04-12 04:07:18.283000323 -0400
```
### Method-4: Using chage Command
chage stands for "change age". This command allows the user to manage password expiry information. The chage command changes the number of days between password changes and the date of the last password change.
This information is used by the system to determine when a user must change his/her password. This works as a creation-date check only if the user has not changed the password since the account was created.
```
# chage --list prakash
Last password change : Apr 12, 2018
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
### Method-5: Using useradd Command
The useradd command is used to create new accounts in Linux. By default, it won't record the creation date, so we have to embed the date ourselves using the “Comment” (`-c`) option.
```
# useradd -m prakash -c `date +%Y/%m/%d`
# grep prakash /etc/passwd
prakash:x:501:501:2018/04/12:/home/prakash:/bin/bash
```
### Method-6: Using passwd Command
The passwd command is used to assign passwords to local accounts or users. If the user has not changed his password since the account's creation date, then you can use the passwd command to find out the date of the last password reset.
```
# passwd -S prakash
prakash PS 2018-04-11 0 99999 7 -1 (Password set, MD5 crypt.)
```
### Method-7: Using last Command
The last command reads the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created.
```
# last | grep "prakash"
prakash pts/2 103.5.134.167 Thu Apr 12 04:08 still logged in
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
作者:[Prakash Subramanian][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/

View File

@ -0,0 +1,137 @@
Finding what youre looking for on Linux
======
![](https://images.idgesg.net/images/article/2018/04/binoculars-100754967-large.jpg)
It isn't hard to find what you're looking for on a Linux system — a file or a command — but there are a _lot_ of ways to go looking.
### 7 commands to find Linux files
#### find
The most obvious is undoubtedly the **find** command, and find has become easier to use than it was years ago. It used to require a starting location for your search, but these days, you can also use find with just a file name or regular expression if youre willing to confine your search to the local directory.
```
$ find e*
empty
examples.desktop
```
In this way, it works much like the **ls** command and isn't doing much of a search.
For more relevant searches, find requires a starting point and some criteria for your search (unless you simply want it to provide a recursive listing of that starting point's directory). The command **find . -type f** will recursively list all regular files starting with the current directory, while **find ~nemo -type f -empty** will find empty files in Nemo's home directory.
```
$ find ~nemo -type f -empty
/home/nemo/empty
```
**Also on Network world:[11 pointless but awesome Linux terminal tricks][1]**
#### locate
The name of the **locate** command suggests that it does basically the same thing as find, but it works entirely differently. Where the **find** command can select files based on a variety of criteria — name, size, owner, permissions, state (such as empty), etc. — with a selectable depth for the search, the **locate** command looks through a file called /var/lib/mlocate/mlocate.db to find what you're looking for. That db file is periodically updated, so a locate of a file you just created will probably fail to find it. If that bothers you, you can run the updatedb command and get the update to happen right away.
```
$ sudo updatedb
```
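Once the database is up to date, locate just takes a name or pattern; for example (the exact matches will vary by system):
```
$ locate mlocate.db
/var/lib/mlocate/mlocate.db
```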
#### mlocate
The **mlocate** command works like the **locate** command and uses the same mlocate.db file as locate.
#### which
The **which** command works very differently than the **find** and **locate** commands. It uses your search path and checks each directory on it for an executable with the file name youre looking for. Once it finds one, it stops searching and displays the full path to that executable.
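For example, asking which copy of locate would run typically looks like this (the path will vary by distribution):
```
$ which locate
/usr/bin/locate
```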
The primary benefit of the **which** command is that it answers the question, “If I enter this command, what executable file will be run?” It ignores files that arent executable and doesnt list all executables on the system with that name — just the one that it finds first. If you wanted to find _all_ executables that have some name, you could run a find command like this, but it might take considerably longer to run the very efficient **which** command.
```
$ find / -name locate -perm -a=x 2>/dev/null
/usr/bin/locate
/etc/alternatives/locate
```
In this find command, we're looking for all executables (files that can be run by anyone) named “locate”. We're also electing not to view all of the “Permission denied” messages that would otherwise clutter our screens.
#### whereis
The **whereis** command works a lot like the **which** command, but it provides more information. Instead of just looking for executables, it also looks for man pages and source files. Like the **which** command, it uses your search path ($PATH) to drive its search.
```
$ whereis locate
locate: /usr/bin/locate /usr/share/man/man1/locate.1.gz
```
#### whatis
The **whatis** command has its own unique mission. Instead of actually finding files, it looks for information in the man pages for the command you are asking about and provides the brief description of the command from the top of the man page.
```
$ whatis locate
locate (1) - find files by name
```
If you ask about a script that youve just set up, it wont have any idea what youre referring to and will tell you so.
```
$ whatis cleanup
cleanup: nothing appropriate.
```
#### apropos
The **apropos** command is useful when you know what you want to do, but you have no idea what command you should be using to do it. If you were wondering how to locate files, for example, the commands “apropos find” and “apropos locate” would have a lot of suggestions to offer.
```
$ apropos find
File::IconTheme (3pm) - find icon directories
File::MimeInfo::Applications (3pm) - Find programs to open a file by mimetype
File::UserDirs (3pm) - find extra media and documents directories
find (1) - search for files in a directory hierarchy
findfs (8) - find a filesystem by label or UUID
findmnt (8) - find a filesystem
gst-typefind-1.0 (1) - print Media type of file
ippfind (1) - find internet printing protocol printers
locate (1) - find files by name
mlocate (1) - find files by name
pidof (8) - find the process ID of a running program.
sane-find-scanner (1) - find SCSI and USB scanners and their device files
systemd-delta (1) - Find overridden configuration files
xdg-user-dir (1) - Find an XDG user dir
$
$ apropos locate
blkid (8) - locate/print block device attributes
deallocvt (1) - deallocate unused virtual consoles
fallocate (1) - preallocate or deallocate space to a file
IO::Tty (3pm) - Low-level allocate a pseudo-Tty, import constants.
locate (1) - find files by name
mlocate (1) - find files by name
mlocate.db (5) - a mlocate database
mshowfat (1) - shows FAT clusters allocated to file
ntfsfallocate (8) - preallocate space to a file on an NTFS volume
systemd-sysusers (8) - Allocate system users and groups
systemd-sysusers.service (8) - Allocate system users and groups
updatedb (8) - update a database for mlocate
updatedb.mlocate (8) - update a database for mlocate
whereis (1) - locate the binary, source, and manual page files for a...
which (1) - locate a command
```
### Wrap-up
The commands available on Linux for locating and identifying files are quite varied, but they're all very useful.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3268768/linux/finding-what-you-re-looking-for-on-linux.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb

View File

@ -0,0 +1,89 @@
Redcore Linux Makes Gentoo Easy
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore.jpg?itok=SfsuPD0w)
Raise your hand if you've always wanted to try [Gentoo Linux][1] but never did because you didn't have either the time or the skills to invest in such a challenging installation. I'm sure there are plenty of Linux users out there not willing to admit this, but it's okay, really; installing Gentoo is a challenge, and it can be very time consuming. In the end, however, installing Gentoo will result in a very personalized Linux desktop that offers the fulfillment of saying, “I did it!”
So, whats a curious Linux user to do, when they want to experience this elite distribution? One option is to turn to the likes of [Redcore Linux][2]. Redcore does what many have tried (and few have succeeded in doing) in bringing Gentoo to the masses. In fact, [Sabayon][3] Linux is the only other distro I can think of thats truly succeeded in bringing a level of simplicity to Gentoo Linux that many users can enjoy. And while Sabayon is still very much in active development, its good to know there are others attempting what might have once been deemed impossible:
### Making Gentoo Linux easy
Instead of building your desktop piece by piece, system by system, Redcore (like Sabayon) brings a much more standard installation to the process. Unlike Sabayon (which gives you the option of GNOME, KDE, Xfce, Mate, or Fluxbox editions), Redcore offers a version that ships with two possible desktop options: the [LXQt][4] desktop and [Openbox][5]. LXQt is a lightweight desktop that offers plenty of configuration options and performs quite well on older hardware, whereas Openbox is a very minimalist take on the desktop. In fact, once you log into the Openbox desktop, you'll be left wondering if something has gone wrong (until you right-click on the desktop to see the solitary menu).
If you're looking for a more modern take on the desktop, neither LXQt nor Openbox will be what you're looking for. However, there is no doubt that the combination of a rolling-release Gentoo-lite system with the LXQt or Openbox desktop will perform quite well.
The official description of the distribution is:
Redcore Linux is a distribution based on Gentoo Linux (stable + some unstable) and a continuation of, now defunct, Kogaion Linux. Kogaion Linux itself was a distribution based initially on Sabayon Linux, and later on Gentoo Linux and it was developed by RogentOS Development Group since 2011. Ghiunhan Mamut (aka V3n3RiX) himself joined RogentOS Development Group in January 2014.
If you know much about how Gentoo is structured, Redcore Linux is built from Gentoo Linux stage3. Stage3 is a tarball containing a populated directory structure from a basic Gentoo system; it contains no kernel, only the binaries and libraries essential for bootstrapping. On top of stage3, the Redcore developers add a kernel, a bootloader, and a few other items (such as dbus and Dracut), as well as configure the init system (OpenRC).
With all of that out of the way, lets see what the installation of Redcore is like and how well it can serve as your desktop distribution.
### Installation
As youve probably expected, the installation of Redcore is incredibly simple. Download the live ISO image, burn it to a CD/DVD or USB, insert the installation media, boot the device, log into the desktop (live username/password is redcore/redcore) and click the installer icon on the desktop. The installer used by Redcore is [Calamares][6], which means the installation is incredibly easy and, in an instant, familiar (Figure 1).
Everything with Calamares is automatic. In other words, you wont have to manually partition your drive or select individual packages for installation. You should be able to start and finish a Redcore installation in five or ten minutes. Once the installation completes, reboot and log in with the username/password you created during installation.
### Usage
Upon login, you can select between LXQt and Openbox. I highly recommend against using Openbox. Why? Because nothing will open from the menu. I was actually quite surprised to find the Openbox desktop fairly unusable upon installation. With that in mind, select the LXQt option and be done with it.
Upon logging in, you'll be greeted by a fairly straightforward desktop. Click on the menu button (bottom right of the screen) and search through the menu hierarchy to launch an application. The list of installed applications is fairly straightforward, with the exception of finding [Steam][7] and [Wine][8] pre-installed. You might be surprised, considering Redcore is a rolling distribution, that many of the user-crucial applications are out of date. Take, for instance, LibreOffice. Redcore ships with 5.4.5.1, while the Still release of LibreOffice is currently at 5.4.6. Open the Sisyphus GUI (the front end for the Sisyphus package manager) and you'll see that LibreOffice is, according to the package manager, up to date at 5.4.5.1 (Figure 2).
![ Sisyphus][10]
Figure 2: The Sisyphus GUI package manager.
[Used with permission][11]
If you do see packages available for upgrade (which you might), click the upgrade button and allow the upgrade to complete. Considering this is a rolling release, you should be up to date. However, you can search through Sisyphus, locate new packages to install, and install them with ease. Installation with the Sisyphus front end is quite user-friendly.
### That default browser
You won't find a copy of Firefox or Chrome installed on Redcore. Instead, QupZilla serves as the default browser. When you open the default browser (or if you click on the Ask for help icon on the desktop) you will find the preconfigured home page to be the [redcorelinux freenode.net page][12]. Instead of being greeted by a hand-crafted application geared toward helping new users, one must choose a nickname and venture into the world of IRC. Although one might be inclined to think that does new users a disservice, one must consider the type of “new” user Redcore will be serving: these aren't going to be new-to-Linux users. Instead, Redcore knows its users and knows many of them are already familiar with IRC. That means users don't have to turn to Google to search for answers. Instead, they can chat with other users and even developers to solve their problems. This, of course, does depend on those users (who might be capable of answering questions) actually being logged into the redcorelinux channel on freenode.
### That default theme
Im going to make a confession here. Ive never understood the whole “dark theme” preference. I do understand that taste is a subjective issue, but my taste tends to lean toward the lighter themes. Thats not a problem. To change the theme for the LXQt desktop, open the menu, type desktop in the search field, and then select Customize Look and Feel. In the resulting window (Figure 3), you can select from the short list of theme options.
![desktop][14]
Figure 3: Changing the desktop theme in Redcore.
[Used with permission][11]
### That target audience
So who is Redcore's best target audience? If you're looking to gain the benefits of Gentoo Linux without having to go through the exhausting “get up to speed” and installation process required to compile one of the most challenging operating systems on the planet, Redcore might be what you're looking for. It's a very simplified means of enjoying a Gentoo-less take on Gentoo Linux. Of course, if you're looking to enjoy Gentoo with a more modern desktop, I would highly recommend [Sabayon][3]. However, the LXQt lightweight desktop will certainly give life to old hardware. And Redcore does this with a bit of Gentoo style.
Learn more about Linux through the free ["Introduction to Linux" ][15] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/4/redcore-linux-makes-gentoo-easy
作者:[JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.gentoo.org/
[2]:https://redcorelinux.org/
[3]:http://www.sabayon.org/
[4]:https://lxqt.org/
[5]:http://openbox.org/wiki/Main_Page
[6]:https://calamares.io/about/
[7]:http://store.steampowered.com/
[8]:https://www.winehq.org/
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore_2.jpg?itok=ubNC-htJ ( Sisyphus)
[11]:https://www.linux.com/licenses/category/used-permission
[12]:http://webchat.freenode.net/?channels=redcorelinux
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore_3.jpg?itok=FKg67lrS (desktop)
[15]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,192 @@
The df Command Tutorial With Examples For Beginners
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/df-command-1-720x340.png)
In this guide, we are going to learn to use the **df** command. The df command, short for **D** isk **F** ree, reports file system disk space usage. It displays the amount of disk space available on the file systems of a Linux system. The df command is not to be confused with the **du** command. Both serve different purposes. The df command reports **how much disk space we have** (i.e. free space), whereas the du command reports **how much disk space is being consumed** by files and folders. Hope I made myself clear. Let us go ahead and see some practical examples of the df command, so you can understand it better.
### The df Command Tutorial With Examples
**1\. View entire file system disk space usage**
Run df command without any arguments to display the entire file system disk space.
```
$ df
```
**Sample output:**
```
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
/dev/sda2 478425016 428790352 25308980 95% /
tmpfs 4038880 34396 4004484 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11636 4027244 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 95054 55724 32162 64% /boot
tmpfs 807776 28 807748 1% /run/user/1000
```
![][2]
As you can see, the result is divided into six columns. Let us see what each column means.
* **Filesystem** the filesystem on the system.
* **1K-blocks** the size of the filesystem, measured in 1K blocks.
* **Used** the amount of space used in 1K blocks.
* **Available** the amount of available space in 1K blocks.
* **Use%** the percentage that the filesystem is in use.
* **Mounted on** the mount point where the filesystem is mounted.
**2\. Display file system disk usage in human readable format**
As you may have noticed in the above example, the usage is shown in 1K blocks. If you want to display it in human readable format, use the **-h** flag.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
dev 3.9G 0 3.9G 0% /dev
run 3.9G 1.1M 3.9G 1% /run
/dev/sda2 457G 409G 25G 95% /
tmpfs 3.9G 27M 3.9G 1% /dev/shm
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 12M 3.9G 1% /tmp
/dev/loop0 83M 83M 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93M 55M 32M 64% /boot
tmpfs 789M 28K 789M 1% /run/user/1000
```
Now look at the **Size** and **Avail** columns, the usage is shown in GB and MB.
**3\. Display disk space usage only in MB**
To view file system disk space usage only in Megabytes, use **-m** flag.
```
$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
dev 3939 0 3939 0% /dev
run 3945 2 3944 1% /run
/dev/sda2 467212 418742 24716 95% /
tmpfs 3945 26 3920 1% /dev/shm
tmpfs 3945 0 3945 0% /sys/fs/cgroup
tmpfs 3945 12 3933 1% /tmp
/dev/loop0 83 83 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93 55 32 64% /boot
tmpfs 789 1 789 1% /run/user/1000
```
**4\. List inode information instead of block usage**
We can list inode information instead of block usage by using **-i** flag as shown below.
```
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
dev 1008304 439 1007865 1% /dev
run 1009720 649 1009071 1% /run
/dev/sda2 30392320 844035 29548285 3% /
tmpfs 1009720 86 1009634 1% /dev/shm
tmpfs 1009720 18 1009702 1% /sys/fs/cgroup
tmpfs 1009720 3008 1006712 1% /tmp
/dev/loop0 12829 12829 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 25688 390 25298 2% /boot
tmpfs 1009720 29 1009691 1% /run/user/1000
```
**5\. Display the file system type**
To display the file system type, use **-T** flag.
```
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
dev devtmpfs 4033216 0 4033216 0% /dev
run tmpfs 4038880 1120 4037760 1% /run
/dev/sda2 ext4 478425016 428790896 25308436 95% /
tmpfs tmpfs 4038880 31300 4007580 1% /dev/shm
tmpfs tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 squashfs 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 ext4 95054 55724 32162 64% /boot
tmpfs tmpfs 807776 28 807748 1% /run/user/1000
```
As you see, there is an extra column (second from left) that shows the file system type.
**6\. Display only the specific file system type**
We can limit the listing to a certain file system type, for example **ext4**. To do so, we use the **-t** flag.
```
$ df -t ext4
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 478425016 428790896 25308436 95% /
/dev/sda1 95054 55724 32162 64% /boot
```
See? This command shows only the ext4 file system disk space usage.
**7\. Exclude specific file system type**
Sometimes, you may want to exclude a specific file system type from the result. This can be achieved by using the **-x** flag.
```
$ df -x ext4
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
tmpfs 4038880 26116 4012764 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
tmpfs 807776 28 807748 1% /run/user/1000
```
The above command will display all file systems usage, except **ext4**.
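If it helps, the -x flag can be repeated to hide several file system types at once. Here is a quick sketch that filters the in-memory file systems out of the earlier human-readable listing (the output below is derived from that same example system):
```
$ df -h -x tmpfs -x devtmpfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       457G  409G   25G  95% /
/dev/loop0       83M   83M     0 100% /var/lib/snapd/snap/core/4327
/dev/sda1        93M   55M   32M  64% /boot
```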
**8\. Display usage for a folder**
To display the disk space available and where it is mounted for a folder, for example **/home/sk/** , use this command:
```
$ df -hT /home/sk/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 ext4 457G 409G 25G 95% /
```
This command shows the file system type, the used and available space in human readable form, and where it is mounted. If you don't want to display the file system type, just omit the **-T** flag.
For more details, refer to the man page.
```
$ man df
```
**Recommended read:**
And, that's all for today! I hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/df-command.png

View File

@ -0,0 +1,126 @@
Useful Resources for Those Who Want to Know More About Linux
======
Linux is one of the most popular and versatile operating systems available. It can be used on a smartphone, computer and even a car. Linux has been around since the 1990s and is still one of the most widespread operating systems.
Linux is actually used to run most of the Internet, as it is considered to be rather stable compared to other operating systems. This is one of the [reasons why people choose Linux over Windows][1]. Besides, Linux provides its users with privacy and doesn't collect their data at all, while Windows 10 and its Cortana voice control system always require you to update your personal information.
Linux has many advantages. However, people do not hear much about it, as it has been squeezed out from the market by Windows and Mac. And many people get confused when they start using Linux, as its a bit different from popular operating systems.
So to help you out, we've collected 5 useful resources for those who want to know more about Linux.
### 1.[Linux for Absolute Beginners][2]
If you want to learn as much about Linux as you can, you should consider taking a full course for beginners, provided by Eduonix. This course will introduce you to all features of Linux and provide you with all necessary materials to help you find out more about the peculiarities of how Linux works.
You should definitely choose this course if:
* you want to learn the details about the Linux operating system;
* you want to find out how to install it;
* you want to understand how Linux cooperates with your hardware;
* you want to learn how to operate Linux command line.
### 2.[PC World: A Linux Beginners Guide][3]
A free resource for those who want to learn everything about Linux in one place. PC World specializes in various aspects of working with computer operating systems, and it provides its subscribers with the most accurate and up-to-date information. Here you can also learn more about the [benefits of Linux][4] and the latest news about this operating system.
This resource provides you with information on:
* how to install Linux;
* how to use command line;
* how to install additional software;
* how to operate Linux desktop environment.
### 3.[Linux Training][5]
A lot of people who work with computers are required to learn how to operate Linux in case the Windows operating system suddenly crashes. And what could be better than using an official resource to start your Linux training?
This resource provides online enrollment on the Linux training, where you can get the most updated information from the authentic source. “A year ago our IT department offered us a Linux training on the official website”, says Martin Gibson, a developer at [Assignmenthelper.com.au][6]. “We took this course because we needed to learn how to back up all our files to another system to provide our customers with maximum security, and this resource really taught us everything.”
So you should definitely use this resource if:
* you want to receive firsthand information about the operating system;
* want to learn the peculiarities of how to run Linux on your computer;
* want to connect with other Linux users and share your experience with them.
### 4. [The Linux Foundation: Training Videos][7]
If you easily get bored from reading a lot of resources, this website is definitely for you. The Linux Foundation provides training videos, lectures and webinars, held by IT specialists, software developers and technical consultants.
All the training videos are subdivided into categories for:
* Developers: working with Linux Kernel, handling Linux Device Drivers, Linux virtualization etc.;
* System Administrators: developing virtual hosts on Linux, building a Firewall, analyzing Linux performance etc.;
* Users: getting started using Linux, introduction to embedded Linux and so on.
### 5. [LinuxInsider][8]
Did you know that Microsoft was so amazed by the efficiency of Linux that it [allowed users to run Linux on Microsoft cloud computing device][9]? If you want to learn more about this operating system, Linux Insider provides its subscribers with the latest news on Linux operating systems, gives information about the latest updates and Linux features.
On this resource, you will have the opportunity to:
* participate in Linux community;
* learn about how to run Linux on various devices;
* check out reviews;
* participate in blog discussions and read the tech blog.
### Wrapping up…
Linux offers a lot of benefits, including complete privacy, stable operation and even malware protection. It's definitely worth trying, and learning how to use it will help you better understand how your computer works and what it needs to operate smoothly.
### About the Author
_Lucy Benton is a digital marketing specialist and business consultant who helps people turn their dreams into a profitable business. Now she is writing for marketing and business resources. Lucy also has her own blog_ [_Prowritingpartner.com_][10]_, where you can check her latest publications._
--------------------------------------------------------------------------------
via: https://linuxaria.com/article/useful-resources-for-those-who-want-to-know-more-about-linux
作者:[Lucy Benton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com
[1]:https://www.lifewire.com/windows-vs-linux-mint-2200609
[2]:https://www.eduonix.com/courses/system-programming/linux-for-absolute-beginners
[3]:https://www.pcworld.com/article/2918397/operating-systems/how-to-get-started-with-linux-a-beginners-guide.html
[4]:https://www.popsci.com/switch-to-linux-operating-system#page-4
[5]:https://www.linux.com/learn/training
[6]:https://www.assignmenthelper.com.au/
[7]:https://training.linuxfoundation.org/free-linux-training/linux-training-videos
[8]:https://www.linuxinsider.com/
[9]:https://www.wired.com/2016/08/linux-took-web-now-taking-world/
[10]:https://prowritingpartner.com/

View File

@ -0,0 +1,83 @@
The Vrms Program Helps You To Find Non-free Software In Debian
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/vrms-1-720x340.png)
The other day I was reading an interesting guide that explained the [**difference between free and open source software on DigitalOcean**][1]. Until then, I thought both were more or less the same. Oh man, I was wrong. There are a few significant differences between them. While reading that article, I was wondering how to find non-free software on Linux, hence this post.
### Say hello to “Virtual Richard M. Stallman”, a Perl script to find Non-free Software in Debian
The **Virtual Richard M. Stallman**, shortly **vrms**, is a program written in Perl that analyzes the list of installed software on your Debian-based systems and reports all of the packages from the non-free and contrib trees which are currently installed. For those wondering, free software should meet the following [**four essential freedoms**][2].
* **Freedom 0** The freedom to run the program as you wish, for any purpose.
* **Freedom 1** The freedom to study how the program works, and adapt it to your needs. Access to the source code is a precondition for this.
* **Freedom 2** The freedom to redistribute copies so you can help your neighbor.
* **Freedom 3** The freedom to improve the program, and release your improvements to the public, so that the whole community benefits. Access to the source code is a precondition for this.
Any software that doesn't meet the above four conditions is not considered free software. In a nutshell, **free software means the users have the freedom to run, copy, distribute, study, change and improve the software.**
Now let us find if the installed software is free or non-free, shall we?
The Vrms package is available in the default repositories of Debian and its derivatives like Ubuntu. So, you can install it with the apt package manager using the following command.
```
$ sudo apt-get install vrms
```
Once installed, run the following command to find non-free software in your Debian-based system.
```
$ vrms
```
Sample output from my Ubuntu 16.04 LTS desktop.
```
Non-free packages installed on ostechnix
unrar Unarchiver for .rar files (non-free version)
1 non-free packages, 0.0% of 2103 installed packages.
```
![][4]
As you can see in the above screenshot, I have one non-free package installed in my Ubuntu box.
If you don't have any non-free packages on your system, you should see the following output instead.
```
No non-free or contrib packages installed on ostechnix! rms would be proud.
```
Vrms can find non-free packages not just on Debian but also on Ubuntu, Linux Mint, and other deb-based systems.
**Limitations**
The Vrms program has some limitations, though. As I already mentioned, it lists the installed packages from the non-free and contrib sections. However, some distributions don't follow the policy which ensures proprietary software only ends up in repository sections recognized by vrms as "non-free", and they make no effort to preserve this separation. In such cases, vrms won't recognize the non-free software and may report that you have no non-free software installed even when you do. If you're using distros like Debian and Ubuntu that follow the policy of keeping proprietary software in non-free repositories, vrms will definitely help you to find the non-free packages.
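If you want to double-check a package that vrms flags, you can also look up the archive section it ships in yourself. A small sketch, assuming a Debian/Ubuntu system and using `unrar` only as an example package:
```
# Show which repository section a package comes from
$ apt-cache show unrar | grep -i '^Section'

# On Debian/Ubuntu the output should look something like:
Section: non-free/utils
```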
And, that's all. Hope this was useful. More good stuff to come. Stay tuned!
Happy Tamil new year wishes to all Tamil folks around the world!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/the-vrms-program-helps-you-to-find-non-free-software-in-debian/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.digitalocean.com/community/tutorials/Free-vs-Open-Source-Software
[2]:https://www.gnu.org/philosophy/free-sw.html
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/vrms.png

View File

@ -0,0 +1,93 @@
translating---geekpi
4 cool new projects to try in COPR for April
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR is a [collection][1] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### Anki
[Anki][2] is a program that helps you learn and remember things using spaced repetition. You can create cards and organize them into decks, or download [existing decks][3]. A card has a question on one side and an answer on the other. It may also include images, video or audio. How well you answer each card determines how often you see that particular card in the future.
While Anki is already in Fedora, this repo provides a newer version.
![][4]
#### Installation instructions
The repo currently provides Anki for Fedora 27, 28, and Rawhide. To install Anki, use these commands:
```
sudo dnf copr enable thomasfedb/anki
sudo dnf install anki
```
### Fd
[Fd][5] is a command-line utility that's a simple and slightly faster alternative to [find][6]. It can execute commands on found items in parallel. Fd also uses colorized terminal output and ignores hidden files and patterns specified in .gitignore by default.
#### Installation instructions
The repo currently provides fd for Fedora 26, 27, 28, and Rawhide. To install fd, use these commands:
```
sudo dnf copr enable keefle/fd
sudo dnf install fd
```
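Once installed, a quick usage sketch (the file extension and command below are just examples):
```
# Find all Markdown files under the current directory
$ fd -e md

# Run a command on every match in parallel (-x), e.g. count lines
$ fd -e md -x wc -l
```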
### KeePass
[KeePass][7] is a password manager. It holds all passwords in one end-to-end encrypted database locked with a master key or key file. The passwords can be organized into groups and generated by the program's built-in generator. Among its other features is Auto-Type, which can provide a username and password to selected forms.
While KeePass is already in Fedora, this repo provides the newest version.
![][8]
#### Installation instructions
The repo currently provides KeePass for Fedora 26 and 27. To install KeePass, use these commands:
```
sudo dnf copr enable mavit/keepass
sudo dnf install keepass
```
### jo
[Jo][9] is a command-line utility that transforms input to JSON strings or arrays. It features a simple [syntax][10] and recognizes booleans, strings and numbers. In addition, jo supports nesting and can nest its own output as well.
#### Installation instructions
The repo currently provides jo for Fedora 26, 27, and Rawhide, and for EPEL 6 and 7. To install jo, use these commands:
```
sudo dnf copr enable ganto/jo
sudo dnf install jo
```
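Once installed, here is a small illustration of the syntax (all values are made up): plain key=value pairs become an object, `-a` builds arrays, and nesting works via command substitution.
```
# A simple object with a string, a boolean and a number
$ jo name=copr enabled=true count=4

# Nested output: an object containing an array, pretty-printed with -p
$ jo -p repo=jo tags=$(jo -a fedora epel)
```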
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-try-copr-april-2018/
作者:[Dominik Turecek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://copr.fedorainfracloud.org/
[2]:https://apps.ankiweb.net/
[3]:https://ankiweb.net/shared/decks/
[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/anki.png
[5]:https://github.com/sharkdp/fd
[6]:https://www.gnu.org/software/findutils/
[7]:https://keepass.info/
[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/keepass.png
[9]:https://github.com/jpmens/jo
[10]:https://github.com/jpmens/jo/blob/master/jo.md

View File

@ -0,0 +1,181 @@
How To Resize Active/Primary root Partition Using GParted Utility
======
Today we are going to discuss disk partitioning, one of the most useful topics in Linux, which allows users to resize the active root partition.
In this article we will teach you how to resize the active root partition on Linux using the GParted utility.
Just imagine that our system has a 30GB disk and we didn't partition it properly while installing the Ubuntu operating system.
We need to install another OS on it, so we want to make a secondary partition.
It's not advisable to resize an active partition. However, we are going to perform this as there is no other way to free up space on the system.
Make sure you take a backup of important data before performing this action, because if something goes wrong (for example, a power failure or an unexpected reboot), you can still recover your data.
### What Is Gparted
[GParted][1] is a free partition manager that enables you to resize, copy, and move partitions without data loss. We can use all of the features of the GParted application by using the GParted Live bootable image. GParted Live enables you to use GParted on GNU/Linux as well as other operating systems, such as Windows or Mac OS X.
### 1) Check Disk Space Usage Using df Command
First, let me show you my current partition layout using the df command. The df command output clearly shows that I only have one partition.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 3.4G 26.2G 16% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 487M 4.0K 487M 1% /dev
tmpfs 100M 844K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 152K 497M 1% /run/shm
none 100M 52K 100M 1% /run/user
```
### 2) Check Disk Partition Using fdisk Command
I'm going to verify this using the fdisk command.
```
$ sudo fdisk -l
[sudo] password for daygeek:
Disk /dev/sda: 33.1 GB, 33129218048 bytes
255 heads, 63 sectors/track, 4027 cylinders, total 64705504 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000473a3
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 62609407 31303680 83 Linux
/dev/sda2 62611454 64704511 1046529 5 Extended
/dev/sda5 62611456 64704511 1046528 82 Linux swap / Solaris
```
### 3) Download GParted live ISO Image
Use the below command to download GParted live ISO to perform this action.
```
$ wget https://downloads.sourceforge.net/gparted/gparted-live-0.31.0-1-amd64.iso
```
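If you plan to boot from a USB stick rather than a burned CD/DVD, one common approach is to write the ISO to the stick with dd. A hedged sketch; replace `/dev/sdX` with your actual USB device, and note that this erases the stick:
```
# Identify the USB device first (check the size carefully)
$ lsblk

# Write the GParted live image to it
$ sudo dd if=gparted-live-0.31.0-1-amd64.iso of=/dev/sdX bs=4M status=progress
$ sync
```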
### 4) Boot Your System With GParted Live Installation Media
Boot your system with the GParted live installation media (a burned CD/DVD, a USB stick, or the ISO image). You will get output similar to the screen below. Here choose **GParted Live (Default settings)** and hit **Enter**.
![][3]
### 5) Keyboard Selection
By default it chooses the second option, just hit **Enter**.
![][4]
### 6) Language Selection
By default it chooses **33** for US English, just hit **Enter**.
![][5]
### 7) Mode Selection (GUI or Command-Line)
By default it chooses **0** for GUI mode, just hit **Enter**.
![][6]
### 8) Loaded GParted Live Screen
Now the GParted live screen is loaded. It shows the list of partitions I created earlier.
![][7]
### 9) How To Resize The root Partition
Choose the root partition you want to resize. Only one partition is available here, so I'm going to edit that partition to install another OS.
![][8]
To do so, press the **Resize/Move** button to resize the partition.
![][9]
Here, enter the size which you want to take out from this partition in the first box. I'm going to claim **10GB**, so I added **10240MB**, left the rest of the boxes at their defaults, then hit the **Resize/Move** button.
![][10]
It will ask you once again to confirm the resize because you are editing a live system partition; then hit **Ok**.
![][11]
It has successfully shrunk the partition from 30GB to 20GB, and it also shows 10GB of unallocated disk space.
![][12]
Finally click the `Apply` button to perform the remaining operations listed below.
![][13]
* **`e2fsck`** e2fsck is a file system check utility that automatically repairs the file system for bad sectors and HDD-related I/O errors.
* **`resize2fs`** The resize2fs program resizes ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on a device.
* **`e2image`** The e2image program saves critical ext2, ext3, or ext4 filesystem metadata located on a device to a file specified by image-file.
**`e2fsck`** e2fsck is a file system check utility that automatically repairs the file system for bad sectors and HDD-related I/O errors.
![][14]
**`resize2fs`** The resize2fs program resizes ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on a device.
![][15]
**`e2image`** The e2image program saves critical ext2, ext3, or ext4 filesystem metadata located on a device to a file specified by image-file.
![][16]
All the operations have completed; close the dialog box.
![][17]
Now I can see **10GB** of unallocated disk space.
![][18]
Reboot the system to check this.
![][19]
### 10) Check Free Space
Log back into the system and use the parted command to see the available space on the disk. Yes, I can see **10GB** of unallocated disk space on this disk.
```
$ sudo parted /dev/sda print free
[sudo] password for daygeek:
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 32.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 10.7GB 10.7GB Free Space
1 10.7GB 32.2GB 21.5GB primary ext4 boot
```
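As a possible next step (not part of the original walkthrough), the freed space can be turned into a new partition for the second OS. A rough sketch using parted; the offsets are based on the `print free` output above and must be adjusted to your own layout, and parted may warn about alignment:
```
# Create a new primary partition in the unallocated region
$ sudo parted /dev/sda mkpart primary ext4 1MiB 10.7GB

# Format it (check the new partition's number with lsblk or parted first)
$ sudo mkfs.ext4 /dev/sdaN
```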
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://gparted.org/
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-1.png
[4]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-2.png
[5]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-3.png
[6]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-4.png
[7]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-5.png
[8]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-6.png
[9]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-7.png
[10]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-8.png
[11]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-9.png
[12]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-10.png
[13]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-11.png
[14]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-12.png
[15]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-13.png
[16]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-14.png
[17]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-15.png
[18]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-16.png
[19]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-17.png

View File

@ -0,0 +1,954 @@
Running Jenkins builds in containers
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf)
Running applications in containers has become a well-accepted practice in the enterprise sector, as [Docker][1] with [Kubernetes][2] (K8s) now provides a scalable, manageable application platform. The container-based approach also suits the [microservices architecture][3] that's gained significant momentum in the past few years.
One of the most important advantages of a container application platform is the ability to dynamically bring up isolated containers with resource limits. Let's check out how this can change the way we run our continuous integration/continuous delivery (CI/CD) tasks.
Building and packaging an application requires an environment that can download the source code, access dependencies, and have the build tools installed. Running unit and component tests as part of the build may use local ports or require third-party applications (e.g., databases, message brokers, etc.) to be running. In the end, we usually have multiple, pre-configured build servers with each running a certain type of job. For tests, we maintain dedicated instances of third-party apps (or try to run them embedded) and avoid running jobs in parallel that could mess up each other's outcome. The pre-configuration for such a CI/CD environment can be a hassle, and the required number of servers for different jobs can significantly change over time as teams shift between versions and development platforms.
Once we have access to a container platform (onsite or in the cloud), it makes sense to move the resource-intensive CI/CD task executions into dynamically created containers. In this scenario, build environments can be independently started and configured for each job execution. Tests during the build have free rein to use available resources in this isolated box, while we can also bring up a third-party application in a side container that exists only for this job's lifecycle.
It sounds nice… Let's see how it works in real life.
Note: This article is based on a real-world solution for a project running on a [Red Hat OpenShift][4] v3.7 cluster. OpenShift is the enterprise-ready version of Kubernetes, so these practices work on a K8s cluster as well. To try, download the [Red Hat CDK][5] and run the `jenkins-ephemeral` or `jenkins-persistent` [templates][6] that create preconfigured Jenkins masters on OpenShift.
### Solution overview
The solution to executing CI/CD tasks (builds, tests, etc.) in containers on OpenShift is based on [Jenkins distributed builds][7], which means:
* We need a Jenkins master; it may run inside the cluster but also works with an external master
* Jenkins features/plugins are available as usual, so existing projects can be used
* The Jenkins GUI is available to configure, run, and browse job output
  * If you prefer code, [Jenkins Pipeline][8] is also available
From a technical point of view, the dynamic containers to run jobs are Jenkins agent nodes. When a build kicks off, first a new node starts and "reports for duty" to the Jenkins master via JNLP (port 5000). The build is queued until the agent node comes up and picks up the build. The build output is sent back to the master—just like with regular Jenkins agent servers—but the agent container is shut down once the build is done.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/1_running_jenkinsincontainers.png?itok=fR4ntnn8)
Different kinds of builds (e.g., Java, NodeJS, Python, etc.) need different agent nodes. This is nothing new—labels could previously be used to restrict which agent nodes should run a build. To define the config for these Jenkins agent containers started for each job, we will need to set the following:
* The Docker image to boot up
* Resource limits
* Environment variables
* Volumes mounted
The core component here is the [Jenkins Kubernetes plugin][9]. This plugin interacts with the K8s cluster (by using a ServiceAccount) and starts/stops the agent nodes. Multiple agent types can be defined as Kubernetes pod templates under the plugin's configuration (refer to them by label in projects).
These [agent images][10] are provided out of the box (also on [CentOS7][11]):
* [jenkins-slave-base-rhel7][12]: Base image starting the agent that connects to Jenkins master; the Java heap is set according to container memory
* [jenkins-slave-maven-rhel7][13]: Image for Maven and Gradle builds (extends base)
* [jenkins-slave-nodejs-rhel7][14]: Image with NodeJS4 tools (extends base)
Note: This solution is not related to OpenShift's [Source-to-Image (S2I)][15] build, which can also be used for certain CI/CD tasks.
### Background learning material
There are several good blogs and documentation about Jenkins builds on OpenShift. The following are good to start with:
Take a look at them to understand the overall solution. In this article, we'll look at the different issues that come up while applying those practices.
### Build my application
For our [example][16], let's assume a Java project with the following build steps:
* **Source:** Pull project source from a Git repository
* **Build with Maven:** Dependencies come from an internal repository (let's use Apache Nexus) mirroring external Maven repos
* **Deploy artifact:** The built JAR is uploaded to the repository
During the CI/CD process, we need to interact with Git and Nexus, so the Jenkins jobs have to be able to access those systems. This requires configuration and stored credentials that can be managed in different places:
* **In Jenkins:** We can add credentials to Jenkins that the Git plugin can use and add files to the project (using containers doesn't change anything).
* **In OpenShift:** Use ConfigMap and secret objects that are added to the Jenkins agent containers as files or environment variables.
* **In a fully customized Docker image:** These are pre-configured with everything to run a type of job; just extend one of the agent images.
Which approach you use is a question of taste, and your final solution may be a mix. Below we'll look at the second option, where the configuration is managed primarily in OpenShift. Customize the Maven agent container via the Kubernetes plugin configuration by setting environment variables and mounting files.
Note: Adding environment variables through the UI doesn't work with Kubernetes plugin v1.0 due to a [bug][17]. Either update the plugin or (as a workaround) edit `config.xml` directly and restart Jenkins.
### Pull source from Git
Pulling a public Git is trivial. For a private Git repo, authentication is required and the client also needs to trust the server for a secure connection. A Git pull can typically be done via two protocols:
* HTTPS: Authentication is with username/password. The server's SSL certificate must be trusted by the job, which is only tricky if it's signed by a custom CA.
```
git clone https://git.mycompany.com:443/myapplication.git
```
* SSH: Authentication is with a private key. The server is trusted when its public key's fingerprint is found in the `known_hosts` file.
```
git clone ssh://git@git.mycompany.com:22/myapplication.git
```
Downloading the source through HTTP with username/password is OK when it's done manually; for automated builds, SSH is better.
#### Git with SSH
For an SSH download, we need to ensure that the SSH connection works between the agent container and the Git server's SSH port. First, we need a private-public key pair. To generate one, run:
```
ssh-keygen -t rsa -b 2048 -f my-git-ssh -N ''
```
It generates a private key in `my-git-ssh` (empty passphrase) and the matching public key in `my-git-ssh.pub`. Add the public key to the user on the Git server (preferably a ServiceAccount); web UIs usually support upload. To make the SSH connection work, we need two files on the agent container:
* The private key at `~/.ssh/id_rsa`
* The server's public key in `~/.ssh/known_hosts`. To get this, try `ssh git.mycompany.com` and accept the fingerprint; this will create a new line in the `known_hosts` file. Use that.
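If you prefer not to accept the fingerprint interactively, `ssh-keyscan` can fetch the host key as well; a small sketch (verify the fingerprint through a trusted channel before using it):
```
ssh-keyscan git.mycompany.com > known_hosts
```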
Store the private key as `id_rsa` and server's public key as `known_hosts` in an OpenShift secret (or config map).
```
apiVersion: v1
kind: Secret
metadata:
  name: mygit-ssh
stringData:
  id_rsa: |-
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  known_hosts: |-
    git.mycompany.com ecdsa-sha2-nistp256 AAA...
```
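As an alternative to writing the YAML by hand, the same secret can be created directly from the key files with the `oc` client; a sketch, with file names following the ssh-keygen example above:
```
oc create secret generic mygit-ssh \
  --from-file=id_rsa=my-git-ssh \
  --from-file=known_hosts=known_hosts
```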
Then configure this as a volume in the Kubernetes plugin for the Maven pod at mount point `/home/jenkins/.ssh/`. Each item in the secret will be a file matching the key name under the mount directory. We can use the UI (`Manage Jenkins / Configure / Cloud / Kubernetes`), or edit Jenkins config `/var/lib/jenkins/config.xml`:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<name>maven</name>
...
  <volumes>
    <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
      <mountPath>/home/jenkins/.ssh</mountPath>
      <secretName>mygit-ssh</secretName>
    </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
  </volumes>
```
Pulling a Git source through SSH should work in the jobs running on this agent now.
Note: It's also possible to customize the SSH connection in `~/.ssh/config`, for example, if we don't want to bother with `known_hosts` or the private key is mounted to a different location:
```
Host git.mycompany.com
   StrictHostKeyChecking no
   IdentityFile /home/jenkins/.config/git-secret/ssh-privatekey
```
#### Git with HTTP
If you prefer an HTTP download, add the username/password to a [Git-credential-store][18] file somewhere:
* E.g. `/home/jenkins/.config/git-secret/credentials` from an OpenShift secret, one site per line:
```
https://username:password@git.mycompany.com
https://user:pass@github.com
```
* Enable it in [git-config][19] expected at `/home/jenkins/.config/git/config`:
```
[credential]
  helper = store --file=/home/jenkins/.config/git-secret/credentials
```
If the Git service has a certificate signed by a custom certificate authority (CA), the quickest hack is to set the `GIT_SSL_NO_VERIFY=true` environment variable (EnvVar) for the agent. The proper solution needs two things:
* Add the custom CA's public certificate to the agent container from a config map to a path (e.g. `/usr/ca/myTrustedCA.pem`).
* Tell Git the path to this cert in an EnvVar `GIT_SSL_CAINFO=/usr/ca/myTrustedCA.pem` or in the `git-config` file mentioned above:
```
[http "https://git.mycompany.com"]
    sslCAInfo = /usr/ca/myTrustedCA.pem
```
Note: In OpenShift v3.7 (and earlier), the config map and secret mount points [must not overlap][20], so we can't map to `/home/jenkins` and `/home/jenkins/dir` at the same time. This is why we didn't use the well-known file locations above. A fix is expected in OpenShift v3.9.
### Maven
To make a Maven build work, there are usually two things to do:
* A corporate Maven repository (e.g., Apache Nexus) should be set up to act as a proxy for external repos. Use this as a mirror.
* This internal repository may have an HTTPS endpoint with a certificate signed by a custom CA.
Having an internal Maven repository is practically essential if builds run in containers because they start with an empty local repository (cache), so Maven downloads all the JARs every time. Downloading from an internal proxy repo on the local network is obviously quicker than downloading from the Internet.
The [Maven Jenkins agent][13] image supports an environment variable that can be used to set the URL for this proxy. Set the following in the Kubernetes plugin container template:
```
MAVEN_MIRROR_URL=https://nexus.mycompany.com/repository/maven-public
```
The build artifacts (JARs) should also be archived in a repository, which may or may not be the same as the one acting as a mirror for dependencies above. Maven `deploy` requires the repo URL in the `pom.xml` under [Distribution management][21] (this has nothing to do with the agent image):
```
<project ...>
<distributionManagement>
 <snapshotRepository>
  <id>mynexus</id>
  <url>https://nexus.mycompany.com/repository/maven-snapshots/</url>
 </snapshotRepository>
 <repository>
  <id>mynexus</id>
  <url>https://nexus.mycompany.com/repository/maven-releases/</url>
 </repository>
</distributionManagement>
```
Uploading the artifact may require authentication. In this case, username/password must be set in the `settings.xml` under the server ID matching the one in `pom.xml`. We need to mount a whole `settings.xml` with the URL, username, and password on the Maven Jenkins agent container from an OpenShift secret. We can also use environment variables as below:
* Add environment variables from a secret to the container:
```
MAVEN_SERVER_USERNAME=admin
MAVEN_SERVER_PASSWORD=admin123
```
* Mount `settings.xml` from a config map to `/home/jenkins/.m2/settings.xml`:
```
<settings ...>
 <mirrors>
  <mirror>
   <mirrorOf>external:*</mirrorOf>
   <url>${env.MAVEN_MIRROR_URL}</url>
   <id>mirror</id>
  </mirror>
 </mirrors>
 <servers>
  <server>
   <id>mynexus</id>
   <username>${env.MAVEN_SERVER_USERNAME}</username>
   <password>${env.MAVEN_SERVER_PASSWORD}</password>
  </server>
 </servers>
</settings>
```
Disable interactive mode (use batch mode) to skip the download log by using `-B` for Maven commands or by adding `<interactiveMode>false</interactiveMode>` to `settings.xml`.
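For example, a typical non-interactive invocation might look like this (the goals are just an illustration):
```
mvn -B clean deploy
```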
If the Maven repository's HTTPS endpoint uses a certificate signed by a custom CA, we need to create a Java KeyStore that contains the CA certificate as trusted, using [keytool][22]. This KeyStore should be uploaded as a config map in OpenShift. Use the `oc` command to create a config map from files:
```
 oc create configmap maven-settings --from-file=settings.xml=settings.xml --from-file=myTruststore.jks=myTruststore.jks
```
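The `myTruststore.jks` file itself can be created beforehand with keytool; a minimal sketch, assuming the CA certificate is available as `myTrustedCA.pem` and using the `changeit` password referenced later in `MAVEN_OPTS`:
```
keytool -importcert -noprompt -alias mycompany-ca \
  -file myTrustedCA.pem \
  -keystore myTruststore.jks -storepass changeit
```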
Mount the config map somewhere on the Jenkins agent. In this example we use `/home/jenkins/.m2`, but only because we have `settings.xml` in the same config map. The KeyStore can go under any path.
Then make the Maven Java process use this file as a trust store by setting Java parameters in the `MAVEN_OPTS` environment variable for the container:
```
MAVEN_OPTS=
-Djavax.net.ssl.trustStore=/home/jenkins/.m2/myTruststore.jks
-Djavax.net.ssl.trustStorePassword=changeit
```
### Memory usage
This is probably the most important part—if we don't set max memory correctly, we'll run into intermittent build failures after everything seems to work.
Running Java in a container can cause high memory usage errors if we don't set the heap in the Java command line. The JVM [sees the total memory of the host machine][23] instead of the container's memory limit and sets the [default max heap][24] accordingly. This is typically much more than the container's memory limit, and OpenShift simply kills the container when a Java process allocates more memory for the heap.
Although the `jenkins-slave-base` image has a built-in [script to set max heap][25] to half the container memory (this can be modified via the EnvVar `CONTAINER_HEAP_PERCENT=0.50`), it only applies to the Jenkins agent Java process. In a Maven build, we have important additional Java processes running:
* The `mvn` command itself is a Java tool.
* The [Maven Surefire-plugin][26] executes the unit tests in a forked JVM by default.
At the end of the day, we'll have three Java processes running at the same time in the container, and it's important to estimate their memory usage to avoid unexpectedly killed pods. Each process has a different way to set JVM options:
* Jenkins agent heap is calculated as mentioned above, but we definitely shouldn't let the agent have such a big heap. Memory is needed for the other two JVMs. Setting `JAVA_OPTS` works for the Jenkins agent.
* The `mvn` tool is called by the Jenkins job. Set `MAVEN_OPTS` to customize this Java process.
  * The JVM spawned by the Maven `surefire` plugin for the unit tests can be customized by the [argLine][27] Maven property. It can be set in the `pom.xml`, in a profile in `settings.xml`, or simply by adding `-DargLine=…` to the `mvn` command in `MAVEN_OPTS`.
Here is an example of how to set these environment variables for the Maven agent container:
```
 JAVA_OPTS=-Xms64m -Xmx64m
MAVEN_OPTS=-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS}
SUREFIRE_OPTS=-Xms256m -Xmx256m
```
These numbers worked in our tests with a 1024Mi agent container memory limit, building and running unit tests for a SpringBoot app. These are relatively low numbers; a bigger heap size and a higher memory limit may be needed for complex Maven projects and unit tests.
Note: The actual memory usage of a Java8 process is something like `HeapSize + MetaSpace + OffHeapMemory`, and this can be significantly more than the max heap size set. With the settings above, the three Java processes took more than 900Mi of memory in our case. See the RSS memory for processes within the container: `ps -e -o pid,user,rss,comm,args`
The Jenkins agent images have both JDK 64 bit and 32 bit installed. For `mvn` and `surefire`, the 64-bit JVM is used by default. To lower memory usage, it makes sense to force 32-bit JVM as long as `-Xmx` is less than 1.5 GB:
```
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.i386
```
Note that it's also possible to set Java arguments in the `JAVA_TOOL_OPTIONS` EnvVar, which is picked up by any JVM started. The parameters in `JAVA_OPTS` and `MAVEN_OPTS` overwrite the ones in `JAVA_TOOL_OPTIONS`, so we can achieve the same heap configuration for our Java processes as above without using `argLine`:
```
JAVA_OPTS=-Xms64m -Xmx64m
MAVEN_OPTS=-Xms128m -Xmx128m
JAVA_TOOL_OPTIONS=-Xms256m -Xmx256m
```
It's still a bit confusing, as all JVMs log `Picked up JAVA_TOOL_OPTIONS: …` at startup.
### Jenkins Pipeline
Following the settings above, we should have everything prepared to run a successful build. We can pull the code, download the dependencies, run the unit tests, and upload the artifact to our repository. Let's create a Jenkins Pipeline project that does this:
```
pipeline {
  /* Which container to bring up for the build. Pick one of the templates configured in Kubernetes plugin. */
  agent {
    label 'maven'
  }
  stages {
    stage('Pull Source') {
      steps {
        git url: 'ssh://git@git.mycompany.com:22/myapplication.git', branch: 'master'
      }
    }
    stage('Unit Tests') {
      steps {
        sh 'mvn test'
      }
    }
    stage('Deploy to Nexus') {
      steps {
        sh 'mvn deploy -DskipTests'
      }
    }
  }
}
```
For a real project, of course, the CI/CD pipeline should do more than just the Maven build; it could deploy to a development environment, run integration tests, promote to higher environments, etc. The learning articles linked above show examples of how to do those things.
### Multiple containers
One pod can be running multiple containers with each having their own resource limits. They share the same network interface, so we can reach started services on `localhost`, but we need to think about port collisions. Environment variables are set separately, but the volumes mounted are the same for all containers configured in one Kubernetes pod template.
Bringing up multiple containers is useful when an external service is required for unit tests and an embedded solution doesn't work (e.g., database, message broker, etc.). In this case, this second container also starts and stops with the Jenkins agent.
See the Jenkins `config.xml` snippet where we start an `httpbin` service on the side for our Maven build:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
  <name>maven</name>
  <volumes>
    ...
  </volumes>
  <containers>
    <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      <name>jnlp</name>
      <image>registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:v3.7</image>
      <resourceLimitCpu>500m</resourceLimitCpu>
      <resourceLimitMemory>1024Mi</resourceLimitMemory>
      <envVars>
      ...
      </envVars>        
      ...
    </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
    <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      <name>httpbin</name>
      <image>citizenstig/httpbin</image>
      <resourceLimitCpu></resourceLimitCpu>
      <resourceLimitMemory>256Mi</resourceLimitMemory>
      <envVars/>
      ...
    </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
  </containers>
  <envVars/>
</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
```
### Summary
For a summary, see the created OpenShift resources and the Kubernetes plugin configuration from Jenkins `config.xml` with the configuration described above.
```
apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: git-config
  data:
    config: |
      [credential]
          helper = store --file=/home/jenkins/.config/git-secret/credentials
      [http "http://git.mycompany.com"]
          sslCAInfo = /home/jenkins/.config/git/myTrustedCA.pem
    myTrustedCA.pem: |-
      -----BEGIN CERTIFICATE-----
      MIIDVzCCAj+gAwIBAgIJAN0sC...
      -----END CERTIFICATE-----
- apiVersion: v1
  kind: Secret
  metadata:
    name: git-secret
  stringData:
    ssh-privatekey: |-
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    credentials: |-
      https://username:password@git.mycompany.com
      https://user:pass@github.com
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: git-ssh
  data:
    config: |-
      Host git.mycompany.com
        StrictHostKeyChecking yes
        IdentityFile /home/jenkins/.config/git-secret/ssh-privatekey
    known_hosts: '[git.mycompany.com]:22 ecdsa-sha2-nistp256 AAAdn7...'
- apiVersion: v1
  kind: Secret
  metadata:
    name: maven-secret
  stringData:
    username: admin
    password: admin123
```
One additional config map was created from files:
```
 oc create configmap maven-settings --from-file=settings.xml=settings.xml
--from-file=myTruststore.jks=myTruststore.jks
```
Kubernetes plugin configuration:
```
<?xml version='1.0' encoding='UTF-8'?>
<hudson>
...
  <clouds>
    <org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud plugin="kubernetes@1.0">
      <name>openshift</name>
      <defaultsProviderTemplate></defaultsProviderTemplate>
      <templates>
        <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
          <inheritFrom></inheritFrom>
          <name>maven</name>
          <namespace></namespace>
          <privileged>false</privileged>
          <alwaysPullImage>false</alwaysPullImage>
          <instanceCap>2147483647</instanceCap>
          <slaveConnectTimeout>100</slaveConnectTimeout>
          <idleMinutes>0</idleMinutes>
          <label>maven</label>
          <serviceAccount>jenkins37</serviceAccount>
          <nodeSelector></nodeSelector>
          <nodeUsageMode>NORMAL</nodeUsageMode>
          <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
          <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
            <memory>false</memory>
          </workspaceVolume>
          <volumes>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
              <mountPath>/home/jenkins/.config/git-secret</mountPath>
              <secretName>git-secret</secretName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.ssh</mountPath>
              <configMapName>git-ssh</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.config/git</mountPath>
              <configMapName>git-config</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.m2</mountPath>
              <configMapName>maven-settings</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
          </volumes>
          <containers>
            <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
              <name>jnlp</name>
              <image>registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:v3.7</image>
              <privileged>false</privileged>
              <alwaysPullImage>false</alwaysPullImage>
              <workingDir>/tmp</workingDir>
              <command></command>
              <args>${computer.jnlpmac} ${computer.name}</args>
              <ttyEnabled>false</ttyEnabled>
              <resourceRequestCpu>500m</resourceRequestCpu>
              <resourceRequestMemory>1024Mi</resourceRequestMemory>
              <resourceLimitCpu>500m</resourceLimitCpu>
              <resourceLimitMemory>1024Mi</resourceLimitMemory>
              <envVars>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>JAVA_HOME</key>
                  <value>/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.i386</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>JAVA_OPTS</key>
                  <value>-Xms64m -Xmx64m</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>MAVEN_OPTS</key>
                  <value>-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS} -Djavax.net.ssl.trustStore=/home/jenkins/.m2/myTruststore.jks -Djavax.net.ssl.trustStorePassword=changeit</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>SUREFIRE_OPTS</key>
                  <value>-Xms256m -Xmx256m</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>MAVEN_MIRROR_URL</key>
                  <value>https://nexus.mycompany.com/repository/maven-public</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                  <key>MAVEN_SERVER_USERNAME</key>
                  <secretName>maven-secret</secretName>
                  <secretKey>username</secretKey>
                </org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                  <key>MAVEN_SERVER_PASSWORD</key>
                  <secretName>maven-secret</secretName>
                  <secretKey>password</secretKey>
                </org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
              </envVars>
              <ports/>
              <livenessProbe>
                <execArgs></execArgs>
                <timeoutSeconds>0</timeoutSeconds>
                <initialDelaySeconds>0</initialDelaySeconds>
                <failureThreshold>0</failureThreshold>
                <periodSeconds>0</periodSeconds>
                <successThreshold>0</successThreshold>
              </livenessProbe>
            </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
              <name>httpbin</name>
              <image>citizenstig/httpbin</image>
              <privileged>false</privileged>
              <alwaysPullImage>false</alwaysPullImage>
              <workingDir></workingDir>
              <command>/run.sh</command>
              <args></args>
              <ttyEnabled>false</ttyEnabled>
              <resourceRequestCpu></resourceRequestCpu>
              <resourceRequestMemory>256Mi</resourceRequestMemory>
              <resourceLimitCpu></resourceLimitCpu>
              <resourceLimitMemory>256Mi</resourceLimitMemory>
              <envVars/>
              <ports/>
              <livenessProbe>
                <execArgs></execArgs>
                <timeoutSeconds>0</timeoutSeconds>
                <initialDelaySeconds>0</initialDelaySeconds>
                <failureThreshold>0</failureThreshold>
                <periodSeconds>0</periodSeconds>
                <successThreshold>0</successThreshold>
              </livenessProbe>
            </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          </containers>
          <envVars/>
          <annotations/>
          <imagePullSecrets/>
        </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      </templates>
      <serverUrl>https://172.30.0.1:443</serverUrl>
      <serverCertificate>-----BEGIN CERTIFICATE-----
MIIC6jCC...
-----END CERTIFICATE-----</serverCertificate>
      <skipTlsVerify>false</skipTlsVerify>
      <namespace>first</namespace>
      <jenkinsUrl>http://jenkins.cicd.svc:80</jenkinsUrl>
      <jenkinsTunnel>jenkins-jnlp.cicd.svc:50000</jenkinsTunnel>
      <credentialsId>1a12dfa4-7fc5-47a7-aa17-cc56572a41c7</credentialsId>
      <containerCap>10</containerCap>
      <retentionTimeout>5</retentionTimeout>
      <connectTimeout>0</connectTimeout>
      <readTimeout>0</readTimeout>
      <maxRequestsPerHost>32</maxRequestsPerHost>
    </org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud>
  </clouds>
</hudson>
```
Happy builds!
This was originally published on [ITNext][28] and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/running-jenkins-builds-containers
作者:[Balazs Szeti][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bszeti
[1]:https://opensource.com/resources/what-docker
[2]:https://opensource.com/resources/what-is-kubernetes
[3]:https://martinfowler.com/articles/microservices.html
[4]:https://www.openshift.com/
[5]:https://developers.redhat.com/products/cdk/overview/
[6]:https://github.com/openshift/origin/tree/master/examples/jenkins
[7]:https://wiki.jenkins.io/display/JENKINS/Distributed+builds
[8]:https://jenkins.io/doc/book/pipeline/
[9]:https://github.com/jenkinsci/kubernetes-plugin
[10]:https://access.redhat.com/containers/#/search/jenkins%2520slave
[11]:https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=openshift+jenkins+slave+&starCount=0
[12]:https://github.com/openshift/jenkins/tree/master/slave-base
[13]:https://github.com/openshift/jenkins/tree/master/slave-maven
[14]:https://github.com/openshift/jenkins/tree/master/slave-nodejs
[15]:https://docs.openshift.com/container-platform/3.7/architecture/core_concepts/builds_and_image_streams.html#source-build
[16]:https://github.com/bszeti/camel-springboot/tree/master/camel-rest-complex
[17]:https://issues.jenkins-ci.org/browse/JENKINS-47112
[18]:https://git-scm.com/docs/git-credential-store/1.8.2
[19]:https://git-scm.com/docs/git-config/1.8.2
[20]:https://bugzilla.redhat.com/show_bug.cgi?id=1430322
[21]:https://maven.apache.org/pom.html#Distribution_Management
[22]:https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html
[23]:https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
[24]:https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html#default_heap_size
[25]:https://github.com/openshift/jenkins/blob/master/slave-base/contrib/bin/run-jnlp-client
[26]:http://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html
[27]:http://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#argLine
[28]:https://itnext.io/running-jenkins-builds-in-containers-458e90ff2a7b

View File

@ -1,82 +0,0 @@
面向企业的最佳 Linux 发行版
====
在这篇文章中,我将分享企业环境下顶级的 Linux 发行版。其中一些发行版与桌面任务一起用于服务器和云环境。所有这些 Linux 选项具有的一个共同点是它们都是企业级 Linux 发行版 -- 所以你可以期待更高程度的功能,当然还有支持。
### 什么是企业级的 Linux 发行版?
一个企业级的 Linux 发行版可以归结为以下内容 - 稳定性和支持。在企业环境中,使用的 Linux 版本必须满足这两点。稳定性意味着所提供的软件包既稳定又可用,同时仍然保持预期的安全性。
企业级的支持因素意味着有一个可靠的支持机制。有时这是单一的(官方)来源,如公司。在其他情况下,它可能是一个非营利性的执政机构,向优秀的第三方支持供应商提供可靠的建议。很明显,前者是最好的选择,但两者都可以接受。
### Red Hat 企业级 Linux
[Red Hat][1] 有很多很棒的产品,都有企业级的支持来保证可用。其核心重点如下:
- Red Hat 企业级 Linux 服务器:这是一组服务器产品,包括从容器托管到 SAP 服务的所有内容,还有其他衍生的服务器。
- Red Hat 企业级 Linux 桌面:这些是严格控制的用户环境,运行 Red Hat Linux提供基本的桌面功能。这些功能包括访问最新的应用程序如 web 浏览器、电子邮件、LibreOffice 等。
- Red Hat 企业级 Linux 工作站:这基本上是 Red Hat 企业级 Linux 桌面,但针对高性能任务进行了优化。它也非常适合于大型部署和持续管理。
### 为什么选择 Red Hat 企业级 Linux
Red Hat 是一家大型的、非常成功的公司,销售围绕 Linux 的服务。基本上Red Hat 从那些想要避免供应商锁定和其他相关问题的公司赚钱。这些公司认识到聘用开源软件专家和管理他们的服务器和其他计算需求的价值。一家公司只需要购买订阅Red Hat 会在支持方面做其余的工作。
Red Hat 也是一个可靠的社会公民。他们赞助开源项目FoSS 支持网站译注FoSS 是 Free and Open Source Software 的缩写意为自由及开源软件像OpenSource.com 这样的网站,并为 Fedora 项目提供支持。Fedora 不是由 Red Hat 所有,而是它赞助开发的。这使 Fedora 得以发展,同时也使 Red Hat 受益匪浅。Red Hat 可以从 Fedora 项目中获得他们想要的,并将其用于他们的企业级 Linux 产品中。 就目前来看Fedora 充当了红帽企业 Linux 的上游渠道。
### SUSE Linux 企业版本
[SUSE][2] 是一家非常棒的公司,为企业用户提供了可靠的 Linux 选项。SUSE 的产品类似于 Red Hat因为桌面和服务器都是公司关注的。从我自己使用 SUSE 的经验来看,我相信 YaST 已经证明对于希望在工作场所使用 Linux 操作系统的非 Linux 管理员而言它是一笔巨大的资产。YaST 为那些需要一些基本的 Linux 命令行知识的任务提供了一个友好的 GUI。
SUSE 的核心重点如下:
- SUSE Linux 企业级服务器:包括任务特定的解决方案,从云到 SAP 选项,以及任务关键计算和基于软件的数据存储。
- SUSE Linux 企业级桌面:对于那些希望为员工提供可靠的 Linux 工作站的公司来说SUSE Linux 企业级桌面是一个不错的选择。和 Red Hat 一样SUSE 通过订阅模式来提供对其支持产品的访问。你可以选择三个不同级别的支持。
为什么选择 SUSE Linux 企业版?
SUSE 是一家围绕 Linux 销售服务的公司,但他们仍然通过专注于简化操作来实现这一目标。从他们的网站到 SUSE 提供的 Linux 发行版,重点是易用性,而不会牺牲安全性或可靠性。尽管在美国毫无疑问 Red Hat 是服务器的标准,但 SUSE 作为公司和开源社区贡献成员都做得很好。
我还会继续说SUSE 不会把自己太当回事。当你在 IT 领域建立联系的时候,这是一件很棒的事情。从他们关于 Linux 的有趣音乐视频到 SUSE 贸易展位中使用的 Gecko 以获得有趣的照片机会SUSE 将自己描述成简单易懂和平易近人的形象。
### Ubuntu LTS Linux
[Ubuntu Long Term Release][3] (LTS) Linux 是一个简单易用的企业级 Linux 发行版。Ubuntu 看起来比上面提到的其他发行版更新更频繁有时候也更不稳定。不要误解Ubuntu LTS 版本被认为是相当稳定的,不过,我认为一些专家可能不同意你的观点,如果你认为他们是防弹的。(这句不太理解)
Ubuntu 的核心重点如下:
- Ubuntu 桌面版毫无疑问Ubuntu 桌面非常简单可以快速学习和快速运行。它在高级安装选项中可能缺少一些东西但它有一种直截了当的弥补。作为额外的奖励Ubuntu 相比比其他版本有更多的软件包除了它的父亲Debian 发行版)。我认为 Ubuntu 真正的亮点在于,你可以在网上找到许多销售 Ubuntu 的厂商,包括服务器,台式机和笔记本电脑。
- Ubuntu 服务器版这包括服务器云和容器产品。Ubuntu 还提供了 Juju 云“应用商店”这样一个有趣的概念。对于任何熟悉 Ubuntu 或 Debian 的人来说Ubuntu 服务器都很有意义。对于这些人来说,它就像手套一样,为你提供你已经知道并喜爱的命令行工具。
Ubuntu IoT最近Ubuntu 的开发团队已经把目标瞄准了“物联网”IoT的创建解决方案。包括数字标识、机器人技术和物联网网关。我的猜测是我们将在 Ubuntu 中看到物联网增长的大部分来自企业用户,而不是普通家庭用户。
为什么选择 Ubuntu LTS
社区是 Ubuntu 最大的优点。除了在已经拥挤的服务器市场上的巨大增长之外,它还与普通用户在一起。使用 Ubuntu 开发者和社区用户是坚如磐石的。因此,虽然它可能被认为比其他企业版更不稳定,但是我发现将 Ubuntu LTS 安装锁定到到 “security updates only” 模式下提供了非常稳定的体验。
### CentOS 或者 Scientific Linux 怎么样呢?
首先,让我们把 [CentOS][4] 作为一个企业发行版,如果你有自己的内部支持团队来维护它,那么安装 CentOS 是一个很好的选择。毕竟,它与 Red Hat 企业级 Linux 兼容,并提供了与 Red Hat 产品相同级别的稳定性。不幸的是,它不会完全取代 Red Hat 支持订阅。
那么 [Scientific Linux][5] 呢?它的发行版怎么样?好吧,它就像 CentOS它是基于 Red Hat Linux 的。但与 CentOS 不同的是,它与 Red Hat 没有任何关系。 Scientific Linux 从一开始就有一个目标 - 为世界各地的实验室提供一个通用的 Linux 发行版。今天Scientific Linux 基本上是 Red Hat 减去包含的商标资料。
这两种发行版都不能真正地与 Red Hat 互换,因为他们缺少 Red Hat 支持组件。
哪一个是顶级企业发行版?我认为这取决于你需要为自己确定的许多因素:订阅范围,可用性,成本,服务和提供的功能。这些是每个公司必须自己决定的因素。就我个人而言,我认为 Red Hat 在服务器上获胜,而 SUSE 在桌面环境中轻松获胜,但这只是我的意见 - 你不同意?点击下面的评论部分,让我们来谈谈它。
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/best-linux-distros-for-the-enterprise.html
作者:[Matt Hartley][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.redhat.com/en
[2]:https://www.suse.com/
[3]:http://releases.ubuntu.com/16.04/
[4]:https://www.centos.org/
[5]:https://www.scientificlinux.org/

View File

@ -1,21 +1,17 @@
Translating by FelixYFZ 如何在Linux的终端测试网速
How to test internet speed in Linux terminal
====== ======
Learn how to use speedtest cli tool to test internet speed in Linux terminal. Also includes one liner python command to get speed details right away. 学习如何在Linux终端使用命令行工具测试网速,或者仅用一条python命令立刻获得网速的测试结果。
![在Linux终端测试网速][1]
![test internet speed in linux terminal][1] 我们都会在连接网络或者wifi的时候去测试网络带宽。 为什么不用我们自己的服务器下面将会教你如何在Linux终端测试网速。
Most of us check the internet bandwidth speed whenever we connect to new network or wifi. So why not our servers! Here is a tutorial which will walk you through to test internet speed in Linux terminal. 我们多数都会使用[Mb/s][2]标准来测试网速。 仅仅只是桌面上的一个简单的操作,访问他们的网站点击浏览。
它将使用最近的服务器来扫描你的本地主机来测试网速。 如果你使用的是移动设备他们有对应的移动端APP。但如果你使用的是只有命令行终端界面的则会有些不同。下面让我们一起看看如何在Linux的终端来测试网速。
如果你只是想偶尔的做一次网速测试而不想去下载测试工具,那么请往下看如何使用命令完成测试。
Everyone of us generally uses [Speedtest by Ookla][2] to check internet speed. Its pretty simple process for a desktop. Goto their website and just click GO button. It will scans your location and speed test with nearest server. If you are on mobile, they have their app for you. But if you are on terminal with command line interface things are little different. Lets see how to check internet speed from Linux terminal. ### 第一步:下载网速测试命令行工具。
If you want to speed check only once and dont want to download tool on server, jump here and see one liner command. 首先,你需要从 Github 上下载 speedtest 命令行工具。如今,许多知名的 Linux 仓库中也收录了它,如果你的发行版仓库里有,可以直接在你的 Linux 上安装该软件包。接下来我们按照从 Github 下载并安装的流程进行:先根据你的发行版安装 git 软件包,然后按照下面的方法克隆 speedtest 的 Github 存储库:
### Step 1 : Download speedtest cli tool
First of all, you have to download speedtest CLI tool from [github repository][3]. Now a days, its also included in many well known Linux repositories as well. If its their in yours then you can directly [install that package on your Linux distro][4].
Lets proceed with Github download and install process. [Install git package][4] depending on your distro. Then clone Github repo of speedtest like belwo :
``` ```
[root@kerneltalks ~]# git clone https://github.com/sivel/speedtest-cli.git [root@kerneltalks ~]# git clone https://github.com/sivel/speedtest-cli.git
@ -27,7 +23,7 @@ Resolving deltas: 100% (518/518), done.
``` ```
It will be cloned to your present working directory. New directory named `speedtest-cli` will be created. You can see below files in it. 它将会被克隆到你当前的工作目录新的名为speedtest-cli的目录将会被创建,你将在新的目录下看到如下的文件。
``` ```
[root@kerneltalks ~]# cd speedtest-cli [root@kerneltalks ~]# cd speedtest-cli
@ -45,13 +41,11 @@ total 96
-rw-r--r--. 1 root root 333 Oct 7 16:55 tox.ini -rw-r--r--. 1 root root 333 Oct 7 16:55 tox.ini
``` ```
The python script `speedtest.py` is the one we will be using to check internet speed. 名为speedtest.py的脚本文件就是用来测试网速的。你可以在/usr/bin执行环境下将这个脚本链接到一条命令以便这台机器上的所有用户都能使用。或者你可以为这个脚本创建一个命令别名这样就能让所有用户很容易使用它。
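下面是一个简单的示意(假设脚本克隆在 /root/speedtest-cli 目录下,路径仅作举例):

```
# 方法一:加上可执行权限并链接到 /usr/bin这样所有用户都能直接运行
chmod +x /root/speedtest-cli/speedtest.py
ln -s /root/speedtest-cli/speedtest.py /usr/bin/speedtest-cli

# 方法二:为脚本创建一个命令别名(可以写进 ~/.bashrc 或 /etc/profile.d/ 中)
alias speedtest='python /root/speedtest-cli/speedtest.py'
```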
You can link this script for a command in /usr/bin so that all users on server can use it. Or you can even create [command alias][5] for it and it will be easy for all users to use it. ### 运行python脚本
### Step 2 : Run python script 现在,直接运行这个脚本,不需要添加任何参数它将会搜寻最近的服务器来测试你的网速。
Now, run python script without any argument and it will search nearest server and test your internet speed.
``` ```
[root@kerneltalks speedtest-cli]# python speedtest.py [root@kerneltalks speedtest-cli]# python speedtest.py
@ -66,13 +60,13 @@ Testing upload speed............................................................
Upload: 323.95 Mbit/s Upload: 323.95 Mbit/s
``` ```
Oh! Dont amaze with speed. 😀 I am on [AWS EC2 Linux server][6]. Thats the bandwidth of Amazon data center! 🙂 Oh不要被这个网速吓到。😀 我用的是 AWS EC2 Linux 服务器,那是亚马逊数据中心的带宽!🙂
### Different options with script ### 脚本的其他选项
Few options which might be useful are as below : 下面几个选项可能会很有用:
**To search speedtest servers** nearby your location use `--list` switch and `grep` for your location name. **搜寻你附近的测速服务器**:使用 `--list` 选项,并用 `grep` 过滤出你所在位置的名称。
``` ```
[root@kerneltalks speedtest-cli]# python speedtest.py --list | grep -i mumbai [root@kerneltalks speedtest-cli]# python speedtest.py --list | grep -i mumbai
@ -90,11 +84,10 @@ Few options which might be useful are as below :
6403) YOU Broadband India Pvt Ltd., Mumbai (Mumbai, India) [1.15 km] 6403) YOU Broadband India Pvt Ltd., Mumbai (Mumbai, India) [1.15 km]
``` ```
You can see here, first column is server identifier followed by name of company hosting that server, location and finally its distance from your location. 然后你就能从搜寻结果中看到,第一列是服务器识别号,紧接着是公司的名称和所在地,最后是离你的距离。
**To test internet speed using specific server** use `--server` switch and server identifier from previous output as argument. **使用指定的服务器来测试网速**:使用 `--server` 选项,并把上面输出中的服务器识别号作为参数。
```
[root@kerneltalks speedtest-cli]# python speedtest.py --server 2827 [root@kerneltalks speedtest-cli]# python speedtest.py --server 2827
Retrieving speedtest.net configuration... Retrieving speedtest.net configuration...
Testing from Amazon (35.154.184.126)... Testing from Amazon (35.154.184.126)...
@ -107,7 +100,7 @@ Testing upload speed............................................................
Upload: 69.25 Mbit/s Upload: 69.25 Mbit/s
``` ```
**To get share link of your speed test** , use `--share` switch. It will give you URL of your test hosted on speedtest website. You can share this URL. **获取测试结果的共享链接**:使用 `--share` 选项,你会得到一个托管在 speedtest 网站上的测试结果链接,可以把它分享出去。
``` ```
[root@kerneltalks speedtest-cli]# python speedtest.py --share [root@kerneltalks speedtest-cli]# python speedtest.py --share
@ -124,21 +117,20 @@ Share results: http://www.speedtest.net/result/6687428141.png
``` ```
Observe last line which includes URL of your test result. If I download that image its the one below : 输出中的最后一行就是你的测试结果的链接。下载下来的图片内容如下 :
![Speedtest result on Linux][7] ![Speedtest result on Linux][7]
Thats it! But hey if you dont want all this technical jargon, you can even use below one liner to get speed test done right away. 这就是全部的过程!嘿,如果你不想理会这些技术细节,也可以用下面的一行命令立刻完成网速测试。
### Internet speed test using one liner in terminal ### 在终端中用一行命令测试网速
We are going to use [curl tool ][8]to fetch above said python script online and supply it to python for execution on the go! 我们将使用 curl 工具在线抓取上面提到的 python 脚本,然后直接交给 python 执行。
``` ```
[root@kerneltalks ~]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python - [root@kerneltalks ~]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
``` ```
上面的命令会运行脚本,并把结果输出到屏幕上。
Above command will run the script and show you result on screen!
``` ```
[root@kerneltalks speedtest-cli]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python - [root@kerneltalks speedtest-cli]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
@ -153,14 +145,12 @@ Testing upload speed............................................................
Upload: 355.84 Mbit/s Upload: 355.84 Mbit/s
``` ```
I tested this tool on RHEL 7 server but process is same on Ubuntu, Debian, Fedora or CentOS.
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
via: https://kerneltalks.com/tips-tricks/how-to-test-internet-speed-in-linux-terminal/ 这是在 RHEL 7 上执行的结果,在 Ubuntu、Debian、Fedora 或 CentOS 上同样可以执行。
作者:[Shrikant Lavhate][a] 作者:[Shrikant Lavhate][a]
译者:[译者ID](https://github.com/译者ID) 译者:[FelixYFZ ](https://github.com/FelixYFZ)
校对:[校对者ID](https://github.com/校对者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,103 +0,0 @@
可怕的万圣节 Linux 命令
======
万圣节快到了,是时候关注一下 Linux 可怕的一面。什么命令可能会显示鬼、巫婆和僵尸的图像?这可能会鼓励伎俩或治疗的精神?哪个会鼓励“不给糖果就捣蛋”的精神?
### crypt
好吧,我们一直看到 **crypt**。尽管名称不同crypt 不是一个地窖也不是垃圾文件的埋葬坑而是一个加密文件内容的命令。现在“crypt” 通常用一个脚本实现,通过调用一个名为 **mcrypt** 的二进制文件来模拟以前的 crypt 命令来完成它的工作。直接使用 **mcrypt** 命令是更好的选择。
```
$ mcrypt x
Enter the passphrase (maximum of 512 characters)
Please use a combination of upper and lower case letters and numbers.
Enter passphrase:
Enter passphrase:
File x was encrypted.
```
请注意mcrypt 命令会创建一个扩展名为 “.nc” 的第二个文件。它不会覆盖你正在加密的文件。
mcrypt 命令有密钥大小和加密算法的选项。你也可以在选项中指定密钥,但 mcrypt 命令不鼓励这样做。
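作为补充示例(具体选项以你安装的 mcrypt 版本的手册为准),加密得到的 “.nc” 文件可以用 -d 选项解密:

```
$ mcrypt -d x.nc        # 解密,会提示输入加密时设置的密码
$ mcrypt --list         # 列出支持的加密算法
```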
### kill
还有 kill 命令 - 当然并不是指谋杀而是用来强制和非强制地结束进程这取决于正确终止它们的要求。当然Linux 并不止于此。相反,它有各种 kill 命令来终止进程。我们有 kill、pkill、killall、killpg、rfkill、skill (读作 es-kill)、tgkill、tkill 和 xkill。
```
$ killall runme
[1] Terminated ./runme
[2] Terminated ./runme
[3]- Terminated ./runme
[4]+ Terminated ./runme
```
### shred
Linux 系统也支持一个名为 **shred** 的命令。shred 命令会覆盖文件以隐藏其以前的内容并确保使用硬盘恢复工具无法恢复它们。请记住rm 命令基本上只是删除文件在目录文件中的引用,但不一定会从磁盘上删除内容或覆盖它。**shred** 命令覆盖文件的内容。
```
$ shred dupes.txt
$ more dupes.txt
▒oΛ▒▒9▒lm▒▒▒▒▒o▒1־▒▒f▒f▒▒▒i▒▒h^}&▒▒▒{▒▒
```
### 僵尸
虽然不是命令,但**僵尸**在 Linux 系统上是很顽固的存在。僵尸基本上是没有被完全清理掉的死亡进程的残骸。进程_不应该_这样工作不应该让死亡进程四处游荡而应该让它们干脆地死去、进入数字天堂所以僵尸的存在说明留下它们的那些进程有些问题。
要检查你的系统中是否残留有僵尸进程,一个简单的方法是查看 top 命令输出的标题行。
```
$ top
top - 18:50:38 up 6 days, 6:36, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 171 total, 1 running, 167 sleeping, 0 stopped, 3 zombie **< ==**
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 99.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2003388 total, 250840 free, 545832 used, 1206716 buff/cache
KiB Swap: 9765884 total, 9765764 free, 120 used. 1156536 avail Mem
```
可怕!上面显示有三个僵尸进程。
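如果想进一步找出这些僵尸进程以及留下它们的父进程,可以用 ps 按进程状态过滤,下面是一个示例写法(输出的列取决于你的 ps 版本):

```
$ ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'
```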
### at midnight
有时会在万圣节这么说死者的灵魂从日落开始游荡直到午夜。Linux 可以通过 “at midnight” 命令跟踪它们的离开。用于安排在下次到达指定时间时运行的作业,**at** 的作用类似于一次性的 cron。
```
$ at midnight
warning: commands will be executed using /bin/sh
at> echo 'the spirits of the dead have left'
at> <EOT>
job 3 at Thu Oct 31 00:00:00 2017
```
### 守护进程
Linux 系统也高度依赖守护进程 - 在后台运行的进程,并提供系统的许多功能。许多守护进程的名称以 “d” 结尾。这个 “d” 代表“守护进程” daemon表明这个进程一直运行并支持一些重要功能。有的会将单词 “daemon” 展开。
```
$ ps -ef | grep sshd
root 1142 1 0 Oct19 ? 00:00:00 /usr/sbin/sshd -D
root 25342 1142 0 18:34 ? 00:00:00 sshd: shs [priv]
$ ps -ef | grep daemon | grep -v grep
message+ 790 1 0 Oct19 ? 00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 836 1 0 Oct19 ? 00:00:02 /usr/lib/accountsservice/accounts-daemon
```
### 万圣节快乐!
在 [Facebook][1] 和 [LinkedIn][2] 上加入 Network World 社区来对主题进行评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3235219/linux/scary-linux-commands-for-halloween.html
作者:[Sandra Henry-Stocker][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world
View File
@ -1,26 +1,24 @@
translating---geekpi 如何使用 virsh 命令创建、还原和删除 KVM 虚拟机快照
How to Create, Revert and Delete KVM Virtual machine snapshot with virsh command
====== ======
[![KVM-VirtualMachine-Snapshot][1]![KVM-VirtualMachine-Snapshot][2]][2] [![KVM-VirtualMachine-Snapshot][1]![KVM-VirtualMachine-Snapshot][2]][2]
While working on the virtualization platform system administrators usually take the snapshot of virtual machine before doing any major activity like deploying the latest patch and code. 在虚拟化平台上进行系统管理工作时经常在开始主要活动比如部署补丁和代码前先设置一个虚拟机快照。
Virtual machine **snapshot** is a copy of virtual machines disk at the specific point of time. In other words we can say snapshot keeps or preserve the state and data of a virtual machine at given point of time. 虚拟机**快照**是特定时间点的虚拟机磁盘的副本。换句话说,快照保存了给定的时间点虚拟机的状态和数据。
### Where we can use VM snapshots ..? ### 我们可以在哪里使用虚拟机快照?
If you are working on **KVM** based **hypervisors** we can take virtual machines or domain snapshot using the virsh command. Snapshot becomes very helpful in a situation where you have installed or apply the latest patches on the VM but due to some reasons, application hosted in the VMs becomes unstable and application team wants to revert all the changes or patches. If you had taken the snapshot of the VM before applying patches then we can restore or revert the VM to its previous state using snapshot. 如果你在使用基于 **KVM** 的**虚拟机管理程序**,那么可以使用 virsh 命令获取虚拟机或域快照。快照在一种情况下变得非常有用,当你已经在虚拟机上安装或应用了最新的补丁,但是由于某些原因,虚拟机上的程序变得不稳定,程序团队想要还原所有的更改和补丁。如果你在应用补丁之前设置了虚拟机的快照,那么可以使用快照将虚拟机恢复到之前的状态。
**Note:** We can only take the snapshot of the VMs whose disk format is **Qcow2** and raw disk format is not supported by kvm virsh command, Use below command to convert the raw disk format to qcow2 **注意:**我们只能设置磁盘格式为 **Qcow2** 的虚拟机的快照,并且 kvm virsh 命令不支持 raw 磁盘格式,请使用以下命令将原始磁盘格式转换为 qcow2。
``` ```
# qemu-img convert -f raw -O qcow2 image-name.img image-name.qcow2 # qemu-img convert -f raw -O qcow2 image-name.img image-name.qcow2
``` ```
### Create KVM Virtual Machine (domain) Snapshot ### 创建 KVM 虚拟机(域)快照
I am assuming KVM hypervisor is already configured on CentOS 7 / RHEL 7 box and VMs are running on it. We can list the all the VMs on hypervisor using below virsh command, 我假设 KVM 管理程序已经在 CentOS 7 / RHEL 7 机器上配置好了,并且有虚拟机正在运行。我们可以使用下面的 virsh 命令列出虚拟机管理程序中的所有虚拟机,
``` ```
[root@kvm-hypervisor ~]# virsh list --all [root@kvm-hypervisor ~]# virsh list --all
 Id    Name                           State  Id    Name                           State
@ -35,9 +33,9 @@ I am assuming KVM hypervisor is already configured on CentOS 7 / RHEL 7 box and
``` ```
Lets suppose we want to create the snapshot of **webserver** VM, run the below command, 假设我们想创建 **webserver** 虚拟机的快照,运行下面的命令,
**Syntax :** **语法:**
``` ```
# virsh snapshot-create-as domain {vm_name} name {snapshot_name} description “enter description here” # virsh snapshot-create-as domain {vm_name} name {snapshot_name} description “enter description here”
@ -49,7 +47,7 @@ Domain snapshot webserver_snap created
``` ```
Once the snapshot is created then we can list snapshots related to the VM using below command, 创建快照后,我们可以使用下面的命令列出与虚拟机相关的快照,
``` ```
[root@kvm-hypervisor ~]# virsh snapshot-list webserver [root@kvm-hypervisor ~]# virsh snapshot-list webserver
 Name                 Creation Time             State  Name                 Creation Time             State
@ -59,7 +57,7 @@ Once the snapshot is created then we can list snapshots related to the VM using
``` ```
To list the detailed info of VMs snapshot, run the beneath virsh command, 要列出虚拟机快照的详细信息,请运行下面的 virsh 命令,
``` ```
[root@kvm-hypervisor ~]# virsh snapshot-info --domain webserver --snapshotname webserver_snap [root@kvm-hypervisor ~]# virsh snapshot-info --domain webserver --snapshotname webserver_snap
Name:           webserver_snap Name:           webserver_snap
@ -75,7 +73,7 @@ Metadata:       yes
``` ```
We can view the size of snapshot using below qemu-img command, 我们可以使用下面的 qemu-img 命令查看快照的大小,
``` ```
[root@kvm-hypervisor ~]# qemu-img info /var/lib/libvirt/images/snaptestvm.img [root@kvm-hypervisor ~]# qemu-img info /var/lib/libvirt/images/snaptestvm.img
@ -83,11 +81,11 @@ We can view the size of snapshot using below qemu-img command,
[![qemu-img-command-output-kvm][1]![qemu-img-command-output-kvm][3]][3] [![qemu-img-command-output-kvm][1]![qemu-img-command-output-kvm][3]][3]
### Revert / Restore KVM virtual Machine to Snapshot ### 还原 KVM 虚拟机快照
Lets assume we want to revert or restore webserver VM to the snapshot that we have created in above step. Use below virsh command to restore Webserver VM to its snapshot “ **webserver_snap** 假设我们想要将 webserver 虚拟机还原到我们在上述步骤中创建的快照。使用下面的 virsh 命令将 Webserver 虚拟机恢复到其快照 “**webserver_snap**” 上。
**Syntax :** **语法:**
``` ```
# virsh snapshot-revert {vm_name} {snapshot_name} # virsh snapshot-revert {vm_name} {snapshot_name}
@ -98,9 +96,9 @@ Lets assume we want to revert or restore webserver VM to the snapshot that we
``` ```
### Delete KVM virtual Machine Snapshots ### 删除 KVM 虚拟机快照
To delete KVM virtual machine snapshots, first get the VMs snapshot details using “ **virsh snapshot-list** ” command and then use “ **virsh snapshot-delete** ” command to delete the snapshot. Example is shown below: 要删除 KVM 虚拟机快照,首先使用 “**virsh snapshot-list**” 命令获取虚拟机的快照详细信息,然后使用 “**virsh snapshot-delete**” 命令删除快照。如下示例所示:
``` ```
[root@kvm-hypervisor ~]# virsh snapshot-list --domain webserver [root@kvm-hypervisor ~]# virsh snapshot-list --domain webserver
 Name                 Creation Time             State  Name                 Creation Time             State
@ -114,14 +112,14 @@ Domain snapshot webserver_snap deleted
``` ```
Thats all from this article, I hope you guys get an idea on how to manage KVM virtual machine snapshots using virsh command. Please do share your feedback and dont hesitate to share it among your technical friends 🙂 这就是本文的全部内容,我希望你们能够了解如何使用 virsh 命令来管理 KVM 虚拟机快照。请分享你的反馈,并不要犹豫地分享给你的技术朋友🙂
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
via: https://www.linuxtechi.com/create-revert-delete-kvm-virtual-machine-snapshot-virsh-command/ via: https://www.linuxtechi.com/create-revert-delete-kvm-virtual-machine-snapshot-virsh-command/
作者:[Pradeep Kumar][a] 作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID) 译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,104 @@
如何像 Linux 专家那样使用 WSL
============================================================
![WSL](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-pro.png?itok=e65wEEAw "WSL")
在本 WSL 教程中了解如何执行像挂载 USB 驱动器和操作文件等任务。图片提供Microsoft[经许可使用][1]
在[之前的教程][4]中,我们学习了在 Windows 10 上设置 WSL。你可以在 Windows 10 中使用 WSL 执行许多 Linux 命令。许多系统管理任务都是在终端内部完成的,无论是基于 Linux 的系统还是 macOS。然而Windows 10 缺乏这样的功能。你想运行一个 cron 任务么?不行。你想 SSH 进入你的服务器,然后 rsync 文件么?没门。如何用强大的命令行工具管理本地文件,而不是使用缓慢和不可靠的 GUI 工具?
在本教程中,你将看到如何使用 WSL 执行更多任务,例如挂载 USB 驱动器和操作文件。你需要运行一个完全更新的 Windows 10并选择一个 Linux 发行版。我在[上一篇文章][5]中介绍了这些步骤,所以如果你需要先补上这部分,可以从那里开始。让我们开始吧。
### 保持你的 Linux 系统更新
事实上,当你通过 WSL 运行 Ubuntu 或 openSUSE 时,没有 Linux 内核在运行。然而,你必须保持你的发行版完整更新,以保护你的系统免受任何新的已知漏洞的影响。由于在 Windows 应用商店中只有两个免费的社区发行版所以教程将只覆盖以下两个openSUSE 和 Ubuntu。
更新你的 Ubuntu 系统:
```
# sudo apt-get update
# sudo apt-get dist-upgrade
```
运行 openSUSE 的更新:
```
# zypper up
```
您还可以使用 _dup_ 命令将 openSUSE 升级到最新版本。但在运行系统升级之前,请使用上一个命令运行更新。
```
# zypper dup
```
**注意:** openSUSE 默认为 root 用户。如果你想执行任何非管理员任务,请切换到非特权用户。你可以在这篇[文章][6]中了解如何在 openSUSE 上创建用户。
### 管理本地文件
如果你想使用优秀的 Linux 命令行工具来管理本地文件,你可以使用 WSL 轻松完成此操作。不幸的是WSL 还不支持像 _lsblk_ 或 _mount_ 这样的命令来挂载本地驱动器。但是,你可以 _cd_ 到 C 盘并管理文件:
/mnt/c/Users/swapnil/Music
我现在在 C 盘的 Music 目录下。
要安装其他驱动器、分区和外部 USB 驱动器,你需要创建一个挂载点,然后挂载该驱动器。
打开文件资源管理器并检查该驱动器的挂载点。假设它在 Windows 中被挂载为 S:\
在 Ubuntu/openSUSE 终端中,为驱动器创建一个挂载点。
```
sudo mkdir /mnt/s
```
现在挂载驱动器:
```
mount -f drvfs S: /mnt/s
```
挂载完毕后,你现在可以从发行版访问该驱动器。请记住,使用 WSL 运行的发行版将会看到 Windows 能看到的内容。因此,你无法挂载在 Windows 上无法原生挂载的 ext4 驱动器。
现在你可以在这里使用所有这些神奇的 Linux 命令。想要将文件从一个文件夹复制或移动到另一个文件夹?只需运行 _cp__mv_ 命令。
```
cp /source-folder/source-file.txt /destination-folder/
cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/
```
如果你想移动文件夹或大文件,我会推荐 _rsync_ 而不是 _cp_ 命令:
```
rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
```
耶!
想要在 Windows 驱动器中创建新目录,只需使用 _mkdir_ 命令。
想要在某个时间设置一个 cron 作业来自动执行任务吗?继续使用 _crontab -e_ 创建一个 cron 作业。十分简单。
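下面是一个最小示例(目录和时间均为假设,仅作演示):运行 _crontab -e_ 后加入类似下面的一行,就可以在每天凌晨两点自动把 C 盘中的文档同步到前面挂载的 S 盘。

```
0 2 * * * rsync -a /mnt/c/Users/swapnil/Documents/ /mnt/s/backup/documents/
```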
你还可以在 Linux 中挂载网络/远程文件夹,以便你可以使用更好的工具管理它们。我的所有驱动器都插在树莓派或者服务器上,因此我只需 ssh 进入该机器并管理硬盘。在本地计算机和远程系统之间传输文件可以再次使用 _rsync_ 命令完成。
WSL 现在已经不再是测试版了,它将继续获得更多新功能。我很兴奋的两个特性是 lsblk 命令和 dd 命令,它们允许我在 Windows 中本机管理我的驱动器并创建可引导的 Linux 驱动器。如果你是 Linux 命令行的新手,[前一篇教程][7]将帮助你开始使用一些最基本的命令。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
作者:[SWAPNIL BHARTIYA][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/
[3]:https://www.linux.com/files/images/wsl-propng
[4]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[5]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[6]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[7]:https://www.linux.com/learn/how-use-linux-command-line-basics-cli
View File
@ -1,185 +0,0 @@
使用 Github 和 Python 实现持续部署
======
![](https://fedoramagazine.org/wp-content/uploads/2018/03/cd-github-python-945x400.jpg)
借助 Github 的 webhook网络钩子开发者可以创建很多有用的服务。从触发一个 Jenkins 实例上的 CI持续集成 任务到配置云中的机器,几乎有着无限的可能性。这篇教程将展示如何使用 Python 和 Flask 框架来搭建一个简单的持续部署服务。
在这个例子中的持续部署服务是一个简单的 Flask 应用,其带有接受 Github 的 webhook 请求的 REST 端点endpoint。在验证每个请求都来自正确的 Github 仓库后服务器将拉取pull更改到仓库的本地副本。这样每次一个新的提交(commit)推送到远程 Github 仓库,本地仓库就会自动更新。
### Flask web 服务
用 Flask 搭建一个小的 web 服务非常简单。这里可以先看看项目的结构。
```
├── app
│ ├── __init__.py
│ └── webhooks.py
├── requirements.txt
└── wsgi.py
```
首先,创建应用。应用代码在 app 目录下。
两个文件__init__.py 和 webhooks.py构成了 Flask 应用。前者有创建 Flask 应用并为其添加配置的代码。后者有端点endpoint逻辑。这是该应用接收 Github 请求的地方。
这里是 app/__init__.py 的内容:
```
import os
from flask import Flask
from .webhooks import webhook
def create_app():
""" Create, configure and return the Flask application """
app = Flask(__name__)
app.config['GITHUB_SECRET'] = os.environ.get('GITHUB_SECRET')
app.config['REPO_PATH'] = os.environ.get('REPO_PATH')
app.register_blueprint(webhook)
return(app)
```
该函数创建了两个配置变量:
* **GITHUB_SECRET** 保存一个密码,用来认证 Github 请求。
* **REPO_PATH** 保存了自动更新的仓库路径。
这份代码使用[Flask 蓝图Blueprints][1]来组织应用的端点endpoint。使用蓝图可以对 API 进行逻辑分组,使应用程序更易于维护。通常认为这是一种好的做法。
这里是 app/webhooks.py 的内容:
```
import hmac
from flask import request, Blueprint, jsonify, current_app
from git import Repo
webhook = Blueprint('webhook', __name__, url_prefix='')
@webhook.route('/github', methods=['POST'])
def handle_github_hook():
""" Entry point for github webhook """
signature = request.headers.get('X-Hub-Signature')
sha, signature = signature.split('=')
secret = str.encode(current_app.config.get('GITHUB_SECRET'))
hashhex = hmac.new(secret, request.data, digestmod='sha1').hexdigest()
if hmac.compare_digest(hashhex, signature):
repo = Repo(current_app.config.get('REPO_PATH'))
origin = repo.remotes.origin
origin.pull('--rebase')
commit = request.json['after'][0:6]
print('Repository updated with commit {}'.format(commit))
return jsonify({}), 200
```
首先代码创建了一个新的蓝图 webhook。然后它使用 Flask 路由route为蓝图添加了一个端点。任何 URL 为 /Github 的端点的 POST 请求都将调用这个路由。
#### 验证请求
当服务收到该端点的请求时首先它必须验证该请求是否来自 GitHub以及是否来自正确的仓库。Github 在请求头的 X-Hub-Signature 中提供了一个签名。该签名是用一个密码GITHUB_SECRET作为密钥对请求体做 HMAC 运算(哈希算法为 sha1得到的十六进制摘要。
为了验证请求,服务需要在本地计算签名并与请求头中收到的签名做比较。这可以由 hmac.compare_digest 函数完成。
#### 自定义钩子逻辑
在验证请求后,现在就可以处理它了。这篇教程使用 [GitPython][3] 模块来与 git 仓库进行交互。GitPython 模块中的 Repo 对象用于访问远程仓库 origin。服务在本地拉取远程仓库的最新更改并使用 --rebase 选项来避免合并的问题。
调试打印语句显示了从请求体收到的短提交哈希。这个例子展示了如何使用请求体。更多关于请求体的可用数据的信息,请查询[github 文档][4]。
最后服务返回了一个空的 JSON 字符串和 200 的状态码。这用于告诉 Github 的 webhook 服务已经收到了请求。
### 部署服务
为了运行该服务,这个例子使用 [gunicorn][5] web 服务器。首先安装服务依赖。在支持的 Fedora 服务器上,以[sudo][6]运行这条命令:
```
sudo dnf install python3-gunicorn python3-flask python3-GitPython
```
现在编辑 gunicorn 使用的 wsgi.py 文件来运行服务:
```
from app import create_app
application = create_app()
```
为了部署服务,使用以下命令克隆这个 git [仓库][7]或者使用你自己的 git 仓库:
```
git clone https://github.com/cverna/github_hook_deployment.git /opt/
```
下一步是配置服务所需的环境变量。运行这些命令:
```
export GITHUB_SECRET=asecretpassphraseusebygithubwebhook
export REPO_PATH=/opt/github_hook_deployment/
```
这篇教程使用 webhook 服务的 Github 仓库,但你可以使用你想要的不同仓库。最后,使用这些命令开启 该 web 服务:
```
cd /opt/github_hook_deployment/
gunicorn --bind 0.0.0.0 wsgi:application --reload
```
这些选项把 web 服务绑定到 ip 地址 0.0.0.0,意味着它将接收来自任何主机的请求。选项 --reload 确保了当代码更改时自动重启 web 服务。这就是持续部署的魔力所在每次接收到 Github 请求时将拉取仓库的最近更新,同时 gunicorn 检测到这些更改并自动重启服务。
**注意:**为了能接收到 Github 请求web 服务必须部署到具有公网 IP 地址的服务器上。做到这点的简单方法就是使用你最喜欢的云提供商,比如 DigitalOcean、AWS、Linode 等。
### 配置 Github
这篇教程的最后一部分是配置 Github 来发送 webhook 请求到 web 服务上。这是持续部署的关键。
从你的 Github 仓库的设置中,选择 Webhook 菜单,并且点击添加 Webhook。输入以下信息
* **Payload URL:** 服务的 URL比如<http://public\_ip\_address:8000/github>
* **Content type:** 选择 application/json
* **Secret:** 前面定义的 **GITHUB_SECRET** 环境变量
然后点击添加 Webhook 按钮。
![][8]
现在每当该仓库发生 push(推送)事件时Github 将向服务发送请求。
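如果想在配置 Github 之前先在本地验证服务,可以手动构造一个带签名的请求来模拟 webhook下面只是一个示意性的测试payload、地址和端口都是假设的签名的计算方式与上面 webhooks.py 中的验证逻辑一致:

```
# 使用与服务端相同的 GITHUB_SECRET
export GITHUB_SECRET=asecretpassphraseusebygithubwebhook
PAYLOAD='{"after": "0123456789abcdef"}'

# 用 openssl 计算请求体的 HMAC-SHA1 摘要,格式与 X-Hub-Signature 头一致
SIG=$(echo -n "$PAYLOAD" | openssl dgst -sha1 -hmac "$GITHUB_SECRET" | sed 's/^.* //')

curl -X POST http://localhost:8000/github \
  -H "Content-Type: application/json" \
  -H "X-Hub-Signature: sha1=$SIG" \
  -d "$PAYLOAD"
```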
### 总结
这篇教程向你展示了如何写一个基于 Flask 的用于接收 Github 的 webhook 请求和实现持续集成的 web 服务。现在你应该能以本教程作为起点来搭建对自己有用的服务。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/continuous-deployment-github-python/
作者:[Author Archive;Author Website;Clément Verna][a]
译者:[kimii](https://github.com/kimii)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:http://flask.pocoo.org/docs/0.12/blueprints/
[2]:https://en.wikipedia.org/wiki/HMAC
[3]:https://gitpython.readthedocs.io/en/stable/index.html
[4]:https://developer.github.com/v3/activity/events/types/#webhook-payload-example-26
[5]:http://gunicorn.org/
[6]:https://fedoramagazine.org/howto-use-sudo/
[7]:https://github.com/cverna/github_hook_deployment.git
[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/Screenshot-2018-3-26-cverna-github_hook_deployment1.png
View File
@ -1,24 +1,25 @@
9 Useful touch command examples in Linux 在 Linux 下 9 个有用的 touch 命令示例
====== =====
Touch command is used to create empty files and also changes the timestamps of existing files on Unix & Linux System. Changing timestamps here means updating the access and modification time of files and directories.
touch 命令用于创建空文件,并且更改 Unix 和 Linux 系统上现有文件时间戳。这里更改时间戳意味着更新文件和目录的访问以及修改时间。
[![touch-command-examples-linux][1]![touch-command-examples-linux][2]][2] [![touch-command-examples-linux][1]![touch-command-examples-linux][2]][2]
Lets have a look on the syntax and options used in touch command, 让我们来看看 touch 命令的语法和选项:
**Syntax** : # touch {options} {file} **语法** # touch {选项} {文件}
Options used in touch command, touch 命令中使用的选项:
![touch-command-options][1] ![touch-command-options][1]
![touch-command-options][3] ![touch-command-options][3]
In this article we will walk through 9 useful touch command examples in Linux, 在这篇文章中,我们将介绍 Linux 中 9 个有用的 touch 命令示例。
### Example:1 Create an empty file using touch ### 示例:1 使用 touch 创建一个空文件
To create an empty file using touch command on Linux systems, type touch followed by the file name, example is shown below, 要在 Linux 系统上使用 touch 命令创建空文件,键入 touch然后输入文件名。如下所示
``` ```
[root@linuxtechi ~]# touch devops.txt [root@linuxtechi ~]# touch devops.txt
[root@linuxtechi ~]# ls -l devops.txt [root@linuxtechi ~]# ls -l devops.txt
@ -27,26 +28,26 @@ To create an empty file using touch command on Linux systems, type touch followe
``` ```
### Example:2 Create empty files in bulk using touch ### 示例:2 使用 touch 创建批量空文件
There can be some scenario where we have to create lots of empty files for some testing, this can be easily achieved using touch command, 可能会出现一些情况,我们必须为某些测试创建大量空文件,这可以使用 touch 命令轻松实现:
``` ```
[root@linuxtechi ~]# touch sysadm-{1..20}.txt [root@linuxtechi ~]# touch sysadm-{1..20}.txt
``` ```
In the above example we have created 20 empty files with name sysadm-1.txt to sysadm-20.txt, you can change the name and numbers based on your requirements. 在上面的例子中,我们创建了 20 个名为 sysadm-1.txt 到 sysadm-20.txt 的空文件,你可以根据需要更改名称和数字。
### Example:3 Change / Update access time of a file and directory ### 示例:3 改变/更新文件和目录的访问时间
Lets assume we want to change access time of a file called “ **devops.txt** “, to do this use **-a** option in touch command followed by file name, example is shown below, 假设我们想要改变名为 **devops.txt** 文件的访问时间,在 touch 命令中使用 **-a** 选项,然后输入文件名。如下所示:
``` ```
[root@linuxtechi ~]# touch -a devops.txt [root@linuxtechi ~]# touch -a devops.txt
[root@linuxtechi ~]# [root@linuxtechi ~]#
``` ```
Now verify whether access time of a file has been updated or not using stat command 现在使用 `stat` 命令验证文件的访问时间是否已更新:
``` ```
[root@linuxtechi ~]# stat devops.txt [root@linuxtechi ~]# stat devops.txt
  File: devops.txt   File: devops.txt
@ -62,9 +63,9 @@ Change: 2018-03-29 23:03:10.902000000 -0400
``` ```
**Change access time of a directory** , **改变目录的访问时间**
Lets assume we have a nfsshare folder under /mnt, Lets change the access time of this folder using the below command, 假设我们在 /mnt 目录下有一个 nfsshare 文件夹,让我们用下面的命令改变这个文件夹的访问时间:
``` ```
[root@linuxtechi ~]# touch -m /mnt/nfsshare/ [root@linuxtechi ~]# touch -m /mnt/nfsshare/
[root@linuxtechi ~]# [root@linuxtechi ~]#
@ -83,9 +84,9 @@ Change: 2018-03-29 23:34:38.095000000 -0400
``` ```
### Example:4 Change Access time without creating new file ### 示例:4 更改访问时间而不用创建新文件
There can be some situations where we want to change access time of a file if it exists and avoid creating the file. Using **-c** option in touch command, we can change access time of a file if it exists and will not a create a file, if it doesnt exist. 在某些情况下,如果文件存在,我们希望更改文件的访问时间,并避免创建文件。在 touch 命令中使用 **-c** 选项即可,如果文件存在,那么我们可以改变文件的访问时间,如果不存在,我们也可不会创建它。
``` ```
[root@linuxtechi ~]# touch -c sysadm-20.txt [root@linuxtechi ~]# touch -c sysadm-20.txt
[root@linuxtechi ~]# touch -c winadm-20.txt [root@linuxtechi ~]# touch -c winadm-20.txt
@ -95,18 +96,18 @@ ls: cannot access winadm-20.txt: No such file or directory
``` ```
### Example:5 Change Modification time of a file and directory ### 示例:5 更改文件和目录的修改时间
Using **-m** option in touch command, we can change the modification time of a file and directory, 在 touch 命令中使用 **-m** 选项,我们可以更改文件和目录的修改时间。
Lets change the modification time of a file called “devops.txt”, 让我们更改名为 “devops.txt” 文件的更改时间:
``` ```
[root@linuxtechi ~]# touch -m devops.txt [root@linuxtechi ~]# touch -m devops.txt
[root@linuxtechi ~]# [root@linuxtechi ~]#
``` ```
Now verify whether modification time has been changed or not using stat command, 现在使用 stat 命令来验证修改时间是否改变:
``` ```
[root@linuxtechi ~]# stat devops.txt [root@linuxtechi ~]# stat devops.txt
  File: devops.txt   File: devops.txt
@ -122,23 +123,14 @@ Change: 2018-03-29 23:59:49.106000000 -0400
``` ```
Similarly, we can change modification time of a directory, 同样的,我们可以改变一个目录的修改时间:
``` ```
[root@linuxtechi ~]# touch -m /mnt/nfsshare/ [root@linuxtechi ~]# touch -m /mnt/nfsshare/
[root@linuxtechi ~]# [root@linuxtechi ~]#
``` ```
### Example:6 Changing access and modification time in one go 使用 stat 交叉验证访问和修改时间:
Use “ **-am** ” option in touch command to change the access and modification together or in one go, example is shown below,
```
[root@linuxtechi ~]# touch -am devops.txt
[root@linuxtechi ~]#
```
Cross verify the access and modification time using stat,
``` ```
[root@linuxtechi ~]# stat devops.txt [root@linuxtechi ~]# stat devops.txt
  File: devops.txt   File: devops.txt
@ -154,45 +146,43 @@ Change: 2018-03-30 00:06:20.145000000 -0400
``` ```
### Example:7 Set the Access & modification time to a specific date and time ### 示例:7 将访问和修改时间设置为特定的日期和时间
Whenever we do change access and modification time of a file & directory using touch command, then it set the current time as access & modification time of that file or directory, 每当我们使用 touch 命令更改文件和目录的访问和修改时间时,它将当前时间设置为该文件或目录的访问和修改时间。
Lets assume we want to set specific date and time as access & modification time of a file, this is can be achieved using -c & -t option in touch command, 假设我们想要将特定的日期和时间设置为文件的访问和修改时间,这可以使用 touch 命令中的 -c-t 选项来实现。
Date and Time can be specified in the format: {CCYY}MMDDhhmm.ss 日期和时间可以使用以下格式指定:{CCYY}MMDDhhmm.ss
Where: 其中:
* CC First two digits of a year * CC 年份的前两位数字
* YY Second two digits of a year * YY 年份的后两位数字
* MM Month of the Year (01-12) * MM 月份 (01-12)
* DD Day of the Month (01-31) * DD (01-31)
* hh Hour of the day (00-23) * hh 小时 (00-23)
* mm Minutes of the hour (00-59) * mm 分钟 (00-59)
让我们将 devops.txt 文件的访问和修改时间设置为未来的一个时间2025 年 10 月 19 日 18 时 20 分)。
Lets set the access & modification time of devops.txt file for future date and time( 2025 year, 10th Month, 19th day of month, 18th hours and 20th minute)
``` ```
[root@linuxtechi ~]# touch -c -t 202510191820 devops.txt [root@linuxtechi ~]# touch -c -t 202510191820 devops.txt
``` ```
Use stat command to view the update access & modification time, 使用 stat 命令查看更新访问和修改时间:
![stat-command-output-linux][1] ![stat-command-output-linux][1]
![stat-command-output-linux][4] ![stat-command-output-linux][4]
Set the Access and Modification time based on date string, Use -d option in touch command and then specify the date string followed by the file name, example is shown below, 根据日期字符串设置访问和修改时间,在 touch 命令中使用 -d 选项,然后指定日期字符串,后面跟文件名。如下所示:
``` ```
[root@linuxtechi ~]# touch -c -d "2010-02-07 20:15:12.000000000 +0530" sysadm-29.txt [root@linuxtechi ~]# touch -c -d "2010-02-07 20:15:12.000000000 +0530" sysadm-29.txt
[root@linuxtechi ~]# [root@linuxtechi ~]#
``` ```
Verify the status using stat command, 使用 stat 命令验证文件的状态:
``` ```
[root@linuxtechi ~]# stat sysadm-20.txt [root@linuxtechi ~]# stat sysadm-20.txt
  File: sysadm-20.txt   File: sysadm-20.txt
@ -208,24 +198,24 @@ Change: 2018-03-30 10:23:31.584000000 +0530
``` ```
**Note:** In above commands, if we dont specify -c then touch command will create a new file in case it doesnt exist on the system and will set the timestamps whatever is mentioned in the command. **注意:**在上述命令中,如果我们不指定 -c那么 touch 命令将创建一个新文件以防系统中存在该文件,并将时间戳设置为命令中给出的。
### Example:8 Set the timestamps to a file using a reference file (-r) ### 示例:8 使用参考文件设置时间戳(-r
In touch command we can use a reference file for setting the timestamps of file or directory. Lets assume I want to set the same timestamps of file “sysadm-20.txt” on “devops.txt” file. This can be easily achieved using -r option in touch. 在 touch 命令中,我们可以使用参考文件来设置文件或目录的时间戳。假设我想在 “devops.txt” 文件上设置与文件 “sysadm-20.txt” 文件相同的时间戳touch 命令中使用 -r 选项可以轻松实现。
**Syntax:** # touch -r {reference-file} actual-file **语法:**# touch -r {参考文件} 真正文件
``` ```
[root@linuxtechi ~]# touch -r sysadm-20.txt devops.txt [root@linuxtechi ~]# touch -r sysadm-20.txt devops.txt
[root@linuxtechi ~]# [root@linuxtechi ~]#
``` ```
### Example:9 Change Access & Modification time on symbolic link file ### 示例:9 在符号链接文件上更改访问和修改时间
By default, whenever we try to change timestamps of a symbolic link file using touch command then it will change the timestamps of original file only, In case you want to change timestamps of a symbolic link file then this can be achieved using -h option in touch command, 默认情况下,每当我们尝试使用 touch 命令更改符号链接文件的时间戳时,它只会更改原始文件的时间戳。如果你想更改符号链接文件的时间戳,则可以使用 touch 命令中的 -h 选项来实现。
**Syntax:** # touch -h {symbolic link file} **语法:** # touch -h {符号链接文件}
``` ```
[root@linuxtechi opt]# ls -l /root/linuxgeeks.txt [root@linuxtechi opt]# ls -l /root/linuxgeeks.txt
lrwxrwxrwx. 1 root root 15 Mar 30 10:56 /root/linuxgeeks.txt -> linuxadmins.txt lrwxrwxrwx. 1 root root 15 Mar 30 10:56 /root/linuxgeeks.txt -> linuxadmins.txt
@ -236,14 +226,14 @@ lrwxrwxrwx. 1 root root 15 Oct 19  2030 linuxgeeks.txt -> linuxadmins.txt
``` ```
Thats all from this tutorial, I hope these examples help you to understand touch command. Please do share your valuable feedback and comments. 这就是本教程的全部了。我希望这些例子能帮助你理解 touch 命令。请分享你的宝贵意见和评论。
-------------------------------------------------------------------------------- --------------------------------------------------------------------------------
via: https://www.linuxtechi.com/9-useful-touch-command-examples-linux/ via: https://www.linuxtechi.com/9-useful-touch-command-examples-linux/
作者:[Pradeep Kumar][a] 作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID) 译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -0,0 +1,74 @@
如何在 Linux 中查找文件
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
如果你不是 Windows 或 OSX 的高级用户,那么你可能是用 GUI 来查找文件的。你也可能觉得图形界面功能有限、令人沮丧,或者两者兼有,因此学会了精心组织文件并记住它们的确切位置。你同样可以在 Linux 中这样做,但你不必非这样不可。
Linux 的好处之一是它提供了多种方式来处理。你可以打开任何文件管理器或按下 `ctrl` +`f`,你也可以使用程序手动打开文件,或者你可以开始输入字母,它会过滤当前目录列表。
![Screenshot of how to find files in Linux with Ctrl+F][2]
使用 Ctrl+F 在 Linux 中查找文件的截图
但是如果你不知道你的文件在哪里又不想搜索整个磁盘呢对于这个以及其他各种情况Linux 都很合适。
### 按命令名查找程序位置
如果你习惯随心所欲地放文件Linux 文件系统看起来会让人望而生畏。对我而言,最难习惯的一件事是找到程序在哪里。
例如,`which bash` 通常会返回 `/bin/bash`,但是如果你下载了一个程序并且它没有出现在你的菜单中,`which` 命令可以是一个很好的工具。
一个类似的工具是 `locate` 命令,我发现它对于查找配置文件很有用。我不喜欢输入程序名称,因为像 `locate php` 这样的简单程序通常会提供很多需要进一步过滤的结果。
有关 `locate``which` 的更多信息,请参阅 `man` 页面:
* `man which`
* `man locate`
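举个简单的组合示例(输出因系统而异,这里只是示意):

```
$ which bash
/bin/bash

# locate 的结果往往很多,可以再用 grep 过滤,例如只看 /etc 下与 php 有关的文件
$ locate php | grep /etc/
```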
### Find
`find` 工具提供了更高级的功能。下面是一个我安装在许多服务器上的脚本示例,用于确保匹配特定模式(也称为 glob的文件只保留五天所有更早的文件都会被删除。这里使用小数是为了把自上次修改以来最多 240 分钟的差值也计算在内。
```
find ./backup/core-files*.tar.gz -mtime +4.9 -exec rm {} \;
```
`find` 工具有许多高级用法,但最常见的是对结果执行命令,而不用链式地按照类型、创建日期、修改日期过滤文件。
find 的另一个有趣用处是找到所有有可执行权限的文件。这有助于确保没有人在你昂贵的服务器上安装比特币挖矿程序或僵尸网络。
```
find / -perm /+x
```
有关 `find` 的更多信息,请使用 `man find` 参考 `man` 页面。
### Grep
想通过文件内容来查找文件? Linux 也已经为你准备好了。你可以使用许多 Linux 工具来高效地搜索匹配某个模式的文件,但 `grep` 是我经常使用的工具。
假设你有一个程序,它的错误消息里带有代码引用和堆栈跟踪,而你需要在日志中找到这些内容。grep 不一定总是最好的方法,但如果文件范围是给定的,我经常使用 `grep -R`
越来越多的 IDE 正在实现查找功能,但是如果你正在访问远程系统或出于任何原因没有 GUI或者如果你想在当前目录递归查找请使用`grep -R {searchterm} ` 或在支持 `egrep` 别名的系统上,只需将 `-e` 标志添加到命令 `egrep -r {regex-pattern}`
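下面是一个简单的示例(目录和搜索词都是假设的):

```
# 在日志目录中递归搜索,-n 显示行号,-i 忽略大小写
$ grep -Rin "traceback" /var/log/myapp/

# 用扩展正则一次匹配多个模式
$ egrep -r "Exception|Traceback" /var/log/myapp/
```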
我在去年给 [Raspbian][3] 中的 `dhcpcd5` 打补丁时使用了这种技术,这样我就可以在[树莓派基金会][4]发布新的 Debian 时继续操作网络接入点了。
哪些提示可帮助你在 Linux 上更有效地搜索文件?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/how-find-files-linux
作者:[Lewis Cowles][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/lewiscowles1986
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/find-files-in-linux-ctrlf.png?itok=1gf9kIut (Screenshot of how to find files in Linux with Ctrl+F)
[3]:https://www.raspbian.org/
[4]:https://www.raspberrypi.org/
View File
@ -0,0 +1,150 @@
12 个 Git 提示献给 Git 12 岁生日
=====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/party_anniversary_flag_birthday_celebrate.jpg?itok=KqfMENa7)
[Git][1] 是一个分布式版本控制系统,它已经成为开源世界中源代码控制的默认工具,在 4 月 12 日这天,它 12 岁了。使用 Git 令人沮丧的事情之一是你需要知道更多才能有效地使用 Git。这也可能是使用 Git 比较美妙的一件事,因为没有什么比发现一个新技巧来简化或提高你的工作流的效率更令人快乐了。
为了纪念 Git 的 12 岁生日,这里有 12 条技巧和诀窍来让你的 Git 经验更加有用和强大。从你可能忽略的一些基本知识开始,并扩展到一些真正的高级用户技巧!
### 1\. 你的 ~/.gitconfig 文件
当你第一次尝试使用 `git` 命令向仓库提交一个更改时,你可能会收到这样的欢迎信息:
```
*** Please tell me who you are.
Run
  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"
to set your account's default identity.
```
你可能没有意识到正是这些命令正在修改 `~/.gitconfig` 的内容,这是 Git 存储全局配置选项的地方。你可以通过 `~/.gitconfig` 文件来做大量的事,包括定义别名,永久性打开(或关闭)特定命令选项,以及修改 Git 工作方式(例如,`git diff` 使用哪个 diff 算法,或者默认使用什么类型的合并策略)。你甚至可以根据仓库的路径有条件地包含其他配置文件!所有细节请参阅 `man git-config`
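例如,条件包含大致可以这样写(需要较新版本的 Git这里的目录和文件名只是假设的示例当仓库位于 ~/work/ 下时,额外加载另一份配置:

```
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work
```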
### 2\. 你仓库的 .gitconfig 文件
在之前的技巧中,你可能想知道 `git config` 命令中的 `--global` 标志是干什么的。它告诉 Git 更新 `~/.gitconfig` 中的“全局”配置。当然,有全局配置也就意味着有本地配置;事实上,如果你省略 `--global` 标志,`git config` 将改为更新仓库特有的配置,该配置保存在 `.git/config` 中。
`.git/config` 文件中设置的选项将覆盖 `~/.gitconfig` 文件中的所有设置。因此,例如,如果你需要为特定仓库使用不同的电子邮件地址,则可以运行 `git config user.email "also_you@example.com"`。然后,该仓库中的任何提交都将使用你单独配置的电子邮件地址。如果你参与开源项目,希望提交里显示的是你的个人邮箱地址,同时仍然用工作邮箱作为 Git 的主配置,这就非常有用。
几乎任何你可以在 `~/.gitconfig` 中设置的东西,也都可以在 `.git/config` 中设置,使其只作用于特定的仓库。在下面的提示中,当我提到把某些内容添加到 `~/.gitconfig` 时,请记住你也可以把它添加到特定仓库的 `.git/config` 中。
### 3\. 别名
别名是你可以在 `~/.gitconfig` 中做的另一件事。它的工作原理就像命令行中的 shell 别名:定义一个新的命令名称,用来调用一个或多个其他命令,通常还带上一组特定的选项或标志。对于那些你经常使用的又长又复杂的命令来说,别名非常有效。
你可以使用 `git config` 命令来定义别名 - 例如,运行 `git config --global --add alias.st status` 将使运行 `git st` 与运行 `git status` 做同样的事情 - 但是我在定义别名时发现,直接编辑 `~/.gitconfig` 文件通常更容易。
如果你选择使用这种方法,你会发现 `~/.gitconfig` 文件是一个 [INI 文件][2]。INI 是一种带有特定段落的键值对文件格式。当添加一个别名时,你将改变 `[alias]` 段落。例如,定义上面相同的 `git st` 别名时,添加如下到文件:
```
[alias]
st = status
```
(如果已经有 `[alias]` 段落,只需将第二行添加到现有部分。)
### 4\. shell 命令中的别名
别名不仅仅限于运行其他 Git 子命令,你还可以定义运行其他 shell 命令的别名。这是应对那些反复出现、但又少见而复杂的任务的好办法:一旦你弄清楚了如何完成它,就把命令保存成一个别名。例如,我有一些仓库是从开源项目 fork 出来的,并做了一些本地修改。我想跟上项目正在进行的开发工作,同时保留我本地的变化。为了实现这个目标,我需要定期将上游仓库的更改合并到我 fork 的仓库中,我是通过一个我称之为 `upstream-merge` 的别名来做到这一点的。它是这样定义的:
```
upstream-merge = !"git fetch origin -v && git fetch upstream -v && git merge upstream/master && git push"
```
别名定义开头的 `!` 告诉 Git 通过 shell 运行这个命令。这个例子涉及到运行一些 `git` 命令,但是以这种方式定义的别名可以运行任何 shell 命令。
(注意,如果你想复制我的 `upstream-merge` 别名,你需要确保你有一个名为 `upstream` 的 Git remote 指向你已经分配的上游仓库,你可以通过运行 `git remote add upstream <URL to repo>` 来添加一个。)
### 5\. 可视化提交图
如果你从事的是一个有很多分支活动的项目,有时可能很难掌握所有正在进行的工作以及它们之间的关系。各种图形界面工具可以让你以所谓的“提交图”的形式查看不同分支和提交的全貌。例如,以下是我使用 [GitLab][3] 的提交图查看器可视化的我的一个仓库的一部分:
![GitLab commit graph viewer][5]
John Anderson, CC BY
如果你是一个专注于命令行的用户或者某个发现切换工具让人分心,那么最好从命令行获得类似的提交视图。这就是 `git log` 命令的 `--graph` 参数出现的地方:
![Repository visualized with --graph command][7]
John Anderson, CC BY
以下命令可视化相同仓库可达到相同效果:
```
git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
```
`--graph` 选项将图添加到日志的左侧,`--abbrev-commit` 缩短提交的 [SHAs][8] 值,`--date=relative` 用相对时间表示日期,而 `--pretty` 则处理所有其他的自定义格式。我有一个 `git lg` 别名用于这个功能,它是我最常用的 10 个命令之一。
### 6\. 更优雅的强制推送
有时,无论你如何努力避免,你还是会发现自己需要运行 `git push --force` 来覆盖仓库远程副本上的历史记录。你可能收到了一些反馈意见需要进行一次交互式变基rebase或者你可能已经搞砸了想把证据藏起来。
当其他人已经向仓库远程副本的同一分支推送了更改时,强制推送的危险就出现了。当你强制推送重写过的历史时,那些提交将会丢失。这就是 `git push --force-with-lease` 的用武之地:如果远程分支已经被更新过,它将不允许你强制推送,从而确保你不会丢掉别人的工作。
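一个典型的用法示例(分支名只是假设):

```
# 只有当远程分支自你上次 fetch 以来没有新提交时,才会覆盖远程历史
$ git push --force-with-lease origin my-feature-branch
```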
### 7\. git add -N
你是否曾经用 `git commit -a` 一次性提交所有未完成的修改,结果推送之后才发现 `git commit -a` 忽略了新添加的文件?你可以使用 `git add -N`(想想 “notify”来解决这个问题告诉 Git 在第一次实际添加它们之前,你就希望这些新文件被包含在提交中。
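一个简单的示例(文件名只是假设):

```
$ touch docs/new-note.md
$ git add -N docs/new-note.md      # 告诉 Git这个新文件稍后要包含进提交
$ git commit -am "Add new note"    # 现在 -a 也会把这个新文件的内容一起提交
```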
### 8\. git add -p
使用 Git 时的最佳做法是确保每次提交都只包含一个逻辑修改 - 无论这是修复错误还是添加新功能。然而,有时当你在工作时,你的仓库最终会有多个提交值的修改。你怎样才能设法把事情分开,使每个提交只包含适当的修改呢?`git add --patch` 来拯救你了!
这个标志会让 `git add` 命令查看你工作副本中的所有变化,并为每个变化询问你是否想要将它提交,跳过,或者推迟决定(以及其他更强大的选项,你可以在运行命令后选择 `?` 来查看)。`git add -p` 是生成结构良好的提交的绝佳工具。
### 9\. git checkout -p
`git add -p` 类似,`git checkout` 命令将采用 `--patch``-p` 选项,这会使其在本地工作副本中显示每个“大块”的改动,并允许丢弃它 -- 简单来说就是将本地工作副本恢复到更改之前的状态。
这真的很棒。例如,当你追踪一个 bug 时引入了一堆调试日志语句,修正了这个 bug 之后,你可以先使用 `git checkout -p` 移除所有新的调试日志,然后 `git add -p` 来添加 bug 修复。没有比组合一个优雅的,结构良好的提交更令人满意!
### 10\. Rebase 时执行命令
有些项目有一个规则,即仓库中的每个提交都必须处于可工作的状态:也就是说,在每次提交时,代码都应该能够编译,或者测试套件应该能够运行而不失败。当你在分支上工作时,这并不困难,但如果你最终因为某种原因需要变基rebase那么逐个检查每一个变基后的提交、确保你没有意外引入破坏就很乏味了。
幸运的是,`git rebase` 已经为你准备好了 `-x``--exec` 选项。`git rebase -x <cmd>` 将在每个提交被应用到变基结果之后运行该命令。因此,举个例子,如果你的项目用 `npm run tests` 运行测试套件,`git rebase -x "npm run tests"` 就会在变基期间的每次提交之后运行测试套件。这样你就可以查看测试套件是否在任何一个变基后的提交上失败,从而确认每个提交的测试都仍能通过。
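一个使用示例(测试命令和目标分支只是假设,请换成你项目中实际使用的):

```
# 命令中含有空格时需要加引号;每重放一个提交就运行一次测试套件
$ git rebase -x "npm run tests" master
```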
### 11\. 基于时间的修正参考
很多 Git 子命令都接受一个修正的参数来决定命令作用于仓库的哪个部分,可以是某次特定的提交的 sha1 值,一个分支的名称,甚至是一个符号性的名称如 `HEAD`(代表当前检出分支最后一次的提交),除了这些简单的形式以外,你还可以附加一个指定的日期或时间作为参数,表示“这个时间的引用”。
这个功能在某些时候会变得十分有用。当你处理最新出现的 bug自言自语道“这个功能昨天还是好好的到底又改了些什么”不用盯着满屏的 `git log` 的输出试图弄清楚什么时候更改了提交,你只需运行 `git diff HEAD@{yesterday}`,看看从昨天以来的所有修改。这也适用于更长的时间段(例如 `git diff HEAD@{'2 months ago'}`),以及一个确切的日期(例如 `git diff HEAD@{'2010-01-01 12:00:00'}`)。
你也可以将这些基于日期的修改参数与使用修正参数的任何 Git 子命令一起使用。在 `gitrevisions` 手册页中有关于具体使用哪种格式的详细信息。
### 12\. 能知道所有的 reflog
你是不是曾在 rebase 时丢弃过某次提交然后又发现那个提交里有些东西其实需要保留你可能觉得这些信息已经永远找不回来了只能重新做一遍。但是只要你曾在本地工作副本中提交过它就会被记录到引用日志reflog你仍然可以访问到它。
运行 `git reflog` 将在本地工作副本中显示当前分支的所有活动的列表,并为你提供每个提交的 SHA1 值。一旦发现你 rebase 时放弃的那个提交,你可以运行 `git checkout <SHA1>` 跳转到该提交,复制任何你需要的信息,然后再运行 `git checkout HEAD` 返回到分支最近的提交去。
### 以上是全部内容
希望这些技巧中至少有一个能够教给你一些关于 Git 的新东西Git 是一个有12年历史的项目并且在持续创新和增加新功能中。你最喜欢的 Git 技巧是什么?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/12-git-tips-gits-12th-birthday
作者:[John SJ Anderson][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/genehack
[1]:https://git-scm.com/
[2]:https://en.wikipedia.org/wiki/INI_file
[3]:https://gitlab.com/
[4]:/file/392941
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gui_graph.png?itok=3GovYfG1 (GitLab commit graph viewer)
[6]:/file/392936
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/console_graph.png?itok=XogY1P8M (Repository visualized with --graph command)
[8]:https://en.wikipedia.org/wiki/Secure_Hash_Algorithms
View File
@ -0,0 +1,73 @@
3 个 Linux 命令行密码管理器
=====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/password.jpg?itok=ec6z6YgZ)
我们都希望我们的密码安全可靠。为此,许多人转向密码管理应用程序,如 [KeePassX][1] 和 [Bitwarden][2]。
如果你在终端中花费了大量时间而且正在寻找更简单的解决方案,那么你需要查看 Linux 命令行的许多密码管理器。它们快速,易于使用且安全。
让我们来看看其中的三个。
### Titan
[Titan][3] 是一个密码管理器,也可作为文件加密工具。我不确定 Titan 在加密文件方面效果有多好;我只是把它看作密码管理器,在这方面,它确实做的很好。
Titan 将你的密码存储在加密的 [SQLite 数据库][4]中,你在第一次启动应用程序时创建它并设置主密码。当你让 Titan 添加一个密码时,它会要求你输入一个用来识别它的名称、用户名、密码本身、一个 URL以及一条关于该密码的备注。
你可以让 Titan 为你生成密码;你可以按条目名称或数字 ID、名称、备注搜索数据库也可以使用正则表达式。不过查看某个特定的密码可能会有点笨拙你要么列出所有密码然后滚动查找要么在知道条目数字 ID 的情况下,用这个 ID 列出该条目的详细信息来查看密码。
### Gopass
[Gopass][5] 被称为“团队密码管理员”。不要让这让你感到失望,它对个人的使用也很好。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gopass.png?itok=1Uodlute)
Gopass 是用 Go 语言编写的经典 Unix 和 Linux [Pass][6] 密码管理器的更新。在真正的 Linux 潮流中,你可以[编译源代码][7]或[使用安装程序][8]在你的计算机上使用 gopass。
在开始使用 gopass 之前,确保你的系统上有 [GNU Privacy Guard (GPG)][9] 和 [Git][10]。前者负责对你的密码存储进行加密和解密后者负责把更改提交到一个 [Git 仓库][11]。即使 gopass 只是供个人使用,你仍然需要 Git你只需要关心提交签名的问题。如果你感兴趣你可以[在文档中][12]了解这些依赖关系。
当你第一次启动 gopass 时,你需要创建一个密码存储,并生成一个[密钥][13]来保证存储的安全。当你想添加一个密码gopass 称之为一个“秘密”gopass 会要求你提供一些信息,比如 URL、用户名和密码。你可以让 gopass 为你要添加的秘密生成密码,也可以自己输入密码。
根据需要,你可以编辑,查看或删除密码。你还可以查看特定的密码或将其复制到剪贴板以将其粘贴到登录表单或窗口中。
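一个典型的使用流程大致如下(条目名称只是假设,具体子命令以你安装的 gopass 版本的帮助输出为准):

```
$ gopass insert websites/example.com       # 手动录入一个新密码
$ gopass generate websites/example.com 20  # 让 gopass 生成一个 20 位的密码
$ gopass show -c websites/example.com      # 把密码复制到剪贴板
$ gopass ls                                # 列出存储中的所有条目
```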
### Kpcli
许多人选择的是开源密码管理器 [KeePass][14] 和 [KeePassX][15]。 [Kpcli][16] 将 KeePass 和 KeePassX 的功能带到离你最近的终端窗口。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/kpcli.png?itok=kMmOHTJz)
Kpcli 是一个键盘驱动的 shell可以完成其图形化表亲的大部分功能。这包括打开密码数据库添加和编辑密码和组组帮助你组织密码甚至重命名或删除密码和组。
当你需要时你可以将用户名和密码复制到剪贴板以粘贴到登录表单中。为了保证这些信息的安全kpcli 也有清除剪贴板的命令。对于一个小终端应用程序来说还不错。
你有最喜欢的命令行密码管理器吗?何不通过发表评论来分享它?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/3-password-managers-linux-command-line
作者:[Scott Nesbitt][a]
译者:[MjSeven](https://github.com/mjSeven)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/scottnesbitt
[1]:https://www.keepassx.org/
[2]:https://opensource.com/article/18/3/managing-passwords-bitwarden
[3]:https://www.titanpasswordmanager.org/
[4]:https://en.wikipedia.org/wiki/SQLite
[5]:https://www.justwatch.com/gopass/
[6]:https://www.passwordstore.org/
[7]:https://github.com/justwatchcom/gopass
[8]:https://justwatch.com/gopass/#install
[9]:https://www.gnupg.org
[10]:https://git-scm.com/
[11]:https://git-scm.com/book/en/v2/Git-Basics-Getting-a-Git-Repository
[12]:https://github.com/justwatchcom/gopass/blob/master/docs/setup.md
[13]:http://searchsecurity.techtarget.com/definition/private-key
[14]:https://keepass.info/
[15]:https://www.keepassx.org
[16]:http://kpcli.sourceforge.net/