mirror of
https://github.com/LCTT/TranslateProject.git
synced 2024-12-26 21:30:55 +08:00
Merge remote-tracking branch 'LCTT/master'
This commit is contained in: commit a76a505ba1
[#]: collector: (lujun9972)
[#]: translator: (luming)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10569-1.html)
[#]: subject: (How To Copy A File/Folder From A Local System To Remote System In Linux?)
[#]: via: (https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/)
[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/)

How to copy a file/folder from a local system to a remote system in Linux?
======

Copying files from one server to another, or from a local system to a remote system, is one of the routine tasks of a Linux administrator.

I don't think anyone will disagree, because wherever you go this is one of your daily operations. There are many ways to get it done, and we will try to summarize them here. You can pick the one you like; of course, the other commands may also help you elsewhere.

I have tested all of the commands and scripts in my own environment, so you can use them directly in your daily work.
Usually everyone prefers `scp`, because it is one of the native commands for copying files. But the other commands listed in this article are also very good, and I suggest you try them.

Files can easily be copied in the following four ways.

- `scp`: copies files between two hosts on a network. It uses `ssh` for the transfer, and provides the same authentication and the same security as `ssh`.
- `rsync`: a fast and exceptionally versatile file-copying tool. It can copy locally, to/from another host over a remote shell, or to/from a remote `rsync` daemon.
- `pscp`: a program for copying files in parallel to a number of hosts. It provides features such as configuring passwordless transfer for `scp`, saving output to files, and timeouts.
- `prsync`: also a program for copying files in parallel to a number of hosts. It likewise provides features such as configuring passwordless transfer for `ssh`, saving output to files, and timeouts.
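Before picking one, you can check which of the four tools are actually present on your system. This is a quick sketch, not from the original article; note that on some distributions the pssh binaries are installed under the names `pscp.pssh` and `prsync`:

```shell
# Report which of the four copy tools exist in PATH.
for tool in scp rsync pscp.pssh prsync; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: available"
  else
    echo "$tool: not installed"
  fi
done
```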

### Method 1: How to copy a file/folder from a local system to a remote system in Linux using the scp command?

The `scp` command lets us copy files/folders from a local system to a remote system.

I will copy the `output.txt` file from my local system to the `/opt/backup` folder of the remote system `2g.CentOS.com`.
```
# scp output.txt root@2g.CentOS.com:/opt/backup
output.txt 100% 2468 2.4KB/s 00:00
```

Copy two files, `output.txt` and `passwd-up.sh`, from the local system to the `/opt/backup` folder of the remote system `2g.CentOS.com`.

```
# scp output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup
output.txt 100% 2468 2.4KB/s 00:00
passwd-up.sh 100% 877 0.9KB/s 00:00
```

Copy the `shell-script` folder from the local system to the `/opt/backup` folder of the remote system `2g.CentOS.com`.

This copies the `shell-script` folder and all of the files inside it into `/opt/backup`.

```
# scp -r /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup/
passwd-up1.sh 100% 7 0.0KB/s 00:00
server-list.txt 100% 23 0.0KB/s 00:00
```

### Method 2: How to copy files/folders to multiple remote systems in Linux using the scp command and a shell script?

If you want to copy the same file to multiple remote servers, you need to create a small shell script like the one below.

Also, you need to add the servers to the `server-list.txt` file. Make sure each server is on its own line.

Finally, the script you want looks like this:
```
# file-copy.sh

#!/bin/sh
for server in $(cat server-list.txt)
do
  scp /home/daygeek/2g/shell-script/output.txt root@$server:/opt/backup
done
```
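As a side note, here is a sketch (not from the original article) of the same loop written with `while read`, which reads `server-list.txt` one line at a time and copes better with stray whitespace. The `run=echo` prefix prints the commands instead of executing them, so you can preview what would happen without touching any remote hosts:

```shell
# Example server-list.txt, one server per line (the article's hostnames).
printf '2g.CentOS.com\n2g.Debian.com\n' > server-list.txt

run=echo   # drop this prefix inside the loop to perform the real copies
while IFS= read -r server; do
  $run scp /home/daygeek/2g/shell-script/output.txt "root@$server:/opt/backup"
done < server-list.txt
```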

Once done, set the executable permission on the `file-copy.sh` file.

```
# chmod +x file-copy.sh
```

Finally, run the script to perform the copy.

```
# ./file-copy.sh

output.txt 100% 2468 2.4KB/s 00:00
output.txt 100% 2468 2.4KB/s 00:00
```

Use the script below to copy multiple files to multiple remote servers.

```
# file-copy.sh

#!/bin/sh
for server in $(cat server-list.txt)
do
  scp /home/daygeek/2g/shell-script/output.txt passwd-up.sh root@$server:/opt/backup
done
```

The output below shows that both files were copied to both servers.

```
# ./file-copy.sh

output.txt 100% 2468 2.4KB/s 00:00
passwd-up.sh 100% 877 0.9KB/s 00:00
```

Use the script below to recursively copy a folder to multiple remote servers.

```
# file-copy.sh

#!/bin/sh
for server in $(cat server-list.txt)
do
  scp -r /home/daygeek/2g/shell-script/ root@$server:/opt/backup
done
```

The output of the above script.

```
# ./file-copy.sh

passwd-up1.sh 100% 7 0.0KB/s 00:00
server-list.txt 100% 23 0.0KB/s 00:00
```

### Method 3: How to copy files/folders to multiple remote systems in Linux using the pscp command?

The `pscp` command lets us copy files directly to multiple remote servers.

Use the following `pscp` command to copy a single file to a remote server.

```
# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt /opt/backup
[1] 18:46:11 [SUCCESS] 2g.CentOS.com
```

Use the following `pscp` command to copy multiple files to a remote server.

```
# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt ovh.sh /opt/backup
[1] 18:47:48 [SUCCESS] 2g.CentOS.com
```

Use the following `pscp` command to recursively copy an entire folder to a remote server.

```
# pscp.pssh -H 2g.CentOS.com -r /home/daygeek/2g/shell-script/ /opt/backup
[1] 18:48:46 [SUCCESS] 2g.CentOS.com
```

Use the following `pscp` command to copy a single file to multiple remote servers.

```
# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt /opt/backup
[2] 18:49:48 [SUCCESS] 2g.Debian.com
```

Use the following `pscp` command to copy multiple files to multiple remote servers.

```
# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt passwd-up.sh /opt/backup
[2] 18:50:30 [SUCCESS] 2g.CentOS.com
```

Use the following command to recursively copy a folder to multiple remote servers.

```
# pscp.pssh -h server-list.txt -r /home/daygeek/2g/shell-script/ /opt/backup
[2] 18:51:31 [SUCCESS] 2g.CentOS.com
```
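To recap the two host-selection flags used above: `-H` takes hostnames on the command line, while `-h` reads them from a file. The sketch below only prints the commands (`run=echo` prefixes each one), so it can be tried without any remote hosts; the hostnames and paths are the article's examples:

```shell
# One server per line, as pscp's -h option expects.
printf '2g.CentOS.com\n2g.Debian.com\n' > server-list.txt

run=echo   # remove the echo prefix to actually run pscp
$run pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt /opt/backup
$run pscp.pssh -h server-list.txt -r /home/daygeek/2g/shell-script/ /opt/backup
```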

### Method 4: How to copy files/folders to multiple remote systems in Linux using the rsync command?

`rsync` is a fast and exceptionally versatile file-copying tool. It can copy locally, to/from another host over a remote shell, or to/from a remote `rsync` daemon.

Use the following `rsync` command to copy a single file to a remote server.

```
# rsync -avz /home/daygeek/2g/shell-script/output.txt root@2g.CentOS.com:/opt/backup
sent 598 bytes received 31 bytes 1258.00 bytes/sec
total size is 2468 speedup is 3.92
```

Use the following `rsync` command to copy multiple files to a remote server.

```
# rsync -avz /home/daygeek/2g/shell-script/output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup
sent 737 bytes received 50 bytes 1574.00 bytes/sec
total size is 2537 speedup is 3.22
```

Use the following `rsync` command to copy a single file to a remote server over `ssh`.

```
# rsync -avzhe ssh /home/daygeek/2g/shell-script/output.txt root@2g.CentOS.com:/opt/backup
sent 598 bytes received 31 bytes 419.33 bytes/sec
total size is 2.47K speedup is 3.92
```

Use the following `rsync` command to recursively copy a folder to a remote server over `ssh`. This copies only the files inside the folder, not the folder itself.

```
# rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup
sent 3.85K bytes received 281 bytes 8.26K bytes/sec
total size is 9.12K speedup is 2.21
```
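The trailing slash in the source path is what produces this behaviour: `shell-script/` means "the contents of the folder", while `shell-script` with no slash would create a `shell-script` directory under `/opt/backup` on the remote side. A dry sketch of the two forms (the `run=echo` prefix keeps it network-free; host and paths are the article's examples):

```shell
run=echo   # remove to actually transfer
# Sends the files inside shell-script/ straight into /opt/backup:
$run rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup
# Would create /opt/backup/shell-script/ on the remote side instead:
$run rsync -avzhe ssh /home/daygeek/2g/shell-script root@2g.CentOS.com:/opt/backup
```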

### Method 5: How to copy files/folders to multiple remote systems in Linux using the rsync command and a shell script?

If you want to copy the same file to multiple remote servers, you also need to create a small shell script like the one below.

```
# file-copy.sh
sent 3.86K bytes received 281 bytes 2.76K bytes/sec
total size is 9.13K speedup is 2.21
```

### Method 6: How to copy files/folders from a local system to multiple remote systems in Linux using the scp command and a shell script?

In the two shell scripts above, we specified the file and folder paths in advance. Here I made a small change that lets the script take a file or folder as an input argument. This is very useful when you need to perform copies many times a day.

```
# file-copy.sh
output1.txt 100% 3558 3.5KB/s 00:00
output1.txt 100% 3558 3.5KB/s 00:00
```
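The diff does not show the modified script in full, so here is a minimal sketch of the idea (names and paths are illustrative, not the article's exact code): a function that takes the file or folder as its first argument and loops over `server-list.txt`. Setting `run=echo` previews the commands without copying anything.

```shell
# copy_to_all <file-or-folder>: copy the argument to every server
# listed in server-list.txt (one hostname per line).
copy_to_all() {
  [ $# -eq 1 ] || { echo "usage: copy_to_all <file-or-folder>" >&2; return 1; }
  while IFS= read -r server; do
    ${run:-} scp -r "$1" "root@$server:/opt/backup"
  done < server-list.txt
}
```

With `run=echo` set, `copy_to_all output1.txt` prints one `scp` command per server instead of running it.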

### Method 7: How to copy files/folders to a remote system in Linux using a non-standard port?

If you want to use a non-standard port, use the shell script below to copy the file or folder.

If you are using a non-standard port, make sure you specify the port number as in the `scp` command below.

```
# file-copy-scp.sh
rsync -avzhe 'ssh -p 2222' $1 root@2g.CentOS.com$server:/opt/backup
done
```
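Note that the two tools spell the port option differently: `scp` takes an upper-case `-P`, while `rsync` passes the port to the remote shell via `-e 'ssh -p 2222'` (2222 being the article's example port). A dry sketch, with `run=echo` printing the commands instead of connecting:

```shell
run=echo   # remove to connect for real
$run scp -P 2222 passwd-up.sh root@2g.CentOS.com:/opt/backup
$run rsync -avzhe 'ssh -p 2222' passwd-up.sh root@2g.CentOS.com:/opt/backup
```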

Run the script, passing the file name as input.

```
# ./file-copy-rsync.sh passwd-up.sh
passwd-up.sh

sent 238 bytes received 35 bytes 26.00 bytes/sec
total size is 159 speedup is 0.58
```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/

Author: [Prakash Subramanian][a]
Topic selection: [lujun9972][b]
Translator: [LuuMing](https://github.com/LuuMing)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[#]: collector: (lujun9972)
[#]: translator: (jdh8383)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 CI/CD tools for sysadmins)
[#]: via: (https://opensource.com/article/18/12/cicd-tools-sysadmins)
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)
7 CI/CD tools for sysadmins
======
An easy guide to the top open source continuous integration, continuous delivery, and continuous deployment tools.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc)
Continuous integration, continuous delivery, and continuous deployment (CI/CD) have all existed in the developer community for many years. Some organizations have involved their operations counterparts, but many haven't. For most organizations, it's imperative for their operations teams to become just as familiar with CI/CD tools and practices as their development compatriots are.
CI/CD practices can equally apply to infrastructure and third-party applications and internally developed applications. Also, there are many different tools but all use similar models. And possibly most importantly, leading your company into this new practice will put you in a strong position within your company, and you'll be a beacon for others to follow.
Some organizations have been using CI/CD practices on infrastructure, with tools like [Ansible][1], [Chef][2], or [Puppet][3], for several years. Other tools, like [Test Kitchen][4], allow tests to be performed on infrastructure that will eventually host applications. In fact, those tests can even deploy the application into a production-like environment and execute application-level tests with production loads in more advanced configurations. However, just getting to the point of being able to test the infrastructure individually is a huge feat. Terraform can also use Test Kitchen for even more [ephemeral][5] and [idempotent][6] infrastructure configurations than some of the original configuration-management tools. Add in Linux containers and Kubernetes, and you can now test full infrastructure and application deployments with prod-like specs and resources that come and go in hours rather than months or years. Everything is wiped out before being deployed and tested again.
However, you can also focus on getting your network configurations or database data definition language (DDL) files into version control and start running small CI/CD pipelines on them. Maybe it just checks syntax or semantics or some best practices. Actually, this is how most development pipelines started. Once you get the scaffolding down, it will be easier to build on. You'll start to find all kinds of use cases for pipelines once you get started.
For example, I regularly write a newsletter within my company, and I maintain it in version control using [MJML][7]. I needed to be able to host a web version, and some folks liked being able to get a PDF, so I built a [pipeline][8]. Now when I create a new newsletter, I submit it for a merge request in GitLab. This automatically creates an index.html with links to HTML and PDF versions of the newsletter. The HTML and PDF files are also created in the pipeline. None of this is published until someone comes and reviews these artifacts. Then, GitLab Pages publishes the website and I can pull down the HTML to send as a newsletter. In the future, I'll automatically send the newsletter when the merge request is merged or after a special approval step. This seems simple, but it has saved me a lot of time. This is really at the core of what these tools can do for you. They will save you time.
The key is creating tools to work in the abstract so that they can apply to multiple problems with little change. I should also note that what I created required almost no code except [some light HTML templating][9], some [node to loop through the HTML files][10], and some more [node to populate the index page with all the HTML pages and PDFs][11].
Some of this might look a little complex, but most of it was taken from the tutorials of the different tools I'm using. And many developers are happy to work with you on these types of things, as they might also find them useful when they're done. The links I've provided are to a newsletter we plan to start for [DevOps KC][12], and all the code for creating the site comes from the work I did on our internal newsletter.
Many of the tools listed below can offer this type of interaction, but some offer a slightly different model. The emerging model in this space is that of a declarative description of a pipeline in something like YAML with each stage being ephemeral and idempotent. Many of these systems also ensure correct sequencing by creating a [directed acyclic graph][13] (DAG) over the different stages of the pipeline.
These stages are often run in Linux containers and can do anything you can do in a container. Some tools, like [Spinnaker][14], focus only on the deployment component and offer some operational features that others don't normally include. [Jenkins][15] has generally kept pipelines in an XML format and most interactions occur within the GUI, but more recent implementations have used a [domain specific language][16] (DSL) using [Groovy][17]. Further, Jenkins jobs normally execute on nodes with a special Java agent installed and consist of a mix of plugins and pre-installed components.
Jenkins introduced pipelines in its tool, but they were a bit challenging to use and contained several caveats. Recently, the creator of Jenkins decided to move the community toward a couple different initiatives that will hopefully breathe new life into the project—which is the one that really brought CI/CD to the masses. I think its most interesting initiative is creating a Cloud Native Jenkins that can turn a Kubernetes cluster into a Jenkins CI/CD platform.
As you learn more about these tools and start bringing these practices into your company or your operations division, you'll quickly gain followers. You will increase your own productivity as well as that of others. We all have years of backlog to get to—how much would your co-workers love if you could give them enough time to start tackling that backlog? Not only that, but your customers will start to see increased application reliability, and your management will see you as a force multiplier. That certainly can't hurt during your next salary negotiation or when interviewing with all your new skills.
Let's dig into the tools a bit more. We'll briefly cover each one and share links to more information.
### GitLab CI
GitLab is a fairly new entrant to the CI/CD space, but it's already achieved the top spot in the [Forrester Wave for Continuous Integration Tools][20]. That's a huge achievement in such a crowded and highly qualified field. What makes GitLab CI so great? It uses a YAML file to describe the entire pipeline. It also has a functionality called Auto DevOps that allows for simpler projects to have a pipeline built automatically with multiple tests built-in. This system uses [Herokuish buildpacks][21] to determine the language and how to build the application. Some languages can also manage databases, which is a real game-changer for building new applications and getting them deployed to production from the beginning of the development process. The system has native integrations into Kubernetes and will deploy your application automatically into a Kubernetes cluster using one of several different deployment methodologies, like percentage-based rollouts and blue-green deployments.
In addition to its CI functionality, GitLab offers many complementary features like operations and monitoring with Prometheus deployed automatically with your application; portfolio and project management using GitLab Issues, Epics, and Milestones; security checks built into the pipeline with the results provided as an aggregate across multiple projects; and the ability to edit code right in GitLab using the WebIDE, which can even provide a preview or execute part of a pipeline for faster feedback.
### GoCD
GoCD comes from the great minds at Thoughtworks, which is testimony enough for its capabilities and efficiency. To me, GoCD's main differentiator from the rest of the pack is its [Value Stream Map][22] (VSM) feature. In fact, pipelines can be chained together with one pipeline providing the "material" for the next pipeline. This allows for increased independence for different teams with different responsibilities in the deployment process. This may be a useful feature when introducing this type of system in older organizations that intend to keep these teams separate—but having everyone using the same tool will make it easier later to find bottlenecks in the VSM and reorganize the teams or work to increase efficiencies.
It's incredibly valuable to have a VSM for each product in a company; that GoCD allows this to be [described in JSON or YAML][23] in version control and presented visually with all the data around wait times makes this tool even more valuable to an organization trying to understand itself better. Start by installing GoCD and mapping out your process with only manual approval gates. Then have each team use the manual approvals so you can start collecting data on where bottlenecks might exist.
### Travis CI
Travis CI was my first experience with a Software as a Service (SaaS) CI system, and it's pretty awesome. The pipelines are stored as YAML with your source code, and it integrates seamlessly with tools like GitHub. I don't remember the last time a pipeline failed because of Travis CI or the integration—Travis CI has a very high uptime. Not only can it be used as SaaS, but it also has a version that can be hosted. I haven't run that version—there were a lot of components, and it looked a bit daunting to install all of it. I'm guessing it would be much easier to deploy it all to Kubernetes with [Helm charts provided by Travis CI][26]. Those charts don't deploy everything yet, but I'm sure it will grow even more in the future. There is also an enterprise version if you don't want to deal with the hassle.
However, if you're developing open source code, you can use the SaaS version of Travis CI for free. That is an awesome service provided by an awesome team! This alleviates a lot of overhead and allows you to use a fairly common platform for developing open source code without having to run anything.
### Jenkins
Jenkins is the original, the venerable, de facto standard in CI/CD. If you haven't already, you need to read "[Jenkins: Shifting Gears][27]" from Kohsuke, the creator of Jenkins and CTO of CloudBees. It sums up all of my feelings about Jenkins and the community from the last decade. What he describes is something that has been needed for several years, and I'm happy CloudBees is taking the lead on this transformation. Jenkins will be a bit overwhelming to most non-developers and has long been a burden on its administrators. However, these are items they're aiming to fix.
[Jenkins Configuration as Code][28] (JCasC) should help fix the complex configuration issues that have plagued admins for years. This will allow for a zero-touch configuration of Jenkins masters through a YAML file, similar to other CI/CD systems. [Jenkins Evergreen][29] aims to make this process even easier by providing predefined Jenkins configurations based on different use cases. These distributions should be easier to maintain and upgrade than the normal Jenkins distribution.
Jenkins 2 introduced native pipeline functionality with two types of pipelines, which [I discuss][30] in a LISA17 presentation. Neither is as easy to navigate as YAML when you're doing something simple, but they're quite nice for doing more complex tasks.
[Jenkins X][31] is the full transformation of Jenkins and will likely be the implementation of Cloud Native Jenkins (or at least the thing most users see when using Cloud Native Jenkins). It will take JCasC and Evergreen and use them at their best natively on Kubernetes. These are exciting times for Jenkins, and I look forward to its innovation and continued leadership in this space.
### Concourse CI
I was first introduced to Concourse through folks at Pivotal Labs when it was an early beta version—there weren't many tools like it at the time. The system is made of microservices, and each job runs within a container. One of its most useful features that other tools don't have is the ability to run a job from your local system with your local changes. This means you can develop locally (assuming you have a connection to the Concourse server) and run your builds just as they'll run in the real build pipeline. Also, you can rerun failed builds from your local system and inject specific changes to test your fixes.
Concourse also has a simple extension system that relies on the fundamental concept of resources. Basically, each new feature you want to provide to your pipeline can be implemented in a Docker image and included as a new resource type in your configuration. This keeps all functionality encapsulated in a single, immutable artifact that can be upgraded and modified independently, and breaking changes don't necessarily have to break all your builds at the same time.
### Spinnaker
Spinnaker comes from Netflix and is more focused on continuous deployment than continuous integration. It can integrate with other tools, including Travis and Jenkins, to kick off test and deployment pipelines. It also has integrations with monitoring tools like Prometheus and Datadog to make decisions about deployments based on metrics provided by these systems. For example, the canary deployment uses a judge concept and the metrics being collected to determine if the latest canary deployment has caused any degradation in pertinent metrics and should be rolled back or if deployment can continue.
A couple of additional, unique features related to deployments cover an area that is often overlooked when discussing continuous deployment, and might even seem antithetical, but is critical to success: Spinnaker helps make continuous deployment a little less continuous. It will prevent a stage from running during certain times to prevent a deployment from occurring during a critical time in the application lifecycle. It can also enforce manual approvals to ensure the release occurs when the business will benefit the most from the change. In fact, the whole point of continuous integration and continuous deployment is to be ready to deploy changes as quickly as the business needs to change.
### Screwdriver
Screwdriver is an impressively simple piece of engineering. It uses a microservices approach and relies on tools like Nomad, Kubernetes, and Docker to act as its execution engine. There is a pretty good [deployment tutorial][34] for deploying to AWS and Kubernetes, but it could be improved once the in-progress [Helm chart][35] is completed.
Screwdriver also uses YAML for its pipeline descriptions and includes a lot of sensible defaults, so there's less boilerplate configuration for each pipeline. The configuration describes an advanced workflow that can have complex dependencies among jobs. For example, a job can be guaranteed to run after or before another job. Jobs can run in parallel and be joined afterward. You can also use logical operators to run a job, for example, if any of its dependencies are successful or only if all are successful. Even better is that you can specify certain jobs to be triggered from a pull request. Also, dependent jobs won't run when this occurs, which allows easy segregation of your pipeline for when an artifact should go to production and when it still needs to be reviewed.
This is only a brief description of these CI/CD tools—each has even more cool features and differentiators you can investigate. They are all open source and free to use, so go deploy them and see which one fits your needs best.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/12/cicd-tools-sysadmins
Author: [Dan Barker][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).
[a]: https://opensource.com/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://www.ansible.com/
[2]: https://www.chef.io/
[3]: https://puppet.com/
[4]: https://github.com/test-kitchen/test-kitchen
[5]: https://www.merriam-webster.com/dictionary/ephemeral
[6]: https://en.wikipedia.org/wiki/Idempotence
[7]: https://mjml.io/
[8]: https://gitlab.com/devopskc/newsletter/blob/master/.gitlab-ci.yml
[9]: https://gitlab.com/devopskc/newsletter/blob/master/index/index.html
[10]: https://gitlab.com/devopskc/newsletter/blob/master/html-to-pdf.js
[11]: https://gitlab.com/devopskc/newsletter/blob/master/populate-index.js
[12]: https://devopskc.com/
[13]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
[14]: https://www.spinnaker.io/
[15]: https://jenkins.io/
[16]: https://martinfowler.com/books/dsl.html
[17]: http://groovy-lang.org/
[18]: https://about.gitlab.com/product/continuous-integration/
[19]: https://gitlab.com/gitlab-org/gitlab-ce/
[20]: https://about.gitlab.com/2017/09/27/gitlab-leader-continuous-integration-forrester-wave/
[21]: https://github.com/gliderlabs/herokuish
[22]: https://www.gocd.org/getting-started/part-3/#value_stream_map
[23]: https://docs.gocd.org/current/advanced_usage/pipelines_as_code.html
[24]: https://docs.travis-ci.com/
[25]: https://github.com/travis-ci/travis-ci
[26]: https://github.com/travis-ci/kubernetes-config
[27]: https://jenkins.io/blog/2018/08/31/shifting-gears/
[28]: https://jenkins.io/projects/jcasc/
[29]: https://github.com/jenkinsci/jep/blob/master/jep/300/README.adoc
[30]: https://danbarker.codes/talk/lisa17-becoming-plumber-building-deployment-pipelines/
[31]: https://jenkins-x.io/
[32]: https://concourse-ci.org/
[33]: https://github.com/concourse/concourse
[34]: https://docs.screwdriver.cd/cluster-management/kubernetes
[35]: https://github.com/screwdriver-cd/screwdriver-chart
81 sources/tech/20190220 Automation evolution.md Normal file

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automation evolution)
[#]: via: (https://leancrew.com/all-this/2019/02/automation-evolution/)
[#]: author: (Dr.Drang https://leancrew.com)

Automation evolution
======
In my experience, scripts and macros almost never end up the way they start. This shouldn’t be a surprise. Just as spending time performing a particular task makes you realize it should be automated, spending time working with the automation makes you realize how it can be improved. Contra [XKCD][3], this doesn’t mean the decision to automate a task puts you on an endless treadmill of tweaking that’s never worth the time you invest. It means you’re continuing to think about how you do things and how your methods can be improved. I have an example that I’ve been working on for years.
|
||||
|
||||
Two of the essential but dull parts of my job involve sending out invoices to clients and following up when those invoices aren’t paid on time. I’ve gradually built up a system to handle both of these interrelated duties. I’ve written about certain details before, but here I want to talk about how and why the system has evolved.
|
||||
|
||||
It started with [TextExpander][4] snippets. One was for the text of the email that accompanied the invoice when it was first sent, and it looked like this (albeit less terse):
|
||||
|
||||
```
|
||||
Attached is invoice A for $B on project C. Payment is due on D.
|
||||
```
|
||||
|
||||
where the A, B, C, and D were [fill-in fields][5]. Similarly, there was a snippet for the followup emails.
|
||||
|
||||
```
|
||||
The attached invoice, X for $Y on project Z, is still outstanding
|
||||
and is now E days old. Pay up.
|
||||
```
|
||||
|
||||
While these snippets was certainly better than typing this boilerplate out again and again, they weren’t using the computer for what it’s good at: looking things up and calculating. The invoices are PDFs that came out of my company’s accounting system and contain the information for X, Y, Z, and D. The age of the invoice, E, can be calculated from D and the current date.
|
||||
|
||||
So after a month or two of using the snippets, I wrote an invoicing script in Python that read the invoice PDF and created an email message with all of the parts filled in. It also added a subject line and used a project database to look up the client’s email address to put in the To field. A similar script created a dunning email message. Both of these scripts could be run from the Terminal and took the invoice PDF as their argument, e.g.,
|
||||
|
||||
```
|
||||
invoice 12345.pdf
|
||||
```
|
||||
|
||||
and
|
||||
|
||||
```
|
||||
dun 12345.pdf
|
||||
```
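The scripts themselves aren’t shown here, but the shape of `invoice` is easy to imagine. Below is a minimal sketch; the `invoice_fields` helper is a hypothetical stand-in for the real PDF parsing and project-database lookup, and every name and value in it is made up for illustration:

```python
from datetime import date

def invoice_fields(pdf_name):
    # Hypothetical stand-in: the real script parses these fields out of
    # the invoice PDF and looks the client up in a project database.
    return {"number": pdf_name.rsplit(".", 1)[0],
            "amount": "2,500.00",
            "project": "Acme retaining wall",
            "due": date(2019, 3, 15)}

def invoice_email(pdf_name):
    # Fill the boilerplate subject and body with the invoice's fields.
    f = invoice_fields(pdf_name)
    subject = f"Invoice {f['number']} for {f['project']}"
    body = (f"Attached is invoice {f['number']} for ${f['amount']} on "
            f"project {f['project']}. "
            f"Payment is due on {f['due']:%B %d, %Y}.")
    return subject, body
```

The real scripts go one step further and hand the finished text to the mail client rather than returning it.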
I should mention that these scripts created the email messages, but they didn’t send them. Sometimes I need to add an extra sentence or two to handle particular situations, and these scripts stopped short of sending so I could do that.

It didn’t take very long for me to realize that opening a Terminal window just to run a single command was itself a waste of time. I used Automator to add Quick Action workflows that run the `invoice` and `dun` scripts to the Services menu. That allowed me to run the scripts by right-clicking on an invoice PDF file in the Finder.

This system lasted quite a while. Eventually, though, I decided it was foolish to rely on my memory (or periodic checking of my outstanding invoices) to decide when to send out the followup emails on unpaid bills. I added a section to the `invoice` script that created a reminder along with the invoicing email. The reminder went in the Invoices list of the Reminders app and was given a due date of the first Tuesday at least 45 days after the invoice date. My invoices are net 30, so 45 days seemed like a good starting time for followups. And rather than having the reminder pop up on any day of the week, I set it to Tuesday—early in the week but unlikely to be on a holiday.1
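The “first Tuesday at least 45 days out” rule is a small date calculation. Here’s a sketch of the arithmetic in Python (the rule only, not the actual script):

```python
from datetime import date, timedelta

def followup_date(invoice_date):
    # First Tuesday that is at least 45 days after the invoice date.
    d = invoice_date + timedelta(days=45)
    # weekday() numbers Monday as 0, so Tuesday is 1; advance 0-6 days.
    return d + timedelta(days=(1 - d.weekday()) % 7)
```

An invoice dated January 1, 2019, for example, gets its first followup reminder on Tuesday, February 19.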
Changing the `invoice` script changed the behavior of the Services menu item that called it; I didn’t have to make any changes in Automator.

This system was the state of the art until it hit me that I could write a script that checked Reminders for every invoice that was past due and run the `dun` script on all of them, creating a series of followup emails in one fell swoop. I wrote this script as a combination of Python and AppleScript and embedded it in a [Keyboard Maestro][6] macro. With this macro in place, I no longer had to hunt for the invoices to right-click on.

A couple of weeks ago, after reading Federico Viticci’s article on [using a Mac from iOS][7], I began thinking about the hole in my followup system: I have to be at my Mac to run Keyboard Maestro. What if I’m traveling on Tuesday and want to send out followup emails from my iPhone or iPad? OK, sure, I could use Screens to connect to the Mac and run the Keyboard Maestro macro that way, but that’s very slow and clumsy over a cellular network connection, especially when trying to manipulate windows on a 27″ iMac screen as viewed through an iPhone-sized keyhole.

The obvious solution, which wasn’t obvious to me until I’d thought of and rejected a few other ideas, was to change the `dun` script to create and save the followup email. Saving the email puts it in the Drafts folder, which I can get at from all of my devices. I also changed the Keyboard Maestro macro that executes the `dun` script on every overdue invoice to run every Tuesday morning at 5:00 am. When the reminders pop up later in the day, the emails are already written and waiting for me in the Drafts folder.

Yesterday was the first “live” test of the new system. I was in an airport restaurant—nothing but the best cuisine for me—when my watch buzzed with reminders for two overdue invoices. I pulled out my phone, opened Mail, and there were the emails, waiting to be sent. In this case, I didn’t have to edit the messages before sending, but it wouldn’t have been a big deal if I had—no more difficult than writing any other email from my phone.

Am I done with this? History suggests I’m not, and I’m OK with that. By getting rid of more scutwork, I’ve made myself better at following up on old invoices, and my average time-to-collection has improved. Even XKCD would think that’s worth the effort.
--------------------------------------------------------------------------------

via: https://leancrew.com/all-this/2019/02/automation-evolution/

作者:[Dr.Drang][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://leancrew.com
[b]: https://github.com/lujun9972
[1]: https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/
[2]: https://leancrew.com/all-this/2019/02/transparency/
[3]: https://xkcd.com/1319/
[4]: https://textexpander.com/
[5]: https://textexpander.com/help/desktop/fillins.html
[6]: https://www.keyboardmaestro.com/main/
[7]: https://www.macstories.net/ipad-diaries/ipad-diaries-using-a-mac-from-ios-part-1-finder-folders-siri-shortcuts-and-app-windows-with-keyboard-maestro/
60
sources/tech/20190223 Regex groups and numerals.md
Normal file
@ -0,0 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Regex groups and numerals)
[#]: via: (https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/)
[#]: author: (Dr.Drang https://leancrew.com)

Regex groups and numerals
======

A week or so ago, I was editing a program and decided I should change some variable names. I thought it would be a simple regex find/replace, and it was. Just not as simple as I thought.

The variables were named `a10`, `v10`, and `x10`, and I wanted to change them to `a30`, `v30`, and `x30`, respectively. I brought up BBEdit’s Find window and entered this:

![Mistaken BBEdit replacement pattern][2]

I couldn’t just replace `10` with `30` because there were instances of `10` in the code that weren’t related to the variables. And because I think I’m clever, I didn’t want to do three non-regex replacements, one each for `a10`, `v10`, and `x10`. But I wasn’t clever enough to notice the blue coloring in the replacement pattern. Had I done so, I would have seen that BBEdit was interpreting my replacement pattern as “Captured group 13, followed by `0`” instead of “Captured group 1, followed by `30`,” which was what I intended. Since captured group 13 was blank, all my variable names were replaced with `0`.

You see, BBEdit can capture up to 99 groups in the search pattern and, strictly speaking, we should use two-digit numbers when referring to them in the replacement pattern. But in most cases, we can use `\1` through `\9` instead of `\01` through `\09` because there’s no ambiguity. In other words, if I had been trying to change `a10`, `v10`, and `x10` to `az`, `vz`, and `xz`, a replacement pattern of `\1z` would have been just fine, because the trailing `z` means there’s no way to misinterpret the intent of the `\1` in that pattern.
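The same numeral ambiguity exists outside BBEdit, just resolved differently. In Python’s `re` module, a replacement of three octal digits is read as a character escape, so `\130` becomes `chr(0o130)` — the letter `X` — rather than group 1 followed by `30`. A quick sketch of the problem and the two unambiguous fixes (this is Python syntax, not BBEdit’s):

```python
import re

code = "a10 = v10 + x10"

# "\130" is three octal digits, so Python reads it as chr(0o130) = "X" --
# not group 1 followed by "30", and not what anyone meant.
print(re.sub(r"([avx])10", r"\130", code))     # X = X + X

# The explicit \g<number> form removes the ambiguity; BBEdit's two-digit
# \01 plays the same role.
print(re.sub(r"([avx])10", r"\g<1>30", code))  # a30 = v30 + x30

# A named group sidesteps numerals entirely.
print(re.sub(r"(?P<var>[avx])10", r"\g<var>30", code))
```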
So after undoing the replacement, I changed the pattern to this,

![Two-digit BBEdit replacement pattern][3]

and all was right with the world.

There was another option: a named group. Here’s how that would have looked, using `var` as the pattern name:

![Named BBEdit replacement pattern][4]

I don’t think I’ve ever used a named group in any situation, whether the regex was in a text editor or a script. My general feeling is that if the pattern is so complicated I have to use variables to keep track of all the groups, I should stop and break the problem down into smaller parts.

By the way, you may have heard that BBEdit is celebrating its [25th anniversary][5] of not sucking. When a well-documented app has such a long history, the manual starts to accumulate delightful callbacks to the olden days. As I was looking up the notation for named groups in the BBEdit manual, I ran across this note:

![BBEdit regex manual excerpt][6]

BBEdit is currently on Version 12.5; Version 6.5 came out in 2001. But the manual wants to make sure that long-time customers (I believe it was on Version 4 when I first bought it) don’t get confused by changes in behavior, even when those changes occurred nearly two decades ago.

--------------------------------------------------------------------------------

via: https://leancrew.com/all-this/2019/02/regex-groups-and-numerals/

作者:[Dr.Drang][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://leancrew.com
[b]: https://github.com/lujun9972
[1]: https://leancrew.com/all-this/2019/02/automation-evolution/
[2]: https://leancrew.com/all-this/images2019/20190223-Mistaken%20BBEdit%20replacement%20pattern.png (Mistaken BBEdit replacement pattern)
[3]: https://leancrew.com/all-this/images2019/20190223-Two-digit%20BBEdit%20replacement%20pattern.png (Two-digit BBEdit replacement pattern)
[4]: https://leancrew.com/all-this/images2019/20190223-Named%20BBEdit%20replacement%20pattern.png (Named BBEdit replacement pattern)
[5]: https://merch.barebones.com/
[6]: https://leancrew.com/all-this/images2019/20190223-BBEdit%20regex%20manual%20excerpt.png (BBEdit regex manual excerpt)
134
translated/talk/20181220 7 CI-CD tools for sysadmins.md
Normal file
@ -0,0 +1,134 @@
[#]: collector: (lujun9972)
[#]: translator: (jdh8383)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 CI/CD tools for sysadmins)
[#]: via: (https://opensource.com/article/18/12/cicd-tools-sysadmins)
[#]: author: (Dan Barker https://opensource.com/users/barkerd427)

系统管理员的 7 个 CI/CD 工具
======

本文是一篇简单指南:介绍一些常见的开源 CI/CD 工具。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc)

虽然持续集成、持续交付和持续部署(CI/CD)在开发者社区里已经存在很多年,一些机构在运维部门也有实施经验,但大多数公司并没有做这样的尝试。对于很多机构来说,让运维团队能够像他们的开发同行一样熟练操作 CI/CD 工具,已经变得十分必要了。

无论是基础设施、第三方应用还是内部开发的应用,都可以开展 CI/CD 实践。尽管你会发现有很多不同的工具,但它们都有着相似的设计模型。而且可能最重要的一点是:通过带领你的公司进行这些实践,会让你在公司内部变得举足轻重,成为他人学习的榜样。

一些机构在自己的基础设施上已有多年的 CI/CD 实践经验,常用的工具包括 [Ansible][1]、[Chef][2] 或者 [Puppet][3]。另一些工具,比如 [Test Kitchen][4],允许在最终要部署应用的基础设施上运行测试。事实上,如果使用更高级的配置方法,你甚至可以将应用部署到有真实负载的仿真“生产环境”上,来运行应用级别的测试。然而,单单是能够测试基础设施就是一项了不起的成就了。配置管理工具 Terraform 可以通过 Test Kitchen 来快速创建[可复用][6]的基础设施配置,这比它的前辈要强不少。再加上 Linux 容器和 Kubernetes,在数小时内,你就可以创建一套类似于生产环境的配置参数和系统资源,来测试整个基础设施和其上部署的应用,这在以前可能需要花费几个月的时间。而且,删除和再次创建整个测试环境也非常容易。

当然,作为初学者,你也可以把网络配置和 DDL(数据定义语言)文件加入版本控制,然后开始尝试一些简单的 CI/CD 流程。虽然只能帮你检查一下语义语法,但实际上大多数用于开发的管道(pipeline)都是这样起步的。只要你把脚手架搭起来,建造就容易得多了。而一旦起步,你就会发现各种真实的使用场景。

举个例子,我经常会在公司内部写新闻简报,我使用 [MJML][7] 制作邮件模板,然后把它加入版本控制。我一般会维护一个 web 版本,但是一些同事喜欢 PDF 版,于是我创建了一个[管道][8]。每当我写好一篇新闻稿,就在 Gitlab 上提交一个合并请求。这样做会自动创建一个 index.html 文件,生成这篇新闻稿的 HTML 和 PDF 版链接。HTML 和 PDF 文件也会在管道里同时生成。除非有人来检查确认,这些文件不会被直接发布出去。使用 GitLab Pages 发布这个网站后,我就可以下载一份 HTML 版,用来发送新闻简报。未来,我会修改这个流程,当合并请求成功或者在某个审核步骤后,自动发出对应的新闻稿。这些处理逻辑并不复杂,但的确为我节省了不少时间。实际上这些工具最核心的用途就是替你节省时间。
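上面描述的新闻简报管道,大致可以用一个极简的 `.gitlab-ci.yml` 来示意(以下只是一份假设性草稿,任务名和脚本文件名均为虚构,并非作者仓库的真实配置):

```yaml
# 假设性示例:用 MJML 生成 HTML,转出 PDF,再由 GitLab Pages 发布
stages: [build, deploy]

build:
  stage: build
  image: node:10
  script:
    - npx mjml newsletter.mjml -o public/newsletter.html
    - node html-to-pdf.js public/newsletter.html public/newsletter.pdf
    - node populate-index.js public
  artifacts:
    paths: [public]

pages:
  stage: deploy
  script:
    - echo "publishing"
  artifacts:
    paths: [public]
  only: [master]
```

GitLab 约定:用于 Pages 发布的任务必须命名为 `pages`,且要发布的文件需要作为构件放在 `public` 目录下。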
关键是要在抽象层创建出工具,这样稍加修改就可以处理不同的问题。值得留意的是,我创建的这套流程几乎不需要任何代码,除了一些[轻量级的 HTML 模板][9],一些[把 HTML 文件转换成 PDF 的 nodejs 代码][10],还有一些[生成 index 页面的 nodejs 代码][11]。

这其中一些东西可能看起来有点复杂,但其中大部分都源自我使用的不同工具的教学文档。而且很多开发人员也会乐意跟你合作,因为他们在完工时会发现这些东西也挺有用。上面我提供的那些代码链接是给 [DevOps KC][12](一个地方性 DevOps 组织)发送新闻简报用的,其中大部分用来创建网站的代码来自我在内部新闻简报项目上所做的工作。

下面列出的大多数工具都可以提供这种类型的交互,但是有些工具提供的模型略有不同。这一领域新兴的模型是用声明式的方法(例如 YAML)来描述一个管道,其中的每个阶段都是短暂而幂等的。许多系统还会创建[有向无环图(DAG)][13],来确保管道上不同的阶段排序的正确性。

这些阶段一般运行在 Linux 容器里,和普通的容器并没有区别。有一些工具,比如 [Spinnaker][14],只关注部署组件,而且提供一些其他工具没有的操作特性。[Jenkins][15] 则通常把管道配置存成 XML 格式,大部分交互都可以在图形界面里完成,但最新的方案是使用[领域专用语言(DSL)][16]如 [Groovy][17]。并且,Jenkins 的任务(job)通常运行在各个节点里,这些节点上会装一个专门的 Java 程序还有一堆混杂的插件和预装组件。

Jenkins 在自己的工具里引入了管道的概念,但使用起来却并不轻松,甚至包含一些禁区。最近,Jenkins 的创始人决定带领社区向新的方向前进,希望能为这个项目注入新的活力,把 CI/CD 真正推广开(译者注:详见后面的 Jenkins 章节)。我认为其中最有意思的想法是构建一个云原生 Jenkins,能把 Kubernetes 集群转变成 Jenkins CI/CD 平台。

当你更多地了解这些工具并把实践带入你的公司和运维部门,你很快就会有追随者,因为你有办法提升自己和别人的工作效率。我们都有多年积累下来的技术债要解决,如果你能给同事们提供足够的时间来处理这些积压的工作,他们该会有多感激呢?不止如此,你的客户也会开始看到应用变得越来越稳定,管理层会把你看作得力干将,你也会在下次谈薪资待遇或参加面试时更有底气。

让我们开始深入了解这些工具吧,我们将对每个工具做简短的介绍,并分享一些有用的链接。

### GitLab CI

GitLab 可以说是 CI/CD 领域里新登场的玩家,但它却在 [Forrester(一个权威调研机构)的调查报告][20]中位列第一。在一个高水平、竞争充分的领域里,这是个了不起的成就。是什么让 GitLab CI 这么成功呢?它使用 YAML 文件来描述整个管道。另有一个功能叫做 Auto DevOps,可以为较简单的项目自动生成管道,并且包含多种内置的测试单元。这套系统使用 [Herokuish buildpacks][21] 来判断语言的种类以及如何构建应用。它和 Kubernetes 紧密整合,可以根据不同的方案将你的应用自动部署到 Kubernetes 集群,比如灰度发布、蓝绿部署等。

除了它的持续集成功能,GitLab 还提供了许多补充特性,比如:将 Prometheus 和你的应用一同部署,以提供监控功能;通过 GitLab 提供的 Issues、Epics 和 Milestones 功能来实现项目评估和管理;管道中集成了安全检测功能,多个项目的检测结果会聚合显示;你可以通过 GitLab 提供的网页版 IDE 在线编辑代码,还可以快速查看管道的预览或执行状态。
### GoCD

GoCD 是由老牌软件公司 Thoughtworks 出品,这已经足够证明它的能力和效率。对我而言,GoCD 最具亮点的特性是它的[价值流视图(VSM)][22]。实际上,一个管道的输出可以变成下一个管道的输入,从而把管道串联起来。这样做有助于提高不同开发团队在整个开发流程中的独立性。比如在引入 CI/CD 系统时,有些成立较久的机构希望保持他们各个团队相互隔离,这时候 VSM 就很有用了:让每个人都使用相同的工具就很容易在 VSM 中发现工作流程上的瓶颈,然后可以按图索骥调整团队或者想办法提高工作效率。

为公司的每个产品配置 VSM 是非常有价值的;GoCD 可以使用 [JSON 或 YAML 格式存储配置][23],还能以可视化的方式展示等待时间,这让一个机构能有效减少学习它的成本。刚开始使用 GoCD 创建你自己的流程时,建议使用人工审核的方式。让每个团队也采用人工审核,这样你就可以开始收集数据并且找到可能的瓶颈点。

### Travis CI

我使用的第一个软件即服务(SaaS)类型的 CI 系统就是 Travis CI,体验很不错。管道配置以源码形式用 YAML 保存,它与 GitHub 等工具无缝整合。我印象中管道从来没有失效过,因为 Travis CI 的在线率很高。除了 SaaS 版之外,你也可以使用自行部署的版本。我还没有自行部署过,它的组件非常多,要全部安装的话,工作量就有点吓人了。我猜更简单的办法是把它部署到 Kubernetes 上,[Travis CI 提供了 Helm charts][26],这些 charts 目前不包含所有要部署的组件,但我相信以后会越来越丰富的。如果你不想处理这些细枝末节的问题,还有一个企业版可以试试。

假如你在开发一个开源项目,你就能免费使用 SaaS 版的 Travis CI,享受顶尖团队提供的优质服务!这样能省去很多麻烦,你可以在一个相对通用的平台上(如 GitHub)研发开源项目,而不用找服务器来运行任何东西。
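Travis CI 的管道同样保存在仓库根目录的 YAML 文件里。下面是一个假设性的最小 `.travis.yml`,仅作形式示意(语言和命令均为虚构的例子):

```yaml
# 假设性示例:一个最小的 .travis.yml
language: python
python:
  - "3.6"
install:
  - pip install -r requirements.txt
script:
  - pytest
```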
### Jenkins

Jenkins 在 CI/CD 界绝对是元老级的存在,也是事实上的标准。我强烈建议你读一读这篇文章:“[Jenkins: Shifting Gears][27]”,作者 Kohsuke 是 Jenkins 的创始人兼 CloudBees 公司 CTO。这篇文章契合了我在过去十年里对 Jenkins 及其社区的感受。他在文中阐述了一些这几年呼声很高的需求,我很乐意看到 CloudBees 引领这场变革。长期以来,Jenkins 对于非开发人员来说有点难以接受,并且一直是其管理员的重担。还好,这些问题正是他们想要着手解决的。

[Jenkins 配置即代码][28](JCasC)应该可以帮助管理员解决困扰了他们多年的配置复杂性问题。与其他 CI/CD 系统类似,只需要修改一个简单的 YAML 文件就可以完成 Jenkins 主节点的配置工作。[Jenkins Evergreen][29] 的出现让配置工作变得更加轻松,它提供了很多预设的使用场景,你只管套用就可以了。这些发行版会比官方的标准版本 Jenkins 更容易维护和升级。

Jenkins 2 引入了两种原生的管道(pipeline)功能,我在 LISA(一个系统架构和运维大会)2017 年的研讨会上已经[讨论过了][30]。这两种功能都没有 YAML 简便,但在处理复杂任务时它们很好用。
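两种原生管道语法中的声明式(Declarative)写法大致如下。这是一个最小的 Jenkinsfile 草稿(Groovy DSL,阶段名和命令均为示意):

```groovy
// 假设性示例:最小的声明式 Jenkinsfile
pipeline {
    agent any          // 在任意可用节点上运行
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
}
```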
[Jenkins X][31] 是 Jenkins 的一个全新变种,用来实现云端原生 Jenkins(至少在用户看来是这样)。它会使用 JCasC 及 Evergreen,并且和 Kubernetes 整合得更加紧密。对于 Jenkins 来说这是个令人激动的时刻,我很乐意看到它在这一领域的创新,并且继续发挥领袖作用。

### Concourse CI

我第一次知道 Concourse 是通过 Pivotal Labs 的伙计们介绍的,当时它处于早期 beta 版本,而且那时候也很少有类似的工具。这套系统是基于微服务构建的,每个任务运行在一个容器里。它独有的一个优良特性是能够在你本地系统上运行任务,体现你本地的改动。这意味着你完全可以在本地开发(假设你已经连接到了 Concourse 的服务器),像在真实的管道构建流程一样从你本地构建项目。而且,你可以在修改过代码后从本地直接重新运行构建,来检验你的改动结果。

Concourse 还有一个简单的扩展系统,它依赖于资源这一基础概念。基本上,你想给管道添加的每个新功能都可以用一个 Docker 镜像实现,并作为一个新的资源类型包含在你的配置中。这样可以保证每个功能都被封装在一个不易改变的独立工件中,方便对其单独修改和升级,改变其中一个时不会影响其他构建。
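Concourse 的“资源”概念可以用一小段管道配置来示意(以下为假设性示例,仓库地址与任务文件均为虚构):

```yaml
# 假设性示例:一个 git 资源触发一个测试任务
resources:
  - name: repo
    type: git
    source:
      uri: https://github.com/example/app.git

jobs:
  - name: unit-test
    plan:
      - get: repo
        trigger: true       # 仓库有新提交时自动触发
      - task: test
        file: repo/ci/test.yml
```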
### Spinnaker

Spinnaker 出自 Netflix,它更关注持续部署而非持续集成。它可以与其他工具整合,比如 Travis 和 Jenkins,来启动测试和部署流程。它也能与 Prometheus、Datadog 这样的监控工具集成,参考它们提供的指标来决定如何部署。例如,在一次金丝雀发布(canary deployment)里,我们可以根据收集到的相关监控指标来做出判断:最近的这次发布是否导致了服务降级,应该立刻回滚;还是说看起来一切 OK,应该继续执行部署。

谈到持续部署,一些另类但却至关重要的问题往往被忽略掉了,说出来可能有点让人困惑:Spinnaker 可以帮助持续部署不那么“持续”。在整个应用部署流程期间,如果发生了重大问题,它可以让流程停止执行,以阻止可能发生的部署错误。但它也可以在最关键的时刻让人工审核强制通过,发布新版本上线,使整体收益最大化。实际上,CI/CD 的主要目的就是在商业模式需要调整时,能够让待更新的代码立即得到部署。

### Screwdriver

Screwdriver 是个简单而又强大的软件。它采用微服务架构,依赖像 Nomad、Kubernetes 和 Docker 这样的工具作为执行引擎。官方有一篇很不错的[部署教学文档][34],介绍了如何将它部署到 AWS 和 Kubernetes 上,但如果相应的 [Helm chart][35] 也完成的话,就更完美了。

Screwdriver 也使用 YAML 来描述它的管道,并且有很多合理的默认值,这样可以有效减少各个管道重复的配置项。用配置文件可以组织起高级的工作流,来描述各个 job 间复杂的依赖关系。例如,一项任务可以在另一个任务开始前或结束后运行;各个任务可以并行也可以串行执行;更赞的是你可以预先定义一项任务,只在特定的 pull request 触发时运行,而且与之有依赖关系的任务并不会被执行,这能让你的管道具有一定的隔离性:什么时候被构造的工件应该被部署到生产环境,什么时候应该被审核。
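上面说的任务依赖关系,在 screwdriver.yaml 里大致是这样描述的(假设性示例,镜像和命令均为虚构;`~pr`、`~commit` 表示由 pull request 或提交触发):

```yaml
# 假设性示例:screwdriver.yaml 中的任务依赖
jobs:
  main:
    image: node:10
    requires: [~pr, ~commit]   # PR 或提交都会触发
    steps:
      - test: npm test
  deploy:
    image: node:10
    requires: [main]           # main 成功后才运行
    steps:
      - publish: npm publish
```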
以上只是我对这些 CI/CD 工具的简单介绍,它们还有许多很酷的特性等待你深入探索。而且它们都是开源软件,可以自由使用,去部署一下看看吧,究竟哪个才是最适合你的那个。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/cicd-tools-sysadmins

作者:[Dan Barker][a]
选题:[lujun9972][b]
译者:[jdh8383](https://github.com/jdh8383)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://www.ansible.com/
[2]: https://www.chef.io/
[3]: https://puppet.com/
[4]: https://github.com/test-kitchen/test-kitchen
[5]: https://www.merriam-webster.com/dictionary/ephemeral
[6]: https://en.wikipedia.org/wiki/Idempotence
[7]: https://mjml.io/
[8]: https://gitlab.com/devopskc/newsletter/blob/master/.gitlab-ci.yml
[9]: https://gitlab.com/devopskc/newsletter/blob/master/index/index.html
[10]: https://gitlab.com/devopskc/newsletter/blob/master/html-to-pdf.js
[11]: https://gitlab.com/devopskc/newsletter/blob/master/populate-index.js
[12]: https://devopskc.com/
[13]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
[14]: https://www.spinnaker.io/
[15]: https://jenkins.io/
[16]: https://martinfowler.com/books/dsl.html
[17]: http://groovy-lang.org/
[18]: https://about.gitlab.com/product/continuous-integration/
[19]: https://gitlab.com/gitlab-org/gitlab-ce/
[20]: https://about.gitlab.com/2017/09/27/gitlab-leader-continuous-integration-forrester-wave/
[21]: https://github.com/gliderlabs/herokuish
[22]: https://www.gocd.org/getting-started/part-3/#value_stream_map
[23]: https://docs.gocd.org/current/advanced_usage/pipelines_as_code.html
[24]: https://docs.travis-ci.com/
[25]: https://github.com/travis-ci/travis-ci
[26]: https://github.com/travis-ci/kubernetes-config
[27]: https://jenkins.io/blog/2018/08/31/shifting-gears/
[28]: https://jenkins.io/projects/jcasc/
[29]: https://github.com/jenkinsci/jep/blob/master/jep/300/README.adoc
[30]: https://danbarker.codes/talk/lisa17-becoming-plumber-building-deployment-pipelines/
[31]: https://jenkins-x.io/
[32]: https://concourse-ci.org/
[33]: https://github.com/concourse/concourse
[34]: https://docs.screwdriver.cd/cluster-management/kubernetes
[35]: https://github.com/screwdriver-cd/screwdriver-chart