Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-07-16 22:42:25 +08:00
commit 5685202ce3
9 changed files with 1140 additions and 137 deletions

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12421-1.html)
[#]: subject: (Build a Kubernetes cluster with the Raspberry Pi)
[#]: via: (https://opensource.com/article/20/6/kubernetes-raspberry-pi)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
@ -12,52 +12,51 @@
> Install Kubernetes on several Raspberry Pis for your own "private cloud at home" container service.
![A cartoon graphic of a Raspberry Pi board][1]
![](https://img.linux.net.cn/data/attachment/album/202007/15/234152ivw1y2wwhmhmpuvo.jpg)
[Kubernetes][2] was designed from the ground up as a cloud-native, enterprise-grade container orchestration system. It has grown into the de facto cloud container platform, and it continues to evolve as it embraces new technologies such as container-native virtualization and serverless computing.
From micro-scale edge computing to massive-scale container environments, in both public and private clouds, Kubernetes manages the containers. It is an ideal choice for a "private cloud at home" project, offering both robust container orchestration and the opportunity to learn about a technology in such demand, and so thoroughly integrated into the cloud, that its name is practically synonymous with "cloud computing."
Nothing says "cloud" quite like Kubernetes, and nothing screams "cluster me!" quite like Raspberry Pis. Running a local Kubernetes cluster on cheap Raspberry Pi hardware is a great way to gain experience managing and developing on a true cloud technology giant.
### Install a Kubernetes cluster on Raspberry Pis
This exercise will install a Kubernetes 1.18.2 cluster on three or more Raspberry Pi 4s running Ubuntu 20.04. Ubuntu 20.04 (Focal Fossa) offers a Raspberry Pi-focused 64-bit ARM (ARM64) image, with both a 64-bit kernel and userspace. Since the goal is to use these Raspberry Pis for running a Kubernetes cluster, the ability to run AArch64 container images is important: it can be difficult to find 32-bit images for common software, or even standard base images. With its ARM64 image, Ubuntu 20.04 allows you to use 64-bit container images with Kubernetes.
#### AArch64 vs. ARM64; 32-bit vs. 64-bit; ARM vs. x86
Note that AArch64 and ARM64 are effectively the same thing. The different names arise from their use within different communities. Many container images are labeled AArch64 and will run fine on systems labeled ARM64. Systems with the AArch64/ARM64 architecture are able to run 32-bit ARM images, but the opposite is not true: a 32-bit ARM system cannot run 64-bit container images. This is why the Ubuntu 20.04 ARM64 image is so useful.
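You can check which architecture a given system reports itself as (a small aside, not from the original article):
```
# On a Raspberry Pi running the 64-bit Ubuntu image, the machine
# hardware name is reported as aarch64
$ uname -m
aarch64
```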
Without getting too deep into explaining the different architecture types, it is worth noting that ARM64/AArch64 and x86\_64 architectures differ, and Kubernetes nodes running on 64-bit ARM will not run container images built for x86\_64. In practice, you will find some images that are not built for both architectures, and those images may not be usable in your cluster. You will also need to build your own images on an AArch64-based system, or jump through some hoops to allow your regular x86\_64 systems to build AArch64 images. In a future article in the "private cloud at home" project, I will cover how to build AArch64 images on your regular system.
For the best of both worlds, after you set up the Kubernetes cluster in this tutorial, you can add x86\_64 nodes to it later. Images of a given architecture can be scheduled by the Kubernetes scheduler to run on the appropriate nodes through the use of [Kubernetes taints and tolerations][3].
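As a rough sketch of that mechanism (hypothetical taint key and node name, not from the original article):
```
# Taint a hypothetical x86_64 node; only Pods carrying a matching
# toleration will be scheduled onto it
$ kubectl taint nodes node-x86 arch=x86_64:NoSchedule
```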
Enough about architectures and images. It's time to install Kubernetes, so get to it!
#### Requirements
The requirements for this exercise are minimal. You will need:
* Three (or more) Raspberry Pi 4s (preferably the 4GB RAM models).
* Ubuntu 20.04 ARM64 installed on all of the Raspberry Pis.
To simplify the initial setup, read "[Modify a disk image to create a Raspberry Pi-based homelab][4]" to add a user and SSH `authorized_keys` to the Ubuntu image before writing it to an SD card and installing it on the Raspberry Pis.
### Configure the hosts
Once Ubuntu is installed on the Raspberry Pis and they are accessible via SSH, you need to make a few changes before you can install Kubernetes.
#### Install and configure Docker
As of this writing, Ubuntu 20.04 ships the most recent version of Docker, v19.03, in its base repositories, and it can be installed directly with the `apt` command. Note that the package name is `docker.io`. Install Docker on all of the Raspberry Pis:
```
# Install the docker.io package
$ sudo apt install -y docker.io
```
After the package is installed, you need to make some changes to enable [cgroups][5] (control groups). Cgroups allow the Linux kernel to limit and isolate resources. Practically speaking, this allows Kubernetes to better manage the resources used by the containers it runs, and it increases security by isolating containers from one another.
Check the output of `docker info` before making the following changes on all of the Raspberry Pis:
@ -77,7 +76,7 @@ WARNING: No oom kill disable support
The output above highlights the bits that need to be changed: the cgroup driver and limit support.
First, change the default cgroup driver Docker uses from `cgroups` to `systemd`, to allow systemd to act as the cgroup manager and to ensure there is only one cgroup manager in use. This helps with system stability and is recommended by Kubernetes. To do this, create or replace the `/etc/docker/daemon.json` file with:
```
# Create or replace /etc/docker/daemon.json to enable the cgroup systemd driver
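# A minimal sketch of the file's contents (an assumption based on the
# standard kubeadm-recommended settings; the actual lines are elided
# from this diff hunk):
#
# {
#   "exec-opts": ["native.cgroupdriver=systemd"],
#   "log-driver": "json-file",
#   "log-opts": { "max-size": "100m" },
#   "storage-driver": "overlay2"
# }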
@ -132,17 +131,17 @@ $ sudo sysctl --system
#### Install the Kubernetes packages for Ubuntu
Since you are using Ubuntu, you can install the Kubernetes packages from the Kubernetes.io apt repository. There is currently no repository for Ubuntu 20.04 (Focal), but Kubernetes 1.18.2 is available in the repository for the most recent Ubuntu LTS, Ubuntu 18.04 (Xenial). The latest Kubernetes packages can be installed from there.
Add the Kubernetes repository to Ubuntu's source list:
```
# Add the apt key from packages.cloud.google.com
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# Add the Kubernetes repository
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
```
@ -170,9 +169,9 @@ kubectl set on hold.
### Create the Kubernetes cluster
With the Kubernetes packages installed, you can continue on with creating the cluster. Before getting started, you need to make some decisions. First, one of the Raspberry Pis needs to be designated the control plane node (i.e., the primary node). The remaining nodes will be designated as compute nodes.
You also need to pick a [CIDR][6] (Classless Inter-Domain Routing) address block to use for the Pods in the Kubernetes cluster. Setting the `pod-network-cidr` during cluster creation ensures that the `podCIDR` value is set, and it can later be used by the Container Network Interface (CNI) add-on. This exercise uses the [Flannel][7] CNI. The CIDR you pick should not overlap with any CIDR currently used within your home network, nor one managed by your router or DHCP server. Make sure to use a subnet that is larger than you expect to need: there are **always** more Pods than you initially plan for! In this example, I will use the CIDR address `10.244.0.0/16`, but pick one that works for you.
With those decisions made, you can initialize the control plane node. SSH (or otherwise log in) to the node you have designated for the control plane.
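A minimal sketch of the initialization step (an assumption: the exact flags and output are elided from this diff hunk; the CIDR matches the example above):
```
# Initialize the Kubernetes control plane; run on the control plane node
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```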
@ -216,9 +215,9 @@ kubeadm join 192.168.2.114:6443 --token zqqoy7.9oi8dpkfmqkop2p5 \
--discovery-token-ca-cert-hash sha256:71270ea137214422221319c1bdb9ba6d4b76abfa2506753703ed654a90c4982b
```
Take note of two things. First, the Kubernetes `kubectl` connection information has been written to `/etc/kubernetes/admin.conf`. This kubeconfig file can be copied to `~/.kube/config`, either for root or a normal user on the primary node, or to a remote machine. This will allow you to control your cluster with the `kubectl` command.
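A minimal sketch of that copy step for a regular user (an assumption based on the standard post-`kubeadm init` instructions; the exact commands are elided from this diff):
```
# Make the kubeconfig usable by the current non-root user
$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```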
Second, the last line of the output starting with `kubeadm join` is a command you can run to join more nodes to the cluster.
After copying the new kubeconfig to somewhere your user can use it, you can validate that the control plane has been installed with the `kubectl get nodes` command:
@ -232,14 +231,14 @@ elderberry   Ready    master   7m32s   v1.18.2
#### Install a CNI add-on
The CNI add-on handles the configuration and cleanup of the Pod networks. As mentioned, this exercise uses the Flannel CNI add-on. With the `podCIDR` value already set, you can just download the Flannel YAML and use `kubectl apply` to install it into the cluster. This can be done on one line using `kubectl apply -f -` to take the data from standard input. This will create the ClusterRoles, ServiceAccounts, DaemonSets, etc., necessary to manage the Pod networking.
Download and apply the Flannel YAML data to the cluster:
```
# Download the Flannel YAML data and apply it
# (output omitted)
$ curl -sSL https://raw.githubusercontent.com/coreos/flannel/v0.12.0/Documentation/kube-flannel.yml | kubectl apply -f -
```
#### Join the compute nodes to the cluster
@ -252,7 +251,7 @@ $ sudo kubeadm join 192.168.2.114:6443 --token zqqoy7.9oi8dpkfmqkop2p5 \
    --discovery-token-ca-cert-hash sha256:71270ea137214422221319c1bdb9ba6d4b76abfa2506753703ed654a90c4982b
```
Once you have completed the join process for each node, you should be able to see the new nodes in the output of `kubectl get nodes`:
```
# Show the nodes in the Kubernetes cluster
@ -268,7 +267,7 @@ huckleberry   Ready    <none>   17s     v1.18.2
At this point, you have a fully functional Kubernetes cluster. You can run Pods, create Deployments and Jobs, etc. You can access applications running in the cluster from any of the nodes in the cluster using [Services][8], and you can achieve external access with a NodePort service or an ingress controller.
To validate that the cluster is running, create a new namespace, a deployment, and a service, and check that the Pods running in the deployment respond as expected. This deployment uses the `quay.io/clcollins/kube-verify:01` image, an Nginx container that listens for requests (in fact, the same image used in the article "[Add nodes to your private cloud using Cloud-init][9]"). You can view the image's Containerfile [here][10].
Create a namespace named `kube-verify` for the deployment:
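A minimal sketch of that step (an assumption: the exact listing is elided from this diff hunk):
```
# Create the namespace for the validation deployment
$ kubectl create namespace kube-verify
```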
@ -386,15 +385,15 @@ $ curl 10.98.188.200
### Go, Kubernetes!
"Kubernetes" (κυβερνήτης) is Greek for "pilot", but does that mean the person who pilots a ship, or the act of steering it? No. "Kubernan" (κυβερνάω) is the Greek for "to pilot" or "to steer", so: go, Kubernan! And if you see me at a conference or some other event, please give me a pass for trying to verb a noun, in another language, one that I don't speak.
Disclaimer: As mentioned, I don't read or speak Greek, especially the ancient variety, so I'm choosing to believe what I read on the internet. You know how that goes. Take it with a grain of salt and cut me a little slack, since I didn't make an "it's all Greek to me" joke. However, just by mentioning it, I was able to make the joke without actually making it, so I'm either sneaky or clever, or both. Or neither. I never said it was a good joke.
So, go like a pro and run your own containers on your own Kubernetes container service in your private cloud at home! As you become more comfortable, you can modify your Kubernetes cluster and try different options, like the ingress controllers mentioned earlier or dynamic StorageClasses for persistent volumes.
This continuous learning is at the heart of [DevOps][14], and the continuous integration and delivery of new services mirrors the agile methodology, both of which we embraced as we learned to deal with the massive scale enabled by the cloud and discovered that our traditional practices could not keep pace.
Look at that: technology, policy, philosophy, a tiny bit of Greek, and a terrible meta-joke, all in one article!
--------------------------------------------------------------------------------
@ -403,7 +402,7 @@ via: https://opensource.com/article/20/6/kubernetes-raspberry-pi
Author: [Chris Collins][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject), and is proudly presented by [Linux中国](https://linux.cn/).
@ -417,9 +416,9 @@ via: https://opensource.com/article/20/6/kubernetes-raspberry-pi
[6]: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
[7]: https://github.com/coreos/flannel
[8]: https://kubernetes.io/docs/concepts/services-networking/service/
[9]: https://opensource.com/article/20/5/create-simple-cloud-init-service-your-homelab
[9]: https://linux.cn/article-12407-1.html
[10]: https://github.com/clcollins/homelabCloudInit/blob/master/simpleCloudInitService/data/Containerfile
[11]: http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"\>
[11]: http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"
[12]: https://opensource.com/article/20/4/http-kubernetes-skipper
[13]: https://opensource.com/article/20/5/nfs-raspberry-pi
[13]: https://linux.cn/article-12413-1.html
[14]: https://opensource.com/tags/devops

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Impact of COVID-19 on the Global Digital Economy)
[#]: via: (https://www.networkworld.com/article/3566911/the-impact-of-covid-19-on-the-global-digital-economy.html)
[#]: author: (QTS https://www.networkworld.com/author/Andy-Patrizio/)
The Impact of COVID-19 on the Global Digital Economy
======
Post-Pandemic IT Landscape Demands a New Data Center Paradigm
QTS
Prior to COVID-19, the world was undergoing massive business transformation, illustrated by the stunning success of mega-scale Internet businesses such as Amazon Prime, Twitter, Uber, Netflix, Xbox, and others that are now an integral part of our lives and exemplify the new global digital economy.
The ability to apply artificial intelligence (AI), machine learning (ML), speech recognition, location services, and speed and identity tracking in real time enabled new applications and services once thought to be beyond the reach of computing technology.
Fueling this transformation are exponential increases in digitized data and the ability to apply enormous compute and storage capacity to it. Since 2016, digitization has created 90% of the world's data. According to IDC, more than 59 zettabytes (ZB) of data are going to be created and consumed globally in 2020, and this is forecast to grow to 175 ZB by 2025. How much is 1 ZB, you ask? It is equivalent to a _trillion gigabytes_, or 100 million HD movies' worth of data.
As it turns out, according to IDC, instead of hindering growth, COVID-19 is accelerating data growth, particularly in 2020 and 2021, due to abrupt increases in work from home employees, a changing mix of richer data sets, and a surge in video-based content consumption.
For the enterprise, an unforeseen byproduct is an even greater urgency for agility, adaptability and transformation. Business models are being disrupted while the digitalization of the economy is accelerating as new technologies and services serve a reshaped workforce.
The competitive landscape across all market sectors is changing. Now more than ever business is looking to technology to be agile in the face of disruption and create new digitally enabled business models for the post-COVID “new normal.”
That will require new capabilities and expertise in thousands of data centers and networks behind the scenes. They provide the digital infrastructure that powers the critical applications and services keeping the economy afloat -- and all of us connected. The "cloud" lives in data centers, and data centers are the commerce platforms of the 21st century.
**Data centers are essential…the best ones are innovating**
At the outset of the pandemic, data centers were designated “essential businesses” since virtually all industries and consumers depend on them. The more advanced data centers were quickly recognized for their [ability to provide customers with remote access and management of their systems][1] without operators or enterprise customers having to be there physically.
[QTS Realty Trust (NYSE: QTS)][2] is at the forefront of providing these services to all their customers. Supporting its commitment to digitize its entire end-to-end systems and processes, QTS is the first and only multi-tenant data center operator with a sophisticated software-defined orchestration platform powered by AI, ML, predictive analytics (PA) and virtual reality (VR) technologies.
QTS' API-driven [Service Delivery Platform (SDP)][3] empowers customers to interact with their data, services, and connectivity ecosystem by providing real-time visibility, access, and dynamic control of critical metrics across hybrid and hyperscale environments from a single platform and/or mobile device. It is akin to having a software company within the data center, delivering the operational savings and business innovation that are central to every IT investment.
SDP applications leverage next-generation AI, ML, and PA to accurately forecast power consumption, automate service provisioning, and perform online ordering and asset management. VR technologies are enabling new virtual collaboration tools and a 3D visualization application that renders an exact replication of a customer's IT environment in real time.
QTS' SDP was profiled in the Raymond James Industry Brief, _Data Maps and Killer Apps_ (released June 2020), which surveyed the platforms of three global data center operators:
_“While all three platforms have the ability to track and report common data, the ease of use of the systems was quite different. Only QTS had the entire system wrapped up into an app that was available across multiple desktop, tablet, and mobile platforms, complete with 3D imaging and simple graphics that outlined the situation visually down to an individual rack within a cabinet. QTS' system also has video capability using facial recognition to detect and identify employees and contractors separately, highlight an open cabinet door and other potential hazards inside the customer's cage, and it is all either real time or on a recorded basis to highlight potential errors and problems. Live heat maps allow customers to see the areas with potential and existing performance issues and to see outages in real time and track down problems. As far as features and functionality, QTS' SDP system was the clear winner.”_
**Customer experience is the dealmaker**
As consumers become more adamant in their demand for quality of experience in their digital lives, businesses must ensure that the services they provide, and the data generated from them, are available in real time, on the go, via any network, and are personalized.
Post COVID, the ability of data centers to ensure excellent customer experience will play an even greater role as large numbers of customers continue to work remotely with less on-premises interaction. Enterprises will seek data center operators that can ensure secure, ubiquitous, real time access to services and data backed by superior customer support.
Given a purchasing decision based on performance and price between two equally qualified data center operators, the first tiebreaker is increasingly coming down to proven and documented customer support.
In the data center industry, QTS is the undisputed leader in customer service and support, boasting an [independent Net Promoter Score of 88][4], more than double the average NPS score for data center companies (42).
Customers rated QTS highly in a range of service areas, including its customer service, service delivery platform, physical facilities, processes, responsiveness, and the service of onsite staff and the 24-hour [Operations Service Center][5]. QTS' score of 88 is its highest yet and exceeds the NPS scores of companies well-known for their customer service, including Starbucks (71) and Apple (72).
Enterprise, hyperscale, and government organizations recognize that the post-COVID landscape is going to present an increasing need for innovation in their IT environments. In terms of IT service delivery, this means higher levels of transparency, visibility, compliance, and sustainability, which are at the foundation of QTS' Service Delivery Platform.
With innovation come new technologies and complexity, raising the profile of the best service and support partners for companies looking to re-establish themselves in a new, post-COVID competitive landscape.
For more information, visit [www.qtsdatacenters.com][6].
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3566911/the-impact-of-covid-19-on-the-global-digital-economy.html
Author: [QTS][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject), and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.qtsdatacenters.com/company/news/2020/03/26/qts-reports-significant-increase-in-customer-usage-of-sdp
[2]: http://www.qtsdatacenters.com/
[3]: https://www.qtsdatacenters.com/why-qts/service-delivery-platform
[4]: https://www.qtsdatacenters.com/why-qts/customer-benefits/nps
[5]: https://www.qtsdatacenters.com/why-qts/customer-benefits/operations-support-center
[6]: http://www.qtsdatacenters.com

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Options grow for migrating mainframe apps to the cloud)
[#]: via: (https://www.networkworld.com/article/3567058/options-grow-for-migrating-mainframe-apps-to-the-cloud.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
Options grow for migrating mainframe apps to the cloud
======
Mainframe modernization vendor LzLabs is targeting Big Iron enterprises that want to move legacy applications into public or private cloud environments.
Thinkstock
Mainframe users looking to bring legacy applications into the public or private cloud world have a new option: [LzLabs][1], a mainframe software migration vendor.
Founded in 2011 and based in Switzerland, LzLabs this week said it's setting up shop in North America to help mainframe users move legacy applications (think COBOL) into the more modern and flexible cloud application environment.
**Read also: [How to plan a software-defined data-center network][2]**
At the heart of LzLabs' service is its Software Defined Mainframe (SDM), an open-source, Eclipse-based system that's designed to let legacy applications, particularly those without typically available source code, such as COBOL, run in the cloud without recompilation.
The company says it works closely with cloud providers such as Amazon Web Services, and its service can be implemented on [Microsoft Azure][3]. It also works with other technology partners such as Red Hat and Accenture.
Legacy applications have become locked into mainframe infrastructures, and the lack of available skilled personnel who understand the platform and its development process has revealed the urgent need to move these critical applications to open platforms and the cloud, according to Mark Cresswell, CEO of LzLabs.
"With SDM, customers can run mainframe workloads on x86 or the cloud without recompilation or data reformatting. This approach significantly reduces the risks associated with mainframe migration, enables incremental modernization and integrates applications with DevOps, open-source and the cloud," the company [stated][4].
Cresswell pointed to the news stories around the COVID-19 pandemic that found many mainframe and COBOL-based state government systems were having trouble keeping up with the huge volume of unemployment claims hitting those systems.
For example, [CNN in April reported][5] that in New Jersey, Gov. Phil Murphy put out a call for volunteers who know how to code COBOL because many of the state's systems still run on older mainframes. Connecticut is also reportedly struggling to process the large volume of unemployment claims with its decades-old mainframe; it's working to develop a new benefits system with Maine, Rhode Island, Mississippi and Oklahoma, but the system won't be finished before next year, according to the CNN story.
"An estimated 70% of the world's commercial transactions are processed by a mainframe application at some point in their cycle, which means U.S. state governments are merely the canaries in the coal mine. Banks, insurance, telecom and manufacturing companies (to mention a few) should be planning their exit," Cresswell wrote in a [blog][6].
LzLabs is part of an ecosystem of mainframe modernization service providers that includes [Astadia][7], [Asysco][8], [GTSoftware][9], [Micro Focus][10] and others.
Large cloud players are also involved in modernizing mainframe applications. For example, Google Cloud in February [bought mainframe cloud-migration service firm Cornerstone Technology][11] with an eye toward helping Big Iron customers move workloads to the private and public cloud. Google said the Cornerstone technology found in its [G4 platform][12]  will shape the foundation of its future mainframe-to-Google Cloud offerings and help mainframe customers modernize applications and infrastructure.
"Through the use of automated processes, Cornerstone's tools can break down your Cobol, PL/1, or Assembler programs into services and then make them cloud native, such as within a managed, containerized environment," wrote Howard Weale, Google's director, transformation practice, in a [blog][13] about the acquisition.
"As the industry increasingly builds applications as a set of services, many customers want to break their mainframe monolith programs into either Java monoliths or Java microservices," Weale stated. 
The Cornerstone move is also part of Google's effort to stay competitive in the face of mainframe-migration offerings from [Amazon Web Services][14], [IBM/RedHat][15] and [Microsoft][16].
While the number of services looking to aid in mainframe modernization might be increasing, the actual migration to those services should be well planned, experts say.
A _Network World_ article on [mainframe migration options][17] from 2017 still holds true today: "The problem facing enterprises wishing to move away from mainframes has always been the 'all-or-nothing' challenge posed by their own workloads. These workloads are so interdependent and complex that everything has to be moved at once or the enterprise suffers. The suffering typically centers on underperformance, increased complexity caused by bringing functions over piecemeal, or having to add new development or operational staff to support the target environment. In the end, the savings on hardware or software turns out to be less than the increased costs required of a hybrid computing solution."
Gartner last year also warned that migrating legacy applications should be undertaken very deliberately.
"The value gained by moving applications from the traditional enterprise platform onto the next 'bright, shiny thing' rarely provides an improvement in the business process or the company's bottom line. A great deal of analysis must be performed and each cost accounted for," Gartner stated in a 2019 report, [Considering Leaving Legacy IBM Platforms? Beware, as Cost Savings May Disappoint, While Risking Quality][18]*. "*Legacy platforms may seem old, outdated and due for replacement. Yet IBM and other vendors are continually integrating open-source tools to appeal to more developers while updating the hardware. Application leaders should reassess the capabilities and quality of these platforms before leaving them."
Join the Network World communities on [Facebook][19] and [LinkedIn][20] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3567058/options-grow-for-migrating-mainframe-apps-to-the-cloud.html
Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject), and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/LzLabsGmbH
[2]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[3]: https://www.astadia.com/video/mainframe-transformation-azure-is-the-new-mainframe
[4]: https://appsource.microsoft.com/en-us/product/web-apps/lzlabsgmbh-5083555.lzlabs-softwaredefinedmainframe?src=office&tab=Overview
[5]: https://www.cnn.com/2020/04/08/business/coronavirus-cobol-programmers-new-jersey-trnd/index.html
[6]: https://blog.lzlabs.com/cobol-crisis-for-us-state-government-departments-is-the-canary-in-the-coal-mine
[7]: https://www.astadia.com/blog/mainframe-migration-to-azure-in-5-steps
[8]: https://www.asysco.com/code-transformation/
[9]: https://www.gtsoftware.com/services/migration-services/
[10]: https://www.microfocus.com/en-us/home
[11]: https://www.networkworld.com/article/3528451/google-cloud-moves-to-aid-mainframe-migration.html
[12]: https://www.cornerstone.nl/solutions/modernization
[13]: https://cloud.google.com/blog/topics/inside-google-cloud/helping-customers-migrate-their-mainframe-workloads-to-google-cloud
[14]: https://aws.amazon.com/blogs/enterprise-strategy/yes-you-should-modernize-your-mainframe-with-the-cloud/
[15]: https://www.networkworld.com/article/3438542/ibm-z15-mainframe-amps-up-cloud-security-features.html
[16]: https://azure.microsoft.com/en-us/migration/mainframe/
[17]: https://www.networkworld.com/article/3192652/why-are-mainframes-still-in-the-enterprise-data-center.html
[18]: https://www.gartner.com/doc/3905276
[19]: https://www.facebook.com/NetworkWorld/
[20]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,95 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Btrfs to be the Default Filesystem on Fedora? Fedora 33 Starts Testing Btrfs Switch)
[#]: via: (https://itsfoss.com/btrfs-default-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Btrfs to be the Default Filesystem on Fedora? Fedora 33 Starts Testing Btrfs Switch
======
While we're months away from Fedora's next stable release ([Fedora 33][1]), there are a few changes worth keeping tabs on.
Among all the other [accepted system-wide changes for Fedora 33][1], the proposal to make Btrfs the default filesystem for desktop variants is the most interesting one.
Here's what Fedora mentions in the proposal:
> For laptop and workstation installs of Fedora, we want to provide file system features to users in a transparent fashion. We want to add new features, while reducing the amount of expertise needed to deal with situations like running out of disk space. Btrfs is well adapted to this role by design philosophy, let's make it the default.
It's worth noting that this isn't an accepted system-wide change as of now and is subject to tests made on the [Test Day][2] (**8th July 2020**).
So, why is Fedora proposing this change? Is it going to be useful in any way? Is it a bad move? How is it going to affect Fedora distributions? Let's talk about it here.
![][3]
### What Fedora Editions will it Affect?
As per the proposal, all the desktop editions of Fedora 33, spins, and labs will be subject to this change, if the tests are successful.
So, you should expect the [workstation editions][4] to get Btrfs as the default file system on Fedora 33.
### Potential Benefits of Implementing This Change
To improve Fedora for laptop and workstation use cases, the Btrfs file system offers some benefits.
Even though this change hasn't been accepted for Fedora 33 yet, let me point out the advantages of having Btrfs as the default file system:
* Improves the lifespan of storage hardware
* Provides an easy solution when a user runs out of free space on the root or home directory
* Less prone to data corruption and easy to recover
* Better file system resize ability
* Ensures desktop responsiveness under heavy memory pressure by enforcing I/O limits
* Makes complex storage setups easy to manage
If you're curious, you might want to dive in deeper to learn about [Btrfs][5] and its benefits in general.
Not to forget, Btrfs was already a supported option — it just wasn't the default file system.
But, overall, it feels like the introduction of Btrfs as the default file system on Fedora 33 could be a useful change, if implemented properly.
### Will Red Hat Enterprise Linux Implement This?
It's quite obvious that Fedora is considered the cutting-edge version of [Red Hat Enterprise Linux][6].
So, if Fedora rejects the change, Red Hat won't implement it. On the other hand, if you'd want RHEL to use Btrfs, Fedora should be the first to approve the change.
To give you more clarity on this, Fedora has mentioned it in detail:
> Red Hat supports Fedora well, in many ways. But Fedora already works closely with, and depends on, upstreams. And this will be one of them. That's an important consideration for this proposal. The community has a stake in ensuring it is supported. Red Hat will never support Btrfs if Fedora rejects it. Fedora necessarily needs to be first, and make the persuasive case that it solves more problems than alternatives. Feature owners believe it does, hands down.
Also, it's worth noting that if you're someone who does not want Btrfs in Fedora, you should be looking at [OpenSUSE][7] and [SUSE Linux Enterprise][8] instead.
### Wrapping Up
Even though it looks like the change should not affect any upgrades or compatibility, you can find more information on the switch to Btrfs by default in the [Fedora Project's wiki page][9].
What do you think about this change targeted for the Fedora 33 release? Do you want the Btrfs file system as the default?
Feel free to let me know your thoughts in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/btrfs-default-fedora/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject), and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://fedoraproject.org/wiki/Releases/33/ChangeSet
[2]: https://fedoraproject.org/wiki/Test_Day:2020-07-08_Btrfs_default?rd=Test_Day:F33_btrfs_by_default_2020-07-08
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/btrfs-default-fedora.png?ssl=1
[4]: https://getfedora.org/en/workstation/
[5]: https://en.wikipedia.org/wiki/Btrfs
[6]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[7]: https://www.opensuse.org
[8]: https://www.suse.com
[9]: https://fedoraproject.org/wiki/Changes/BtrfsByDefault

View File

@ -0,0 +1,741 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An example of very lightweight RESTful web services in Java)
[#]: via: (https://opensource.com/article/20/7/restful-services-java)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
An example of very lightweight RESTful web services in Java
======
Explore lightweight RESTful services in Java through a full code example
to manage a book collection.
![Coding on a computer][1]
Web services, in one form or another, have been around for more than two decades. For example, [XML-RPC services][2] appeared in the late 1990s, followed shortly by ones written in the SOAP offshoot. Services in the [REST architectural style][3] also made the scene about two decades ago, soon after the XML-RPC and SOAP trailblazers. [REST][4]-style (hereafter, Restful) services now dominate in popular sites such as eBay, Facebook, and Twitter. Despite the alternatives to web services for distributed computing (e.g., web sockets, microservices, and new frameworks for remote-procedure calls), Restful web services remain attractive for several reasons:
* Restful services build upon existing infrastructure and protocols, in particular, web servers and the HTTP/HTTPS protocols. An organization that has HTML-based websites can readily add web services for clients interested more in the data and underlying functionality than in the HTML presentation. Amazon, for example, has pioneered making the same information and functionality available through both websites and web services, either SOAP-based or Restful.
* Restful services treat HTTP as an API, thereby avoiding the complicated software layering that has come to characterize the SOAP-based approach to web services. For example, the Restful API supports the standard CRUD (Create-Read-Update-Delete) operations through the HTTP verbs POST-GET-PUT-DELETE, respectively; HTTP status codes inform a requester whether a request succeeded or why it failed.
* Restful web services can be as simple or complicated as needed. Restful is a style—indeed, a very flexible one—rather than a set of prescriptions about how services should be designed and structured. (The attendant downside is that it may be hard to determine what does _not_ count as a Restful service.)
* For a consumer or client, Restful web services are language- and platform-neutral. The client makes requests in HTTP(S) and receives text responses in a format suitable for modern data interchange (e.g., JSON).
* Almost every general-purpose programming language has at least adequate (and often strong) support for HTTP/HTTPS, which means that web-service clients can be written in those languages.
This article explores lightweight Restful services in Java through a full code example.
### The Restful novels web service
The Restful novels web service consists of three programmer-defined classes:
* The `Novel` class represents a novel with just three properties: a machine-generated ID, an author, and a title. The properties could be expanded for more realism, but I want to keep this example simple.
* The `Novels` class consists of utilities for various tasks: converting a plain-text encoding of a `Novel` or a list of them into XML or JSON; supporting the CRUD operations on the novels collection; and initializing the collection from data stored in a file. The `Novels` class mediates between `Novel` instances and the servlet.
* The `NovelsServlet` class derives from `HttpServlet`, a sturdy and flexible piece of software that has been around since the very early enterprise Java of the late 1990s. The servlet acts as an HTTP endpoint for client CRUD requests. The servlet code focuses on processing client requests and generating the appropriate responses, leaving the devilish details to utilities in the `Novels` class.
Some Java frameworks, such as Jersey (JAX-RS) and Restlet, are designed for Restful services. Nonetheless, the `HttpServlet` on its own provides a lightweight, flexible, powerful, and well-tested API for delivering such services. I'll demonstrate this with the novels example.
### Deploy the novels web service
Deploying the novels web service requires a web server, of course. My choice is [Tomcat][5], but the service should work (famous last words!) if it's hosted on, for example, Jetty or even a Java Application Server. The code and a README that summarizes how to install Tomcat are [available on my website][6]. There is also a documented Apache Ant script that builds the novels service (or any other service or website) and deploys it under Tomcat or the equivalent.
Tomcat is available for download from its [website][7]. Once you install it locally, let `TOMCAT_HOME` be the install directory. There are two subdirectories of immediate interest:
* The `TOMCAT_HOME/bin` directory contains startup and stop scripts for Unix-like systems (`startup.sh` and `shutdown.sh`) and Windows (`startup.bat` and `shutdown.bat`). Tomcat runs as a Java application. The web server's servlet container is named Catalina. (In Jetty, the web server and container have the same name.) Once Tomcat starts, enter `http://localhost:8080/` in a browser to see extensive documentation, including examples.
* The `TOMCAT_HOME/webapps` directory is the default for deployed websites and web services. The straightforward way to deploy a website or web service is to copy a JAR file with a `.war` extension (hence, a WAR file) to `TOMCAT_HOME/webapps` or a subdirectory thereof. Tomcat then unpacks the WAR file into its own directory. For example, Tomcat would unpack `novels.war` into a subdirectory named `novels`, leaving `novels.war` as-is. A website or service can be removed by deleting the WAR file and updated by overwriting the WAR file with a new version. By the way, the first step in debugging a website or service is to check that Tomcat has unpacked the WAR file; if not, the site or service was not published because of a fatal error in the code or configuration.
* Because Tomcat listens by default on port 8080 for HTTP requests, a request URL for Tomcat on the local machine begins:
```
`http://localhost:8080/`
```
Access a programmer-deployed WAR file by adding the WAR file's name but without the `.war` extension:
```
`http://localhost:8080/novels/`
```
If the service was deployed in a subdirectory (e.g., `myapps`) of `TOMCAT_HOME`, this would be reflected in the URL:
```
`http://localhost:8080/myapps/novels/`
```
I'll offer more details about this in the testing section near the end of the article.
As noted, the ZIP file on my homepage contains an Ant script that compiles and deploys a website or service. (A copy of `novels.war` is also included in the ZIP file.) For the novels example, a sample command (with `%` as the command-line prompt) is:
```
`% ant -Dwar.name=novels deploy`
```
This command compiles Java source files and then builds a deployable file named `novels.war`, leaves this file in the current directory, and copies it to `TOMCAT_HOME/webapps`. If all goes well, a `GET` request (using a browser or a command-line utility, such as `curl`) serves as a first test:
```
`% curl http://localhost:8080/novels/`
```
Tomcat is configured, by default, for _hot deploys_: the web server does not need to be shut down to deploy, update, or remove a web application.
### The novels service at the code level
Let's get back to the novels example but at the code level. Consider the `Novel` class below:
#### Example 1. The Novel class
```
package novels;
import java.io.Serializable;
public class Novel implements Serializable, Comparable<Novel> {
    static final long serialVersionUID = 1L;
    private String author;
    private String title;
    private int id;
    public Novel() { }
    public void setAuthor(final String author) { this.author = author; }
    public String getAuthor() { return this.author; }
    public void setTitle(final String title) { this.title = title; }
    public String getTitle() { return this.title; }
    public void setId(final int id) { this.id = id; }
    public int getId() { return this.id; }
    public int compareTo(final Novel other) { return this.id - other.id; }
}
```
This class implements the `compareTo` method from the `Comparable` interface because `Novel` instances are stored in a thread-safe `ConcurrentHashMap`, which does not enforce a sorted order. In responding to requests to view the collection, the novels service sorts a collection (an `ArrayList`) extracted from the map; the implementation of `compareTo` enforces an ascending sorted order by `Novel` ID.
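A small demonstration of that ordering (hypothetical data, not part of the service itself):
```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortDemo {
    public static void main(String[] args) {
        Novel a = new Novel(); a.setId(2); a.setTitle("Emma");
        Novel b = new Novel(); b.setId(1); b.setTitle("Persuasion");
        List<Novel> list = new ArrayList<>(List.of(a, b));
        Collections.sort(list); // delegates to Novel.compareTo: ascending by ID
        for (Novel n : list)
            System.out.println(n.getId() + ": " + n.getTitle()); // 1: Persuasion, then 2: Emma
    }
}
```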
The class `Novels` contains various utility functions:
#### Example 2. The Novels utility class
```
package novels;
import java.io.IOException;
import java.io.File;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.BufferedReader;
import java.nio.file.Files;
import java.util.stream.Stream;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.Collections;
import java.beans.XMLEncoder;
import javax.servlet.ServletContext; // not in JavaSE
import org.json.JSONObject;
import org.json.XML;
public class Novels {
    private final String fileName = "/WEB-INF/data/novels.db";
    private ConcurrentMap<Integer, Novel> novels;
    private ServletContext sctx;
    private AtomicInteger mapKey;
    public Novels() {
        novels = new ConcurrentHashMap<Integer, Novel>();
        mapKey = new AtomicInteger();
    }
    public void setServletContext(ServletContext sctx) { this.sctx = sctx; }
    public ServletContext getServletContext() { return this.sctx; }
    public ConcurrentMap<Integer, Novel> getConcurrentMap() {
        if (getServletContext() == null) return null; // not initialized
        if (novels.size() < 1) populate();
        return this.novels;
    }
    public String toXml(Object obj) { // default encoding
        String xml = null;
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            XMLEncoder encoder = new XMLEncoder(out);
            encoder.writeObject(obj);
            encoder.close();
            xml = out.toString();
        }
        catch(Exception e) { }
        return xml;
    }
    public String toJson(String xml) { // option for requester
        try {
            JSONObject jobt = XML.toJSONObject(xml);
            return jobt.toString(3); // 3 is indentation level
        }
        catch(Exception e) { }
        return null;
    }
    public int addNovel(Novel novel) {
        int id = mapKey.incrementAndGet();
        novel.setId(id);
        novels.put(id, novel);
        return id;
    }
    private void populate() {
        InputStream in = sctx.getResourceAsStream(this.fileName);
        // Convert novel.db string data into novels.
        if (in != null) {
            try {
                InputStreamReader isr = new InputStreamReader(in);
                BufferedReader reader = new BufferedReader(isr);
                String record = null;
                while ((record = reader.readLine()) != null) {
                    String[] parts = record.split("!");
                    if (parts.length == 2) {
                        Novel novel = new Novel();
                        novel.setAuthor(parts[0]);
                        novel.setTitle(parts[1]);
                        addNovel(novel); // sets the Id, adds to map
                    }
                }
                in.close();
            }
            catch (IOException e) { }
        }
    }
}
```
The most complicated method is `populate`, which reads from a text file contained in the deployed WAR file. The text file contains the initial collection of novels. To open the text file, the `populate` method needs the `ServletContext`, a Java map that contains all of the critical information about the servlet embedded in the servlet container. The text file, in turn, contains records such as this:
```
`Jane Austen!Persuasion`
```
The line is parsed into two parts (author and title) separated by the bang symbol (`!`). The method then builds a `Novel` instance, sets the author and title properties, and adds the novel to the collection, which acts as an in-memory data store.
The `Novels` class also has utilities to encode the novels collection into XML or JSON, depending upon the format that the requester prefers. XML is the default, but JSON is available upon request. A lightweight XML-to-JSON package provides the JSON. Further details on encoding are below.
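For example, a requester could ask for JSON through the HTTP `Accept` header (a sketch, assuming the service is deployed locally as described above):
```
% curl --header "Accept: application/json" http://localhost:8080/novels/
```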
#### Example 3. The NovelsServlet class
```
package novels;
import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Arrays;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.beans.XMLEncoder;
import org.json.JSONObject;
import org.json.XML;
public class NovelsServlet extends HttpServlet {
    static final long serialVersionUID = 1L;
    private Novels novels; // back-end bean
    // Executed when servlet is first loaded into container.
    @Override
    public void init() {
        this.novels = new Novels();
        novels.setServletContext(this.getServletContext());
    }
    // GET /novels
    // GET /novels?id=1
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));
        // Check user preference for XML or JSON by inspecting
        // the HTTP headers for the Accept key.
        boolean json = false;
        String accept = request.getHeader("accept");
        if (accept != null && accept.contains("json")) json = true;
        // If no query string, assume client wants the full list.
        if (key == null) {
            ConcurrentMap<Integer, Novel> map = novels.getConcurrentMap();
            Object[] list = map.values().toArray();
            Arrays.sort(list);
            String payload = novels.toXml(list);        // defaults to Xml
            if (json) payload = novels.toJson(payload); // Json preferred?
            sendResponse(response, payload);
        }
        // Otherwise, return the specified Novel.
        else {
            Novel novel = novels.getConcurrentMap().get(key);
            if (novel == null) { // no such Novel
                String msg = key + " does not map to a novel.\n";
                sendResponse(response, novels.toXml(msg));
            }
            else { // requested Novel found
                if (json) sendResponse(response, novels.toJson(novels.toXml(novel)));
                else sendResponse(response, novels.toXml(novel));
            }
        }
    }
    // POST /novels
    @Override
    public void doPost(HttpServletRequest request, HttpServletResponse response) {
        String author = request.getParameter("author");
        String title = request.getParameter("title");
        // Are the data to create a new novel present?
        if (author == null || title == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
        // Create a novel.
        Novel n = new Novel();
        n.setAuthor(author);
        n.setTitle(title);
        // Save the ID of the newly created Novel.
        int id = novels.addNovel(n);
        // Generate the confirmation message.
        String msg = "Novel " + id + " created.\n";
        sendResponse(response, novels.toXml(msg));
    }
    // PUT /novels
    @Override
    public void doPut(HttpServletRequest request, HttpServletResponse response) {
        /* A workaround is necessary for a PUT request because Tomcat does not
           generate a workable parameter map for the PUT verb. */
        String key = null;
        String rest = null;
        boolean author = false;
        /* Let the hack begin. */
        try {
            BufferedReader br =
                new BufferedReader(new InputStreamReader(request.getInputStream()));
            String data = br.readLine();
            /* To simplify the hack, assume that the PUT request has exactly
               two parameters: the id and either author or title. Assume, further,
               that the id comes first. From the client side, a hash character
               # separates the id and the author/title, e.g.,
                  id=33#title=War and Peace
            */
            String[] args = data.split("#");      // id in args[0], rest in args[1]
            String[] parts1 = args[0].split("="); // id = parts1[1]
            key = parts1[1];
            String[] parts2 = args[1].split("="); // parts2[0] is key
            if (parts2[0].contains("author")) author = true;
            rest = parts2[1];
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
        // If no key, then the request is ill formed.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
        // Look up the specified novel.
        Novel p = novels.getConcurrentMap().get(Integer.valueOf((key.trim())));
        if (p == null) { // not found
            String msg = key + " does not map to a novel.\n";
            sendResponse(response, novels.toXml(msg));
        }
        else { // found
            if (rest == null) {
                throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
            }
            // Do the editing.
            else {
                if (author) p.setAuthor(rest);
                else p.setTitle(rest);
                String msg = "Novel " + key + " has been edited.\n";
                sendResponse(response, novels.toXml(msg));
            }
        }
    }
    // DELETE /novels?id=1
    @Override
    public void doDelete(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));
        // Only one Novel can be deleted at a time.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
        try {
            novels.getConcurrentMap().remove(key);
            String msg = "Novel " + key + " removed.\n";
            sendResponse(response, novels.toXml(msg));
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }
    // Methods Not Allowed
    @Override
    public void doTrace(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }
    @Override
    public void doHead(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }
    @Override
    public void doOptions(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }
    // Send the response payload (Xml or Json) to the client.
    private void sendResponse(HttpServletResponse response, String payload) {
        try {
            OutputStream out = response.getOutputStream();
            out.write(payload.getBytes());
            out.flush();
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }
}
```
Recall that the `NovelsServlet` class above extends the `HttpServlet` class, which in turn extends the `GenericServlet` class, which implements the `Servlet` interface:
```
`NovelsServlet extends HttpServlet extends GenericServlet implements Servlet`
```
As the name makes clear, the `HttpServlet` is designed for servlets delivered over HTTP(S). The class provides empty methods named after the standard HTTP request verbs (officially, _methods_):
* `doPost` (Post = Create)
* `doGet` (Get = Read)
* `doPut` (Put = Update)
* `doDelete` (Delete = Delete)
Some additional HTTP verbs are covered as well. An extension of the `HttpServlet`, such as the `NovelsServlet`, overrides any `do` method of interest, leaving the others as no-ops. The `NovelsServlet` overrides seven of the `do` methods.
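A minimal sketch of that pattern (a hypothetical servlet, separate from the novels service):
```
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    // Only GET is overridden; every other HTTP verb keeps the
    // HttpServlet default behavior.
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/plain");
        response.getWriter().println("hello");
    }
}
```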
Each of the `HttpServlet` CRUD methods takes the same two arguments. Here is `doPost` as an example:
```
`public void doPost(HttpServletRequest request, HttpServletResponse response) {`
```
The `request` argument is a map of the HTTP request information, and the `response` provides an output stream back to the requester. A method such as `doPost` is structured as follows:
* Read the `request` information, taking whatever action is appropriate to generate a response. If information is missing or otherwise deficient, generate an error.
* Use the extracted request information to perform the appropriate CRUD operation (in this case, create a `Novel`) and then encode an appropriate response to the requester using the `response` output stream to do so. In the case of `doPost`, the response is a confirmation that a new novel has been created and added to the collection. Once the response is sent, the output stream is closed, which closes the connection as well.
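From the client side, such a request might look like this (a sketch, assuming the local Tomcat deployment described earlier):
```
% curl --request POST --data "author=Austen&title=Emma" http://localhost:8080/novels/
```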
### More on the do method overrides
An HTTP request has a relatively simple structure. Here is a sketch in the familiar HTTP 1.1 format, with comments introduced by double sharp signs:
```
GET /novels              ## start line
Host: localhost:8080     ## header element
Accept-type: text/plain  ## ditto
...
[body]                   ## POST and PUT only
```
The start line begins with the HTTP verb (in this case, `GET`) and the URI (Uniform Resource Identifier), which is the noun (in this case, `novels`) that names the targeted resource. The headers consist of key-value pairs, with a colon separating the key on the left from the value(s) on the right. The header with key `Host` (case insensitive) is required; the hostname `localhost` is the symbolic address of the local machine on the local machine, and the port number `8080` is the default for the Tomcat web server awaiting HTTP requests. (By default, Tomcat listens on port 8443 for HTTPS requests.) The header elements can occur in arbitrary order. In this example, the `Accept-type` header's value is the MIME type `text/plain`.
Some requests (in particular, `POST` and `PUT`) have bodies, whereas others (in particular, `GET` and `DELETE`) do not. If there is a body (perhaps empty), a blank line separates the headers from the body; an HTTP body of this kind consists of key-value pairs. For bodyless requests, header elements, such as the query string, can be used to send information. Here is a request to `GET` the `/novels` resource with the ID of 2:
```
GET /novels?id=2
```
The query string starts with the question mark and, in general, consists of key-value pairs, although a key without a value is possible.
The `HttpServlet`, with methods such as `getParameter` and `getParameterMap`, nicely hides the distinction between HTTP requests with and without a body. In the novels example, the `getParameter` method is used to extract the required information from the `GET`, `POST`, and `DELETE` requests. (Handling a `PUT` request requires lower-level code because Tomcat does not provide a workable parameter map for `PUT` requests.) Here, for illustration, is a slice of the `doPost` method in the `NovelsServlet` override:
```
@Override
public void doPost(HttpServletRequest request, HttpServletResponse response) {
   String author = request.getParameter("author");
   String title  = request.getParameter("title");
   ...
```
For a bodyless `DELETE` request, the approach is essentially the same:
```
@Override
public void doDelete(HttpServletRequest request, HttpServletResponse response) {
   String param = request.getParameter("id"); // id of the novel to be removed
   ...
```
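As noted parenthetically above, handling `PUT` requires lower-level code because Tomcat does not build a parameter map for `PUT` requests. Here is a rough sketch of that approach, assuming the body arrives as `key=value` pairs separated by `#` (the format the sample `curl` commands use later); the dispatch logic is elided, and the article's actual `doPut` lives in the ZIP file:
```
@Override
public void doPut(HttpServletRequest request, HttpServletResponse response) {
    try {
        // Tomcat does not parse PUT bodies into a parameter map,
        // so read the raw body from the request's input stream.
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(request.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) body.append(line);
        for (String pair : body.toString().split("#")) {
            String[] kv = pair.split("=", 2); // e.g., "id=3" or "title=..."
            // ...dispatch on kv[0] to update the matching novel...
        }
    }
    catch (Exception e) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
    }
}
```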
The `doGet` method needs to distinguish between two flavors of a `GET` request: one flavor means _get all_, whereas the other means _get a specified one_. If the `GET` request URL contains a query string whose key is an ID, then the request is interpreted as _get a specified one_:
```
http://localhost:8080/novels?id=2  ## GET specified
```
If there is no query string, the `GET` request is interpreted as "get all":
```
http://localhost:8080/novels       ## GET all
```
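For illustration, here is a sketch of how the `doGet` override might branch on the query string. The `getAll` and `get` accessors on the novels collection are assumptions, and the `toXml` encoding helper is covered in the next section:
```
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response) {
    String param = request.getParameter("id");
    if (param == null) {
        // No query string: get all the novels.
        sendResponse(response, novels.toXml(novels.getAll())); // getAll is assumed
    }
    else {
        // Query string with an id key: get the specified novel.
        Novel novel = novels.get(Integer.parseInt(param)); // get is assumed
        sendResponse(response, novels.toXml(novel));
    }
}
```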
### Some devilish details
The novels service design reflects how a Java-based web server such as Tomcat works. At startup, Tomcat builds a thread pool from which request handlers are drawn, an approach known as the _one thread per request model_. Modern versions of Tomcat also use non-blocking I/O to boost performance.
The novels service executes as a _single_ instance of the `NovelsServlet` class, which in turn maintains a _single_ collection of novels. Accordingly, a race condition would arise, for example, if these two requests were processed concurrently:
* One request changes the collection by adding a new novel.
* The other request gets all the novels in the collection.
The outcome is indeterminate, depending on exactly how the _read_ and _write_ operations overlap. To avoid this problem, the novels service uses a thread-safe `ConcurrentMap`. Keys for this map are generated with a thread-safe `AtomicInteger`. Here is the relevant code segment:
```
public class Novels {
    private ConcurrentMap<Integer, Novel> novels;
    private AtomicInteger mapKey;
    ...
```
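Here is a sketch of how these two thread-safe types might combine when a novel is added. (The method and the bean-style setters on `Novel` are illustrative assumptions, not the article's verbatim code.)
```
// Adding a novel is safe under concurrent requests: the key generation
// and the map insertion are each atomic operations.
public int addNovel(String author, String title) {
    int id = mapKey.getAndIncrement(); // atomically reserve a unique key
    Novel novel = new Novel();         // assumes a bean-style Novel class
    novel.setId(id);
    novel.setAuthor(author);
    novel.setTitle(title);
    novels.put(id, novel);             // ConcurrentMap handles the locking
    return id;
}
```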
By default, a response to a client request is encoded as XML. The novels program uses the old-time `XMLEncoder` class for simplicity; a far richer option is the JAXB library. The code is straightforward:
```
public String toXml(Object obj) { // default encoding
   String xml = null;
   try {
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      XMLEncoder encoder = new XMLEncoder(out);
      encoder.writeObject(obj);
      encoder.close();
      xml = out.toString();
   }
   catch(Exception e) { } // on failure, fall through and return null
   return xml;
}
```
The `Object` parameter is either a sorted `ArrayList` of novels (in response to a _get all_ request), a single `Novel` instance (in response to a _get one_ request), or a `String` (a confirmation message).
If an HTTP request header refers to JSON as a desired type, then the XML is converted to JSON. Here is the check in the `doGet` method of the `NovelsServlet`:
```
String accept = request.getHeader("accept"); // "accept" is case insensitive
if (accept != null && accept.contains("json")) json = true;
```
The `Novels` class houses the `toJson` method, which converts XML to JSON:
```
public String toJson(String xml) { // option for requester
   try {
      JSONObject jobt = XML.toJSONObject(xml);
      return jobt.toString(3); // 3 is the indentation level
   }
   catch(Exception e) { } // on failure, fall through and return null
   return null;
}
```
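Putting the two encoders together, the response pipeline might look like this sketch, where `json` is the flag set by the header check above and `getAll` is again an assumed accessor:
```
// Sketch: encode to XML first; convert to JSON only if the requester asked.
String xml = novels.toXml(novels.getAll());
String payload = json ? novels.toJson(xml) : xml;
sendResponse(response, payload);
```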
The `NovelsServlet` checks for errors of various types. For example, a `POST` request should include an author and a title for the new novel. If either is missing, the `doPost` method throws an exception:
```
if (author == null || title == null)
   throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
```
The `SC` in `SC_BAD_REQUEST` stands for _status code_, and the `BAD_REQUEST` has the standard HTTP numeric value of 400. If the HTTP verb in a request is `TRACE`, a different status code is returned:
```
public void doTrace(HttpServletRequest request, HttpServletResponse response) {
   throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
}
```
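Note that the `do` methods signal errors by throwing a `RuntimeException` whose message carries the numeric status code. One plausible way to translate that into an HTTP error on the wire (an assumption for illustration, not code shown in the article) is a wrapper along these lines:
```
// Hypothetical wrapper: turn the status code carried in the exception
// message into an HTTP error response.
try {
    doPost(request, response);
}
catch (RuntimeException e) {
    try {
        response.sendError(Integer.parseInt(e.getMessage()));
    }
    catch (IOException ignored) { } // nothing left to do if this also fails
}
```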
### Testing the novels service
Testing a web service with a browser is tricky. Among the CRUD verbs, modern browsers generate only `POST` (Create) and `GET` (Read) requests. Even a `POST` request is challenging from a browser, as the key-values for the body need to be included; this is typically done through an HTML form. A command-line utility such as [curl][21] is a better way to go, as this section illustrates with some `curl` commands, which are included in the ZIP on my website.
Here are some sample tests without the corresponding output:
```
% curl localhost:8080/novels/
% curl localhost:8080/novels?id=1
% curl --header "Accept: application/json" localhost:8080/novels/
```
The first command requests all the novels, which are encoded by default in XML. The second command requests the novel with an ID of 1, also encoded in XML. The last command adds an `Accept` header element with `application/json` as the desired MIME type. The _get one_ command could also use this header element. Such requests get JSON rather than XML responses.
The next two commands create a new novel in the collection and confirm the addition:
```
% curl --request POST --data "author=Tolstoy&title=War and Peace" localhost:8080/novels/
% curl localhost:8080/novels?id=4
```
A `PUT` command in `curl` resembles a `POST` command except that the `PUT` body does not use standard syntax. The documentation for the `doPut` method in the `NovelsServlet` goes into detail, but the short version is that Tomcat does not generate a proper map on `PUT` requests. Here is the sample `PUT` command and a confirmation command:
```
% curl --request PUT --data "id=3#title=This is an UPDATE" localhost:8080/novels/
% curl localhost:8080/novels?id=3
```
The second command confirms the update.
Finally, the `DELETE` command works as expected:
```
% curl --request DELETE localhost:8080/novels?id=2
% curl localhost:8080/novels/
```
The request is for the novel with the ID of 2 to be deleted. The second command shows the remaining novels.
### The web.xml configuration file
Although it's officially optional, a `web.xml` configuration file is a mainstay in a production-grade website or service. The configuration file allows routing, security, and other features of a site or service to be specified independently of the implementation code. The configuration for the novels service handles routing by providing a URL pattern for requests dispatched to this service:
```
<?xml version = "1.0" encoding = "UTF-8"?>
<web-app>
  <servlet>
    <servlet-name>novels</servlet-name>
    <servlet-class>novels.NovelsServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>novels</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```
The `servlet-name` element provides an abbreviation (`novels`) for the servlet's fully qualified class name (`novels.NovelsServlet`), and this name is used in the `servlet-mapping` element below.
Recall that a URL for a deployed service has the WAR file name right after the port number:
```
http://localhost:8080/novels/
```
The slash immediately after the port number begins the URI known as the _path_ to the requested resource, in this case, the novels service; hence, the term `novels` occurs after the first single slash.
In the `web.xml` file, the `url-pattern` is specified as `/*`, which means _any path that starts with /novels_. Suppose Tomcat encounters a contrived request URL, such as this:
```
http://localhost:8080/novels/foobar/
```
The `web.xml` configuration specifies that this request, too, should be dispatched to the novels servlet because the `/*` pattern covers `/foobar`. The contrived URL thus has the same result as the legitimate one shown above it.
A production-grade configuration file might include information on security, both wire-level and users-roles. Even in this case, the configuration file would be only two or three times the size of the sample one.
### Wrapping up
The `HttpServlet` is at the center of Java's web technologies. A website or web service, such as the novels service, extends this class, overriding the `do` verbs of interest. A Restful framework such as Jersey (JAX-RS) or Restlet does essentially the same by providing a customized servlet, which then acts as the HTTP(S) endpoint for requests against a web application written in the framework.
A servlet-based application has access, of course, to any Java library required in the web application. If the application follows the separation-of-concerns principle, then the servlet code remains attractively simple: the code checks a request, issuing the appropriate error if there are deficiencies; otherwise, the code calls out for whatever functionality may be required (e.g., querying a database, encoding a response in a specified format), and then sends the response to the requester. The `HttpServletRequest` and `HttpServletResponse` types make it easy to perform the servlet-specific work of reading the request and writing the response.
Java has APIs that range from the very simple to the highly complicated. If you need to deliver some Restful services using Java, my advice is to give the low-fuss `HttpServlet` a try before anything else.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/restful-services-java
Author: [Marty Kalin][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: http://xmlrpc.com/
[3]: https://en.wikipedia.org/wiki/Representational_state_transfer
[4]: https://www.redhat.com/en/topics/integration/whats-the-difference-between-soap-rest
[5]: http://tomcat.apache.org/
[6]: https://condor.depaul.edu/mkalin
[7]: https://tomcat.apache.org/download-90.cgi
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+serializable
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bytearrayoutputstream
[13]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstream
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstreamreader
[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bufferedreader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtimeexception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[21]: https://curl.haxx.se/


@@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What's the difference between DevSecOps and agile software development)
[#]: via: (https://opensource.com/article/20/7/devsecops-vs-agile)
[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta)
What's the difference between DevSecOps and agile software development
======
Are you focused more on security or software delivery? Or can you have both?
![Brick wall between two people, a developer and an operations manager][1]
There is a tendency in the tech community to use the terms DevSecOps and agile development interchangeably. While the two share some similarities, such as an aim to detect risks earlier, there are also distinctions that [drastically alter how each would work][2] in your organization.
DevSecOps builds on some of the principles that agile development established. However, DevSecOps is [especially focused on integrating security features][3], while agile is focused on delivering software.
Knowing how to protect your website or application from ransomware and other threats really comes down to the software and systems development practices you use. Your needs may determine whether you choose DevSecOps, agile development, or both.
### Differences between DevSecOps and agile
The main distinction between these two systems comes down to one simple concept: security. Depending on your software development practices, your company's security measures—and when, where, and who implements them—may differ significantly.
Every business [needs IT security][4] to protect their vital data. Virtual private networks (VPNs), digital certificates, firewall protection, multi-factor authentication, secure cloud storage, and teaching employees about basic cybersecurity measures are all actions a business should take if it truly values IT security.
When you trust DevSecOps, you're essentially making your company's security inseparable from continuous integration and delivery. DevSecOps methodologies emphasize security at the very beginning of development and make it an integral component of overall software quality.
This is due to three major principles in DevSecOps security:
* Balancing user access with data security
* [Encrypting data][5] with VPN and SSL to protect it from intruders while it is in transit
* Anticipating future risks with tools that scan new code for security flaws and notifying developers about the flaws
While DevOps has always intended to include security, not every organization practicing DevOps has kept it in mind. That is where DevSecOps as an evolution of DevOps can offer clarity. Despite the similarity of their names, the two [should not be confused][6]. In a DevSecOps model, security is the primary driving force for the organization.
Meanwhile, agile development is more focused on iterative development cycles, which means feedback is constantly integrated into continuous software development. [Agile's key principles][7] are to embrace changing environments to provide customers and clients with competitive advantages, to collaborate closely with developers and stakeholders, and to maintain a consistent focus on technical excellence throughout the process to help boost efficiency. In other words, unless an agile team includes security in its definition of excellence, security _is_ an afterthought in agile.
### Challenges for defense agencies
If there's any organization dedicated to the utmost in security, it's the U.S. Department of Defense. In 2018, the DoD published a [guide to "fake agile"][8] or "agile in name only" in software development. The guide was designed to warn DoD executives about bad programming and explain how to spot it to avoid risks.
It's not only the DoD that has something to gain by using these methodologies. The healthcare and financial sectors also [maintain massive quantities][9] of sensitive data that must remain secure.
The DoD's changing of the guard, a modernization strategy that includes the adoption of DevSecOps, is essential. This is particularly pertinent in an age when even the DoD is susceptible to hacker attacks and data breaches, as evidenced by its [massive data breach][10] in February 2020.
There are also risks inherent in transferring cybersecurity best practices into real-life development. Things won't go perfectly 100% of the time. At best, things will be uncomfortable, and at worst, they could create a whole new set of risks.
Developers, especially those working on code for military software, may not have a thorough [understanding of all contexts][11] where DevSecOps should be employed. There will be a steep learning curve, but for the greater good of security, these are necessary growing pains.
### New models in the age of automation
To address growing concerns about previous security measures, DoD contractors have begun to assess the DevSecOps model. The key is deploying the methodology into continuous service delivery contexts.
There are three ways this can happen. The first involves automation, which is [already being used][12] in most privacy and security tools, including VPNs and privacy-enhanced mobile operating systems. Instead of relying on human-based checks and balances, automation in large-scale cloud infrastructures can handle ongoing maintenance and security assessments.
The second element involves the transition to DevSecOps as the primary security checkpoint. Traditionally, systems were designed with zero expectation that data would be accessible as it moves between various components.
The third and final element involves bringing corporate approaches to military software development. Many DoD contractors and employees come from the commercial sector rather than the military. Their background gives them knowledge and experience in [providing cybersecurity][13] to large-scale businesses, which they can bring into government positions.
### Challenges worth overcoming
Switching to a DevSecOps-based methodology presents some challenges. In the last decade, many organizations have completely redesigned their development lifecycles to comply with agile development practices, and making another switch so soon may seem daunting.
Businesses should take comfort in knowing that even the DoD has had trouble with this transition; they're not alone in facing the challenges of rolling out new processes that make commercial techniques and tools more widely accessible.
Looking into the future, the switch to DevSecOps will be no more painful than the switch to agile development was. Firms have a lot to gain by acknowledging the [value of building security][4] into development workflows, as well as building upon the advantages of existing agile networks.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/devsecops-vs-agile
Author: [Sam Bocetta][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://opensource.com/users/sambocetta
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/devops_confusion_wall_questions.png?itok=zLS7K2JG (Brick wall between two people, a developer and an operations manager)
[2]: https://tech.gsa.gov/guides/understanding_differences_agile_devsecops/
[3]: https://www.redhat.com/en/topics/devops/what-is-devsecops
[4]: https://www.redhat.com/en/topics/security
[5]: https://surfshark.com/blog/does-vpn-protect-you-from-hackers
[6]: https://www.infoq.com/articles/evolve-devops-devsecops/
[7]: https://enterprisersproject.com/article/2019/9/agile-project-management-explained
[8]: https://www.governmentciomedia.com/defense-innovation-board-issues-guide-detecting-agile-bs
[9]: https://www.redhat.com/en/solutions/financial-services
[10]: https://www.military.com/daily-news/2020/02/25/dod-agency-suffers-data-breach-potentially-compromising-ssns.html
[11]: https://fcw.com/articles/2020/01/23/dod-devsecops-guidance-williams.aspx
[12]: https://privacyaustralia.net/privacy-tools/
[13]: https://www.securitymagazine.com/articles/88301-cybersecurity-is-standard-business-practice-for-large-companies


@@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Btrfs to be the Default Filesystem on Fedora? Fedora 33 Starts Testing Btrfs Switch)
[#]: via: (https://itsfoss.com/btrfs-default-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Btrfs to Be the Default Filesystem on Fedora? Fedora 33 Starts Testing the Switch to Btrfs
======
Even though the next stable release of Fedora ([Fedora 33][1]) is still a few months away, there are already a few changes worth keeping an eye on.
Among all the [system-wide changes accepted for Fedora 33][1], the most interesting proposal is making Btrfs the default filesystem for desktop installs.
Here is how Fedora describes the proposal:
> For laptop and workstation installs of Fedora, we want to provide file system features to users in a transparent fashion. We want to add new features, while reducing the amount of expertise needed to deal with situations like running out of disk space. Btrfs is well adapted to this role by its design philosophy, so let's make it the default.
It is worth noting that, so far, this is not a system-wide change, and it needs to be tested on the [Test Day][2] (**July 8, 2020**).
So why is Fedora proposing this change? Is it going to be useful? Is it a bad move? How will it affect Fedora releases? Let's talk about it here.
![][3]
### Which Fedora editions will it affect?
According to the proposal, if the testing goes well, all desktop editions, spins, and labs of Fedora 33 may get this change.
So, you can expect the [Workstation edition][4] to have Btrfs as the default filesystem on Fedora 33.
### Potential benefits of implementing this change
To improve Fedora for laptop and workstation use cases, the Btrfs filesystem offers a number of benefits.
Even though Fedora 33 hasn't locked in this change yet, let me point out the advantages of having Btrfs as the default filesystem:
* Extends the lifespan of storage hardware
* Provides a simple solution for when a user runs out of free space on the root or home partition
* Less prone to data corruption and easier to recover
* Offers better filesystem resizing capabilities
* Keeps the desktop responsive under heavy memory pressure by enforcing I/O limits
* Makes complex storage setups easy to manage
If you are curious, you may want to dig deeper into [Btrfs][5] and its benefits in general.
Don't forget that Btrfs is already a supported option; it just isn't the default filesystem.
Overall, though, introducing Btrfs as the default filesystem on Fedora 33 feels like a useful change, if implemented properly.
### Can Red Hat Enterprise Linux implement it?
Obviously, Fedora is considered the cutting edge of [Red Hat Enterprise Linux][6].
So, if Fedora rejects the change, Red Hat won't implement it. On the other hand, if you want RHEL to use Btrfs, Fedora has to agree to the change first.
To make things clearer, Fedora explains it in detail:
> Red Hat supports Fedora well, in many ways. But Fedora already works closely with, and depends on, upstreams. And this will be one of them. That's an important consideration for this proposal. The community has a stake in ensuring it is supported. If Fedora rejects it, Red Hat will never support Btrfs. Fedora necessarily needs to be first and make the persuasive case that it solves more problems than the alternatives. The feature owners believe it does, without question.
Also, it is worth noting that if you don't want to wait for Fedora to adopt Btrfs, you can take a look at [OpenSUSE][7] and [SUSE Linux Enterprise][8], which already use it.
### Wrapping up
Even though this change doesn't look like it will affect any upgrades or compatibility, you can find more information about the Btrfs change on the [Fedora Project wiki page][9].
What do you think of this change targeting the Fedora 33 release? Do you want Btrfs as the default filesystem?
Let me know your thoughts in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/btrfs-default-fedora/
Author: [Ankush Das][a]
Selected by: [lujun9972][b]
Translated by: [geekpi](https://github.com/geekpi)
Proofread by: [校对者ID](https://github.com/校对者ID)
This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/) (Linux China).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://fedoraproject.org/wiki/Releases/33/ChangeSet
[2]: https://fedoraproject.org/wiki/Test_Day:2020-07-08_Btrfs_default?rd=Test_Day:F33_btrfs_by_default_2020-07-08
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/btrfs-default-fedora.png?ssl=1
[4]: https://getfedora.org/en/workstation/
[5]: https://en.wikipedia.org/wiki/Btrfs
[6]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
[7]: https://www.opensuse.org
[8]: https://www.suse.com
[9]: https://fedoraproject.org/wiki/Changes/BtrfsByDefault