diff --git a/translated/tech/20200501 How to Handle Automatic Updates in Ubuntu.md b/published/20200501 How to Handle Automatic Updates in Ubuntu.md
similarity index 62%
rename from translated/tech/20200501 How to Handle Automatic Updates in Ubuntu.md
rename to published/20200501 How to Handle Automatic Updates in Ubuntu.md
index f86886d82c..4339c22494 100644
--- a/translated/tech/20200501 How to Handle Automatic Updates in Ubuntu.md
+++ b/published/20200501 How to Handle Automatic Updates in Ubuntu.md
@@ -1,44 +1,48 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12257-1.html)
[#]: subject: (How to Handle Automatic Updates in Ubuntu)
[#]: via: (https://itsfoss.com/auto-updates-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
-如何在 Ubuntu 中处理自动更新
+如何在 Ubuntu 中处理自动的无人值守升级
======
-_**简介:本教程教你如何处理无人值守的升级,即 Ubuntu Linux 的自动系统更新。**_
+> 本教程教你如何处理无人值守的升级,即 Ubuntu Linux 的自动系统更新。
+
+
有时,当你尝试[关闭 Ubuntu 系统][1]时,可能看到这个阻止你关闭的页面:
-**关机正在进行无人值守升级,请不要关闭计算机。**
+> 关机过程中正在进行无人值守升级,请不要关闭计算机。
![Unattended Upgrade In Progress In Ubuntu][2]
-你可能想知道什么是“无人值守的升级”,以及它是如何在你不知情的情况下运行的。
+你可能会问这个“无人值守升级”是什么,怎么会在你不知情的情况下运行呢?
-因为 [Ubuntu][3] 非常重视系统的安全性。默认情况下,它会每天自动检查系统更新,如果发现任何安全更新,那么会下载这些更新并自行安装。对于正常的系统和应用更新,它会通过软件更新程序通知你。
+原因是 [Ubuntu][3] 非常重视系统的安全性。默认情况下,它会每天自动检查系统更新,如果发现安全更新,它会下载这些更新并自行安装。对于正常的系统和应用更新,它会通过软件更新程序通知你。
-由于所有这些都是在后台发生的,因此你会直到关机或者尝试自己安装应用时才会意识到。
+由于所有这些都是在后台发生的,所以在你尝试关闭系统或尝试自行安装应用程序之前,你甚至不会意识到这一点。
-在进行这些无人值守的升级时尝试安装新软件会发生[无法获得锁的错误][4]。
+在这些无人值守的升级过程中,尝试安装新软件,会导致著名的[无法获得锁定的错误][4]。
![][5]
如你所见,自动更新带来了一些小麻烦。你可以选择禁用自动更新,但这意味着你必须一直手动检查并[更新你的 Ubuntu 系统][6]。
-你真的需要禁用自动更新吗?
+> **你真的需要禁用自动更新吗?**
+>
+> 请注意,这是一项安全功能。Linux 实际上允许你禁用系统中的所有功能,甚至禁用这些安全功能。
+>
+> 但是我认为,作为普通用户,**你不应该禁用自动更新**。毕竟,它可以确保你的系统安全。
+>
+> 为了确保系统的安全性,你可以忍受自动更新所带来的小麻烦。
-请注意,这是一项安全功能。Linux 实际上允许你禁用系统中的所有功能,甚至禁用这些安全功能。
-但是我认为,作为普通用户,_**你不应禁用自动更新**_。毕竟,它可以确保你的系统安全。
-为了确保系统的安全性,你可以忍受自动更新所带来的小麻烦。
+现在,你已经收到警告。如果你还是觉得承担手动更新系统的额外任务更好,那么让我们看看如何处理自动更新。
-现在,你已经收到警告,并认为最好承担手动更新系统的额外任务,让我们看看如何处理自动更新。
-
-与往常一样,有两种方法可以做到:GUI 和命令行。 我将向您=你展示两种方法。
+与往常一样,有两种方法可以做到:GUI 和命令行。我将向你展示这两种方法。
我在这里使用 Ubuntu 20.04,但是这些步骤对 Ubuntu 18.04 和任何其他 Ubuntu 版本均有效。
@@ -50,7 +54,7 @@ _**简介:本教程教你如何处理无人值守的升级,即 Ubuntu Linux
在此处,进入“更新”选项卡。查找“自动检查更新”。默认情况下,它设置为“每日”。
-你可以将其更改为“从不”,你的系统将永远不会检查更新。如果不检查更新,它将不会找到要安装的新更新。
+你可以将其更改为“从不”,你的系统将永远不会检查更新。如果不检查更新,它就不会找到要安装的新的更新。
![Disable Auto Updates in Ubuntu Completely][8]
@@ -58,13 +62,13 @@ _**简介:本教程教你如何处理无人值守的升级,即 Ubuntu Linux
#### 在 Ubuntu 中处理自动更新的更好方法
-就个人而言,我建议让它自己检查更新。如果你不希望它自动安装更新,那么可以更改该行为以通知有关安全更新的可用性。
+就个人而言,我建议让它自己检查更新。如果你不希望它自动安装更新,那么可以更改该行为,以通知有关安全更新的可用性。
保持“自动检查更新”为“每日”,然后将“有安全更新时”选项更改为“立即显示”,而不是“自动下载并安装”。
![Get notified for security updates instead of automatically installing them][9]
-这样,它会检查是否有更新,而不是在后台自动安装更新,软件更新程序会通知你更新可用于系统。你的系统已经完成正常的系统和软件更新。
+这样,它会检查是否有更新,而不是在后台自动安装更新,软件更新程序会通知你有可用于系统的更新。而你的系统已经完成正常的系统和软件更新。
![Get notified about security updates][10]
@@ -76,13 +80,13 @@ _**简介:本教程教你如何处理无人值守的升级,即 Ubuntu Linux
### 如何在 Ubuntu 中使用命令行禁用自动更新
-你可以在 **/etc/apt/apt.conf.d/20auto-upgrades** 中找到自动升级设置。Ubuntu 终端中的默认文本编辑器是 Nano,因此你可以使用以下命令来编辑此文件:
+你可以在 `/etc/apt/apt.conf.d/20auto-upgrades` 中找到自动升级设置。Ubuntu 终端中的默认文本编辑器是 Nano,因此你可以使用以下命令来编辑此文件:
```
sudo nano /etc/apt/apt.conf.d/20auto-upgrades
```
-现在,如果你不希望系统自动检查更新,那么可以将 APT::Periodic::Update-Package-Lists 的值更改为 0。
+现在,如果你不希望系统自动检查更新,那么可以将 `APT::Periodic::Update-Package-Lists` 的值更改为 `"0"`。
```
APT::Periodic::Update-Package-Lists "0";
@@ -96,9 +100,9 @@ APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "0";
```
-**最后**
+### 最后
-由于某种原因,启用了自动安全更新,建议你保持这种状态。小的烦恼实际上并不值得冒险损害系统安全性。你怎么看?
+自动安全更新的启用是有原因的,建议你保持这种状态。这个小烦恼实际上并不值得冒险损害系统安全性。你怎么看?
--------------------------------------------------------------------------------
@@ -107,7 +111,7 @@ via: https://itsfoss.com/auto-updates-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/news/20200528 Open Source YouTube Alternative PeerTube Needs Your Support to Launch Version 3.md b/sources/news/20200528 Open Source YouTube Alternative PeerTube Needs Your Support to Launch Version 3.md
new file mode 100644
index 0000000000..ec7ab97e21
--- /dev/null
+++ b/sources/news/20200528 Open Source YouTube Alternative PeerTube Needs Your Support to Launch Version 3.md
@@ -0,0 +1,87 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Open Source YouTube Alternative PeerTube Needs Your Support to Launch Version 3)
+[#]: via: (https://itsfoss.com/peertube-v3-campaign/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+
+Open Source YouTube Alternative PeerTube Needs Your Support to Launch Version 3
+======
+
+[PeerTube][1] (developed by [Framasoft][2]) is a free and open-source decentralized alternative to YouTube somewhat like [LBRY][3]. As the name suggests, it relies on [peer-to-peer connections][4] to operate the video hosting services.
+
+You can choose to self-host your instance and still have access to videos from other instances (a federated network, just like [Mastodon][5]).
+
+It has been actively developed for a few years now. And, to take things up a notch, they have decided to launch a crowdfunding campaign for the next major release.
+
+The funding campaign will help them develop v3.0 of PeerTube with some amazing key features planned for the release this fall.
+
+![PeerTube Instance Example][6]
+
+### PeerTube: Brief Overview
+
+In addition to what I just mentioned above, PeerTube is a fully functional peer-to-peer video platform. The best thing about it is that it's open source and free. So, you can check it out on [GitHub][7] if you want.
+
+You can watch their official video here:
+
+**Note:** With peer-to-peer streaming, your IP address can be visible to other peers, so be cautious if that concerns you on PeerTube (try using one of the [best VPNs available][8]).
+
+### PeerTube’s Crowdfunding Campaign For v3 Launch
+
+You'll be excited to know that the **€60,000** crowdfunding campaign already managed to **raise 10,000 Euros on day 1** (at the time of writing this).
+
+Now, coming to the details: the campaign aims to gather **funds for the next 6 months of development, with a v3 release planned for November 2020**. It looks like a lot of work for a single full-time developer, but whether or not they reach the funding goal, they intend to release v3 with the existing funds they have.
+
+In their [announcement post][9], the PeerTube team mentioned:
+
+> We feel like we need to develop it, that we have to. Imposing a condition stating « if we do not get our 60,000€, then there will not be a v3 » here, would be a lie, marketing manipulation : this is not the kind of relation we want to maintain with you.
+
+Next, let's talk about the new features they plan to introduce over the next 6 months:
+
+ * Upon reaching the **€10,000** goal, they plan to work on introducing a globalized video index to make it easier to search for videos across multiple instances.
+ * With the **€20,000** goal, PeerTube will dedicate one month to improving the moderation tools to make the best of them.
+ * With the **€40,000** goal, they will work on the UX/UI of playlists, so playlists will look better when you try to embed them. In addition, the plugin system will be improved to make it easier to contribute to PeerTube's code.
+ * If the campaign reaches its full **€60,000** goal, PeerTube's live-streaming feature will be introduced.
+
+
+
+You can also find the details of their [roadmap on their site][10].
+
+### Wrapping Up
+
+The ability to have a globally interconnected video index across multiple instances is something that was needed, and it will also allow you to configure your own index.
+
+The content moderation tool improvements are also a huge deal, because it's not easy to manage a decentralized network of video hosting services. While they aim to prevent censorship, strict moderation is required to make PeerTube a comfortable place to watch videos.
+
+Even though I'm not sure how useful PeerTube's live-streaming feature will be at launch, it is going to be something exciting to keep an eye on.
+
+We at It's FOSS made a token donation of 25 Euro. I would also encourage you to donate and help this open source project achieve its financial goal for version 3 development.
+
+[Support PeerTube][11]
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/peertube-v3-campaign/
+
+作者:[Ankush Das][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[b]: https://github.com/lujun9972
+[1]: https://joinpeertube.org
+[2]: https://framasoft.org/en/
+[3]: https://itsfoss.com/lbry/
+[4]: https://en.wikipedia.org/wiki/Peer-to-peer
+[5]: https://itsfoss.com/mastodon-open-source-alternative-twitter/
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/peertube-instance-screenshot.jpg?ssl=1
+[7]: https://github.com/Chocobozzz/PeerTube
+[8]: https://itsfoss.com/best-vpn-linux/
+[9]: https://framablog.org/2020/05/26/our-plans-for-peertube-v3-progressive-fundraising-live-streaming-coming-next-fall/
+[10]: https://joinpeertube.org/roadmap
+[11]: https://joinpeertube.org/roadmap#support
diff --git a/sources/tech/20200511 How to manage network services with firewall-cmd.md b/sources/tech/20200511 How to manage network services with firewall-cmd.md
index 5bfc5d947f..4a85a137c4 100644
--- a/sources/tech/20200511 How to manage network services with firewall-cmd.md
+++ b/sources/tech/20200511 How to manage network services with firewall-cmd.md
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
diff --git a/sources/tech/20200520 How to configure your router using VTY shell.md b/sources/tech/20200520 How to configure your router using VTY shell.md
deleted file mode 100644
index 36e9f951b8..0000000000
--- a/sources/tech/20200520 How to configure your router using VTY shell.md
+++ /dev/null
@@ -1,198 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to configure your router using VTY shell)
-[#]: via: (https://opensource.com/article/20/5/vty-shell)
-[#]: author: (M Umer https://opensource.com/users/noisybotnet)
-
-How to configure your router using VTY shell
-======
-Free range routing gives you options for implementing multiple
-protocols. This guide will get you started.
-![Multi-colored and directional network computer cables][1]
-
-Recently, I wrote an article explaining how we can implement Open Shortest Path First (OSPF) using the [Quagga][2] routing suite. There are multiple software suites that can be used instead of Quagga to implement different routing protocols. One such option is free range routing (FRR).
-
-### FRR
-
-[FRR][3] is a routing software suite, which has been derived from Quagga and is distributed under GNU GPL2 license. Like Quagga, it provides implementations of all major routing protocols such as OSPF, Routing Information Protocol (RIP), Border Gateway Protocol (BGP), and Intermediate system-to-intermediate system (IS-IS) for Unix-like platforms.
-
-Several companies, such as Big Switch Networks, Cumulus, Open Source Routing, and 6wind, who were behind the development of Quagga, created FRR to improve on Quagga's well-established foundations.
-
-#### Architecture
-
-FRR is a suite of daemons that work together to build the routing table. Each major protocol is implemented in its own daemon, and these daemons talk to the core and protocol-independent daemon Zebra, which provides kernel routing table updates, interface lookups, and redistribution of routes between different routing protocols. Each protocol-specific daemon is responsible for running the relevant protocol and building the routing table based on the information exchanged.
-
-![FRR architecture][4]
-
-### VTY shell
-
-[VTYSH][5] is an integrated shell for the FRR routing engine. It amalgamates all the CLI commands defined in each of the daemons and presents them to the user in a single shell. It provides a Cisco-like modal CLI, and many of the commands are similar to Cisco IOS commands. There are different modes to the CLI, and certain commands are only available within a specific mode.
-
-### Setup
-
-In this tutorial, we'll be implementing the routing information protocol (RIP) to configure dynamic routing using FRR. We can do this in two ways—either by editing the protocol daemon configuration file in an editor or by using the VTY shell. We'll be using the VTY shell in this example. Our setup includes two CentOS 7.7 hosts, named Alpha and Beta. Both hosts have two network interfaces and share access to the 192.168.122.0/24 network. We'll be advertising routes for 10.12.11.0/24 and 10.10.10.0/24 networks.
-
-**For Host Alpha:**
-
- * eth0 IP: 192.168.122.100/24
- * Gateway: 192.168.122.1
- * eth1 IP: 10.10.10.12/24
-
-
-
-**For Host Beta:**
-
- * eth0 IP: 192.168.122.50/24
- * Gateway: 192.168.122.1
- * eth1 IP: 10.12.11.12/24
-
-
-
-#### Installation of package
-
-First, we need to install the FRR package on both hosts; this can be done by following the instructions in the [official FRR documentation][6].
-
-#### Enable IP forwarding
-
-For routing, we need to enable IP forwarding on both hosts since that will performed by the Linux kernel.
-
-
-```
-sysctl -w net.ipv4.conf.all.forwarding = 1
-
-sysctl -w net.ipv6.conf.all.forwarding = 1
-sysctl -p
-```
-
-#### Enabling the RIPD daemon
-
-Once installed, all the configuration files will be stored in the **/etc/frr** directory. The daemons must be explicitly enabled by editing the **/etc/frr/daemons** file. This file determines which daemons are activated when the FRR service is started. To enable a particular daemon, simply change the corresponding "no" to "yes." A subsequent service restart should start the daemon.
-
-![FRR daemon restart][7]
-
-#### Firewall configuration
-
-Since RIP protocol uses UDP as its transport protocol and is assigned port 520, we need to allow this port in `firewalld` configuration.
-
-
-```
-firewall-cmd --add-port=520/udp –permanent
-
-firewalld-cmd -reload
-```
-
-We can now start the FRR service using:
-
-
-```
-`systemctl start frr`
-```
-
-#### Configuration using VTY
-
-Now, we need to configure RIP using the VTY shell.
-
-On Host Alpha:
-
-
-```
-[root@alpha ~]# vtysh
-
-Hello, this is FRRouting (version 7.2RPKI).
-Copyright 1996-2005 Kunihiro Ishiguro, et al.
-
-alpha# configure terminal
-alpha(config)# router rip
-alpha(config-router)# network 192.168.122.0/24
-alpha(config-router)# network 10.10.10.0/24
-alpha(config-router)# route 10.10.10.5/24
-alpha(config-router)# do write
-Note: this version of vtysh never writes vtysh.conf
-Building Configuration...
-Configuration saved to /etc/frr/ripd.conf
-Configuration saved to /etc/frr/staticd.conf
-alpha(config-router)# do write memory
-Note: this version of vtysh never writes vtysh.conf
-Building Configuration...
-Configuration saved to /etc/frr/ripd.conf
-Configuration saved to /etc/frr/staticd.conf
-alpha(config-router)# exit
-```
-
-Similarly, on Host Beta:
-
-
-```
-[root@beta ~]# vtysh
-
-Hello, this is FRRouting (version 7.2RPKI).
-Copyright 1996-2005 Kunihiro Ishiguro, et al.
-
-beta# configure terminal
-beta(config)# router rip
-beta(config-router)# network 192.168.122.0/24
-beta(config-router)# network 10.12.11.0/24
-beta(config-router)# do write
-Note: this version of vtysh never writes vtysh.conf
-Building Configuration...
-Configuration saved to /etc/frr/zebra.conf
-Configuration saved to /etc/frr/ripd.conf
-Configuration saved to /etc/frr/staticd.conf
-beta(config-router)# do write memory
-Note: this version of vtysh never writes vtysh.conf
-Building Configuration...
-Configuration saved to /etc/frr/zebra.conf
-Configuration saved to /etc/frr/ripd.conf
-Configuration saved to /etc/frr/staticd.conf
-beta(config-router)# exit
-```
-
-Once done, check the routes on both hosts as follows:
-
-
-```
-[root@alpha ~]# ip route show
-default via 192.168.122.1 dev eth0 proto static metric 100
-10.10.10.0/24 dev eth1 proto kernel scope link src 10.10.10.12 metric 101
-10.12.11.0/24 via 192.168.122.50 dev eth0 proto 189 metric 20
-192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.100 metric 100
-```
-
-We can see that the routing table on Alpha contains an entry of 10.12.11.0/24 via 192.168.122.50, which was offered through RIP. Similarly, on Beta, the table contains an entry of network 10.10.10.0/24 via 192.168.122.100.
-
-
-```
-[root@beta ~]# ip route show
-default via 192.168.122.1 dev eth0 proto static metric 100
-10.10.10.0/24 via 192.168.122.100 dev eth0 proto 189 metric 20
-10.12.11.0/24 dev eth1 proto kernel scope link src 10.12.11.12 metric 101
-192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.50 metric 100
-```
-
-### Conclusion
-
-As you can see, the setup and configuration are relatively simple. To add complexity, we can add more network interfaces to the router to provide routing for more networks. The configurations can be made by editing the configuration files in an editor, but using VTY shell provides us a frontend to all FRR daemons in a single, combined session.
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/20/5/vty-shell
-
-作者:[M Umer][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/noisybotnet
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connections_wires_sysadmin_cable.png?itok=d5WqHmnJ (Multi-colored and directional network computer cables)
-[2]: https://opensource.com/article/20/4/quagga-linux
-[3]: https://en.wikipedia.org/wiki/FRRouting
-[4]: https://opensource.com/sites/default/files/uploads/frr_architecture.png (FRR architecture)
-[5]: http://docs.frrouting.org/projects/dev-guide/en/latest/vtysh.html
-[6]: http://docs.frrouting.org/projects/dev-guide/en/latest/building-frr-for-centos7.html
-[7]: https://opensource.com/sites/default/files/uploads/frr_daemon_restart.png (FRR daemon restart)
diff --git a/sources/tech/20200527 Add nodes to your private cloud using Cloud-init.md b/sources/tech/20200527 Add nodes to your private cloud using Cloud-init.md
new file mode 100644
index 0000000000..487d1b3688
--- /dev/null
+++ b/sources/tech/20200527 Add nodes to your private cloud using Cloud-init.md
@@ -0,0 +1,334 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Add nodes to your private cloud using Cloud-init)
+[#]: via: (https://opensource.com/article/20/5/create-simple-cloud-init-service-your-homelab)
+[#]: author: (Chris Collins https://opensource.com/users/clcollins)
+
+Add nodes to your private cloud using Cloud-init
+======
+Make adding machines to your private cloud at home similar to how the
+major cloud providers handle it.
+![Digital images of a computer desktop][1]
+
+[Cloud-init][2] is a widely utilized industry-standard method for initializing cloud instances. Cloud providers use Cloud-init to customize instances with network configuration, instance information, and even user-provided configuration directives. It is also a great tool to use in your "private cloud at home" to add a little automation to the initial setup and configuration of your homelab's virtual and physical machines—and to learn more about how large cloud providers work. For a bit more detail and background, see my previous article on [Cloud-init and why it is useful][3].
+
+![A screen showing the boot process for a Linux server running Cloud-init ][4]
+
+Boot process for a Linux server running Cloud-init (Chris Collins, [CC BY-SA 4.0][5])
+
+Admittedly, Cloud-init is more useful for a cloud provider provisioning machines for many different clients than for a homelab run by a single sysadmin, and much of what Cloud-init solves might be a little superfluous for a homelab. However, getting it set up and learning how it works is a great way to learn more about this cloud technology, and it is genuinely useful for configuring your devices on first boot.
+
+This tutorial uses Cloud-init's "NoCloud" datasource, which allows Cloud-init to be used outside a traditional cloud provider setting. This article will show you how to install Cloud-init on a client device and set up a container running a web service to respond to the client's requests. You will also learn to investigate what the client is requesting from the web service and modify the web service's container to serve a basic, static Cloud-init service.
+
+### Set up Cloud-init on an existing system
+
+Cloud-init probably is most useful on a new system's first boot to query for configuration data and make changes to customize the system as directed. It can be [included in a disk image for Raspberry Pis and single-board computers][6] or added to images used to provision virtual machines. For testing, it is easy to install and run Cloud-init on an existing system or to install a new system and then set up Cloud-init.
+
+As a major service used by most cloud providers, Cloud-init is supported on most Linux distributions. For this example, I will be using Fedora 31 Server for the Raspberry Pi, but it can be done the same way on Raspbian, Ubuntu, CentOS, and most other distributions.
+
+#### Install and enable the cloud-init services
+
+On a system that you want to be a Cloud-init client, install the Cloud-init package. If you're using Fedora:
+
+
+```
+# Install the cloud-init package
+dnf install -y cloud-init
+```
+
+Cloud-init is actually four different services (at least with systemd), and each is in charge of retrieving config data and performing configuration changes during a different part of the boot process, allowing greater flexibility in what can be done. While it is unlikely you will interact much with these services directly, it is useful to know what they are in the event you need to troubleshoot something. They are:
+
+ * cloud-init-local.service
+ * cloud-init.service
+ * cloud-config.service
+ * cloud-final.service
+
+
+
+Enable all four services:
+
+
+```
+# Enable the four cloud-init services
+systemctl enable cloud-init-local.service
+systemctl enable cloud-init.service
+systemctl enable cloud-config.service
+systemctl enable cloud-final.service
+```
+
+#### Configure the datasource to query
+
+Once the service is enabled, configure the datasource from which the client will query the config data. There are a [large number of datasource types][7], and most are configured for specific cloud providers. For your homelab, use the NoCloud datasource, which (as mentioned above) is designed for using Cloud-init without a cloud provider.
+
+NoCloud allows configuration information to be provided in a number of ways: as key/value pairs in kernel parameters, on a CD (or virtual CD, in the case of virtual machines) mounted at startup, in a file included on the filesystem, or, as in this example, via HTTP from a specified URL (the "NoCloud Net" option).
+
+The datasource configuration can be provided via the kernel parameter or by setting it in the Cloud-init configuration file, `/etc/cloud/cloud.cfg`. The configuration file works very well for setting up Cloud-init with customized disk images or for testing on existing hosts.
+
+Cloud-init will also merge configuration data from any `*.cfg` files found in `/etc/cloud/cloud.cfg.d/`, so to keep things cleaner, configure the datasource in `/etc/cloud/cloud.cfg.d/10_datasource.cfg`. Cloud-init can be told to read from an HTTP datasource with the `seedfrom` key by using the syntax:
+
+
+```
+seedfrom: http://ip_address:port/
+```
+
+The IP address and port are the web service you will create later in this article. I used the IP of my laptop and port 8080; this can also be a DNS name.
+
+Create the `/etc/cloud/cloud.cfg.d/10_datasource.cfg` file:
+
+
+```
+# Add the datasource:
+# /etc/cloud/cloud.cfg.d/10_datasource.cfg
+
+# NOTE THE TRAILING SLASH HERE!
+datasource:
+ NoCloud:
+ seedfrom:
+```
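For illustration, here is what a filled-in drop-in file might look like. This is only a sketch: `192.168.1.10` and port `8080` are hypothetical placeholders for your own webserver's address, and the file is generated locally first so you can inspect it before installing it.

```shell
# Sketch only: generate the NoCloud datasource drop-in locally, then
# install it. The address below is a placeholder, not a real service.
cat > 10_datasource.cfg <<'EOF'
datasource:
  NoCloud:
    seedfrom: http://192.168.1.10:8080/
EOF

cat 10_datasource.cfg
# Copy it into place on the client (requires root):
# cp 10_datasource.cfg /etc/cloud/cloud.cfg.d/10_datasource.cfg
```

Note the trailing slash on the `seedfrom` URL, matching the warning in the example above.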
+
+That's it for the client setup. Now, when the client is rebooted, it will attempt to retrieve configuration data from the URL you entered in the seedfrom key and make any configuration changes that are necessary.
+
+The next step is to set up a webserver to listen for client requests, so you can figure out what needs to be served.
+
+### Set up a webserver to investigate client requests
+
+You can create and run a webserver quickly with [Podman][8] or other container tools (like Docker or Kubernetes). This example uses Podman, but the same commands work with Docker.
+
+To get started, use the Fedora:31 container image and create a Containerfile (for Docker, this would be a Dockerfile) that installs and configures Nginx. From that Containerfile, you can build a custom image and run it on the host you want to act as the Cloud-init service.
+
+Create a Containerfile with the following contents:
+
+
+```
+FROM fedora:31
+
+ENV NGINX_CONF_DIR "/etc/nginx/default.d"
+ENV NGINX_LOG_DIR "/var/log/nginx"
+ENV NGINX_CONF "/etc/nginx/nginx.conf"
+ENV WWW_DIR "/usr/share/nginx/html"
+
+# Install Nginx and clear the yum cache
+RUN dnf install -y nginx \
+ && dnf clean all \
+ && rm -rf /var/cache/yum
+
+# forward request and error logs to docker log collector
+RUN ln -sf /dev/stdout ${NGINX_LOG_DIR}/access.log \
+ && ln -sf /dev/stderr ${NGINX_LOG_DIR}/error.log
+
+# Listen on port 8080, so root privileges are not required for podman
+RUN sed -i -E 's/(listen)([[:space:]]*)(\\[\:\:\\]\:)?80;$/\1\2\38080 default_server;/' $NGINX_CONF
+EXPOSE 8080
+
+# Allow Nginx PID to be managed by non-root user
+RUN sed -i '/user nginx;/d' $NGINX_CONF
+RUN sed -i 's/pid \/run\/nginx.pid;/pid \/tmp\/nginx.pid;/' $NGINX_CONF
+
+# Run as an unprivileged user
+USER 1001
+
+CMD ["nginx", "-g", "daemon off;"]
+```
+
+_Note: The example Containerfile and other files used in this example can be found in this project's [GitHub repository][9]._
+
+The most important part of the Containerfile above is the section that changes how the logs are stored (writing to STDOUT rather than a file), so you can see requests coming into the server in the container logs. A few other changes enable you to run the container with Podman without root privileges and to run processes in the container without root, as well.
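To see what the `listen` rewrite in the Containerfile actually does, you can run the equivalent substitution against sample Nginx directives in a plain shell (a sketch: the Containerfile version doubles the backslashes for that context, collapsed here for direct shell use):

```shell
# Demo of the port-rewriting sed rule from the Containerfile: it turns
# "listen ... 80;" directives into unprivileged-friendly port 8080.
printf 'listen 80;\nlisten [::]:80;\n' \
  | sed -E 's/(listen)([[:space:]]*)(\[::\]:)?80;$/\1\2\38080 default_server;/'
# Prints:
# listen 8080 default_server;
# listen [::]:8080 default_server;
```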
+
+This first pass at the webserver does not serve any Cloud-init data; just use this to see what the Cloud-init client is requesting from it.
+
+With the Containerfile created, use Podman to build and run a webserver image:
+
+
+```
+# Build the container image
+$ podman build -f Containerfile -t cloud-init:01 .
+
+# Create a container from the new image, and run it
+# It will listen on port 8080
+$ podman run --rm -p 8080:8080 -it cloud-init:01
+```
+
+This will run the container, leaving your terminal attached and with a pseudo-TTY. It will appear that nothing is happening at first, but requests to port 8080 of the host machine will be routed to the Nginx server inside the container, and a log message will appear in the terminal window. This can be tested with a curl command from the host machine:
+
+
+```
+# Use curl to send an HTTP request to the Nginx container
+$ curl
+```
+
+After running that curl command, you should see a log message similar to this in the terminal window:
+
+
+```
+127.0.0.1 - - [09/May/2020:19:25:10 +0000] "GET / HTTP/1.1" 200 5564 "-" "curl/7.66.0" "-"
+```
+
+Now comes the fun part: reboot the Cloud-init client and watch the Nginx logs to see what Cloud-init requests from the webserver when the client boots up.
+
+As the client finishes its boot process, you should see log messages similar to:
+
+
+```
+2020/05/09 22:44:28 [error] 2#0: *4 open() "/usr/share/nginx/html/meta-data" failed (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET /meta-data HTTP/1.1", host: "instance-data:8080"
+127.0.0.1 - - [09/May/2020:22:44:28 +0000] "GET /meta-data HTTP/1.1" 404 3650 "-" "Cloud-Init/17.1" "-"
+```
+
+_Note: Use Ctrl+C to stop the running container._
+
+You can see the request is for the /meta-data path, i.e., `http://ip_address_of_the_webserver:8080/meta-data`. This is just a GET request; Cloud-init is not POSTing (sending) any data to the webserver. It is just blindly requesting the files from the datasource URL, so it is up to the datasource to identify what the host is asking for. This simple example just sends generic data to any client, but a larger homelab will need a more sophisticated service.
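Because the client only fetches static files, any static file server can stand in as a NoCloud datasource. As a quick sketch (the directory name here is arbitrary), you could even serve a directory with Python's built-in HTTP server instead of the Nginx container:

```shell
# Sketch: a directory of static files is enough to act as a NoCloud
# datasource, since Cloud-init only issues GET requests against it.
mkdir -p datasource
printf 'instance-id: iid-local01\nlocal-hostname: raspberry\n' > datasource/meta-data
ls datasource
# Serve it on port 8080 (run manually; this blocks the terminal):
# cd datasource && python3 -m http.server 8080
```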
+
+Here, Cloud-init is requesting the [instance metadata][10] information. This file can include a lot of information about the instance itself, such as the instance ID, the hostname to assign the instance, the cloud ID—even networking information.
+
+Create a basic metadata file with an instance ID and a hostname for the host, and try serving that to the Cloud-init client.
+
+First, create a metadata file that can be copied into the container image:
+
+
+```
+instance-id: iid-local01
+local-hostname: raspberry
+hostname: raspberry
+```
+
+The instance ID can be anything. However, if you change the instance ID after Cloud-init runs and the file is served to the client, it will trigger Cloud-init to run again. You can use this mechanism to update instance configurations, but you should be aware that it works that way.
+
+The local-hostname and hostname keys are just that; they set the hostname information for the client when Cloud-init runs.
+
+Add the following line to the Containerfile to copy the metadata file into the new image:
+
+
+```
+# Copy the meta-data file into the image for Nginx to serve it
+COPY meta-data ${WWW_DIR}/meta-data
+```
+
+Now, rebuild the image (use a new tag for easy troubleshooting) with the metadata file, and create and run a new container with Podman:
+
+
+```
+# Build a new image named cloud-init:02
+podman build -f Containerfile -t cloud-init:02 .
+
+# Run a new container with this new meta-data file
+podman run --rm -p 8080:8080 -it cloud-init:02
+```
+
+With the new container running, reboot your Cloud-init client and watch the Nginx logs again:
+
+
+```
+127.0.0.1 - - [09/May/2020:22:54:32 +0000] "GET /meta-data HTTP/1.1" 200 63 "-" "Cloud-Init/17.1" "-"
+2020/05/09 22:54:32 [error] 2#0: *2 open() "/usr/share/nginx/html/user-data" failed (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET /user-data HTTP/1.1", host: "instance-data:8080"
+127.0.0.1 - - [09/May/2020:22:54:32 +0000] "GET /user-data HTTP/1.1" 404 3650 "-" "Cloud-Init/17.1" "-"
+```
+
+You see that this time the /meta-data path was served to the client. Success!
+
+However, the client is looking for a second file at the /user-data path. This file contains configuration data provided by the instance owner, as opposed to data from the cloud provider. For a homelab, both of these are you.
+
+There are a [large number of user-data modules][11] you can use to configure your instance. For this example, just use the write_files module to create some test files on the client and verify that Cloud-init is working.
+
+Create a user-data file with the following content:
+
+
+```
+#cloud-config
+
+# Create two files with example content using the write_files module
+write_files:
+ - content: |
+ "Does cloud-init work?"
+ owner: root:root
+ permissions: '0644'
+ path: /srv/foo
+ - content: |
+ "IT SURE DOES!"
+ owner: root:root
+ permissions: '0644'
+ path: /srv/bar
+```
+
+In addition to a YAML file using the user-data modules provided by Cloud-init, you could also make this an executable script for Cloud-init to run.
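+
+For instance, in place of the cloud-config YAML above, the user-data file could be a script with a shebang line; Cloud-init detects this and executes the file on the client rather than parsing it as YAML. A hypothetical script equivalent of the write_files example might look like this (it writes to /tmp for illustration; the YAML example above writes to /srv):
+
+
+```
+#!/bin/sh
+# Hypothetical user-data script (a sketch, not from the article):
+# because the file starts with a shebang instead of "#cloud-config",
+# Cloud-init executes it on the client instead of parsing it as YAML.
+echo '"Does cloud-init work?"' > /tmp/foo
+echo '"IT SURE DOES!"' > /tmp/bar
+```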
+
+After creating the user-data file, add the following line to the Containerfile to copy it into the image when the image is rebuilt:
+
+
+```
+# Copy the user-data file into the container image
+COPY user-data ${WWW_DIR}/user-data
+```
+
+Rebuild the image and create and run a new container, this time with the user-data information:
+
+
+```
+# Build a new image named cloud-init:03
+podman build -f Containerfile -t cloud-init:03 .
+
+# Run a new container with this new user-data file
+podman run --rm -p 8080:8080 -it cloud-init:03
+```
+
+Now, reboot your Cloud-init client, and watch the Nginx logs on the webserver:
+
+
+```
+127.0.0.1 - - [09/May/2020:23:01:51 +0000] "GET /meta-data HTTP/1.1" 200 63 "-" "Cloud-Init/17.1" "-"
+127.0.0.1 - - [09/May/2020:23:01:51 +0000] "GET /user-data HTTP/1.1" 200 298 "-" "Cloud-Init/17.1" "-"
+```
+
+Success! This time both the metadata and user-data files were served to the Cloud-init client.
+
+### Validate that Cloud-init ran
+
+From the logs above, you know that Cloud-init ran on the client host and requested the metadata and user-data files, but did it do anything with them? You can validate that Cloud-init wrote the files specified in the `write_files` section of the user-data file.
+
+On your Cloud-init client, check the contents of the `/srv/foo` and `/srv/bar` files:
+
+
+```
+# cd /srv/ && ls
+bar foo
+# cat foo
+"Does cloud-init work?"
+# cat bar
+"IT SURE DOES!"
+```
+
+Success! The files were written and have the content you expect.
+
+As mentioned above, there are plenty of other modules that can be used to configure the host. For example, the user-data file can be configured to add packages with apt, copy SSH authorized_keys, create users and groups, configure and run configuration-management tools, and many other things. I use it in my private cloud at home to copy my authorized_keys, create a local user and group, and set up sudo permissions.
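+
+As a sketch of what such a configuration can look like, a user-data file combining a few of those modules might read as follows; the user name, SSH key, and package below are placeholders, not details from my actual setup:
+
+
+```
+#cloud-config
+# Hypothetical example: the user, key, and package are placeholders
+packages:
+  - tmux
+
+users:
+  - name: homelab
+    groups: wheel
+    sudo: ALL=(ALL) NOPASSWD:ALL
+    ssh_authorized_keys:
+      - ssh-ed25519 AAAA... user@example.com
+```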
+
+### What you can do next
+
+Cloud-init is useful in a homelab, especially a lab focusing on cloud technologies. The simple service demonstrated in this article may not be super useful for a homelab, but now that you know how Cloud-init works, you can move on to creating a dynamic service that can configure each host with custom data, making a private cloud at home more similar to the services provided by the major cloud providers.
+
+With a slightly more complicated datasource, adding new physical (or virtual) machines to your private cloud at home can be as simple as plugging them in and turning them on.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/20/5/create-simple-cloud-init-service-your-homelab
+
+作者:[Chris Collins][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clcollins
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
+[2]: https://cloudinit.readthedocs.io/
+[3]: https://opensource.com/article/20/5/cloud-init-raspberry-pi-homelab
+[4]: https://opensource.com/sites/default/files/uploads/cloud-init.jpg (A screen showing the boot process for a Linux server running Cloud-init )
+[5]: https://creativecommons.org/licenses/by-sa/4.0/
+[6]: https://opensource.com/article/20/5/disk-image-raspberry-pi
+[7]: https://cloudinit.readthedocs.io/en/latest/topics/datasources.html
+[8]: https://podman.io/
+[9]: https://github.com/clcollins/homelabCloudInit/tree/master/simpleCloudInitService/data
+[10]: https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html#what-is-instance-data
+[11]: https://cloudinit.readthedocs.io/en/latest/topics/modules.html
diff --git a/sources/tech/20200527 Manage startup using systemd.md b/sources/tech/20200527 Manage startup using systemd.md
new file mode 100644
index 0000000000..c0ff18a13d
--- /dev/null
+++ b/sources/tech/20200527 Manage startup using systemd.md
@@ -0,0 +1,567 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Manage startup using systemd)
+[#]: via: (https://opensource.com/article/20/5/manage-startup-systemd)
+[#]: author: (David Both https://opensource.com/users/dboth)
+
+Manage startup using systemd
+======
+Learn how systemd determines the order services start, even though it is
+essentially a parallel system.
+![Penguin with green background][1]
+
+While setting up a Linux system recently, I wanted to know how to ensure that dependencies for services and other units were up and running before those dependent services and units start. Specifically, I needed more knowledge of how systemd manages the startup sequence, especially in determining the order services are started in what is essentially a parallel system.
+
+You may know that SystemV (systemd's predecessor, as I explained in the [first article][2] in this series) orders the startup sequence by naming the startup scripts with an SXX prefix, where XX is a number from 00 to 99. SystemV then sorts the scripts by name and runs each start script in that sequence for the desired runlevel.
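+
+The effect of that naming convention is easy to demonstrate with a throwaway directory (the script names below are made up for illustration):
+
+
+```
+# Simulate a SystemV runlevel directory: start scripts sort
+# lexically by their SXX prefix, and that sorted order is the
+# order in which they run
+mkdir -p rc3.d
+touch rc3.d/S10network rc3.d/S55sshd rc3.d/S99local
+ls rc3.d
+```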
+
+But systemd uses unit files, which can be created or modified by a sysadmin, to define subroutines not only for initialization but also for regular operation. In the [third article][3] in this series, I explained how to create a mount unit file. In this fifth article, I demonstrate how to create a different type of unit file—a service unit file that runs a program at startup. You can also change certain configuration settings in the unit file and use the systemd journal to view the location of your changes in the startup sequence.
+
+### Preparation
+
+Make sure you have removed `rhgb` and `quiet` from the `GRUB_CMDLINE_LINUX=` line in the `/etc/default/grub` file, as I showed in the [second article][4] in this series. This enables you to observe the Linux startup message stream, which you'll need for some of the experiments in this article.
+
+### The program
+
+In this tutorial, you will create a simple program that enables you to observe a message during startup on the console and later in the systemd journal.
+
+Create the shell program `/usr/local/bin/hello.sh` and add the following content. You want to ensure that the result is visible during startup and that you can easily find it when looking through the systemd journal. You will use a version of the "Hello world" program with some bars around it, so it stands out. Make sure the file is executable and has user and group ownership by root with [700 permissions][5] for security:
+
+
+```
+#!/usr/bin/bash
+# Simple program to use for testing startup configurations
+# with systemd.
+# By David Both
+# Licensed under GPL V2
+#
+echo "###############################"
+echo "######### Hello World! ########"
+echo "###############################"
+```
+
+Run this program from the command line to verify that it works correctly:
+
+
+```
+[root@testvm1 ~]# hello.sh
+###############################
+######### Hello World! ########
+###############################
+[root@testvm1 ~]#
+```
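+
+The ownership and permission requirements mentioned above can be applied with `chown` and `chmod`. Here is a sketch using a local copy of the script; on the real system the file is `/usr/local/bin/hello.sh` and the commands run as root:
+
+
+```
+# Sketch: apply the recommended permissions to a local copy of the
+# script (on the real system, use /usr/local/bin/hello.sh as root)
+cat > hello.sh <<'EOF'
+#!/usr/bin/bash
+echo "######### Hello World! ########"
+EOF
+
+# chown root:root hello.sh   # requires root; shown for completeness
+chmod 700 hello.sh           # rwx for the owner, nothing for others
+ls -l hello.sh
+./hello.sh
+```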
+
+This program could be created in any scripting or compiled language. The `hello.sh` program could also be located in other places based on the [Linux Filesystem Hierarchy Standard][6] (FHS). I place it in the `/usr/local/bin` directory so that it can be easily run from the command line without having to prepend a path when I type the command. I find that many of the shell programs I create need to be run from the command line and by other tools such as systemd.
+
+### The service unit file
+
+Create the service unit file `/etc/systemd/system/hello.service` with the following content. This file does not need to be executable, but for security, it does need user and group ownership by root and [644][7] or [640][8] permissions:
+
+
+```
+# Simple service unit file to use for testing
+# startup configurations with systemd.
+# By David Both
+# Licensed under GPL V2
+#
+
+[Unit]
+Description=My hello shell script
+
+[Service]
+Type=oneshot
+ExecStart=/usr/local/bin/hello.sh
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Verify that the service unit file performs as expected by viewing the service status. Any syntactical errors will show up here:
+
+
+```
+[root@testvm1 ~]# systemctl status hello.service
+● hello.service - My hello shell script
+ Loaded: loaded (/etc/systemd/system/hello.service; disabled; vendor preset: disabled)
+ Active: inactive (dead)
+[root@testvm1 ~]#
+```
+
+You can run this "oneshot" service type multiple times without problems. The oneshot type is intended for services where the program launched by the service unit file is the main process and must complete before systemd starts any dependent process.
+
+There are seven service types, and you can find an explanation of each (along with the other parts of a service unit file) in the [systemd.service(5)][9] man page. (You can also find more information in the [resources][10] at the end of this article.)
+
+As curious as I am, I wanted to see what an error might look like. So, I deleted the "o" from the `Type=oneshot` line, so it looked like `Type=neshot`, and ran the command again:
+
+
+```
+[root@testvm1 ~]# systemctl status hello.service
+● hello.service - My hello shell script
+ Loaded: loaded (/etc/systemd/system/hello.service; disabled; vendor preset: disabled)
+ Active: inactive (dead)
+
+May 06 08:50:09 testvm1.both.org systemd[1]: /etc/systemd/system/hello.service:12: Failed to parse service type, ignoring: neshot
+[root@testvm1 ~]#
+```
+
+These results told me precisely where the error was and made it very easy to resolve the problem.
+
+Just be aware that even after you restore the `hello.service` file to its original form, the error will persist. Although a reboot will clear the error, you should not have to do that, so I went looking for a method to clear out persistent errors like this. I have encountered service errors that require the command `systemctl daemon-reload` to reset an error condition, but that did not work in this case. The error messages that can be fixed with this command always seem to have a statement to that effect, so you know to run it.
+
+It is, however, recommended that you run `systemctl daemon-reload` after changing a unit file or creating a new one. This notifies systemd that the changes have been made, and it can prevent certain types of issues with managing altered services or units. Go ahead and run this command.
+
+After correcting the misspelling in the service unit file, a simple `systemctl restart hello.service` cleared the error. Experiment a bit by introducing some other errors into the `hello.service` file to see what kinds of results you get.
+
+### Start the service
+
+Now you are ready to start the new service and check the status to see the result. Although you probably did a restart in the previous section, you can start or restart a oneshot service as many times as you want since it runs once and then exits.
+
+Go ahead and start the service (as shown below), and then check the status. Depending upon how much you experimented with errors, your results may differ from mine:
+
+
+```
+[root@testvm1 ~]# systemctl start hello.service
+[root@testvm1 ~]# systemctl status hello.service
+● hello.service - My hello shell script
+ Loaded: loaded (/etc/systemd/system/hello.service; disabled; vendor preset: disabled)
+ Active: inactive (dead)
+
+May 10 10:37:49 testvm1.both.org hello.sh[842]: ######### Hello World! ########
+May 10 10:37:49 testvm1.both.org hello.sh[842]: ###############################
+May 10 10:37:49 testvm1.both.org systemd[1]: hello.service: Succeeded.
+May 10 10:37:49 testvm1.both.org systemd[1]: Finished My hello shell script.
+May 10 10:54:45 testvm1.both.org systemd[1]: Starting My hello shell script...
+May 10 10:54:45 testvm1.both.org hello.sh[1380]: ###############################
+May 10 10:54:45 testvm1.both.org hello.sh[1380]: ######### Hello World! ########
+May 10 10:54:45 testvm1.both.org hello.sh[1380]: ###############################
+May 10 10:54:45 testvm1.both.org systemd[1]: hello.service: Succeeded.
+May 10 10:54:45 testvm1.both.org systemd[1]: Finished My hello shell script.
+[root@testvm1 ~]#
+```
+
+Notice in the status command's output that the systemd messages indicate that the `hello.sh` script started and the service completed. You can also see the output from the script. This display is generated from the journal entries of the most recent invocations of the service. Try starting the service several times, and then run the status command again to see what I mean.
+
+You should also look at the journal contents directly; there are multiple ways to do this. One way is to specify the record type identifier, in this case, the name of the shell script. This shows the journal entries for previous reboots as well as the current session. As you can see, I have been researching and testing for this article for some time now:
+
+
+```
+[root@testvm1 ~]# journalctl -t hello.sh
+<snip>
+\-- Reboot --
+May 08 15:55:47 testvm1.both.org hello.sh[840]: ###############################
+May 08 15:55:47 testvm1.both.org hello.sh[840]: ######### Hello World! ########
+May 08 15:55:47 testvm1.both.org hello.sh[840]: ###############################
+\-- Reboot --
+May 08 16:01:51 testvm1.both.org hello.sh[840]: ###############################
+May 08 16:01:51 testvm1.both.org hello.sh[840]: ######### Hello World! ########
+May 08 16:01:51 testvm1.both.org hello.sh[840]: ###############################
+\-- Reboot --
+May 10 10:37:49 testvm1.both.org hello.sh[842]: ###############################
+May 10 10:37:49 testvm1.both.org hello.sh[842]: ######### Hello World! ########
+May 10 10:37:49 testvm1.both.org hello.sh[842]: ###############################
+May 10 10:54:45 testvm1.both.org hello.sh[1380]: ###############################
+May 10 10:54:45 testvm1.both.org hello.sh[1380]: ######### Hello World! ########
+May 10 10:54:45 testvm1.both.org hello.sh[1380]: ###############################
+[root@testvm1 ~]#
+```
+
+To locate the systemd records for the `hello.service` unit, you can search on systemd. You can use **G+Enter** to page to the end of the journal entries and then scroll back to locate the ones you are interested in. Use the `-b` option to show only the entries for the most recent startup:
+
+
+```
+[root@testvm1 ~]# journalctl -b -t systemd
+<snip>
+May 10 10:37:49 testvm1.both.org systemd[1]: Starting SYSV: Late init script for live image....
+May 10 10:37:49 testvm1.both.org systemd[1]: Started SYSV: Late init script for live image..
+May 10 10:37:49 testvm1.both.org systemd[1]: hello.service: Succeeded.
+May 10 10:37:49 testvm1.both.org systemd[1]: Finished My hello shell script.
+May 10 10:37:50 testvm1.both.org systemd[1]: Starting D-Bus System Message Bus...
+May 10 10:37:50 testvm1.both.org systemd[1]: Started D-Bus System Message Bus.
+```
+
+I copied a few other journal entries to give you an idea of what you might find. This command spews all of the journal lines pertaining to systemd—109,183 lines when I wrote this. That is a lot of data to sort through. You can use the pager's search facility, which is usually `less`, or you can use the built-in `grep` feature. The `-g` (or `--grep=`) option uses Perl-compatible regular expressions:
+
+
+```
+[root@testvm1 ~]# journalctl -b -t systemd -g "hello"
+\-- Logs begin at Tue 2020-05-05 18:11:49 EDT, end at Sun 2020-05-10 11:01:01 EDT. --
+May 10 10:37:49 testvm1.both.org systemd[1]: Starting My hello shell script...
+May 10 10:37:49 testvm1.both.org systemd[1]: hello.service: Succeeded.
+May 10 10:37:49 testvm1.both.org systemd[1]: Finished My hello shell script.
+May 10 10:54:45 testvm1.both.org systemd[1]: Starting My hello shell script...
+May 10 10:54:45 testvm1.both.org systemd[1]: hello.service: Succeeded.
+May 10 10:54:45 testvm1.both.org systemd[1]: Finished My hello shell script.
+[root@testvm1 ~]#
+```
+
+You could use the standard GNU `grep` command, but that would not show the log metadata in the first line.
+
+If you do not want to sift through all the journal entries to find the ones pertaining to your `hello` service, you can narrow things down a bit by specifying a time range. For example, I will start with the beginning time of `10:54:00` on my test VM, which was the start of the minute the entries above are from. Note that the argument to the `--since=` option must be enclosed in quotes and that this option can also be expressed as `-S "