Merge pull request #21 from LCTT/master

更新
This commit is contained in:
zEpoch 2021-07-03 22:32:27 +08:00 committed by GitHub
commit b8a811483b
4 changed files with 702 additions and 39 deletions


@ -1,23 +1,25 @@
[#]: collector: (lujun9972)
[#]: translator: (baddate)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13544-1.html)
[#]: subject: (Learn Bash with this book of puzzles)
[#]: via: (https://opensource.com/article/20/4/bash-it-out-book)
[#]: author: (Carlos Aguayo https://opensource.com/users/hwmaster1)
用这本谜题书学习 Bash
《Bash it out》书评:用这本谜题书学习 Bash
======
>“Bash it out”使用 16 个谜题,涵盖了基本、中级和高级 Bash 脚本。
![Puzzle pieces coming together to form a computer screen][1]
> 《Bash it out》使用 16 个谜题,涵盖了基本、中级和高级 Bash 脚本。
计算机既是我的爱好,也是我的职业。我的公寓里散布着大约 10 个,它们都运行 Linux包括我的 Mac。由于我喜欢升级我的电脑和提升我的电脑技能当我遇到 Sylvain Leroux 的[_Bash it out_][2]时,我抓住了购买它的机会。我在 Debian Linux 上经常使用命令行,这似乎是扩展我的 Bash 知识的好机会。当作者在前言中解释他使用 Debian Linux 时,我笑了,这是我最喜欢的两个发行版之一。
![](https://img.linux.net.cn/data/attachment/album/202107/03/154134jgm2m82o76mrm2o7.jpg)
Bash 可让你自动执行任务,因此它是一种省力、有趣且有用的工具。在阅读本书之前,我已经有相当多的 Unix 和 Linux 上的 Bash 经验。我不是专家,部分原因是脚本语言非常广泛和强大。当我在基于 Arch 的 Linux 发行版[EndeavourOS][3]的欢迎屏幕上看到 Bash 时,我第一次对 Bash 产生了兴趣。
计算机既是我的爱好,也是我的职业。我的公寓里散布着大约 10 台计算机,它们都运行 Linux包括我的 Mac。由于我喜欢升级我的电脑和提升我的电脑技能当我遇到 Sylvain Leroux 的《[Bash it out][2]》时,我抓住了购买它的机会。我在 Debian Linux 上经常使用命令行,这似乎是扩展我的 Bash 知识的好机会。当作者在前言中解释他使用 Debian Linux 时,我笑了,这是我最喜欢的两个发行版之一。
Bash 可让你自动执行任务,因此它是一种省力、有趣且有用的工具。在阅读本书之前,我已经有相当多的 Unix 和 Linux 上的 Bash 经验。我不是专家,部分原因是脚本语言非常广泛和强大。当我在基于 Arch 的 Linux 发行版 [EndeavourOS][3] 的欢迎屏幕上看到 Bash 时,我第一次对 Bash 产生了兴趣。
以下屏幕截图显示了 EndeavourOS 的一些选项。你可能不相信,这些面板只指向 Bash 脚本,每个脚本都完成一些相对复杂的任务。而且因为它都是开源的,所以我可以根据需要修改这些脚本中的任何一个。
以下屏幕截图显示了 EndeavourOS 的一些选项。不管你信不信,这些面板只指向 Bash 脚本,每个脚本都完成一些相对复杂的任务。而且因为它都是开源的,所以我可以根据需要修改这些脚本中的任何一个。
![EndeavourOS after install][4]
![EndeavourOS install apps][5]
@ -26,11 +28,11 @@ Bash 可让你自动执行任务,因此它是一种省力、有趣且有用的
我对这本书的印象非常好。虽然不长,但经过了深思熟虑。作者对 Bash 有非常广泛的了解,并且具有解释如何使用它的不可思议的能力。这本书使用 16 个谜题涵盖了基本、中级和高级 Bash 脚本,他称之为“挑战”。这教会了我将 Bash 脚本视为需要解决的编程难题,这让我玩起来更有趣。
Bash 一个令人兴奋的方面是它与 Linux 系统深度集成。虽然它的部分能力在于它的语法,但它也很强大,因为它可以访问很多系统资源。你可以编写重复性任务或简单但厌倦了手动执行的任务的脚本。没有什么太大或太小的事,*Bash it out*可以帮助你了解可以做什么以及如何实现它。
Bash 一个令人兴奋的方面是它与 Linux 系统深度集成。虽然它的部分能力在于它的语法,但它也很强大,因为它可以访问很多系统资源。你可以编写重复性任务或简单但厌倦了手动执行的任务的脚本。不管是大事还是小事《Bash it out》可以帮助你了解可以做什么以及如何实现它。
如果我不提及 David Both 的免费资源[_A sysadmin's guide to Bash scripting_][6]on Opensource.com这篇评论就不会完整。这个 17 页的 PDF 指南与Bash it out不同,但它们共同构成了任何想要了解它的人的成功组合。
如果我不提及 David Both 的发布在 Opensource.com 的免费资源《[A sysadmin's guide to Bash scripting][6]》,这篇书评就不会完整。这个 17 页的 PDF 指南与《Bash it out》不同,但它们共同构成了任何想要了解它的人的成功组合。
我不是计算机程序员,但*Bash it out*增加了我进入更高级 Bash 脚本水平的欲望——虽然没有这个打算,但我可能最终无意中成为一名计算机程序员。
我不是计算机程序员,但《Bash it out》增加了我进入更高级 Bash 脚本水平的欲望——虽然没有这个打算,但我可能最终无意中成为一名计算机程序员。
我喜欢 Linux 的原因之一是因为它的操作系统功能强大且用途广泛。无论我对 Linux 了解多少,总有一些新东西需要学习,这让我更加欣赏 Linux。
@ -44,8 +46,8 @@ via: https://opensource.com/article/20/4/bash-it-out-book
作者:[Carlos Aguayo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/baddates)
校对:[校对者ID](https://github.com/校对者ID)
译者:[baddates](https://github.com/baddates)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -3,18 +3,18 @@
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13545-1.html)
在 WSL 上忘记了 Linux 密码?下面是如何轻松重设的方法
======
对于那些想从舒适的 Windows 中享受 Linux 命令行的人来说WSLWindows Subsystem for Linux是一个方便的工具。
当你[在 Windows 上使用 WSL 安装 Linux][1]时,会要求你创建一个用户名和密码。当你在 WSL 上启动 Linux 时,这个用户会自动登录。
当你 [在 Windows 上使用 WSL 安装 Linux][1] 时,会要求你创建一个用户名和密码。当你在 WSL 上启动 Linux 时,这个用户会自动登录。
现在的问题是,如果你有一段时间没有使用它,你可能会忘记 WSL 的账户密码。而如果你要使用 sudo 的命令,这将成为一个问题,因为这里你需要输入密码。
现在的问题是,如果你有一段时间没有使用它,你可能会忘记 WSL 的账户密码。而如果你要使用 `sudo` 的命令,这将成为一个问题,因为这里你需要输入密码。
![][2]
@ -24,27 +24,25 @@
要在 WSL 中重设 Linux 密码,你需要:
* 将默认用户切换为 root
* 将默认用户切换为 `root`
* 重置普通用户的密码
* 将默认用户切换回正常用户
* 将默认用户切换回普通用户
让我向你展示详细的步骤和截图。
#### 步骤 1将默认用户切换为 root
记下你的普通/常规用户名将是明智之举。如你所见,我的普通帐户的用户名是 abhishek。
记下你的普通/常规用户名将是明智之举。如你所见,我的普通帐户的用户名是 `abhishek`
![Note down the account username][3]
WSL 中的 root 用户是解锁的,没有设置密码。这意味着你可以切换到 root 用户,然后利用 root 的能力来重置密码。
WSL 中的 `root` 用户是未锁定的,没有设置密码。这意味着你可以切换到 `root` 用户,然后利用 `root` 的能力来重置密码。
由于你不记得帐户密码,切换到用户是通过改变你的 Linux WSL 应用的配置,使其默认使用 root 用户来完成。
由于你不记得帐户密码,切换到 `root` 用户是通过改变你的 Linux WSL 应用的配置,使其默认使用 `root` 用户来完成。
这是通过 Windows 命令提示符完成的,你需要知道你的 Linux 发行版需要运行哪个命令。
这个信息通常在 [Windows Store][4] 中的发行版应用的描述中提供。这是你首次下载发行版的地方。
这个信息通常在 [Windows 商店][4] 中的发行版应用的描述中提供。这是你首次下载发行版的地方。
![Know the command to run for your distribution app][5]
@ -66,17 +64,17 @@ ubuntu config --default-user root
发行版应用 | Windows 命令
---|---
Ubuntu | ubuntu config default-user root
Ubuntu 20.04 | ubuntu2004 config default-user root
Ubuntu 18.04 | ubuntu1804 config default-user root
Debian | debian config default-user root
Kali Linux | kali config default-user root
Ubuntu | `ubuntu config default-user root`
Ubuntu 20.04 | `ubuntu2004 config default-user root`
Ubuntu 18.04 | `ubuntu1804 config default-user root`
Debian | `debian config default-user root`
Kali Linux | `kali config default-user root`
#### 步骤 2重设帐户密码
现在,如果你启动 Linux 发行程序,你应该以 root 身份登录。你可以重新设置普通用户帐户的密码。
现在,如果你启动 Linux 发行版应用,你应该以 `root` 身份登录。你可以重新设置普通用户帐户的密码。
你还记得 WSL 中的用户名吗?如果没有,你可以随时检查 /home 目录的内容。当你有了用户名后,使用这个命令:
你还记得 WSL 中的用户名吗?LCTT 译注:请使用你的“用户名”替换下列命令中的 `username`如果没有,你可以随时检查 `/home` 目录的内容。当你有了用户名后,使用这个命令:
```
passwd username
@ -86,13 +84,13 @@ passwd username
![Reset the password for the regular user][8]
恭喜你。用户账户的密码已经被重置。但你还没有完成。默认用户仍然是 root。你应该把它改回你的普通用户帐户否则它将一直以 root 用户的身份登录。
恭喜你。用户账户的密码已经被重置。但你还没有完成。默认用户仍然是 `root`。你应该把它改回你的普通用户帐户,否则它将一直以 `root` 用户的身份登录。
#### 步骤 3再次将普通用户设置为默认用户
你需要你在上一步中用 [passwd 命令][9]使用的普通帐户用户名。
你需要你在上一步中用 [passwd 命令][9] 使用的普通帐户用户名。
再次启动 Windows 命令提示符。**使用你的发行版命令**,方式与第 1 步中类似。然而,这一次,用普通用户代替 root。
再次启动 Windows 命令提示符。**使用你的发行版命令**,方式与第 1 步中类似。然而,这一次,用普通用户代替 `root`
```
ubuntu config --default-user username
@ -100,7 +98,7 @@ ubuntu config --default-user username
![Set regular user as default user][10]
现在,当你在 WSL 中启动你的 Linux 发行版时,你将以普通用户的身份登录。你已经重新设置了密码,可以用它来运行 sudo 命令。
现在,当你在 WSL 中启动你的 Linux 发行版时,你将以普通用户的身份登录。你已经重新设置了密码,可以用它来运行 `sudo` 命令。
如果你将来再次忘记了密码,你知道重置密码的步骤。
@ -121,7 +119,7 @@ via: https://itsfoss.com/reset-linux-password-wsl/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,333 @@
[#]: subject: (Bind a cloud event to Knative)
[#]: via: (https://opensource.com/article/21/7/cloudevents-bind-java-knative)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Bind a cloud event to Knative
======
CloudEvents provides a common format to describe events and increase interoperability.
![woman on laptop sitting at the window][1]
Events have become an essential piece of modern reactive systems. Indeed, events can be used to communicate from one service to another, trigger out-of-band processing, or send a payload to a service like Kafka. The problem is that event publishers may express event messages in any number of different ways, regardless of content. For example, some messages are payloads in JSON format to serialize and deserialize messages by application. Other applications use binary formats such as [Avro][2] and [Protobuf][3] to transport payloads with metadata. This is an issue when building an event-driven architecture that aims to easily integrate external systems and reduce the complexity of message transmission.
[CloudEvents][4] is an open specification providing a common format to describe events and increase interoperability. Many cloud providers and middleware stacks, including [Knative][5], [Kogito][6], [Debezium][7], and [Quarkus][8] have adopted this format after the release of CloudEvents 1.0. Furthermore, developers need to decouple relationships between event producers and consumers in serverless architectures. [Knative Eventing][9] is consistent with the CloudEvents specification, providing common formats for creating, parsing, sending, and receiving events in any programming language. Knative Eventing also enables developers to late-bind event sources and event consumers. For example, a cloud event using JSON might look like this:
```
{
    "specversion" : "1.0", (1)
    "id" : "11111", (2)
    "source" : "http://localhost:8080/cloudevents", (3)
    "type" : "knative-events-binding", (4)
    "subject" : "cloudevents", (5)
    "time" : "2021-06-04T16:00:00Z", (6)
    "datacontenttype" : "application/json", (7)
    "data" : "{\"message\": \"Knative Events\"}" (8)
}
```
In the above code:
(1) Which version of the CloudEvents specification to use
(2) The ID field for a specific event; combining the `id` and the `source` provides a unique identifier
(3) The Uniform Resource Identifier (URI) identifies the event source in terms of the context where it happened or the application that emitted it
(4) The event type; any arbitrary string can be used
(5) Additional details about the event (optional)
(6) The event creation time (optional)
(7) The content type of the data attribute (optional)
(8) The business data for the specific event
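As a quick illustration of those required attributes, here is a minimal sketch (not based on any CloudEvents SDK; the helper name is made up) that checks an event dictionary against the 1.0 required fields:

```python
# Minimal check of the CloudEvents 1.0 required context attributes.
# Illustration only; a real application would use a CloudEvents SDK.
REQUIRED_ATTRIBUTES = ("specversion", "id", "source", "type")

def missing_attributes(event: dict) -> list:
    """Return the required attributes that are absent or empty."""
    return [attr for attr in REQUIRED_ATTRIBUTES if not event.get(attr)]

event = {
    "specversion": "1.0",
    "id": "11111",
    "source": "http://localhost:8080/cloudevents",
    "type": "knative-events-binding",
    "subject": "cloudevents",          # optional
    "time": "2021-06-04T16:00:00Z",    # optional
    "datacontenttype": "application/json",
    "data": {"message": "Knative Events"},
}
print(missing_attributes(event))        # → []
print(missing_attributes({"id": "1"}))  # → ['specversion', 'source', 'type']
```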
Here is a quick example of how developers can enable a CloudEvents bind with Knative and the [Quarkus Funqy extension][10].
### 1. Create a Quarkus Knative event Maven project
Generate a Quarkus project (e.g., `quarkus-serverless-cloudevent`) to create a simple function with Funqy Knative events binding extensions:
```
$ mvn io.quarkus:quarkus-maven-plugin:2.0.0.CR3:create \
       -DprojectGroupId=org.acme \
       -DprojectArtifactId=quarkus-serverless-cloudevent \
       -Dextensions="funqy-knative-events" \
       -DclassName="org.acme.getting.started.GreetingResource"
```
### 2. Run the serverless event function locally
Open the `CloudEventGreeting.java` file in the `src/main/java/org/acme/getting/started/funqy/cloudevent` directory. The `@funq` annotation enables the `myCloudEventGreeting` method to map the input data to the cloud event message automatically:
```
public class CloudEventGreeting {
    private static final Logger log = Logger.getLogger(CloudEventGreeting.class);

    @Funq
    public void myCloudEventGreeting(Person input) {
        log.info("Hello " + input.getName());
    }
}
```
Run the function via Quarkus Dev Mode:
```
$ ./mvnw quarkus:dev
```
The output should look like this:
```
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
INFO  [io.quarkus] (Quarkus Main Thread) quarkus-serverless-cloudevent 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.0.0.CR3) started in 1.546s. Listening on: http://localhost:8080
INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, funqy-knative-events, smallrye-context-propagation]
--
Tests paused, press [r] to resume
```
**Note**: Quarkus 2.x provides a continuous testing feature so that you can keep testing your code when you add or update code by pressing `r` in the terminal.
Now the CloudEvents function is running in your local development environment. So, send a cloud event to the function over the HTTP protocol:
```
curl -v http://localhost:8080 \
  -H "Content-Type:application/json" \
  -H "Ce-Id:1" \
  -H "Ce-Source:cloud-event-example" \
  -H "Ce-Type:myCloudEventGreeting" \
  -H "Ce-Specversion:1.0" \
  -d "{\"name\": \"Daniel\"}"
```
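The `Ce-` headers above follow CloudEvents' HTTP binary content mode: each context attribute becomes a `Ce-`-prefixed header, while `data` travels as the request body. A rough sketch of that mapping (the helper is hypothetical, for illustration only):

```python
# Sketch of CloudEvents' HTTP "binary" content mode: each context
# attribute becomes a Ce- header, and data becomes the request body.
# Hypothetical helper; real apps use a CloudEvents SDK or Funqy.
def to_binary_headers(event: dict) -> dict:
    headers = {}
    for attr, value in event.items():
        if attr == "data":
            continue  # the payload travels as the HTTP body
        if attr == "datacontenttype":
            headers["Content-Type"] = value  # maps to Content-Type, not Ce-*
        else:
            headers["Ce-" + attr.capitalize()] = value
    return headers

event = {
    "specversion": "1.0",
    "id": "1",
    "source": "cloud-event-example",
    "type": "myCloudEventGreeting",
    "datacontenttype": "application/json",
    "data": {"name": "Daniel"},
}
print(to_binary_headers(event)["Ce-Type"])  # → myCloudEventGreeting
```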
The output should end with:
```
HTTP/1.1 204 No Content
```
Go back to the terminal, and the log should look like this:
```
INFO [org.acm.get.sta.fun.clo.CloudEventGreeting] (executor-thread-0) Hello Daniel
```
### 3. Deploy the serverless event function to Knative
Add a `container-image-docker` extension to the Quarkus Funqy project. The extension enables you to build a container image based on the serverless event function and then push it to an external container registry (e.g., [Docker Hub][11], [Quay.io][12]):
```
$ ./mvnw quarkus:add-extension -Dextensions="container-image-docker"
```
Open the `application.properties` file in the `src/main/resources/` directory. Then add the following variables to configure Knative and Kubernetes resources (make sure to replace `yourAccountName` with your container registry's account name, e.g., your username in Docker Hub):
```
quarkus.container-image.build=true
quarkus.container-image.push=true
quarkus.container-image.builder=docker
quarkus.container-image.image=docker.io/yourAccountName/funqy-knative-events-codestart
```
Run the following command to containerize the function and then push it to the Docker Hub container registry automatically:
```
$ ./mvnw clean package
```
The output should end with `BUILD SUCCESS`.
Open the `funqy-service.yaml` file in the `src/main/k8s` directory. Then replace `yourAccountName` with your account information in the Docker Hub registry:
```
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: funqy-knative-events-codestart
spec:
  template:
    metadata:
      name: funqy-knative-events-codestart-v1
      annotations:
        autoscaling.knative.dev/target: "1"
    spec:
      containers:
        - image: docker.io/yourAccountName/funqy-knative-events-codestart
```
Assuming the container image pushed successfully, create the Knative service based on the event function using the following `kubectl` command-line tool (be sure to log into the Kubernetes cluster and change the namespace where you want to create the Knative service):
```
$ kubectl create -f src/main/k8s/funqy-service.yaml
```
The output should look like this:
```
service.serving.knative.dev/funqy-knative-events-codestart created
```
Create a default broker to subscribe to the event function. Use the [kn][13] Knative Serving command-line tool:
```
$ kn broker create default
```
Open the `funqy-trigger.yaml` file in the `src/main/k8s` directory and replace it with:
```
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-cloudevent-greeting
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: funqy-knative-events-codestart
```
Create a trigger using the `kubectl` command-line tool:
```
$ kubectl create -f src/main/k8s/funqy-trigger.yaml
```
The output should look like this:
```
trigger.eventing.knative.dev/my-cloudevent-greeting created
```
### 4. Send a cloud event to the serverless event function in Kubernetes
Find out the function's route URL and check that the output looks like this:
```
$ kubectl get rt
NAME                             URL                                                       READY   REASON
funqy-knative-events-codestart   http://funqy-knative-events-codestart-YOUR_HOST_DOMAIN   True
```
Send a cloud event to the function over the HTTP protocol:
```
curl -v http://funqy-knative-events-codestart-YOUR_HOST_DOMAIN \
  -H "Content-Type:application/json" \
  -H "Ce-Id:1" \
  -H "Ce-Source:cloud-event-example" \
  -H "Ce-Type:myCloudEventGreeting" \
  -H "Ce-Specversion:1.0" \
  -d "{\"name\": \"Daniel\"}"
```
The output should end with:
```
HTTP/1.1 204 No Content
```
Once the function pod scales up, take a look at the pod logs. Use the following `kubectl` command to retrieve the pod's name:
```
$ kubectl get pod
```
The output will look like this:
```
NAME                                                           READY   STATUS    RESTARTS   AGE
funqy-knative-events-codestart-v1-deployment-6569f6dfc-zxsqs   2/2     Running   0          11s
```
Run the following `kubectl` command to verify that the pod's logs match the local testing's result: 
```
$ kubectl logs funqy-knative-events-codestart-v1-deployment-6569f6dfc-zxsqs -c user-container | grep CloudEventGreeting
```
The output looks like this:
```
INFO  [org.acm.get.sta.fun.clo.CloudEventGreeting] (executor-thread-0) Hello Daniel
```
If you deploy the event function to an [OpenShift Kubernetes Distribution][14] (OKD) cluster, you will find the deployment status in the topology view:
![Deployment status][15]
(Daniel Oh, [CC BY-SA 4.0][16])
You can also find the pod's logs in the **Pod details** tab:
![Pod details][17]
(Daniel Oh, [CC BY-SA 4.0][16])
### What's next?
Developers can bind a cloud event to Knative using Quarkus functions. Quarkus also scaffolds Kubernetes manifests, such as Knative services and triggers, to process cloud events over a channel or HTTP request.
Learn more serverless and Quarkus topics through OpenShift's [interactive self-service learning portal][18].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/cloudevents-bind-java-knative
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://avro.apache.org/
[3]: https://developers.google.com/protocol-buffers
[4]: https://cloudevents.io/
[5]: https://knative.dev/
[6]: https://kogito.kie.org/
[7]: https://debezium.io/
[8]: https://quarkus.io/
[9]: https://knative.dev/docs/eventing/
[10]: https://opensource.com/article/21/6/quarkus-funqy
[11]: https://hub.docker.com/
[12]: https://quay.io/
[13]: https://knative.dev/docs/client/install-kn/
[14]: https://www.okd.io/
[15]: https://opensource.com/sites/default/files/uploads/5_deployment-status.png (Deployment status)
[16]: https://creativecommons.org/licenses/by-sa/4.0/
[17]: https://opensource.com/sites/default/files/uploads/5_pod-details.png (Pod details)
[18]: https://learn.openshift.com/serverless/


@ -0,0 +1,330 @@
[#]: subject: (Run Prometheus at home in a container)
[#]: via: (https://opensource.com/article/21/7/run-prometheus-home-container)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Run Prometheus at home in a container
======
Keep tabs on your home network by setting up a Prometheus container image.
![A graph of a wave.][1]
[Prometheus][2] is an open source monitoring and alerting system that provides insight into the state and history of a computer, application, or cluster by storing defined metrics in a time-series database. It provides a powerful query language, PromQL, to help you explore and understand the data it stores. Prometheus also includes an Alertmanager that makes it easy to trigger notifications when the metrics you collect cross certain thresholds. Most importantly, Prometheus is flexible and easy to set up to monitor all kinds of metrics from whatever system you need to track.
As site reliability engineers (SREs) on Red Hat's OpenShift Dedicated team, we use Prometheus as a central component of our monitoring and alerting for clusters and other aspects of our infrastructure. Using Prometheus, we can predict when problems may occur by following trends in the data we collect from nodes in the cluster and services we run. We can trigger alerts when certain thresholds are crossed or events occur. As a data source for [Grafana][3], Prometheus enables us to produce graphs of data over time to see how a cluster or service is behaving.
Prometheus is a strategic piece of infrastructure for us at work, but it is also useful to me at home. Luckily, it's not only powerful and useful but also easy to set up in a home environment, with or without Kubernetes, OpenShift, containers, etc. This article shows you how to build a Prometheus container image and set up the Prometheus Node Exporter to collect data from home computers. It also explains some basic PromQL, the query language Prometheus uses to return data and create graphs.
### Build a Prometheus container image
The Prometheus project publishes its own container image, `quay.io/prometheus/prometheus`. However, I enjoy building my own for home projects and prefer to use the [Red Hat Universal Base Image][4] family for my projects. These images are freely available for anyone to use. I prefer the [Universal Base Image 8 Minimal][5] (ubi8-minimal) image based on Red Hat Enterprise Linux 8. The ubi8-minimal image is a smaller version of the normal ubi8 images. It is larger than the official Prometheus container image's ultra-sparse Busybox image, but since I use the Universal Base Image for other projects, that layer is a wash in terms of disk space for me. (If two images use the same layer, that layer is shared between them and doesn't use any additional disk space after the first image.)
My Containerfile for this project is split into a [multi-stage build][6]. The first, `builder`, installs a few tools via DNF packages to make it easier to download and extract a Prometheus release from GitHub, then downloads a specific release for whatever architecture I need (either ARM64 for my [Raspberry Pi Kubernetes cluster][7] or AMD64 for running locally on my laptop), and extracts it:
```
# The first stage build, downloading Prometheus from Github and extracting it
FROM registry.access.redhat.com/ubi8/ubi-minimal as builder
LABEL maintainer "Chris Collins <collins.christopher@gmail.com>"
# Install packages needed to download and extract the Prometheus release
RUN microdnf install -y gzip jq tar
# Replace the ARCH for different architecture versions, eg: "linux-arm64.tar.tz"
ENV PROMETHEUS_ARCH="linux-amd64.tar.gz"
# Replace "tag/&lt;tag_name&gt;" with "latest" to build whatever the latest tag is at the time
ENV PROMETHEUS_VERSION="tags/v2.27.0"
ENV PROMETHEUS="https://api.github.com/repos/prometheus/prometheus/releases/${PROMETHEUS_VERSION}"
# The checksum file for the Prometheus project is "sha256sums.txt"
ENV SUMFILE="sha256sums.txt"
RUN mkdir /prometheus
WORKDIR /prometheus
# Download the checksum
RUN /bin/sh -c "curl -sSLf $(curl -sSLf ${PROMETHEUS} -o - | jq -r '.assets[] | select(.name|test(env.SUMFILE)) | .browser_download_url') -o ${SUMFILE}"
# Download the binary tarball
RUN /bin/sh -c "curl -sSLf -O $(curl -sSLf ${PROMETHEUS} -o - | jq -r '.assets[] | select(.name|test(env.PROMETHEUS_ARCH)) |.browser_download_url')"
# Check the binary and checksum match
RUN sha256sum --check --ignore-missing ${SUMFILE}
# Extract the tarball
RUN tar --extract --gunzip --no-same-owner --strip-components=1 --directory /prometheus --file *.tar.gz
```
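The download-and-verify steps in the builder stage boil down to hashing the tarball and comparing it against the listed digest. A standalone sketch of that check (file names here are invented for the example; the real build uses `sha256sum --check --ignore-missing`):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, sumfile: str) -> bool:
    """Check one file against a sha256sums.txt-style listing.

    Entries for files that were not downloaded are skipped, mirroring
    sha256sum's --ignore-missing behavior.
    """
    name = os.path.basename(path)
    with open(sumfile) as f:
        for line in f:
            listed_digest, _, listed_name = line.strip().partition("  ")
            if listed_name == name:
                return listed_digest == sha256_of(path)
    return False  # the file was not listed at all

# Self-contained demo with a fake "release" and its checksum listing
with tempfile.TemporaryDirectory() as workdir:
    tarball = os.path.join(workdir, "prometheus.tar.gz")
    with open(tarball, "wb") as f:
        f.write(b"fake release bytes")
    sums = os.path.join(workdir, "sha256sums.txt")
    with open(sums, "w") as f:
        f.write(hashlib.sha256(b"fake release bytes").hexdigest()
                + "  prometheus.tar.gz\n")
    print(verify(tarball, sums))  # → True
```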
The second stage of the multi-stage build copies the extracted Prometheus files to a pristine ubi8-minimal image (there's no need for the extra tools from the first image to take up space in the final image) and links the binaries into the `$PATH`:
```
# The second build stage, creating the final image
FROM registry.access.redhat.com/ubi8/ubi-minimal
LABEL maintainer "Chris Collins <collins.christopher@gmail.com>"
# Get the binary from the builder image
COPY --from=builder /prometheus /prometheus
WORKDIR /prometheus
# Link the binary files into the $PATH
RUN ln prometheus /bin/
RUN ln promtool /bin/
# Validate prometheus binary
RUN prometheus --version
# Add dynamic target (file_sd_config) support to the prometheus config
# https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config
RUN echo -e "\n\
  - job_name: 'dynamic'\n\
    file_sd_configs:\n\
    - files:\n\
      - data/sd_config*.yaml\n\
      - data/sd_config*.json\n\
      refresh_interval: 30s\
" &gt;&gt; prometheus.yml
EXPOSE 9090
VOLUME ["/prometheus/data"]
ENTRYPOINT ["prometheus"]
CMD ["--config.file=prometheus.yml"]
```
Build the image:
```
# Build the Prometheus image from the Containerfile
podman build --format docker -f Containerfile -t prometheus
```
I'm using [Podman][9] as my container engine at home, but you can use Docker if you prefer. Just replace the `podman` command with `docker` above.
After building this image, you're ready to run Prometheus locally and start collecting some metrics.
### Running Prometheus
```
# This only needs to be done once
# This directory will store the metrics Prometheus collects so they persist between container restarts
mkdir data
# Run Prometheus locally, using the ./data directory for persistent data storage
# Note that the image name, prometheus:latest, will be whatever image you are using
podman run --mount=type=bind,src=$(pwd)/data,dst=/prometheus/data,relabel=shared --publish=127.0.0.1:9090:9090 --detach prometheus:latest
```
The Podman command above runs Prometheus in a container, mounting the Data directory into the container and allowing you to access the Prometheus web interface with a browser only from the machine running the container. If you want to access Prometheus from other hosts, replace `--publish=127.0.0.1:9090:9090` in the command with `--publish=9090:9090`.
Once the container is running, you should be able to access Prometheus at `http://127.0.0.1:9090/graph`. There is not much to look at yet, though. By default, Prometheus knows only to check itself (the Prometheus service) for metrics related to itself. For example, navigating to the link above and entering a query for `prometheus_http_requests_total` will show how many HTTP requests Prometheus has received (most likely, just those you have made so far).
![number of HTTP requests Prometheus received][10]
(Chris Collins, [CC BY-SA 4.0][11])
This query can also be referenced as a URL:
```
http://127.0.0.1:9090/graph?g0.expr=prometheus_http_requests_total&g0.tab=1&g0.stacked=0&g0.range_input=1h
```
Clicking it should take you to the same results. By default, Prometheus scrapes for metrics every 15 seconds, so these metrics will update over time (assuming they have changed since the last scrape).
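Those `/graph` URLs are just the query expression plus a few `g0.*` parameters, URL-encoded. A small sketch of assembling one (the helper name is made up):

```python
from urllib.parse import urlencode

# Hypothetical helper that rebuilds /graph URLs like the ones in this
# article; the parameter names mirror what the Prometheus UI puts in the URL.
def graph_url(expr: str, tab: int = 1, range_input: str = "1h",
              base: str = "http://127.0.0.1:9090/graph") -> str:
    """tab=1 selects the table view, tab=0 the graph view."""
    params = {
        "g0.expr": expr,
        "g0.tab": tab,
        "g0.stacked": 0,
        "g0.range_input": range_input,
    }
    return base + "?" + urlencode(params)

print(graph_url("prometheus_http_requests_total"))
```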
You can also graph the data over time by entering a query (as above) and clicking the **Graph** tab.
![Graphing data over time][12]
(Chris Collins, [CC BY-SA 4.0][11])
Graphs can also be referenced as a URL:
```
http://127.0.0.1:9090/graph?g0.expr=prometheus_http_requests_total&g0.tab=0&g0.stacked=0&g0.range_input=1h
```
This internal data is not helpful by itself, though. So let's add some useful metrics.
### Add some data
Prometheus—the project—publishes a program called [Node Exporter][13] for exporting useful metrics about the computer or node it is running on. You can use Node Exporter to quickly create a metrics target for your local machine, exporting data such as memory utilization and CPU consumption for Prometheus to track.
In the interest of brevity, just run the `quay.io/prometheus/node-exporter:latest` container image published by the Prometheus project to get started.
Run the following with Podman or your container engine of choice:
```
podman run --net="host" --pid="host" --mount=type=bind,src=/,dst=/host,ro=true,bind-propagation=rslave --detach quay.io/prometheus/node-exporter:latest --path.rootfs=/host
```
This will start a Node Exporter on your local machine and begin publishing metrics on port 9100. You can see which metrics are being generated by opening `http://127.0.0.1:9100/metrics` in your browser. It will look similar to this:
```
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.000176569
go_gc_duration_seconds{quantile="0.25"} 0.000176569
go_gc_duration_seconds{quantile="0.5"} 0.000220407
go_gc_duration_seconds{quantile="0.75"} 0.000220407
go_gc_duration_seconds{quantile="1"} 0.000220407
go_gc_duration_seconds_sum 0.000396976
go_gc_duration_seconds_count 2
```
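The exposition format shown above is deliberately simple to parse: one `name value` pair per line, with `#` comments for metadata. A minimal parser sketch (ignoring timestamps and label escaping, which real scrapers must handle):

```python
def parse_metrics(text: str) -> dict:
    """Parse simple exposition-format lines into {series: value}.

    Sketch only: comment lines (# HELP, # TYPE) are skipped, label sets
    stay embedded in the key, and timestamps/escaping are not handled.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        series, _, value = line.rpartition(" ")
        metrics[series] = float(value)
    return metrics

sample = """\
# HELP go_gc_duration_seconds A summary of the pause duration of GC cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.000176569
go_gc_duration_seconds_sum 0.000396976
go_gc_duration_seconds_count 2
"""
print(parse_metrics(sample)["go_gc_duration_seconds_count"])  # → 2.0
```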
Now you just need to tell Prometheus that the data is there. Prometheus uses a set of rules called [scrape_configs][14] that are defined in its configuration file, `prometheus.yml`, to decide what hosts to check for metrics and how often to check them. The scrape_configs can be set statically in the Prometheus config file, but that doesn't make Prometheus very flexible. Every time you add a new target, you would have to update the config file, stop Prometheus manually, and restart it. Prometheus has a better way, called [file-based service discovery][15].
In the Containerfile above, there's a stanza adding a dynamic file-based service discovery configuration to the Prometheus config file:
```
RUN echo -e "\n\
  - job_name: 'dynamic'\n\
    file_sd_configs:\n\
    - files:\n\
      - data/sd_config*.yaml\n\
      - data/sd_config*.json\n\
      refresh_interval: 30s\
" &gt;&gt; prometheus.ym
```
This tells Prometheus to look for files named `sd_config*.yaml` or `sd_config*.json` in the Data directory that are mounted into the running container and to check every 30 seconds to see if there are more config files or if they have changed at all. Using files with that naming convention, you can tell Prometheus to start looking for other targets, such as the Node Exporter you started earlier.
Create a file named `sd_config_01.json` in the Data directory with the following contents, replacing `your_hosts_ip_address` with the IP address of the host running the Node Exporter:
```
[{"labels": {"job": "node"}, "targets": ["your_hosts_ip_address:9100"]}]
```
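Since the service-discovery files are plain JSON, generating them can be scripted. A sketch that writes the same shape of file (the helper and the example address are made up):

```python
import json
import os
import tempfile

def write_sd_config(data_dir: str, index: int, job: str, targets: list) -> str:
    """Write a file_sd_config target file for Prometheus to pick up.

    Hypothetical helper; the file naming matches the data/sd_config*.json
    pattern configured earlier in the Containerfile.
    """
    path = os.path.join(data_dir, f"sd_config_{index:02d}.json")
    with open(path, "w") as f:
        json.dump([{"labels": {"job": job}, "targets": targets}], f, indent=2)
    return path

# Demo in a throwaway directory; 192.0.2.10 is a documentation address
with tempfile.TemporaryDirectory() as data_dir:
    path = write_sd_config(data_dir, 1, "node", ["192.0.2.10:9100"])
    print(os.path.basename(path))  # → sd_config_01.json
```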
Check `http://127.0.0.1:9090/targets` in Prometheus; you should see Prometheus monitoring itself (inside the container) and the target you added for the host with the Node Exporter. Click on the link for this new target to see the raw data Prometheus has scraped. It should look familiar:
```
# NOTE: Truncated for brevity
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.6547e-05
go_gc_duration_seconds{quantile="0.25"} 0.000107517
go_gc_duration_seconds{quantile="0.5"} 0.00017582
go_gc_duration_seconds{quantile="0.75"} 0.000503352
go_gc_duration_seconds{quantile="1"} 0.008072206
go_gc_duration_seconds_sum 0.029700021
go_gc_duration_seconds_count 55
```
This is the same data the Node Exporter is exporting:
```
http://127.0.0.1:9090/graph?g0.expr=rate(node_network_receive_bytes_total%7B%7D%5B5m%5D)&g0.tab=0&g0.stacked=0&g0.range_input=15m
```
With this information, you can create your own rules and instrument your own applications to provide metrics for Prometheus to consume.
### A light introduction to PromQL
PromQL is Prometheus' query language and a powerful way to aggregate the time-series data stored in Prometheus. Prometheus shows you the output of a query as the raw result, or it can be displayed as a graph showing the trend of the data over time, like the `node_network_receive_bytes_total` example above. PromQL can be daunting to get into, and this article will not dive into a full tutorial on how to use it, but I will cover some basics.
To get started, pull up the query interface for Prometheus:
```
http://127.0.0.1:9090/graph
```
Look at the `node_network_receive_bytes_total` metrics in this example. Enter that string into the query field, and press Enter to display all the collected network metrics from the computer on which the Node Exporter is running. (Note that Prometheus provides an autocomplete feature, making it easy to explore the metrics it collects.) You may see several results, each with labels that have been applied to the data sent by the Node Exporter:
![Network data received][16]
(Chris Collins, [CC BY-SA 4.0][11])
Looking at the image above, you can see eight interfaces, each labeled by the device name (e.g., `{device="ensp12s0u1"}`), the instance they were collected from (in this case, all from the same node), and the `job` label (`node`) that was assigned in `sd_config_01.json`. To the right of these is the latest raw metric data for each device. In the case of the `ensp12s0u1` device, it has received `4007938272` bytes of data over the interface since Prometheus started tracking it.
**Note**: The `job` label is useful for defining what kind of data is being collected: for example, `node` for metrics sent by the Node Exporter, `cluster` for Kubernetes cluster data, or perhaps an application name for a specific service you are monitoring.
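For reference, a file-based service discovery entry such as the `sd_config_01.json` mentioned above might look like this (the target address is a placeholder for wherever your Node Exporter is running):

```json
[
  {
    "targets": ["192.168.1.50:9100"],
    "labels": {
      "job": "node"
    }
  }
]
```

Every target listed in the file inherits the labels in its entry, which is how the `job="node"` label ends up attached to the metrics above.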
Click on the **Graph** tab, and you can see the metrics for these devices graphed over time (one hour by default). The time period can be adjusted using the `- +` toggle on the left. Historical data is displayed and graphed along with the current value. This provides valuable insight into how the data changes over time:
![Graph of data changing over time][17]
(Chris Collins, [CC BY-SA 4.0][11])
You can further refine the displayed data using the labels. This graph displays all the interfaces reported by the Node Exporter, but what if you are interested in just the wireless device? By adding a label selector to the query, `node_network_receive_bytes_total{device="wlp2s0"}`, you can evaluate just the data matching that label. Prometheus automatically adjusts the scale to a more human-readable one once the other devices' data is removed:
![Graph of network data for one label][18]
(Chris Collins, [CC BY-SA 4.0][11])
This data is helpful in itself, but PromQL also has several query functions that can be applied to it to provide more information. For example, look again at the `rate()` function. Per the Prometheus documentation, `rate()` "calculates the per-second average rate of increase of the time series in the range vector." That's a fancy way of saying "shows how quickly the data grew."
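To make that definition concrete, here is a rough approximation of what `rate()` computes over a range window, sketched in plain Python (the real function additionally handles counter resets and extrapolates to the window boundaries):

```python
def per_second_rate(samples):
    """Approximate PromQL rate(): the average per-second increase
    across (timestamp, counter_value) samples in a range window."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# A counter that grew from 1000 to 4000 bytes over 60 seconds:
print(per_second_rate([(0, 1000), (60, 4000)]))  # → 50.0
```

In other words, a counter that only ever goes up is turned into a throughput figure you can reason about.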
Looking at the graph for the wireless device above, you can see a slight curve, a slightly more vertical increase, in the line graph right around 19:00 hours. It doesn't look like much on its own, but using the `rate()` function, it is possible to calculate just how large the growth spike around that timeframe was. The query `rate(node_network_receive_bytes_total{device="wlp2s0"}[15m])` shows the rate at which received bytes increased for the wireless device, averaged per second over a 15-minute window:
![Graph showing rate data increased][19]
(Chris Collins, [CC BY-SA 4.0][11])
It is much more evident that around 19:00 hours, the wireless device received almost three times as much traffic for a brief period.
PromQL can do much more than this. Using the `predict_linear()` function, Prometheus can make an educated guess about when a certain threshold will be crossed. Using the same wireless `network_receive_bytes` data, you can predict where the value will be over the next four hours based on the data from the previous four hours (or any combination you might be interested in). Try querying `predict_linear(node_network_receive_bytes_total{device="wlp2s0"}[4h], 4 * 3600)`.
The important bit of the `predict_linear()` function above is `[4h], 4 * 3600`. The `[4h]` tells Prometheus to use the past four hours as a dataset and then to predict where the value will be over the next four hours (or `4 * 3600` since there are 3,600 seconds in an hour). Using the example above, Prometheus predicts that the wireless device will have received almost 95MB of data about an hour from now (your data will vary):
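Under the hood, `predict_linear()` fits a line to the samples in the window and extrapolates it forward. A small stdlib-only sketch of that idea, using an ordinary least-squares fit (the real function also deals with staleness and counter semantics):

```python
def predict_linear(samples, seconds_ahead):
    """Sketch of PromQL predict_linear(): least-squares fit over
    (timestamp, value) samples, extrapolated seconds_ahead past
    the last sample in the window."""
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(v for _, v in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * v for t, v in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    t_future = samples[-1][0] + seconds_ahead
    return slope * t_future + intercept

# Perfectly linear data, 10 bytes/second, predicted one hour ahead:
samples = [(0, 0), (60, 600), (120, 1200)]
print(predict_linear(samples, 3600))  # → 37200.0
```

Because it is a linear fit, the prediction is only as good as the trend in the window, which is why picking a sensible range (like `[4h]` above) matters.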
![Graph showing predicted data that will be received][20]
(Chris Collins, [CC BY-SA 4.0][11])
You can start to see how this might be useful, especially in an operations capacity. Kubernetes exports node disk usage metrics and includes a built-in alert using `predict_linear()` to estimate when a disk might run out of space. You can use all of these queries in conjunction with Prometheus' Alertmanager to notify you when various conditions are met—from network utilization being too high to disk space _probably_ running out in the next four hours and more. Alertmanager is another useful topic that I'll cover in a future article.
### Conclusion
Prometheus consumes metrics by scraping endpoints for specially formatted data. Data is tracked and can be queried for point-in-time info or graphed to show changes over time. Even better, Prometheus supports, out of the box, alerting rules that can hook in with your infrastructure in a variety of ways. Prometheus can also be used as a data source for other projects, like Grafana, to provide more sophisticated graphing information.
In the real world at work, we use Prometheus to track metrics and provide alert thresholds that page us when clusters are unhealthy, and we use Grafana to make dashboards of data we need to view regularly. We export node data to track our nodes and instrument our operators to track their performance and health. Prometheus is the backbone of all of it.
If you have been interested in Prometheus, keep your eyes peeled for follow-up articles. You'll learn about alerting when certain conditions are met, using Prometheus' built-in Alertmanager and integrations with it, more complicated PromQL, and how to instrument your own application and integrate it with Prometheus.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/run-prometheus-home-container
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_wavegraph.png?itok=z4pXCf_c (A graph of a wave.)
[2]: https://prometheus.io/
[3]: https://grafana.com/
[4]: https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image
[5]: https://catalog.redhat.com/software/containers/ubi8/ubi-minimal/5c359a62bed8bd75a2c3fba8
[6]: https://docs.docker.com/develop/develop-images/multistage-build/
[7]: https://opensource.com/article/20/6/kubernetes-raspberry-pi
[8]: mailto:collins.christopher@gmail.com
[9]: https://docs.podman.io/en/latest/Introduction.html
[10]: https://opensource.com/sites/default/files/uploads/prometheus_http_requests_total_query.png (number of HTTP requests Prometheus received)
[11]: https://creativecommons.org/licenses/by-sa/4.0/
[12]: https://opensource.com/sites/default/files/uploads/prometheus_http_requests_total.png (Graphing data over time)
[13]: https://prometheus.io/docs/guides/node-exporter/
[14]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
[15]: https://prometheus.io/docs/guides/file-sd/
[16]: https://opensource.com/sites/default/files/uploads/node_network_receive_bytes_total.png (Network data received)
[17]: https://opensource.com/sites/default/files/uploads/node_network_receive_bytes_total_graph_1.png (Graph of data changing over time)
[18]: https://opensource.com/sites/default/files/uploads/node_network_receive_bytes_total_wireless_graph.png (Graph of network data for one label)
[19]: https://opensource.com/sites/default/files/uploads/rate_network_receive_bytes_total_wireless_graph.png (Graph showing rate data increased)
[20]: https://opensource.com/sites/default/files/uploads/predict_linear_node_network_receive_bytes_total_wireless_graph.png (Graph showing predicted data that will be received)