mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-03-30 02:40:11 +08:00
Merge branch 'LCTT:master' into master
This commit is contained in:
commit 359d6a7f6b
@ -1,27 +1,29 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (tt67wq)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13452-1.html)
|
||||
[#]: subject: (Start using systemd as a troubleshooting tool)
|
||||
[#]: via: (https://opensource.com/article/20/5/systemd-troubleshooting-tool)
|
||||
[#]: author: (David Both https://opensource.com/users/dboth)
|
||||
|
||||
使用 systemd 作为问题定位工具
|
||||
======
|
||||
虽然 systemd 并非真正的故障定位工具,但其输出中的信息为解决问题指明了方向。![Magnifying glass on code][1]
|
||||
|
||||
没有人会认为 systemd 是一个故障定位工具,但当我的网络服务器遇到问题时,我对 systemd 和它的一些功能的不断了解帮助我找到并规避了问题。
|
||||
> 虽然 systemd 并非真正的故障定位工具,但其输出中的信息为解决问题指明了方向。
|
||||
|
||||

|
||||
|
||||
我遇到的问题是这样,为我的家庭办公网络提供名称服务 、DHCP、NTP、HTTPD 和 SendMail 邮件服务的服务器 yorktown,它在正常启动时未能启动 Apache HTTPD 守护程序。在我意识到它没有运行之后,我不得不手动启动它。这个问题已经持续了一段时间,我最近才开始尝试去解决它。
|
||||
没有人会认为 systemd 是一个故障定位工具,但当我的 web 服务器遇到问题时,我对 systemd 和它的一些功能的不断了解帮助我找到并规避了问题。
|
||||
|
||||
你们中的一些人会说,systemd 本身就是这个问题的原因,根据我现在了解的情况,我同意你们的看法。然而,我在使用 SystemV 时也遇到了类似的问题。(在本系列文章的[第一篇 ][2] 中,我探讨了围绕 systemd 作为旧有 SystemV 启动程序和启动脚本的替代品所产生的争议。如果你有兴趣了解更多关于 systemd 的信息,也可以阅读[第二篇 ][3] 和[第三篇 ][4] 文章。) 没有完美的软件,systemd 和 SystemV 也不例外,但 systemd 为解决问题提供的信息远远多于 SystemV。
|
||||
我遇到的问题是这样,我的服务器 yorktown 为我的家庭办公网络提供名称服务、DHCP、NTP、HTTPD 和 SendMail 邮件服务,它在正常启动时未能启动 Apache HTTPD 守护程序。在我意识到它没有运行之后,我不得不手动启动它。这个问题已经持续了一段时间,我最近才开始尝试去解决它。
|
||||
|
||||
你们中的一些人会说,systemd 本身就是这个问题的原因,根据我现在了解的情况,我同意你们的看法。然而,我在使用 SystemV 时也遇到了类似的问题。(在本系列文章的 [第一篇][2] 中,我探讨了围绕 systemd 作为旧有 SystemV 启动程序和启动脚本的替代品所产生的争议。如果你有兴趣了解更多关于 systemd 的信息,也可以阅读 [第二篇][3] 和 [第三篇][4] 文章。)没有完美的软件,systemd 和 SystemV 也不例外,但 systemd 为解决问题提供的信息远远多于 SystemV。
|
||||
|
||||
### 确定问题所在
|
||||
|
||||
找到这个问题根源的第一步是确定 httpd 服务的状态:
|
||||
|
||||
|
||||
```
|
||||
[root@yorktown ~]# systemctl status httpd
|
||||
● httpd.service - The Apache HTTP Server
|
||||
@ -43,46 +45,44 @@ Apr 16 11:54:37 yorktown.both.org systemd[1]: Failed to start The Apache HTTP Se
|
||||
[root@yorktown ~]#
|
||||
```
|
||||
|
||||
这种状态信息是 systemd 的功能之一,我觉得比 SystemV 提供的任何功能都要有用。这里的大量有用信息使我很容易得出逻辑性的结论,让我找到正确的方向。我从旧的 `chkconfig` 命令中得到的是服务是否在运行,以及如果它在运行的话,进程 ID(PID)是多少。这可没多大帮助。
|
||||
|
||||
这种状态信息是 systemd 的功能之一,我觉得比 SystemV 提供的任何功能都要有用。这里的大量有用信息使我很容易得出逻辑性的结论,让我找到正确的方向。我从旧的 **chkconfig** 命令中得到的是服务是否在运行,以及如果它在运行的话,进程 ID(PID) 是多少。这可没多大帮助。
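这类状态输出本身就是结构化的文本,便于脚本化处理。下面用几行 Python 粗略示意如何从中提取关键字段(其中的 `sample` 是一段虚构的示例输出,并非来自真实主机,仅作演示):

```python
# 示意:从 systemctl status 风格的文本输出中提取 "Loaded:" 和 "Active:" 字段
# sample 为虚构的示例输出,字段格式参照文中展示的状态报告
sample = """\
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
   Active: failed (Result: exit-code) since Thu 2020-04-16 11:54:37 EDT
 Main PID: 1101 (code=exited, status=1/FAILURE)
"""

def field(text, name):
    """返回第一个以 "name:" 开头的行中冒号之后的内容,找不到则返回 None。"""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(name + ":"):
            return line.split(":", 1)[1].strip()
    return None

print("Loaded:", field(sample, "Loaded"))
print("Active:", field(sample, "Active"))
```

如果 `Active:` 字段以 `failed` 开头,脚本就能立即发现服务启动失败,这正是人工阅读状态报告时首先要看的信息。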
|
||||
|
||||
该状态报告中的关键条目显示,HTTPD 不能与 IP 地址绑定,这意味着它不能接受传入的请求。这表明网络启动速度不够快,因为 IP 地址还没有设置好,所以 HTTPD 服务还没有准备好与 IP 地址绑定。这是不应该发生的,所以我查看了我的网络服务的 systemd 启动配置文件;在正确的 "after" 和 "requires" 声明下,所有这些似乎都没问题。下面是我服务器上的 **/lib/systemd/system/httpd.service** 文件:
|
||||
|
||||
该状态报告中的关键条目显示,HTTPD 不能与 IP 地址绑定,这意味着它不能接受传入的请求。这表明网络启动速度不够快,因为 IP 地址还没有设置好,所以 HTTPD 服务还没有准备好与 IP 地址绑定。这是不应该发生的,所以我查看了我的网络服务的 systemd 启动配置文件;在正确的 `after` 和 `requires` 语句下,所有这些似乎都没问题。下面是我服务器上的 `/lib/systemd/system/httpd.service` 文件:
|
||||
|
||||
```
|
||||
# Modifying this file in-place is not recommended, because changes
|
||||
# will be overwritten during package upgrades. To customize the
|
||||
# behaviour, run "systemctl edit httpd" to create an override unit.
|
||||
|
||||
# For example, to pass additional options (such as -D definitions) to
|
||||
# the httpd binary at startup, create an override unit (as is done by
|
||||
# systemctl edit) and enter the following:
|
||||
|
||||
# [Service]
|
||||
# Environment=OPTIONS=-DMY_DEFINE
|
||||
|
||||
[Unit]
|
||||
Description=The Apache HTTP Server
|
||||
Wants=httpd-init.service
|
||||
After=network.target remote-fs.target nss-lookup.target httpd-init.service
|
||||
Documentation=man:httpd.service(8)
|
||||
|
||||
[Service]
|
||||
Type=notify
|
||||
Environment=LANG=C
|
||||
|
||||
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
|
||||
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
|
||||
# Send SIGWINCH for graceful stop
|
||||
KillSignal=SIGWINCH
|
||||
KillMode=mixed
|
||||
PrivateTmp=true
|
||||
|
||||
[Install]
|
||||
# Modifying this file in-place is not recommended, because changes
|
||||
# will be overwritten during package upgrades. To customize the
|
||||
# behaviour, run "systemctl edit httpd" to create an override unit.
|
||||
|
||||
# For example, to pass additional options (such as -D definitions) to
|
||||
# the httpd binary at startup, create an override unit (as is done by
|
||||
# systemctl edit) and enter the following:
|
||||
|
||||
# [Service]
|
||||
# Environment=OPTIONS=-DMY_DEFINE
|
||||
|
||||
[Unit]
|
||||
Description=The Apache HTTP Server
|
||||
Wants=httpd-init.service
|
||||
After=network.target remote-fs.target nss-lookup.target httpd-init.service
|
||||
Documentation=man:httpd.service(8)
|
||||
|
||||
[Service]
|
||||
Type=notify
|
||||
Environment=LANG=C
|
||||
|
||||
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
|
||||
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
|
||||
# Send SIGWINCH for graceful stop
|
||||
KillSignal=SIGWINCH
|
||||
KillMode=mixed
|
||||
PrivateTmp=true
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
**httpd.service** 的单元文件明确规定,它应该在 **network.target** 和 **httpd-init.service**( 以及其他)之后加载。我试着用 **systemctl list-units** 命令找到所有这些服务,并在结果数据流中搜索它们。所有这些服务都存在,应该可以确保在设置网络 IP 地址之前,httpd 服务没有加载。
|
||||
`httpd.service` 单元文件明确规定,它应该在 `network.target` 和 `httpd-init.service`(以及其他)之后加载。我试着用 `systemctl list-units` 命令找到所有这些服务,并在结果数据流中搜索它们。所有这些服务都存在,应该可以确保在设置网络 IP 地址之前,httpd 服务没有加载。
|
||||
|
||||
### 第一个解决方案
|
||||
|
||||
@ -90,10 +90,9 @@ WantedBy=multi-user.target
|
||||
|
||||
我搞不清楚为什么花了这么久才把 IP 地址分配给网卡。所以我想,如果我可以将 HTTPD 服务的启动推迟合理的一段时间,那么 IP 地址就会在那个时候分配。
|
||||
|
||||
幸运的是,上面的 **/lib/systemd/system/httpd.service** 文件提供了一些方向。虽然它说不要修改它,但是它还是指出了如何操作:使用 **systemctl edit httpd** 命令,它会自动创建一个新文 (**/etc/systemd/system/httpd.service.d/override.conf**) 件并打开 [GNU Nano][5] 编辑器(如果你对 Nano 不熟悉,一定要看一下 Nano 界面底部的提示)。
|
||||
|
||||
在新文件中加入以下代码并保存。
|
||||
幸运的是,上面的 `/lib/systemd/system/httpd.service` 文件提供了一些方向。虽然它说不要修改它,但是它还是指出了如何操作:使用 `systemctl edit httpd` 命令,它会自动创建一个新文件(`/etc/systemd/system/httpd.service.d/override.conf`)并打开 [GNU Nano][5] 编辑器(如果你对 Nano 不熟悉,一定要看一下 Nano 界面底部的提示)。
|
||||
|
||||
在新文件中加入以下代码并保存:
|
||||
|
||||
```
|
||||
[root@yorktown ~]# cd /etc/systemd/system/httpd.service.d/
|
||||
@ -111,8 +110,7 @@ total 4
|
||||
ExecStartPre=/bin/sleep 30
|
||||
```
|
||||
|
||||
这个覆盖文件的 **[Service]** 段有一行代码,将 HTTPD 服务的启动时间推迟了 30 秒。下面的状态命令显示了等待时间里的服务状态:
|
||||
|
||||
这个覆盖文件的 `[Service]` 段有一行代码,将 HTTPD 服务的启动时间推迟了 30 秒。下面的状态命令显示了等待时间里的服务状态:
|
||||
|
||||
```
|
||||
[root@yorktown ~]# systemctl status httpd
|
||||
@ -138,7 +136,6 @@ Apr 16 12:15:01 yorktown.both.org systemd[1]: Started The Apache HTTP Server.
|
||||
|
||||
这个命令显示了 30 秒延迟过后 HTTPD 服务的状态。该服务已经启动并正常运行。
|
||||
|
||||
|
||||
```
|
||||
[root@yorktown ~]# systemctl status httpd
|
||||
● httpd.service - The Apache HTTP Server
|
||||
@ -172,23 +169,26 @@ Apr 16 12:15:01 yorktown.both.org systemd[1]: Started The Apache HTTP Server.
|
||||
|
||||
### 更好的解决方案
|
||||
|
||||
把这个问题作为 bug 上报几天后,我收到了回复,表示 systemd 只是一个管理工具,如果 httpd 需要在满足某些要求之后被拉起,需要在单元文件中表达出来。这个回复指引我去查阅 **httpd.service** 的 man 手册。我希望我能早点发现这个,因为它是比我自己想出的更优秀的解决方案。这种方案明确的针对了前置目标单元,而不仅仅是随机延迟。
|
||||
把这个问题作为 bug 上报几天后,我收到了回复,表示 systemd 只是一个管理工具,如果 httpd 需要在满足某些要求之后被拉起,需要在单元文件中表达出来。这个回复指引我去查阅 `httpd.service` 的手册页。我希望我能早点发现这个,因为它是比我自己想出的更优秀的解决方案。这种方案明确地针对了前置目标单元,而不仅仅是随机延迟。
|
||||
|
||||
来自 [**httpd.service** man page][7]:
|
||||
来自 [httpd.service 手册页][7]:
|
||||
|
||||
> **Starting the service at boot time**
|
||||
> httpd.service 和 httpd.socket 单元默认是 _disabled_ 的。为了在启动阶段开启 httpd 服务,执行:**systemctl enable httpd.service**。在默认配置中,httpd 守护进程会接受任意配置的 IPV4 或 IPV6 地址的 80 口上的连接(如果安装了 mod_ssl,就会接受 443 端口上的 TLS 连接)。
|
||||
> **在启动时开启服务**
|
||||
>
|
||||
> `httpd.service` 和 `httpd.socket` 单元默认是 _禁用_ 的。为了在启动阶段开启 httpd 服务,执行:`systemctl enable httpd.service`。在默认配置中,httpd 守护进程会接受任何配置好的 IPv4 或 IPv6 地址的 80 端口上的连接(如果安装了 mod_ssl,就会接受 443 端口上的 TLS 连接)。
|
||||
>
|
||||
> 如果 httpd 被配置成依赖任一特定的 IP 地址(比如使用 "Listen" 指令),该地址可能只在启动阶段可用,又或者 httpd 依赖其他服务(比如数据库守护进程),那么必须配置该服务,以确保正确的启动顺序。
|
||||
>
|
||||
> 例如,为了确保 httpd 在所有配置的网络接口配置完成之后再运行,可以创建一个带有以下代码段的 drop-in 文件(如上述):
|
||||
> 如果 httpd 被配置成依赖任一特定的 IP 地址(比如使用 `Listen` 指令),该地址可能只在启动阶段可用,又或者 httpd 依赖其他服务(比如数据库守护进程),那么必须配置该服务,以确保正确的启动顺序。
|
||||
>
|
||||
> 例如,为了确保 httpd 在所有配置的网络接口配置完成之后再运行,可以创建一个带有以下代码段的 drop-in 文件(如上述):
|
||||
>
|
||||
> ```
|
||||
> [Unit]
|
||||
> After=network-online.target
|
||||
> Wants=network-online.target
|
||||
|
||||
> ```
|
||||
|
||||
|
||||
我仍然觉得这是个 bug,因为在 **httpd.conf** 配置文件中使用 Listen 指令是很常见的,至少在我的经验中。我一直在使用 Listen 指令,即使在只有一个 IP 地址的主机上,在多个网卡 (NICS) 和 IP 地址的机器上这显然也是有必要的。在 **/usr/lib/systemd/system/httpd.service** 默认配置文件中加入上述几行,对不使用 **Listen** 指令的不会造成问题,对使用 **Listen** 指令的则会规避这个问题。
|
||||
我仍然觉得这是个 bug,因为在 `httpd.conf` 配置文件中使用 Listen 指令是很常见的,至少在我的经验中。我一直在使用 Listen 指令,即使在只有一个 IP 地址的主机上,在多个网卡和 IP 地址的机器上这显然也是有必要的。在 `/usr/lib/systemd/system/httpd.service` 默认配置文件中加入上述几行,对不使用 `Listen` 指令的不会造成问题,对使用 `Listen` 指令的则会规避这个问题。
|
||||
|
||||
同时,我将使用建议的方法。
|
||||
|
||||
@ -196,7 +196,7 @@ Apr 16 12:15:01 yorktown.both.org systemd[1]: Started The Apache HTTP Server.
|
||||
|
||||
本文描述了一个我在服务器上启动 Apache HTTPD 服务时遇到的一个问题。它指引你了解我在解决这个问题上的思路,并说明了我是如何使用 systemd 来协助解决问题。我也介绍了我用 systemd 实现的规避方法,以及我按照我的 bug 报告得到的更好的解决方案。
|
||||
|
||||
如我在开头处提到的那样,这有很大可能是一个 systemd 的问题,特别是 httpd 启动的配置问题。尽管如此,systemd 还是提供了工具让我找到了问题的可能来源,并制定和实现了规避方案。两种方案都没有真正令我满意地解决问题。目前,这个问题根源依旧存在,必须要解决。如果只是在 **/usr/lib/systemd/system/httpd.service** 文件中添加推荐的代码,那对我来说是可行的。
|
||||
如我在开头处提到的那样,这有很大可能是一个 systemd 的问题,特别是 httpd 启动的配置问题。尽管如此,systemd 还是提供了工具让我找到了问题的可能来源,并制定和实现了规避方案。两种方案都没有真正令我满意地解决问题。目前,这个问题根源依旧存在,必须要解决。如果只是在 `/usr/lib/systemd/system/httpd.service` 文件中添加推荐的代码,那对我来说是可行的。
|
||||
|
||||
在这个过程中我发现了一件事,我需要了解更多关于定义服务启动顺序的知识。我会在下一篇文章中探索这个领域,即本系列的第五篇。
|
||||
|
||||
@ -204,13 +204,10 @@ Apr 16 12:15:01 yorktown.both.org systemd[1]: Started The Apache HTTP Server.
|
||||
|
||||
网上有大量的关于 systemd 的参考资料,但是大部分都有点简略、晦涩甚至有误导性。除了本文中提到的资料,下列的网页提供了更多可靠且详细的 systemd 入门信息。
|
||||
|
||||
Fedora 项目有一篇切实好用的 systemd 入门,它囊括了几乎所有你需要知道的关于如何使用 systemd 配置、管理和维护 Fedora 计算机的信息。
|
||||
|
||||
Fedora 项目也有一个不错的 备忘录,交叉引用了过去 SystemV 命令和 systemd 命令做对比。
|
||||
|
||||
关于 systemd 的技术细节和创建这个项目的原因,请查看 Freedesktop.org 上的 systemd 描述。
|
||||
|
||||
Linux.com 的“更多 systemd 的乐趣”栏目提供了更多高级的 systemd 信息和技巧。
|
||||
- Fedora 项目有一篇切实好用的 [systemd 入门][8],它囊括了几乎所有你需要知道的关于如何使用 systemd 配置、管理和维护 Fedora 计算机的信息。
|
||||
- Fedora 项目也有一个不错的 [备忘录][9],交叉引用了过去 SystemV 命令和 systemd 命令做对比。
|
||||
- 关于 systemd 的技术细节和创建这个项目的原因,请查看 [Freedesktop.org][10] 上的 [systemd 描述][11]。
|
||||
- [Linux.com][12] 的“更多 systemd 的乐趣”栏目提供了更多高级的 systemd [信息和技巧][13]。
|
||||
|
||||
此外,还有一系列深度的技术文章,是由 systemd 的设计者和主要开发者 Lennart Poettering 为 Linux 系统管理员撰写的。这些文章写于 2010 年 4 月至 2011 年 9 月间,但它们现在和当时一样具有现实意义。关于 systemd 及其生态的许多其他好文章都是基于这些文章:
|
||||
|
||||
@ -227,8 +224,6 @@ Linux.com 的“更多 systemd 的乐趣”栏目提供了更多高级的 system
|
||||
* [systemd for Administrators, Part X][24]
|
||||
* [systemd for Administrators, Part XI][25]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/5/systemd-troubleshooting-tool
|
||||
@ -236,7 +231,7 @@ via: https://opensource.com/article/20/5/systemd-troubleshooting-tool
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[tt67wq](https://github.com/tt67wq)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,265 @@
|
||||
[#]: subject: (Identify Linux performance bottlenecks using open source tools)
|
||||
[#]: via: (https://opensource.com/article/21/3/linux-performance-bottlenecks)
|
||||
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13462-1.html)
|
||||
|
||||
使用开源工具识别 Linux 的性能瓶颈
|
||||
======
|
||||
|
||||
> 不久前,识别硬件瓶颈还需要深厚的专业知识。今天的开源 GUI 性能监视器使它变得相当简单。
|
||||
|
||||

|
||||
|
||||
计算机是一个集成的系统,它的性能取决于最慢的硬件组件。如果一个组件的能力比其他组件差,性能落后而不能跟上,它就会拖累你的整个系统。这就是一个 _性能瓶颈_。消除一个严重的瓶颈可以使你的系统飞起来。
|
||||
|
||||
本文解释了如何识别 Linux 系统中的硬件瓶颈。这些技术同时适用于个人的电脑和服务器。我强调的是个人电脑 —— 我不会涉及局域网管理或数据库系统等领域的服务器特定的瓶颈。这些通常涉及专门的工具。
|
||||
|
||||
我也不会多谈解决方案。这对本文来说是个太大的话题。相反,我将写一篇关于性能调整的后续文章。
|
||||
|
||||
我将只使用开源的图形用户界面(GUI)工具来完成这项工作。大多数关于 Linux 瓶颈的文章都相当复杂。它们使用专门的命令,并深入研究神秘的细节。
|
||||
|
||||
开源提供的 GUI 工具使得识别许多瓶颈变得简单。我的目标是给你一个快速、简单的方法,你可以在任何地方使用。
|
||||
|
||||
### 从哪里开始
|
||||
|
||||
一台计算机由六个关键的硬件资源组成:
|
||||
|
||||
* 处理器
|
||||
* 内存
|
||||
* 存储器
|
||||
* USB 端口
|
||||
* 互联网连接
|
||||
* 图形处理器
|
||||
|
||||
如果任何一个资源表现不佳,就会产生一个性能瓶颈。为了识别瓶颈,你必须监测这六种资源。
|
||||
|
||||
开源提供了大量的工具来完成这项工作。我会使用 [GNOME 系统监视器][2]。它的输出很容易理解,而且你可以在大多数软件库中找到它。
|
||||
|
||||
启动它并点击“资源”标签。你可以马上发现许多性能问题。
|
||||
|
||||
![系统监控-资源面板][3]
|
||||
|
||||
*图 1. 系统监控器发现问题。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
在“资源”面板上显示三个部分:CPU 历史、内存和交换历史,以及网络历史。一眼就能看出你的处理器是否不堪负荷了,还是你的电脑没有内存了,抑或你的网络带宽被用光了。
|
||||
|
||||
我将在下面探讨这些问题。现在,当你的电脑速度变慢时,首先检查系统监视器。它可以立即为你提供最常见的性能问题的线索。
|
||||
|
||||
现在让我们来探讨一下如何识别特定方面的瓶颈。
|
||||
|
||||
### 如何识别处理器的瓶颈
|
||||
|
||||
要发现瓶颈,你必须首先知道你有什么硬件。开源为这个目的提供了几个工具。我喜欢 [HardInfo][5],因为它的屏幕显示很容易阅读,而且广泛流行。
|
||||
|
||||
启动 HardInfo。它的“计算机->摘要”面板可以识别你的 CPU 并告诉你它的核心数、线程数和速度。它还能识别你的主板和其他计算机部件。
|
||||
|
||||
![HardInfo Summary Panel][6]
|
||||
|
||||
*图 2. HardInfo 显示了硬件细节。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
HardInfo 显示,这台计算机有一个物理 CPU 芯片。该芯片包含两个处理器(或称为核心)。每个核心支持两个线程(或称为逻辑处理器)。这就是总共四个逻辑处理器 —— 正是图 1 中系统监控器的 CPU 历史部分所显示的。
|
||||
|
||||
当处理器不能在其时间内对请求做出反应时,就会出现 _处理器瓶颈_,说明它们已经很忙了。
|
||||
|
||||
当系统监控器显示逻辑处理器的利用率持续在 80% 或 90% 以上时,你就可以确定这一点。这里有一个例子,四个逻辑处理器中有三个被淹没在 100% 的利用率中。这是一个瓶颈,因为它没有留下多少 CPU 用于其他工作。
|
||||
|
||||
![系统监视器的处理器瓶颈][7]
|
||||
|
||||
*图 3. 一个处理器的瓶颈。(Howard Fosdick, [CC BY-SA 4.0][4])*
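除了盯着系统监视器的图表,也可以用几行 Python 粗略示意这种判断(假设运行在 Linux 上,标准库可用;这只是示意,不能替代图形化监视器的持续观察):

```python
# 示意:把逻辑处理器数量与 1 分钟平均负载作比较,粗略判断处理器是否接近饱和
import os

logical = os.cpu_count()                 # 逻辑处理器(线程)总数
load1, load5, load15 = os.getloadavg()   # 1/5/15 分钟平均负载

print(f"逻辑处理器数量: {logical}")
print(f"1 分钟平均负载: {load1:.2f}")

# 经验性判断:平均负载长期接近或超过逻辑处理器数量,往往意味着处理器瓶颈
if load1 > logical * 0.9:
    print("负载接近或超过处理器数量,可能存在处理器瓶颈")
else:
    print("处理器尚有余量")
```

平均负载只是瞬时快照,真正判断瓶颈还是要像文中那样观察利用率是否 *持续* 处于 80% 或 90% 以上。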
|
||||
|
||||
#### 哪个程序导致了这个问题?
|
||||
|
||||
你需要找出是哪个程序在消耗所有的 CPU。点击系统监视器的“进程”标签。然后点击“CPU 百分比”标头,根据它们消耗的 CPU 的多少对进程进行排序。你将看到哪些应用程序正在扼杀你的系统。
|
||||
|
||||
![系统监控进程面板][8]
|
||||
|
||||
*图 4. 识别违规的进程。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
前三个进程各消耗了 _总 CPU 资源的 24%_。由于有四个逻辑处理器,这意味着每个进程消耗了一整个处理器。这就像图 3 所示。
|
||||
|
||||
在“进程”面板上,一个名为“analytical_AI”的程序被确定为罪魁祸首。你可以在面板上右键单击它,以查看其资源消耗的更多细节,包括内存使用、它所打开的文件、其输入/输出细节,等等。
|
||||
|
||||
如果你的登录会话有管理员权限,你可以管理这个进程。你可以改变它的优先级,并停止、继续、结束或杀死它。因此,你可以在这里立即解决你的瓶颈问题。
|
||||
|
||||
![系统监视器管理一个进程][9]
|
||||
|
||||
*图 5. 右键点击一个进程来管理它。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
如何解决处理瓶颈问题?除了实时管理违规的进程外,你也可以防止瓶颈的发生。例如,你可以用另一个应用程序来代替违规进程,绕过它,改变你使用该应用程序的行为,将该应用程序安排在非工作时间,解决潜在的内存问题,对该应用程序或你的系统软件进行性能调整,或升级你的硬件。这里涉及的内容太多,所以我将在下一篇文章中探讨这些方式。
|
||||
|
||||
#### 常见的处理器瓶颈
|
||||
|
||||
在用系统监控器监控你的 CPU 时,你会遇到几种常见的瓶颈问题。
|
||||
|
||||
有时一个逻辑处理器出现瓶颈,而其他所有的处理器都处于低利用率。这意味着你有一个应用程序,它的代码不够智能,无法利用一个以上的逻辑处理器,而且它已经把正在使用的那个处理器耗尽了。这个应用程序完成的时间将比使用更多的处理器要长。但另一方面,至少它能让你的其他处理器腾出手来做别的工作,而不会接管你的电脑。
|
||||
|
||||
你也可能看到一个逻辑处理器永远停留在 100% 的利用率。要么它非常忙,要么是有一个进程被挂起了。判断它是否被挂起的方法是,看该进程是否从不进行任何磁盘活动(正如系统监视器“进程”面板所显示的那样)。
|
||||
|
||||
最后,你可能会注意到,当你所有的处理器都陷入瓶颈时,你的内存也被完全利用了。内存不足的情况有时会导致处理器瓶颈。在这种情况下,你要解决的是根本的内存问题,而不是体现出症状的 CPU 问题。
|
||||
|
||||
### 如何识别内存瓶颈
|
||||
|
||||
鉴于现代 PC 中有大量的内存,内存瓶颈比以前要少得多。然而,如果你运行内存密集型程序,特别是当你的计算机没有很多的随机存取内存(RAM)时,你仍然可能遇到这些问题。
|
||||
|
||||
Linux [使用内存][10] 既用于程序,也用于缓存磁盘数据。后者加快了磁盘数据的访问速度。Linux 可以在它需要的任何时候回收这些内存供程序使用。
|
||||
|
||||
系统监视器的“资源”面板显示了你的总内存和它被使用的程度。在“进程”面板上,你可以看到单个进程的内存使用情况。
|
||||
|
||||
下面是系统监控器“资源”面板中跟踪总内存使用的部分。
|
||||
|
||||
![系统监控器的内存瓶颈][11]
|
||||
|
||||
*图 6. 一个内存瓶颈。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
在“内存”的右边,你会注意到 [交换空间][12]。这是 Linux 在内存不足时使用的磁盘空间。它将内存写入磁盘以继续操作,有效地将交换空间作为你的内存的一个较慢的扩展。
|
||||
|
||||
你要注意的两个内存性能问题是:
|
||||
|
||||
1. 内存被大量使用,而且你看到交换空间的活动频繁或不断增加。
|
||||
2. 内存和交换空间都被大量使用。
|
||||
|
||||
情况一意味着更慢的性能,因为交换空间总是比内存更慢。你是否认为这是一个性能问题,取决于许多因素(例如,你的交换空间有多活跃、它的速度、你的预期,等等)。我的看法是,对于现代个人电脑来说,交换空间任何超过象征性的使用都是不可接受的。
|
||||
|
||||
情况二是指内存和交换空间都被大量使用。这是一个 _内存瓶颈_。计算机变得反应迟钝。它甚至可能陷入一种“抖动”(thrashing)的状态,在这种状态下,除了内存管理之外,它几乎不能完成其他任务。
|
||||
|
||||
上面的图 6 显示了一台只有 2GB 内存的旧电脑。当内存使用量超过 80% 时,系统开始向交换空间写入,响应速度下降了。这张截图显示了内存使用量超过了 90%,而且这台电脑已经无法使用。
|
||||
|
||||
解决内存问题的最终答案是要么少用内存,要么多买内存。我将在后续文章中讨论解决方案。
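上述“内存加交换空间都吃紧”的判断也可以脚本化。下面是一个简单的示意(假设运行在 Linux 上,`/proc/meminfo` 可读且包含 `MemAvailable` 等标准字段),用几行 Python 估算内存和交换空间的使用率:

```python
# 示意:读取 /proc/meminfo,计算内存与交换空间的使用率
def meminfo():
    """把 /proc/meminfo 解析为 {字段名: 数值(kB)} 的字典。"""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])  # 数值单位为 kB
    return info

def usage_percent(total, free):
    """根据总量和可用量计算使用率百分比;总量为 0 时返回 0.0。"""
    return round(100 * (total - free) / total, 1) if total else 0.0

m = meminfo()
print("内存使用率: %.1f%%" % usage_percent(m["MemTotal"], m["MemAvailable"]))
if m.get("SwapTotal"):
    print("交换空间使用率: %.1f%%" % usage_percent(m["SwapTotal"], m["SwapFree"]))
else:
    print("未配置交换空间")
```

如果两个百分比同时居高不下,就对应文中所说的内存瓶颈情形。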
|
||||
|
||||
### 如何识别存储瓶颈
|
||||
|
||||
如今的存储有固态和机械硬盘等多个品种。设备接口包括 PCIe、SATA、雷电和 USB。无论有哪种类型的存储,你都要使用相同的程序来识别磁盘瓶颈。
|
||||
|
||||
从系统监视器开始。它的“进程”面板显示各个进程的输入/输出率。因此,你可以快速识别哪些进程做了最多的磁盘 I/O。
|
||||
|
||||
但该工具并不显示每个磁盘的总数据传输率。你需要查看特定磁盘上的总负载,以确定该磁盘是否是一个存储瓶颈。
|
||||
|
||||
要做到这一点,使用 [atop][13] 命令。它在大多数 Linux 软件库中都有。
|
||||
|
||||
只要在命令行提示符下输入 `atop` 即可。下面的输出显示,设备 `sdb` 达到 `busy 101%`。很明显,它已经达到了性能极限,限制了你的系统完成工作的速度。
|
||||
|
||||
![atop 磁盘瓶颈][14]
|
||||
|
||||
*图 7. atop 命令识别了一个磁盘瓶颈。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
注意到其中一个 CPU 有 85% 的时间在等待磁盘完成它的工作(`cpu001 w 85%`)。这是典型的存储设备成为瓶颈的情况。事实上,许多人首先看 CPU 的 I/O 等待时间来发现存储瓶颈。
|
||||
|
||||
因此,要想轻松识别存储瓶颈,请使用 `atop` 命令。然后使用系统监视器上的“进程”面板来识别导致瓶颈的各个进程。
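如果想在 `atop` 之外再做一点脚本化的观察,可以直接读取内核暴露的磁盘计数器。下面是一个粗略示意(假设运行在 Linux 上,`/proc/diskstats` 可读;字段布局见内核文档,这里只取累计读写扇区数),并非 `atop` 的替代品:

```python
# 示意:读取 /proc/diskstats,列出各磁盘设备的累计读写扇区数
def diskstats():
    """返回 {设备名: (累计读扇区数, 累计写扇区数)},跳过 loop/ram 虚拟设备。"""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            name = fields[2]
            if name.startswith(("loop", "ram")):
                continue
            # 按内核文档的字段布局:fields[5] 为累计读扇区数,fields[9] 为累计写扇区数
            stats[name] = (int(fields[5]), int(fields[9]))
    return stats

for name, (rd, wr) in diskstats().items():
    print(f"{name}: 读 {rd} 扇区, 写 {wr} 扇区")
```

这些是开机以来的累计值;要得到类似 `atop` 的速率,需要间隔采样两次并相减。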
|
||||
|
||||
### 如何识别 USB 端口的瓶颈
|
||||
|
||||
有些人整天都在使用他们的 USB 端口。然而,他们从不检查这些端口是否被最佳地使用。无论你是插入外部磁盘、U 盘,还是其他东西,你都要确认你是否从 USB 连接的设备中获得了最大性能。
|
||||
|
||||
这个图表显示了原因。潜在的 USB 数据传输率差异 _很大_。
|
||||
|
||||
![USB 标准][15]
|
||||
|
||||
*图 8. USB 速度变化很大。(Howard Fosdick,根据 [Tripplite][16] 和 [Wikipedia][17] 提供的数字,[CC BY-SA 4.0][4])*
|
||||
|
||||
HardInfo 的“USB 设备”标签显示了你的计算机支持的 USB 标准。大多数计算机提供不止一种速度。你怎么知道一个特定端口的速度呢?供应商对它们进行颜色编码,如图表中所示。或者你可以在你的计算机的文档中查找。
|
||||
|
||||
要看到你得到的实际速度,可以使用开源的 [GNOME 磁盘][18] 程序进行测试。只要启动 GNOME 磁盘,选择它的“磁盘基准”功能,然后运行一个基准测试。这将告诉你在一个端口插入特定设备时,你将得到的最大实际速度。
|
||||
|
||||
你可能会得到不同的端口传输速度,这取决于你将哪个设备插入它。数据速率取决于端口和设备的特定组合。
|
||||
|
||||
例如,一个可以以 3.1 速度运行的设备如果使用 2.0 端口就会以 2.0 的速度运行。(而且它不会告诉你它是以较慢的速度运行的!)相反,如果你把一个 USB 2.0 设备插入 3.1 端口,它能工作,但速度是 2.0 的速度。所以要获得快速的 USB,你必须确保端口和设备都支持它。GNOME 磁盘为你提供了验证这一点的方法。
|
||||
|
||||
要确定 USB 的处理瓶颈,使用你对固态和硬盘所做的同样程序。运行 `atop` 命令来发现 USB 存储瓶颈。然后,使用系统监视器来获取违规进程的详细信息。
|
||||
|
||||
### 如何识别互联网带宽瓶颈
|
||||
|
||||
系统监控器的“资源”面板会实时告诉你互联网连接速度(见图 1)。
|
||||
|
||||
有 [很好的 Python 工具][19] 可以测试你的最大网速,但你也可以在 [Speedtest][20]、[Fast.com][21] 和 [Speakeasy][22] 等网站进行测试。为了获得最佳结果,关闭所有东西,只运行 _速度测试_;关闭你的虚拟私有网络;在一天中的不同时间运行测试;并比较几个测试网站的结果。
|
||||
|
||||
然后将你的结果与你的供应商声称的下载和上传速度进行比较。这样,你就可以确认你得到的是你所付费的速度。
|
||||
|
||||
如果你有一个单独的路由器,在有和没有它的情况下进行测试。这可以告诉你,你的路由器是否是一个瓶颈。如果你使用 WiFi,在有 WiFi 和没有 WiFi 的情况下进行测试(通过将你的笔记本电脑直接与调制解调器连接)。我经常看到人们抱怨他们的互联网供应商,而实际上他们只是有一个 WiFi 瓶颈,可以自己解决。
|
||||
|
||||
如果某些程序正在消耗你的整个互联网连接,你想知道是哪一个。通过使用 `nethogs` 命令找到它。它在大多数软件库中都有。
|
||||
|
||||
有一天,我的系统监视器突然显示我的互联网访问量激增。我只是在命令行中输入了 `nethogs`,它立即确定带宽消耗者是 Clamav 防病毒更新。
|
||||
|
||||
![Nethogs][23]
|
||||
|
||||
*图 9. Nethogs 识别带宽用户。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
### 如何识别图形处理瓶颈
|
||||
|
||||
如果你把显示器插在台式电脑后面的主板上,你就在使用 _板载显卡_。如果你把它插在后面的卡上,你就有一个专门的图形子系统。大多数人称它为 _视频卡_ 或 _显卡_。对于台式电脑来说,附加显卡通常比主板上的显卡更强大、更昂贵。笔记本电脑总是使用板载显卡。
|
||||
|
||||
HardInfo 的“PCI 设备”面板可以告诉你图形处理单元(GPU)的相关信息。它还显示你的专用视频内存的数量(寻找标有“可预取”的内存)。
|
||||
|
||||
![视频芯片组信息][24]
|
||||
|
||||
*图 10. HardInfo 提供图形处理信息。(Howard Fosdick, [CC BY-SA 4.0][4])*
|
||||
|
||||
CPU 和 GPU [非常密切地][25] 一起工作。简而言之,CPU 为 GPU 准备渲染的帧,然后 GPU 渲染这些帧。
|
||||
|
||||
当你的 CPU 在等待 100% 繁忙的 GPU 时,就会出现 _GPU 瓶颈_。
|
||||
|
||||
为了确定这一点,你需要监控 CPU 和 GPU 的利用率。像 [Conky][26] 和 [Glances][27] 这样的开源监控器,如果它们的扩展插件支持你的图形芯片组,就可以做到这一点。
|
||||
|
||||
看一下 Conky 的这个例子。你可以看到,这个系统有很多可用的 CPU。GPU 只有 25% 的使用率。想象一下,如果这个 GPU 的数量接近 100%。那么你就会知道 CPU 在等待 GPU,你就会有一个 GPU 的瓶颈。
|
||||
|
||||
![Conky CPU 和 GPU 监控][28]
|
||||
|
||||
*图 11. Conky 显示 CPU 和 GPU 的利用率。(图片来源:[AskUbuntu 论坛][29])*
|
||||
|
||||
在某些系统上,你需要一个供应商专属的工具来监控你的 GPU。它们可以从 GitHub 上下载,并在 [GPU 监控和诊断命令行工具][30] 这篇文章中有所描述。
|
||||
|
||||
### 总结
|
||||
|
||||
计算机由一系列集成的硬件资源组成。如果它们中的任何一个在工作量上远远落后于其他资源,就会产生性能瓶颈。这可能会拖累你的整个系统。你需要能够识别和纠正瓶颈,以实现最佳性能。
|
||||
|
||||
不久前,识别瓶颈需要深厚的专业知识。今天的开源 GUI 性能监控器使它变得相当简单。
|
||||
|
||||
在我的下一篇文章中,我将讨论改善你的 Linux 电脑性能的具体方法。同时,请在评论中分享你自己的经验。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/linux-performance-bottlenecks
|
||||
|
||||
作者:[Howard Fosdick][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/howtech
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightning.png?itok=wRzjWIlm (Lightning in a bottle)
|
||||
[2]: https://wiki.gnome.org/Apps/SystemMonitor
|
||||
[3]: https://opensource.com/sites/default/files/uploads/1_system_monitor_resources_panel.jpg (System Monitor - Resources Panel )
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://itsfoss.com/hardinfo/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/2_hardinfo_summary_panel.jpg (HardInfo Summary Panel)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/3_system_monitor_100_processor_utilization.jpg (System Monitor processor bottleneck)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/4_system_monitor_processes_panel.jpg (System Monitor Processes panel)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/5_system_monitor_manage_a_process.jpg (System Monitor managing a process)
|
||||
[10]: https://www.networkworld.com/article/3394603/when-to-be-concerned-about-memory-levels-on-linux.html
|
||||
[11]: https://opensource.com/sites/default/files/uploads/6_system_monitor_out_of_memory.jpg (System Monitor memory bottleneck)
|
||||
[12]: https://opensource.com/article/18/9/swap-space-linux-systems
|
||||
[13]: https://opensource.com/life/16/2/open-source-tools-system-monitoring
|
||||
[14]: https://opensource.com/sites/default/files/uploads/7_atop_storage_bottleneck.jpg (atop disk bottleneck)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/8_usb_standards_speeds.jpg (USB standards)
|
||||
[16]: https://www.samsung.com/us/computing/memory-storage/solid-state-drives/
|
||||
[17]: https://en.wikipedia.org/wiki/USB
|
||||
[18]: https://wiki.gnome.org/Apps/Disks
|
||||
[19]: https://opensource.com/article/20/1/internet-speed-tests
|
||||
[20]: https://www.speedtest.net/
|
||||
[21]: https://fast.com/
|
||||
[22]: https://www.speakeasy.net/speedtest/
|
||||
[23]: https://opensource.com/sites/default/files/uploads/9_nethogs_bandwidth_consumers.jpg (Nethogs)
|
||||
[24]: https://opensource.com/sites/default/files/uploads/10_hardinfo_video_card_information.jpg (Video Chipset Information)
|
||||
[25]: https://www.wepc.com/tips/cpu-gpu-bottleneck/
|
||||
[26]: https://itsfoss.com/conky-gui-ubuntu-1304/
|
||||
[27]: https://opensource.com/article/19/11/monitoring-linux-glances
|
||||
[28]: https://opensource.com/sites/default/files/uploads/11_conky_cpu_and_gup_monitoring.jpg (Conky CPU and GPU monitoring)
|
||||
[29]: https://askubuntu.com/questions/387594/how-to-measure-gpu-usage
|
||||
[30]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-gpu-monitoring-and-diagnostic-commands/
|
@ -4,15 +4,15 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13472-1.html)
|
||||
|
||||
使用开源工具升级你的 Linux PC 硬件
|
||||
======
|
||||
|
||||
> 升级你的电脑硬件来提升性能,以获得最大的回报。
|
||||
|
||||
![笔记本电脑上的商务女性坐在窗前][1]
|
||||

|
||||
|
||||
在我的文章《[使用开源工具识别 Linux 性能瓶颈][2]》中,我解释了一些使用开源的图形用户界面(GUI)工具监测 Linux 性能的简单方法。我的重点是识别 _性能瓶颈_,即硬件资源达到极限并阻碍你的 PC 性能的情况。
|
||||
|
||||
@ -22,33 +22,33 @@
|
||||
|
||||
开源工具是关键。GUI 工具可以帮助你监控你的系统,预测哪些硬件改进会有效。否则,你可能买了硬件后发现它并没有提高性能。在升级之后,这些工具也有助于验证升级是否产生了你预期的好处。
|
||||
|
||||
这篇文章概述了一种简单的 PC 硬件升级的方法。其“秘诀”是开源的 GUI 工具。
|
||||
这篇文章概述了一种简单的 PC 硬件升级的方法,其“秘诀”是开源的 GUI 工具。
|
||||
|
||||
### 如何升级内存
|
||||
|
||||
几年前,升级内存是不用多想的。增加内存几乎总是能提高性能。
|
||||
|
||||
今天,情况不再是这样了。个人电脑配备了更多的内存,而且 Linux 能非常有效地使用它。如果你买了系统用不完的内存,你就浪费了钱。
|
||||
今天,情况不再是这样了。个人电脑配备了更多的内存,而且 Linux 能非常有效地使用它。如果你购买了系统用不完的内存,就浪费了钱。
|
||||
|
||||
因此,你要花一些时间来监测你的电脑,看看内存升级是否会有助于提升它的性能。例如,在你进行典型的一天工作时观察内存的使用情况。而且一定要检查在内存密集型工作负载中发生了什么。
|
||||
|
||||
各种各样的开源工具可以帮助你进行这种监测,不过我用的是 [GNOME系统监视器][3]。它在大多数 Linux 软件库中都有。
|
||||
各种各样的开源工具可以帮助你进行这种监测,不过我用的是 [GNOME 系统监视器][3]。它在大多数 Linux 软件库中都有。
|
||||
|
||||
当你启动系统监视器时,它的**资源**面板会显示这样的输出:
|
||||
当你启动系统监视器时,它的“资源”面板会显示这样的输出:
|
||||
|
||||
![用 GNOME 系统监控器监控内存][4]
|
||||
|
||||
*图 1. 用 GNOME 系统监视器监控内存 (Howard Fosdick, [CC BY-SA 4.0][5])*
|
||||
|
||||
屏幕中间显示内存的使用情况。[交换空间][6] 是 Linux 在内存不足时使用的磁盘空间。Linux 通过使用交换空间作为内存的一个较慢的扩展来有效地增加内存。
|
||||
屏幕中间显示了内存的使用情况。[交换空间][6] 是 Linux 在内存不足时使用的磁盘空间。Linux 通过使用交换空间作为内存的一个较慢的扩展来有效地增加内存。
|
||||
|
||||
由于交换空间比内存慢,如果内存交换活动变得显著,增加内存将改善你的计算机的性能。你会得到多大的改善取决于交换活动的数量和交换空间所在的设备的速度。
|
||||
|
||||
如果使用了大量的交换空间,你通过增加内存得到的性能改善会比只使用了少量交换空间的情况大。
|
||||
如果使用了大量的交换空间,你通过增加内存会得到比只使用了少量交换空间更多的性能改善。
|
||||
|
||||
如果交换空间位于慢速的机械硬盘上,你会发现增加内存比交换空间位于最快的固态硬盘上有更大的改善。
|
||||
如果交换空间位于慢速的机械硬盘上,你会发现增加内存比将交换空间放在最快的固态硬盘上改善更多。
|
||||
|
||||
下面是一个关于何时增加内存的例子。这台电脑在内存利用率达到 80% 后显示出交换活动在增加。当内存利用率超过 90% 时,它就变得没有反应了。
|
||||
下面是一个关于何时增加内存的例子。这台电脑在内存利用率达到 80% 后显示交换活动在增加。当内存利用率超过 90% 时,它就变得失去反应了。
|
||||
|
||||
![系统监控 - 内存不足的情况][7]
|
||||
|
||||
@ -70,15 +70,15 @@
|
||||
|
||||
升级后,启动系统监视器。运行之前使你的内存超载的相同程序。
|
||||
|
||||
系统监控器应该显示出你扩展的内存,而且你应该看到更好的性能。
|
||||
系统监控器应该显示出你扩充的内存,而且你应该发现性能更好了。
|
||||
|
||||
### 如何升级存储
|
||||
|
||||
我们正处在一个存储快速改进的时代。即使是只有几年历史的计算机也可以从磁盘升级中受益。但首先,你要确保升级对你的计算机和工作负载是有意义的。
|
||||
我们正处在一个存储快速改进的时代。即使是只用了几年的计算机也可以从磁盘升级中受益。但首先,你要确保升级对你的计算机和工作负载是有意义的。
|
||||
|
||||
首先,要找出你有什么磁盘。许多开源工具会告诉你。[Hardinfo][8] 或 [GNOME 磁盘][9] 是不错的选择,因为它们都是广泛可用的,而且它们的输出很容易理解。这些应用程序会告诉你磁盘的品牌、型号和其他细节。
|
||||
|
||||
接下来,通过基准测试来确定你的磁盘性能。GNOME 磁盘让这一切变得简单。只要启动该工具并点击它的**磁盘基准测试**选项。这会给出你磁盘的读写率和平均磁盘访问时间。
|
||||
接下来,通过基准测试来确定你的磁盘性能。GNOME 磁盘让这一切变得简单。只要启动该工具并点击它的“磁盘基准测试”选项。这会给出你磁盘的读写率和平均磁盘访问时间。
|
||||
|
||||
![GNOME 磁盘基准测试][10]
|
||||
|
||||
@ -104,7 +104,7 @@
|
||||
|
||||
很明显,你可以用一个更快的磁盘来提高性能。
|
||||
|
||||
你也会想知道是哪个程序使用了磁盘。只要启动系统监视器并点击其**进程**标签。
|
||||
你也会想知道是哪个程序使用了磁盘。只要启动系统监视器并点击其“进程”标签。
|
||||
|
||||
现在你知道了你的磁盘有多忙,以及哪些程序在使用它,所以你可以做出一个有根据的判断,是否值得花钱买一个更快的磁盘。
|
||||
|
||||
@ -126,7 +126,7 @@
|
||||
* **绿色柱形图:** 固态硬盘比机械硬盘快。但如果固态硬盘使用 SATA 接口,就会限制其性能。这是因为 SATA 接口是十多年前为机械硬盘设计的。
|
||||
* **蓝色柱形图:** 最快的内置磁盘技术是新的 [PCIe 接口的 NVMe 固态盘][19]。这些可以比 SATA 连接的固态硬盘大约快 5 倍,比机械硬盘快 20 倍。
|
||||
|
||||
对于外置 SSD,你会发现 [最新的雷电和 USB 接口][20] 是最快的。
|
||||
对于外置 SSD,你会发现 [最新的雷电接口和 USB 接口][20] 是最快的。
|
||||
|
||||
#### 如何安装一个内置磁盘
|
||||
|
||||
@ -152,7 +152,7 @@
|
||||
|
||||
*图 7. USB 速度差别很大(Howard Fosdick,[CC BY-SA 4.0][5],基于 [Tripplite][23] 和 [维基][24] 的数据)*
|
||||
|
||||
要查看你得到的实际 USB 速度,请启动 GNOME 磁盘。GNOME 磁盘可以对 USB 连接的设备进行基准测试,就像对内部磁盘一样。选择其**基准磁盘**选项。
|
||||
要查看你得到的实际 USB 速度,请启动 GNOME 磁盘。GNOME 磁盘可以对 USB 连接的设备进行基准测试,就像对内部磁盘一样。选择其“磁盘基准测试”选项。
|
||||
|
||||
你插入的设备和 USB 端口共同决定了你将得到的速度。如果端口和设备不匹配,你将体验到两者中较慢的速度。
|
||||
|
||||
@ -170,7 +170,7 @@ USB 3.0 卡的价格只有 25 美元左右。较新、较贵的卡提供 USB 3.1
|
||||
|
||||
升级你的互联网带宽很容易。只要给你的 ISP 写一张支票即可。
|
||||
|
||||
问题是:应该升级吗?
|
||||
问题是,应该升级吗?
|
||||
|
||||
系统监控器显示了你的带宽使用情况(见图 1)。如果你经常遇到你从 ISP 购买的带宽限额,你会从购买更高的限额中受益。
|
||||
|
||||
@ -192,13 +192,13 @@ USB 3.0 卡的价格只有 25 美元左右。较新、较贵的卡提供 USB 3.1
|
||||
|
||||
大多数台式机主板支持一系列的 CPU,并且是可以升级的 —— 假设你还没有使用该系列中最顶级的处理器。
|
||||
|
||||
使用系统监视器观察你的 CPU,并确定升级是否有帮助。它的**资源**面板将显示你的 CPU 负载。如果你的所有逻辑处理器始终保持在 80% 或 90% 以上,你可以从更多的 CPU 功率中受益。
|
||||
使用系统监视器观察你的 CPU,并确定升级是否有帮助。它的“资源”面板将显示你的 CPU 负载。如果你的所有逻辑处理器始终保持在 80% 或 90% 以上,你可以从更多的 CPU 功率中受益。
|
||||
|
||||
这是一个升级 CPU 的有趣项目。只要小心谨慎,任何人都可以做到这一点。
|
||||
|
||||
不幸的是,这几乎没有成本效益。大多数卖家对单个 CPU 芯片收取溢价,比他们卖给你的新系统要高。因此,对许多人来说,升级 CPU 并不具有经济意义。
|
||||
|
||||
如果你将显示器直接插入台式机的主板,你可能会通过升级图形处理而受益。只需添加一块显卡。
|
||||
如果你将显示器直接插入台式机的主板,你可能会通过升级图形处理器而受益。只需添加一块显卡。
|
||||
|
||||
诀窍是在新显卡和你的 CPU 之间实现平衡的工作负荷。这个 [在线工具][27] 能准确识别哪些显卡能与你的 CPU 最好地配合。[这篇文章][28] 详细解释了如何去升级你的图形处理。
|
||||
|
||||
@ -222,7 +222,7 @@ via: https://opensource.com/article/21/4/upgrade-linux-hardware
|
||||
[a]: https://opensource.com/users/howtech
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
|
||||
[2]: https://opensource.com/article/21/3/linux-performance-bottlenecks
|
||||
[2]: https://linux.cn/article-13462-1.html
|
||||
[3]: https://vitux.com/how-to-install-and-use-task-manager-system-monitor-in-ubuntu/
|
||||
[4]: https://opensource.com/sites/default/files/uploads/system_monitor_-_resources_panel_0.jpg (Monitoring memory with GNOME System Monitor)
|
||||
[5]: https://creativecommons.org/licenses/by-sa/4.0/
|
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13454-1.html)

Are you using this magic method for filesystems from Python 3.6?
======

> Explore os.fspath and two other underutilized but still useful Python features.



This is the seventh in a series of articles about features that first appeared in a version of Python 3.x. Python 3.6 was first released in 2016, and even though it has been out for a while, many of the features it introduced are underused and pretty cool. Here are three of them.

### Separating digits in numeric constants

Quick: which is bigger, `10000000` or `200000`? Could you answer correctly while scanning through code? Depending on local conventions, in prose you would write the first number as 10,000,000 or 10.000.000. The trouble is, Python uses commas and periods for other purposes.

Fortunately, since Python 3.6 you can use underscores to separate digits. This works both directly in code and when using the `int()` converter on strings:

```
import math
math.log(10_000_000) / math.log(10)
```

```
7.0
```

```
math.log(int("10_000_000")) / math.log(10)
```

```
7.0
```

### Tau is right

In Python 3.6 and later, your math code can use the more intuitive constant:

```
print("Tan of an eighth turn should be 1, got", round(math.tan(math.tau/8), 2))
print("Cos of a sixth turn should be 1/2, got", round(math.cos(math.tau/6), 2))
print("Sin of a quarter turn should be 1, got", round(math.sin(math.tau/4), 2))
```

```
Tan of an eighth turn should be 1, got 1.0
Cos of a sixth turn should be 1/2, got 0.5
Sin of a quarter turn should be 1, got 1.0
```

### os.fspath

Beginning in Python 3.6, there is a magic method that represents "convert to a filesystem path". When given a `str` or `bytes`, it returns its input.

For all other types of objects, it looks for an `__fspath__` method and calls it. This allows passing around objects that are "filenames with metadata".

Normal functions like `open()` or `stat` can still use them, as long as `__fspath__` returns the right thing.

For example, here is a function that writes some data into a file and then checks its size. It also logs the filename to standard output for tracing purposes:

```
def write_and_test(filename):
    print("writing into", filename)
```

You can call it the way you would expect, with a string for a filename:

```
write_and_test("plain.txt")
```

```
writing into plain.txt
size of plain.txt is 5
```

However, you can define a new class that adds information to the string representation of the filename. This allows more detailed logging without changing the original functionality:

```
class DocumentedFileName:
    def __init__(self, fname, why):
```

Running the function with a `DocumentedFileName` instance as input lets the `open` and `os.path.getsize` functions keep working while enhancing the logs:

```
write_and_test(DocumentedFileName("documented.txt", "because it's fun"))
```

```
writing into DocumentedFileName(fname='documented.txt', why="because it's fun")
size of DocumentedFileName(fname='documented.txt', why="because it's fun") is 5
```
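The bodies of `write_and_test()` and `DocumentedFileName` are elided above, so here is a minimal runnable sketch consistent with the output shown. Implementing the class with the standard-library `dataclasses` module is my assumption; any class with a matching `__repr__` and an `__fspath__` method would behave the same way:

```python
import os
from dataclasses import dataclass


@dataclass
class DocumentedFileName:
    """A 'filename with metadata': os.fspath() will call __fspath__()."""
    fname: str
    why: str

    def __fspath__(self):
        # Path-consuming functions (open, os.path.getsize, ...) get the real name
        return self.fname


def write_and_test(filename):
    # 'filename' may be a str or any os.PathLike object
    print("writing into", filename)
    with open(filename, "w") as fp:
        fp.write("hello")
    print("size of", filename, "is", os.path.getsize(filename))


write_and_test("plain.txt")
write_and_test(DocumentedFileName("documented.txt", "because it's fun"))
```

Because `print()` uses the object's `repr()` while `open()` goes through the `os.fspath()` protocol, the logs gain metadata without changing the file handling.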

via: https://opensource.com/article/21/5/python-36-features

Author: [Moshe Zadka][a]
Selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13459-1.html)

Slice infinite generators with this Python 3.7 feature
======

> Learn more about this and two other underutilized but still useful Python features.



This is the eighth in a series of articles about features that first appeared in a version of Python 3.x. [Python 3.7][2] was first released in 2018, and even though it has been out for a few years, many of the features it introduced are underused and pretty cool. Here are three of them.

### Postponed evaluation of annotations

In Python 3.7, as long as the right `__future__` flag is activated, annotations are no longer evaluated at runtime:

```
from __future__ import annotations

def another_brick(wall: List[Brick], brick: Brick) -> Education:
    pass
```

```
another_brick.__annotations__
```

```
{'wall': 'List[Brick]', 'brick': 'Brick', 'return': 'Education'}
```

This makes recursive types (classes that refer to themselves) and other interesting things possible. However, it means that if you want to do your own type analysis, you need to use `ast` explicitly:

```
import ast
raw_type = another_brick.__annotations__['wall']
[parsed_type] = ast.parse(raw_type).body
```

```
subscript = parsed_type.value
f"{subscript.value.id}[{subscript.slice.id}]"
```

```
'List[Brick]'
```

### itertools.islice supports __index__

Sequence slicing in Python has long accepted various _int-like objects_ (objects with `__index__()`) as valid parts of a slice. However, it was only in Python 3.7 that `itertools.islice`, the only way in core Python to slice infinite generators, gained this support.

For example, you can now slice an infinite generator with `numpy.short`-sized integers:

```
import numpy
short_1 = numpy.short(1)
short_3 = numpy.short(3)
short_1, type(short_1)
```

```
(1, numpy.int16)
```

```
import itertools
list(itertools.islice(itertools.count(), short_1, short_3))
```

```
[1, 2]
```

### functools.singledispatch() annotation registration

If you thought [singledispatch][3] couldn't get any cooler, you were wrong. It is now possible to register implementations based on annotations:

```
import attr

def _get_area_circle(shape: Circle):
    return math.pi * (shape.radius ** 2)

get_area(Circle(1)), get_area(Square(1))
```

```
(3.141592653589793, 1)
```
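The class definitions and the other `get_area` registrations are elided in the excerpt above. Here is a minimal runnable sketch of annotation-based registration; using the standard-library `dataclasses` module instead of the article's `attr` is my assumption, made to keep the example dependency-free:

```python
import math
from dataclasses import dataclass
from functools import singledispatch


@dataclass
class Square:
    side: float


@dataclass
class Circle:
    radius: float


@singledispatch
def get_area(shape):
    # Fallback for types with no registered implementation
    raise NotImplementedError("cannot calculate area for unknown shape", shape)


@get_area.register
def _get_area_square(shape: Square):
    # No get_area.register(Square) needed: the type comes from the annotation
    return shape.side ** 2


@get_area.register
def _get_area_circle(shape: Circle):
    return math.pi * (shape.radius ** 2)


print(get_area(Circle(1)), get_area(Square(1)))
# prints: 3.141592653589793 1
```

Dispatch happens on the runtime type of the first argument, so adding support for a new shape is just another `@get_area.register` function with an annotated parameter.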
|
||||
|
||||
### 欢迎来到 2017 年
|
||||
@ -128,7 +126,7 @@ via: https://opensource.com/article/21/5/python-37-features
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13466-1.html)

Make your API better with this positional trick from Python 3.8
======

> Explore positional-only parameters and two other underutilized but still useful Python features.



This is the ninth in a series of articles about features that first appeared in a version of Python 3.x. Python 3.8 was first released in 2019, and two years later many of its cool new features remain underused. Here are three of them.

### importlib.metadata

[Entry points][2] are used for various things in Python packages. Most people are familiar with the [console_scripts][3] entry point, but many plugin systems in Python use them as well.

Before Python 3.8, the best way to read entry points from Python was to use `pkg_resources`, a somewhat clunky module that is part of `setuptools`.

The new `importlib.metadata` is a built-in module that gives access to the same things:

```
from importlib import metadata as importlib_metadata

distribution = importlib_metadata.distribution("numpy")
distribution.entry_points
```

```
[EntryPoint(name='f2py', value='numpy.f2py.f2py2e:main', group='console_scripts'),
 EntryPoint(name='f2py3', value='numpy.f2py.f2py2e:main', group='console_scripts'),
 EntryPoint(name='f2py3.9', value='numpy.f2py.f2py2e:main', group='console_scripts')]
```

```
f"{distribution.metadata['name']}=={distribution.version}"
```

```
'numpy==1.20.1'
```

### Positional-only parameters

After the big success of keyword-only arguments at communicating the intentions of API authors, another gap was filled: positional-only parameters.

Especially for functions that allow arbitrary keywords (for example, to generate data structures), this means fewer constraints on the allowed argument names:

```
def some_func(prefix, /, **kwargs):
    print(prefix, kwargs)
```

```
some_func("a_prefix", prefix="prefix keyword value")
```

```
a_prefix {'prefix': 'prefix keyword value'}
```

Note that, confusingly, the value of the _variable_ `prefix` is not the same as the value of `kwargs["prefix"]`. As in many places, use this feature with care.
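To see why the `/` matters, here is a small illustration (my addition, not from the original article): a keyword named `prefix` only ever lands in `kwargs`, so the positional argument can no longer be supplied by keyword at all:

```python
def some_func(prefix, /, **kwargs):
    print(prefix, kwargs)


# The positional argument and a same-named keyword no longer collide:
some_func("a_prefix", prefix="prefix keyword value")
# prints: a_prefix {'prefix': 'prefix keyword value'}

# But omitting the positional value fails, because the keyword goes into
# kwargs and the required positional parameter 'prefix' is left unfilled:
try:
    some_func(prefix="only keyword")
except TypeError as err:
    print("TypeError:", err)
```

This is exactly the guarantee positional-only parameters give an API author: callers cannot depend on the parameter's name, so renaming it later is not a breaking change.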

### Self-debugging expressions

For more than 50 years, the `print()` statement (and its equivalents in other languages) has been a favorite for quick debugging output.

But we have made much progress in print statements, such as:

```
special_number = 5
print("special_number = %s" % special_number)
```

```
special_number = 5
```

Self-documenting f-strings make this even easier to write:

```
print(f"{special_number=}")
```

```
special_number=5
```

Adding an `=` at the end of an f-string interpolation section keeps the literal part while adding the value.

This becomes even more useful when a more complicated expression is inside the section:

```
values = {}
print(f"{values.get('something', 'default')=}")
```

```
values.get('something', 'default')='default'
```

### Welcome to 2019

via: https://opensource.com/article/21/5/python-38-features

Author: [Moshe Zadka][a]
Selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[#]: subject: (How to Install and Use XRDP on Ubuntu for Remote Desktop Connection)
[#]: via: (https://itsfoss.com/xrdp-ubuntu/)
[#]: author: (Hunter Wittenborn https://itsfoss.com/author/hunter/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13463-1.html)

How to Install and Use XRDP on Ubuntu for Remote Desktop Connection
======



> This beginner's guide shows the steps to follow to set up XRDP on Ubuntu-based Linux distributions. With it, you can access your Ubuntu system from a different computer and use it graphically.

Microsoft's [Remote Desktop Protocol][1] (RDP) is a protocol that allows graphical remote desktop connections from one computer to another. RDP works by having a host machine run software that lets several other computers connect to it.

[XRDP][2] is an open source implementation of RDP that doesn't require running any proprietary programs. XRDP not only tries to follow RDP, it is also compatible with regular RDP clients such as [Remmina][3] and [GNOME Boxes][4].

Here is what the XRDP connection screen looks like:

![][5]

### Things to keep in mind about XRDP

While XRDP works great for remote access to a machine, it is important to know what XRDP is **not** suited for.

#### Do not use XRDP if you need a secure connection

Connections made over XRDP can be viewed and modified by attackers, so it should be avoided for any sensitive information. This can be mitigated by using an SSH connection or certificates, but both require a more complex setup that won't be covered here.

#### XRDP doesn't apply themes well by default

In my testing, XRDP never seemed to apply the [Ubuntu][6] theme by default. Instructions for fixing this are at the end of the article.

#### Don't use XRDP if you only want/need a CLI environment

XRDP is designed and built for use in GUI environments. If you plan to use it in a CLI environment, such as on a server, you should look at other tools, such as SSH.

### Installing and using XRDP on Ubuntu

Here is the setup you need for this remote connection to work properly:

  * A Linux system with the XRDP server installed. This is the system that will be accessed remotely.
  * The remote system should be on the same network as your system, or it should have a [public IP address][15].
  * The username and password of the remote Linux system.
  * Another system (whether Linux, macOS, or Windows) with an RDP client installed.

![][8]

#### Step 1: Install XRDP on the remote machine

Installing XRDP takes just a few steps, and it is a fairly straightforward operation.

> Note: Before going anywhere, be aware that the "remote machine" spoken of here is the machine other people connect to.

XRDP is included in the repositories of most distributions. On Ubuntu, you can find it in the universe repository.

You can install it with the following command:

```
sudo apt install xrdp
```

#### Step 2: Connect to the remote machine

The good news is that XRDP works out of the box!

To connect to the machine you installed XRDP on, you first need to install an RDP client on your local machine.

I'll be using GNOME Boxes, which can be installed with:

```
sudo apt install gnome-boxes
```

GNOME Boxes is better known for its use with virtual machines, but it also supports various other protocols, including XRDP.

If for some reason you don't want to use Boxes, you can also use a client called Remmina:

```
sudo apt install remmina
```

Note, though, that I'll be using Boxes for the rest of this tutorial.

First, launch GNOME Boxes, click the "+" sign, and select "Connect to a Remote Computer…".

![][10]

Next, enter the IP address of the machine you are connecting to, prefixed with `rdp://`, and connect as shown below:

> **Not sure what your IP address is?**
>
> You can find your IP address with the `ip address` command. You need to look for something that looks like numbers split into four groups:
> ```
> abhishek@its-foss:~$ ip address
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
> 2: wlp0s20f3: mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether dc:46:b9:fb:7a:c5 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.0.107/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp0s20f3
>        valid_lft 6183sec preferred_lft 6183sec
> ```

Avoid any IP address named `127.0.0.1`, as that one points to the machine you ran the command on. There should be more IP addresses in the output, as shown above.

![][11]

You should then see a login page. Set the "Session" to "Xorg", just enter your username and password, and click "OK".

![][5]

After that, you should see the desktop of the remote host:

![][12]

At this point, everything should behave as if the machine were sitting right in front of you.

### Troubleshooting: Fixing theming issues with XRDP connections

In my testing on Ubuntu 20.04, the default Yaru theme didn't seem to apply when connecting. This can be fixed with some effort.

First, run this command on the remote computer:

```
sudo apt install gnome-tweaks gnome-shell-extensions dconf-editor -y
```

Next, open the Extensions application and turn on the toggles shown below:

![][13]

Then close your remote desktop session and log back in. Now open Tweaks and configure it according to the screenshot below:

![][14]

Finally, open dconf Editor and navigate to `/org/gnome/shell/extensions/dash-to-dock/`. Set the values shown below:

  * `custom-theme-shrink`: `On`
  * `dock-fixed`: `On`
  * `transparency-mode`: `FIXED`

### Wrapping up

At this point, everything is ready for you to do whatever you need to do.

If something didn't work quite right, or if you have any questions or comments, feel free to leave them below. I'll try my best to help you out.

--------------------------------------------------------------------------------

via: https://itsfoss.com/xrdp-ubuntu/

Author: [Hunter Wittenborn][a]
Selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/hunter/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Remote_Desktop_Protocol
[2]: https://en.wikipedia.org/wiki/Xrdp
[3]: https://remmina.org/
[4]: https://wiki.gnome.org/Apps/Boxes
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_connected_login.png?resize=716%2C582&ssl=1
[6]: https://ubuntu.com/
[7]: https://itsfoss.com/install-gui-ubuntu-server/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp-ubuntu.png?resize=800%2C450&ssl=1
[9]: https://itsfoss.com/check-ip-address-ubuntu/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_gnome-boxes_connect-begin.png?resize=744%2C580&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_gnome-boxes_rdp-connect.png?resize=757%2C514&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_connected_homescreen.png?resize=711%2C595&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_extensions.png?resize=800%2C557&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_tweaks.png?resize=800%2C550&ssl=1
[15]: https://itsfoss.com/check-ip-address-ubuntu/
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13451-1.html)

Make Command Not Found? Here's How to Fix It
======



The other day, I was trying to compile a program on a freshly installed Ubuntu system, and when I tried to use the `make` command, it threw this error at me:

```
The program 'make' is currently not installed. You can install it by typing:
sudo apt install make
```

This indicates that the `make` command is not installed yet. You can install `make` on Ubuntu step by step with these commands:

```
sudo apt update
sudo apt install make
```

The first command updates the local package cache. This is necessary on a freshly installed Ubuntu system. With the refreshed package cache, your system knows which repository the `make` package should be downloaded from.

Then verify that `make` has been installed correctly:

```
make --version
```

### A better way to install make on Ubuntu

A better way to install the `make` command is to use the `build-essential` package. This package contains `make`, `gcc`, `g++`, and a few other compilers and development tools.

```
sudo apt install build-essential
```

![Installing Build Essential package][2]

With this `build-essential` package installed, you can [easily run C/C++ programs in Linux][3].

### What if make is installed but not working?

In some rare cases, it may happen that `make` is installed and yet not working.

The reason is that the `make` command is not in the `$PATH` variable. You can reinstall `make` with this command:

```
sudo apt install --reinstall make
```

If that doesn't work, you can try to [manually add the binary to your PATH][4], but that shouldn't be necessary.

I hope this quick tip helped you. Still have problems, or questions on a related topic? Feel free to leave a comment below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/make-command-not-found-ubuntu/

Author: [Abhishek Prakash][a]
Selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[#]: author: (Alan Smithee https://opensource.com/users/alansmithee)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13456-1.html)

What you need to know about Quarkus in 2021
======

> Quarkus benefits from more than 20 years of Java development history to make developing applications faster and easier.



Part of publishing services on the cloud is providing easy and reliable access to those services for users and developers. One of the most popular ways of interfacing with online applications is through an application programming interface (API), a fancy term meaning that you allow users to interact with your application through code.

The concept of an API is important because it helps others build upon your application. Suppose you design a website that returns a random number when a user clicks a button. Normally, that requires the user to open your website and click the button. The website may be useful, but only up to a point. If you include an API, users can send a signal directly to your server asking for a random number, or they can program their own way of "calling" your server for a number, without clicking or manual interaction. A developer could use your random number as a value for a game, or as part of a passphrase generator, or anywhere else a developer needs a random number (there is always somewhere). A good API unlocks your application and lets others use the results of your code; essentially, it turns your work on the web into a software library.

### Getting started with Quarkus

In Saumya Singh's "[How to create your first Quarkus application][4]", you can learn about the benefits of Quarkus and serverless delivery and create a simple demo app in about 10 minutes. In fact, _under_ 10 minutes is more accurate, because between Maven and Quarkus there is hardly as much setup as you might expect. It barely feels like Java in the bad ways, while feeling very much like Java in the good ways.

### Edge development

Linux is a popular platform for creating Internet of Things (IoT) [edge applications][5]. There are many reasons for this, including security, the wide choice of programming languages and development models, and protocol support. Unsurprisingly, Quarkus handles IoT very well. Quarkus is memory-efficient, starts fast, and has a quick runtime, so it is not only a viable solution for IoT but an ideal one. You can get started with Quarkus and IoT through Daniel Oh's "[Getting started with edge development on Linux using open source][6]".

### Quarkus and VS Code

When you work with code, an integrated development environment (IDE) makes a big difference. Microsoft's open source [VS Code][7] (or the de-branded [VSCodium][8]) is a popular text editor disguised as an IDE (or is it an IDE disguised as a text editor?) with many extensions that can make it a specialized environment for nearly any programming language. If you are using or considering VS Code, then read Daniel Oh's guide to using "[Quarkus in VS Code][9]" for some pro tips on how Maven, Quarkus, and VS Code work together.

### Get Quarkus

via: https://opensource.com/article/21/5/quarkus

Author: [Alan Smithee][a]
Selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[#]: subject: (Convert Images to ASCII Art in Linux Terminal With This Nifty Little Tool)
[#]: via: (https://itsfoss.com/ascii-image-converter/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13461-1.html)

Convert Images to ASCII Art in the Linux Terminal
======



Want to do something fun in the Linux terminal? How about converting a regular image into ASCII art?

Do you know [what ASCII is][1]? It's a standard that assigns letters, numbers, and other characters to the 256 slots of an 8-bit code. ASCII art is a graphic composed of printable ASCII characters. Basically, it's made up of a bunch of letters, numbers, and special characters.

You may have seen people [display their distribution's logo in ASCII format][2], like this:

![][3]

That's cool, right? So how about converting a regular image into ASCII art? That's what this article is about.

### Ascii Image Converter

As the name suggests, [Ascii Image Converter][4] is a tool that converts images into ASCII art. It is a command-line-based tool written in Go that prints an ASCII version of the image supplied to it.

You probably won't recognize me, but the ASCII version in the image below is me. That's my 8-bit avatar.

![][5]

The tool supports input images in the following formats:

  * JPEG/JPG
  * PNG
  * BMP
  * WEBP
  * TIFF/TIF

Let's see how to install and use it.

### Installing Ascii Image Converter on Linux

This fun tool is also available for Windows, but I'm not going there. Let's stick with Linux for this tutorial.

If you have [Snap][6] enabled in your distribution, you can easily install its snap package with the following command:

```
sudo snap install ascii-image-converter
```

You can also download the Linux executable from its releases page and put the executable in the `/usr/local/bin/` directory. This way, you can run it like a regular Linux command. If you wonder why that is so, read up on the [Linux directory hierarchy][7].

### Using Ascii Image Converter

Usage is simple. After installation, you just have to provide the path of the image you want to convert:

```
ascii-image-converter path_to_image
```

You can also provide the URL of an image to convert it into ASCII directly from the web.

Here is my profile picture converted into ASCII format. I've put my original photo alongside for reference.

![][8]

You can also get a colored ASCII conversion:

```
ascii-image-converter -C path_to_image
```

![][9]

You can convert multiple images into ASCII by providing their paths. It will print the ASCII versions one after another on the terminal display.

There is also an option to save the generated ASCII art. In older versions, it would only be saved as a text file, not an image. The developer, Zoraiz Hassan, released a new version, and now the tool saves the generated ASCII image in PNG format by default.

```
ascii-image-converter path_to_image -s .
```

A few more options are available, such as giving the output a specific dimension, using more ASCII characters, or using your own character set to print the ASCII art. You can read about them on the [project's repository][4].

### Like it?

Do you like more ASCII-related stuff? How about [playing ASCII games on Linux][10]? Yes, you can totally do that.

If you like experimenting in the terminal, you might like this tool, though I don't know what good practical use an ASCII-converted image could have. Any ideas?

--------------------------------------------------------------------------------

via: https://itsfoss.com/ascii-image-converter/

Author: [Abhishek Prakash][a]
Selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.computerhope.com/jargon/a/ascii.htm
[2]: https://itsfoss.com/display-linux-logo-in-ascii/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-focal-neofetch.png?resize=800%2C543&ssl=1
[4]: https://github.com/TheZoraiz/ascii-image-converter
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/abhishek-prakash-in-ascii.png?resize=800%2C445&ssl=1
[6]: https://itsfoss.com/enable-snap-support-linux-mint/
[7]: https://linuxhandbook.com/linux-directory-structure/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/abhishek-prakash-ascii-converted.png?resize=800%2C437&ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/abhishek-colored-ascii.png?resize=800%2C429&ssl=1
[10]: https://itsfoss.com/best-ascii-games/
[#]: subject: (openSUSE Leap 15.3 Release Finally Closes the Gap With SUSE Linux Enterprise)
[#]: via: (https://news.itsfoss.com/opensuse-leap-15-3-release/)
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (Chao-zhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13469-1.html)

openSUSE Leap 15.3 Release Finally Closes the Gap With SUSE Linux Enterprise
======



> With openSUSE Leap 15.3, the gap with SUSE Linux Enterprise is finally closed. For development teams, this should be an exciting update to test on.

Last year, with the [openSUSE Leap 15.2 release][1], they set out to close the gap between openSUSE Leap and SUSE Linux Enterprise by building openSUSE Leap with the same binary packages as the enterprise edition.

That way, if someone tests on openSUSE and then switches to SUSE Linux Enterprise, the migration process for deployments is greatly simplified. In addition, openSUSE Leap becomes an easy choice for development teams to test on.

With the release of openSUSE Leap 15.3, this vision has become a reality. In this article, I will highlight the major changes in this release.

### openSUSE Leap 15.3: What's New

The most important change is that it is built with the same binary packages as SUSE Linux Enterprise.

The [release announcement][2] mentions the benefits of this huge change:

> This release is hugely beneficial for migration projects and user acceptance testing. Large development teams using openSUSE Leap 15.3 for running and tuning workloads and for testing will benefit the most, as those workloads can easily be lifted and shifted to SUSE Linux Enterprise 15 SP3 for long-term maintenance.

Besides this huge change, several other important changes make it an exciting release.



For the Xfce 4.16 desktop, there are some visual changes, including new icons and a new color palette. The settings manager also received a visual refresh, offering a cleaner look.

If needed, KDE Plasma 5.18 is also available as an LTS option with this release. And GNOME 3.34 brings some subtle changes to the look and feel of certain applications. While there is no big change for Cinnamon, you will find a new mode in it.

In this release, you will find GNU Health 3.8 adding some new features for you to explore.

An update to the DNF package manager is planned but has not landed yet; you should get it through a subsequent maintenance update.



IBM Z and LinuxONE (s390x) are two newly supported architectures in Leap 15.3.

The included container technologies remain the same, but they received security updates in this release. Of course, you will want to look for the latest cloud images offered by hosting solutions such as Linode.

Several application upgrades include Onionshare 2.2, Chromium 89, and more. You can find more details in the [official feature list][4].

### Download openSUSE Leap 15.3

Note that, as of today, Leap 15.2 has six months left until its end of life (EOL).

Before trying to upgrade to Leap 15.3, you must make sure you are running Leap 15.2. You can find more information about the upgrade process in the [official release notes][5].

Get the latest ISO from the official download page linked below.

- [Download openSUSE Leap 15.3][6]

--------------------------------------------------------------------------------

via: https://news.itsfoss.com/opensuse-leap-15-3-release/

Author: [Ankush Das][a]
Selector: [lujun9972][b]
Translator: [Chao-zhi](https://github.com/Chao-zhi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/opensuse-leap-15-2-release/
[2]: https://news.opensuse.org/2021/06/02/opensuse-leap-bridges-path-to-enterprise/
[4]: https://en.opensuse.org/Features_15.3
[5]: https://en.opensuse.org/Release_announcement_15.3
[6]: https://get.opensuse.org/leap/
@ -0,0 +1,99 @@
|
||||
[#]: subject: (OBS Studio 27 Adds Wayland Support, Undo/Redo, and Browser Docks)
|
||||
[#]: via: (https://news.itsfoss.com/obs-studio-27-release/)
|
||||
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
OBS Studio 27 Adds Wayland Support, Undo/Redo, and Browser Docks
|
||||
======
|
||||
|
||||
Open Broadcaster Software is a free and open-source streaming/recording solution available for multiple platforms.
|
||||
|
||||
Not long ago, we [spotted native Wayland support coming to OBS Studio][1].
|
||||
|
||||
Now, with the release of OBS Studio 27, it is finally a reality. Not just limited to wayland support, but there are some significant feature additions.
|
||||
|
||||
Here, I shall highlight the key changes introduced with OBS Studio 27.
|
||||
|
||||
### OBS Studio 27: What’s New?
|
||||
|
||||
![][2]
|
||||
|
||||
The key new feature addition with this release is **Undo/Redo**.
|
||||
|
||||
It sounds like a basic feature that should have existed from the start, but the developers waited until they could implement it properly.
|
||||
|
||||
The [release announcement][3] mentions a note about it back in 2018:
|
||||
|
||||
> _This is on the agenda. Fortunately, it’s not a “difficult” feature to write, it’s actually pretty simple, but the implementation is what’s delicate, and requires a fair amount of experience to get right._
|
||||
|
||||
Now that it is here, you can work stress-free without having to constantly redo your tweaks when you accidentally modify something.
|
||||
|
||||
Of course, there’s a limit to what the Undo function can do: it can revert up to **5,000 actions** in a particular session. It is worth noting that if you restart the app, you can no longer undo actions from before the restart.
|
||||
|
||||
![][4]
|
||||
|
||||
More about it in the announcement post:
|
||||
|
||||
> Undo is built to track actions that affect the preview. This means every potential modification to scenes, sources, groups, filters, stingers, and scripts. These have the potential to affect the feed in real-time, without a chance to “Apply” changes, and can sometimes result in complex changes that are harder to quickly revert or recreate. The Undo stack is capable of tracking the last 5,000 actions of the session, and is cleared when switching scene collections or restarting the app.
|
||||
|
||||
The next important addition is the **browser dock**, which was already present for Windows users and is now available for macOS and Linux with this release.
|
||||
|
||||
The dock will let you quickly access other sites or services, such as chats, Twitch account linking, and more, while using the OBS app.
|
||||
|
||||
### Other Improvements
|
||||
|
||||
The release also addresses display capture support on laptops with multiple GPUs, which should provide a better experience for laptop users.
|
||||
|
||||
There’s also a new missing-files dialog that clearly lists what’s missing, so you can spot the sources you need to add.
|
||||
|
||||
For more information on the changes, you may refer to the [official announcement post][3].
|
||||
|
||||
### Download OBS Studio 27
|
||||
|
||||
You can directly install OBS Studio using the Ubuntu PPA but the Wayland support is available only for Ubuntu 21.04 and above through this method.
|
||||
|
||||
In case you want to do that, here’s what you need to type in the terminal (ensure you have [ffmpeg][5] installed):
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:obsproject/obs-studio
|
||||
sudo apt install obs-studio
|
||||
```
|
||||
|
||||
Otherwise, it is best to use the [Flatpak package][6] to get started. You can also take the help of our [Flatpak guide][7] if you’re using it for the first time.
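If you go the Flatpak route, the commands would look something like this (a sketch, assuming the Flathub app ID `com.obsproject.Studio` from the linked Flathub page):

```
flatpak install flathub com.obsproject.Studio
flatpak run com.obsproject.Studio
```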
|
||||
|
||||
You can find other download options on the official download page or its [GitHub page][8].
|
||||
|
||||
[Download OBS Studio 27][9]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://news.itsfoss.com/obs-studio-27-release/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://news.itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://news.itsfoss.com/obs-studio-wayland/
|
||||
[2]: https://i0.wp.com/i.ytimg.com/vi/LUkMxYNIyj0/hqdefault.jpg?w=780&ssl=1
|
||||
[3]: https://obsproject.com/blog/obs-studio-27-released
|
||||
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQyMyIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
|
||||
[5]: https://itsfoss.com/ffmpeg/
|
||||
[6]: https://flathub.org/apps/details/com.obsproject.Studio
|
||||
[7]: https://itsfoss.com/flatpak-guide/
|
||||
[8]: https://github.com/obsproject/obs-studio
|
||||
[9]: https://obsproject.com/download
|
@ -0,0 +1,94 @@
|
||||
[#]: subject: (You Can Now Try the New COSMIC Desktop Environment with Pop!_OS 21.04 Beta)
|
||||
[#]: via: (https://news.itsfoss.com/pop-os-21-04-beta-release/)
|
||||
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
You Can Now Try the New COSMIC Desktop Environment with Pop!_OS 21.04 Beta
|
||||
======
|
||||
|
||||
Pop!_OS 21.04 is one of the [most anticipated distros][1] of this year and the public beta has finally arrived.
|
||||
|
||||
While we do not have an official list of changes for this release yet, it comes packed with the brand-new COSMIC Desktop Environment.
|
||||
|
||||
Let me highlight a few things about the desktop, how you can download it, and my initial thoughts on it.
|
||||
|
||||
### COSMIC Desktop Environment on Pop!_OS 21.04
|
||||
|
||||
![][2]
|
||||
|
||||
With Pop!_OS 21.04 beta, we have more information on it than we had when [System76 first revealed the COSMIC Desktop Environment][3].
|
||||
|
||||
The horizontal dock sure looks pretty. By default, the dock extends to the edges of the screen, but you can change that as well. Here’s how it looks without the dock extending (macOS-like layout?)
|
||||
|
||||
![][4]
|
||||
|
||||
The icons look colorful and attractive. You will find options to tweak the dock as well (to hide it, change the position, etc.)
|
||||
|
||||
Some options, like adding a launcher icon or workspace icon to the dock itself, are still on the to-do list and should arrive in updates to the beta. In addition, you can also expect the hot-corner feature to arrive soon enough.
|
||||
|
||||
![][5]
|
||||
|
||||
### Other Improvements
|
||||
|
||||
The overall color theme looks to be the same — so there are subtle visual changes, no big makeovers.
|
||||
|
||||
Coming from Pop!_OS 20.04, it will surely feel quite unfamiliar, and you will need to take some time to adjust the workflow.
|
||||
|
||||
However, thanks to the extension manager, you can get back the old workspace layout by disabling the Pop COSMIC extension. You also get a nice multi-monitor add-on that offers many options for users with multiple monitors.
|
||||
|
||||
![][6]
|
||||
|
||||
The desktop environment seems to be using GNOME 3.38-based applications. So, it is safe to assume that the COSMIC Desktop Environment is based on GNOME 3.38 as well.
|
||||
|
||||
It is important to note that the LTS release (i.e., Pop!_OS 20.04) will not be getting an update to include the COSMIC Desktop Environment.
|
||||
|
||||
So, you will have to opt for Pop!_OS 21.04 stable version when it releases or just wait for the next LTS release that follows.
|
||||
|
||||
### Pop!_OS 21.04 Without GNOME 40 Is Exciting
|
||||
|
||||
As I expected, even without the direct implementation of GNOME 40, Pop!_OS 21.04 is an exciting release to look out for.
|
||||
|
||||
The COSMIC Desktop Environment may not be a unique or breathtaking experience – but it manages to add essential options out of the box for a great desktop experience.
|
||||
|
||||
Also, considering that multi-monitor users do not usually get an outstanding set of options on other Linux distributions by default, including an add-on for that with COSMIC desktop is a good idea.
|
||||
|
||||
### Try Pop!_OS 21.04 Beta
|
||||
|
||||
You can download the Pop!_OS 21.04 beta ISO from their [GitHub page][7]. Images are available for both NVIDIA and AMD/Intel, as per your system configuration.
|
||||
|
||||
Do note that this is a beta release and should only be installed for testing purposes. It is prone to bugs, and you may have to re-install it.
|
||||
|
||||
The stable release should come out later this month. I will make sure to review it, and possibly make a video as well, if that helps.
|
||||
|
||||
[Download Pop!_OS 21.04 Beta][7]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://news.itsfoss.com/pop-os-21-04-beta-release/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://news.itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://news.itsfoss.com/linux-distros-for-2021/
|
||||
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQzOCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
|
||||
[3]: https://news.itsfoss.com/cosmic-desktop-pop-os/
|
||||
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ0MCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
|
||||
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjUwMyIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
|
||||
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjMzNCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
|
||||
[7]: https://github.com/pop-os/beta
|
@ -0,0 +1,76 @@
|
||||
[#]: subject: (You Can Still Use the Old Firefox Interface (but not for long))
|
||||
[#]: via: (https://news.itsfoss.com/firefox-old-design-switch/)
|
||||
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
You Can Still Use the Old Firefox Interface (but not for long)
|
||||
======
|
||||
|
||||
[Firefox 89][1] is finally available to download with a major redesign. While some like what Mozilla is trying to do to stay competitive with Google Chrome in terms of user experience, many do not seem to like the modern design.
|
||||
|
||||
Of course, users will always have differing opinions on design choices. But is there a way to go back to the old Firefox design?
|
||||
|
||||
Well, for now, it seems like a yes. But you may not be able to revert the design after the next Firefox update.
|
||||
|
||||
Here, let me briefly highlight what it is all about.
|
||||
|
||||
### How to Switch Back to the Old Firefox Design?
|
||||
|
||||
Just like how [we enabled the proton design to take an early look before the final release][2], you need to disable the proton UI elements to get back to the old design.
|
||||
|
||||
To get started, just type in “**about:config**” in the address bar and proceed by ignoring the warning.
|
||||
|
||||
![][3]
|
||||
|
||||
Once you click on “**Accept the Risk and Continue**”, you will notice a search field with no options to choose from.
|
||||
|
||||
Now, you just need to type in “**Proton**” to list all the options related to the new redesign, as shown below:
|
||||
|
||||
![][3]
|
||||
|
||||
Here, you need to click on the toggle button on the right side, as highlighted in the screenshot, to disable all of those options.
|
||||
|
||||
You should already see it in action after disabling the options, but you may want to restart the browser if it does not seem right.
|
||||
|
||||
I should mention that even with all the options disabled, it does not look exactly like the old Firefox design, but it comes close for the most part.
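For reference, these switches are ordinary preferences, so they can also be pinned in a `user.js` file. A minimal fragment, assuming the `browser.proton.*` preference names surfaced by the about:config search described above (Firefox 89 era; names may change in later versions):

```
// Assumed pref names from the about:config "proton" search
user_pref("browser.proton.enabled", false);
user_pref("browser.proton.contextmenus.enabled", false);
```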
|
||||
|
||||
In the same way, you just have to re-enable the options to switch back to the new design.
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
While you can easily go back and forth between the new and old Firefox design for now, you may not be able to after the next Firefox update.
|
||||
|
||||
As originally spotted by [UbuntuHandbook][4], the options you disabled above will no longer be available after the next Firefox update, as per a [bug report][5].
|
||||
|
||||
What do you think about the new Firefox design? Would you be willing to go back to the old design using the method mentioned here?
|
||||
|
||||
Feel free to let me know what you think in the comments below.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://news.itsfoss.com/firefox-old-design-switch/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://news.itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://news.itsfoss.com/firefox-89-release/
|
||||
[2]: https://news.itsfoss.com/firefox-proton-redesign/
|
||||
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ4OCIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
|
||||
[4]: https://ubuntuhandbook.org/index.php/2021/06/revert-old-user-interface-firefox-89/
|
||||
[5]: https://bugzilla.mozilla.org/show_bug.cgi?id=1709425
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (osu-zxf)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
56
sources/talk/20210606 How Real-World Apps Lose Data.md
Normal file
@ -0,0 +1,56 @@
|
||||
[#]: subject: (How Real-World Apps Lose Data)
|
||||
[#]: via: (https://theartofmachinery.com/2021/06/06/how_apps_lose_data.html)
|
||||
[#]: author: (Simon Arneaud https://theartofmachinery.com)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
How Real-World Apps Lose Data
|
||||
======
|
||||
|
||||
A great thing about modern app development is that there are cloud providers to worry about things like hardware failures or how to set up RAID. Decent cloud providers are extremely unlikely to lose your app’s data, so sometimes I get asked what backups are really for these days. Here are some real-world stories that show exactly what backups are for.
|
||||
|
||||
### Story #1
|
||||
|
||||
This first story is from a data science project: it was basically a big, complex pipeline that took data collected from ongoing research and crunched it in various ways to feed some cutting-edge model. The user-facing application hadn’t been launched yet, but a team of data scientists and developers had been working on building the model and its dataset for several months.
|
||||
|
||||
The people working on the project had their own development environments for experimental work. They’d do something like `export ENVIRONMENT=simonsdev` in a terminal, and then all the software running in that terminal would run against that environment instead of the production environment.
|
||||
|
||||
The team was under a lot of pressure to get a user-facing app launched so that stakeholders could actually see some results from their several months of investment. One Saturday, an engineer tried to catch up with some work. He finished an experiment he was doing late in the evening, and decided to tidy up and go home. He fired off a cleanup script to delete everything from his development environment, but strangely it took a lot longer than usual. That’s when he realised he’d lost track of which terminal was configured to point to which environment.
|
||||
|
||||
### Story #2
|
||||
|
||||
Story #2 is from a commercial web and mobile app. The backend had a microservice architecture worked on by a team of engineers. That meant deployments required co-ordination, but things were simplified a bit using a formal release process and automation. New code would get reviewed and merged into master when ready, and every so often a senior developer would tag a release for each microservice, which would then automatically deploy to the staging environment. The releases in the staging environment would periodically get collected together into a meta-release that got signoff from various people (it was a compliance environment) before being automatically deployed to production.
|
||||
|
||||
One day a developer was working on a complex feature, and the other developers working on that microservice agreed that the work-in-progress code should be committed to master with the understanding that it shouldn’t be actually released yet. To cut a long story short, not everyone in the team got the message, and the code got into the release pipeline. Worse, the experimental code required a new way to represent user profile data, so it had an ad-hoc data migration that ran on launch into production and corrupted all user profiles.
|
||||
|
||||
### Story #3
|
||||
|
||||
Story #3 is from another web app. This one had a much simpler architecture: most of the code was in one app, and the data was in a database. However, this app had also been written under a lot of deadline pressure. It turned out that early on in development, when radical database schema changes were common, a feature was added to detect such changes and clean up old data. This was actually useful for early development before launch, and was always meant to be a temporary feature for development environments only. Unfortunately, the code was forgotten about in the rush to build the rest of the app and get to launch. Until, of course, one day it got triggered in the production environment.
|
||||
|
||||
### Postmortem
|
||||
|
||||
With any outage postmortem, it’s easy to lose sight of the big picture and end up blaming everything on some little detail. A special case of that is finding some mistake someone made and then blaming that person. All of the engineers in these stories were actually good engineers (companies that hire SRE consultants aren’t the ones that cut corners with their permanent hires), so firing and replacing them wouldn’t have solved anything. Even if you have 100x developers, that 100x is still finite, so mistakes will happen with enough complexity and pressure. The big-picture solution is backups, which help you however you lose the data (including from malware, which is a hot topic in the news lately). If you’re not okay with having zero copies of your data, don’t settle for having just one copy.
|
||||
|
||||
Story #1 had a bad end: there were no backups. The project was set back by nearly six months of data collection. By the way, some places only keep a single daily snapshot as a backup, and this story is a good example of how that can go wrong, too: if the data loss happened on Saturday and recovery was attempted on Monday, the one-day backup would only have an empty database from the Sunday.
|
||||
|
||||
Story #2 wasn’t fun, but worked out much better. Backups were available, but the data migration was reversible, too. The unfun part was that the release was done just before lunch and the fix had to be coded up while the production site was down. The main reason I’m telling this story is as a reminder that backups aren’t just about catastrophic data loss. Partial data corruption happens, too, and can be extra messy.
|
||||
|
||||
Story #3 was so-so. A small amount of data was lost permanently, but most was recovered from the backup. Everyone on the team felt pretty bad about not flagging the now-extremely-obviously-dangerous code. I wasn’t involved in the early development, but I felt bad because the recovery took a lot longer than it should have. With a well-tested recovery process, I think the site should have been back online in under 15 minutes total. But the recovery didn’t work the first time, and I had to debug why and retry. When a production site is down and it’s on you to get it up again, every ten seconds feels like an eternity. Thankfully, the stakeholders were much more understanding than some. They were actually relieved that a one-off disaster that could have sunk the company only resulted in minutes of lost data and under an hour of downtime.
|
||||
|
||||
It’s extremely common in practice for the backup to “work” but the recovery to fail. Often the recovery works when tested on small datasets, but fails on production-sized datasets. Disaster is most likely to strike when everyone is stressed out, and having the production site down only increases the pressure. It’s a really good idea to test and document the full recovery process while times are good.
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://theartofmachinery.com/2021/06/06/how_apps_lose_data.html
|
||||
|
||||
作者:[Simon Arneaud][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://theartofmachinery.com
|
||||
[b]: https://github.com/lujun9972
|
@ -1,66 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 requirements of cloud-native software)
|
||||
[#]: via: (https://opensource.com/article/20/1/cloud-native-software)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
|
||||
|
||||
6 requirements of cloud-native software
|
||||
======
|
||||
A checklist for developing and implementing cloud-native
|
||||
(container-first) software.
|
||||
![Team checklist][1]
|
||||
|
||||
For many years, monolithic applications were the standard enterprise architecture for achieving business requirements. But that changed significantly once cloud infrastructure began accelerating businesses at scale and speed. Application architectures have also transformed into cloud-native applications built on the [microservices][2], [serverless][3], and event-driven services that run on immutable infrastructure across hybrid and multi-cloud platforms.
|
||||
|
||||
### The cloud-native connection to Kubernetes
|
||||
|
||||
According to the [Cloud Native Computing Foundation][4] (CNCF):
|
||||
|
||||
> "Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
|
||||
>
|
||||
> "These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil."
|
||||
|
||||
Container orchestration platforms like [Kubernetes][5] allow DevOps teams to build immutable infrastructures to develop, deploy, and manage application services. The speed at which rapid iteration is possible now aligns with business needs. Developers building containers to run in Kubernetes need an effective place to do so.
|
||||
|
||||
### Requirements for cloud-native software
|
||||
|
||||
What capabilities are required to create a cloud-native application architecture, and what benefits will developers gain from it?
|
||||
|
||||
While there are many ways to build and architect cloud-native applications, the following are some ingredients to consider:
|
||||
|
||||
  * **Runtimes:** They are more likely to be written in container-first and/or Kubernetes-native languages, which means runtimes such as Java, Node.js, Go, Python, and Ruby.
|
||||
* **Security:** When deploying and maintaining applications in a multi-cloud or hybrid cloud application environment, security is of utmost importance and should be part of the environment.
|
||||
  * **Observability:** Use tools such as Prometheus, Grafana, and Kiali that can enhance observability by providing real-time metrics and more information about how applications are being used and how they behave in the cloud.
|
||||
* **Efficiency:** Focus on a tiny memory footprint, small artifact size, and fast boot time to make applications portable across hybrid/multi-cloud platforms.
|
||||
* **Interoperability:** Integrate cloud-native apps with open source technologies that enable you to meet the requirements listed above, including Infinispan, MicroProfile, Hibernate, Kafka, Jaeger, Prometheus, and more, for building standard runtime architectures.
|
||||
* **DevOps/DevSecOps:** These methodologies are designed for continuous deployment to production, in-line with the minimum viable product (MVP) and with security as part of the tooling.
|
||||
|
||||
|
||||
|
||||
### Making cloud-native concrete
|
||||
|
||||
Cloud-native can seem like an abstract term, but reviewing the definition and thinking like a developer can make it more concrete. In order for cloud-native applications to be successful, they need to include a long, well-defined list of ingredients.
|
||||
|
||||
How are you planning for cloud-native application design? Share your thoughts in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/cloud-native-software
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
|
||||
[2]: https://opensource.com/resources/what-are-microservices
|
||||
[3]: https://opensource.com/article/18/11/open-source-serverless-platforms
|
||||
[4]: https://github.com/cncf/toc/blob/master/DEFINITION.md
|
||||
[5]: https://opensource.com/resources/what-is-kubernetes
|
@ -7,212 +7,199 @@
|
||||
[#]: via: (https://opensource.com/article/21/2/ccc-method-pointers)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
|
||||
A friendly guide to the syntax of C++ method pointers
|
||||
C++ 类成员函数指针语法的友好指南
|
||||
======
|
||||
Once you understand the general principles, C++ method pointers become
|
||||
less intimidating.
|
||||
一旦您理解了一般原则,C++ 类成员函数指针不再那么令人生畏。
|
||||
|
||||
![Person drinking a hot drink at the computer][1]
|
||||
|
||||
If you're looking for performance, complexity, or many possible solutions to solve a problem, [C ++][2] is always a good candidate when it comes to extremes. Of course, functionality usually comes with complexity, but some C++ peculiarities are almost illegible. From my point of view, C++ [method pointers][3] may be the most complex expressions I've ever come across, but I'll start with something simpler.
|
||||
如果您追求性能、复杂性,或者希望有多种可能的方案来解决问题,那么在走向极端时 [C++][2] 总是一个很好的选择。当然,功能通常伴随着复杂性,但是一些 C++ 的特性几乎难以读懂。在我看来,C++ 的[类成员函数指针][3]也许是我接触过的最复杂的表达式,但我会先从一些较简单的东西开始。
|
||||
|
||||
The examples in this article are available in my [GitHub repository][4].
|
||||
文章中的例子可以在我的 [Github 仓库][4] 里找到。
|
||||
|
||||
### C: Pointer to functions
|
||||
### C 语言:函数指针
|
||||
|
||||
Let's begin with some basics: Assume you have a function that takes two integers as arguments and returns an integer:
|
||||
让我们先从一些基础开始:假设您有一个函数,它接收两个整数作为参数并返回一个整数:
|
||||
|
||||
|
||||
```
|
||||
int sum(int a, intb){
|
||||
return a+b;
|
||||
```c
|
||||
int sum(int a, int b) {
|
||||
return a+b;
|
||||
}
|
||||
```
|
||||
|
||||
In plain C, you can create a pointer to this function, assign it to your `sum(...)` function, and call it by dereferencing. The function's signature (arguments, return type) must comply with the pointer's signature. Aside from that, a function pointer behaves like an ordinary pointer:
|
||||
在纯 C 语言中,您可以创建一个指向这个函数的指针,将它指向您的 `sum(...)` 函数,并通过解引用来调用它。函数的签名(参数、返回类型)必须与指针的签名一致。除此之外,函数指针的行为与普通指针相同:
|
||||
|
||||
|
||||
```
|
||||
```c
|
||||
int (*funcPtrOne)(int, int);
|
||||
|
||||
funcPtrOne = ∑
|
||||
funcPtrOne = ∑
|
||||
|
||||
int resultOne = funcPtrOne(2, 5);
|
||||
```
|
||||
|
||||
It gets a bit uglier if you take a pointer as an argument and return a pointer:
|
||||
如果函数以指针作为参数并返回一个指针,情况会变得更丑陋一些:
|
||||
|
||||
|
||||
```
|
||||
```c
|
||||
int *next(int *arrayOfInt){
|
||||
return ++arrayOfInt;
|
||||
return ++arrayOfInt;
|
||||
}
|
||||
|
||||
int *(*funcPtrTwo)(int *intPtr);
|
||||
|
||||
funcPtrTwo = &next;
|
||||
funcPtrTwo = &next;
|
||||
|
||||
int resultTwo = *funcPtrTwo(&array[0]);
|
||||
int resultTwo = *funcPtrTwo(&array[0]);
|
||||
```
|
||||
|
||||
Function pointers in C store the address of a subroutine.
|
||||
C 语言中的函数指针存储着子程序的地址。
|
||||
|
||||
### Pointers to methods
|
||||
### 指向类成员函数的指针
|
||||
|
||||
Let's step into C++: The good news is that you probably won't need to use pointers to methods, except in a few rare cases, like the following one. First, define a class with member functions you already know:
|
||||
让我们进入 C++:好消息是,除了少数罕见情况(比如下面这个例子),您可能并不需要使用类成员函数指针。首先,定义一个带有您已经熟悉的成员函数的类:
|
||||
|
||||
|
||||
```
|
||||
```cpp
|
||||
class MyClass
|
||||
{
|
||||
public:
|
||||
|
||||
int sum(int a, int b) {
|
||||
return a+b;
|
||||
}
|
||||
int sum(int a, int b) {
|
||||
return a+b;
|
||||
}
|
||||
|
||||
};
|
||||
```
|
||||
|
||||
#### 1\. Define a pointer to a method of a certain class type
|
||||
#### 1\. 定义一个指针指向某一个类中一个成员函数
|
||||
|
||||
Declare a pointer to a method of the `MyClass` type. At this point, you don't know the exact method you want to call. You've only declared a pointer to some arbitrary `MyClass` method. Of course, the signature (arguments, return type) matches the `sum(…)` method you want to call later:
|
||||
声明一个指向 `MyClass` 类型成员函数的指针。此时,您还不知道要调用的具体函数,只是声明了一个指向 `MyClass` 中某个任意成员函数的指针。当然,其签名(参数、返回类型)要与您稍后想调用的 `sum(...)` 函数匹配:
|
||||
|
||||
|
||||
```
|
||||
`int (MyClass::*methodPtrOne)(int, int);`
|
||||
```cpp
|
||||
int (MyClass::*methodPtrOne)(int, int);
|
||||
```
|
||||
|
||||
#### 2\. Assign a certain method
|
||||
#### 2\. 赋值给一个具体的函数
|
||||
|
||||
In contrast to C (or [static member functions][5]), method pointers don't point to absolute addresses. Each class type in C++ has a virtual method table (vtable) that stores the address offset for each method. A method pointer refers to a certain entry in the vtable, so it also stores only the offset value. This principle also enables [dynamic dispatch][6].
|
||||
与 C 语言(或[静态成员函数][5])不同,类成员函数指针不指向绝对地址。C++ 中的每个类类型都有一张虚函数表(vtable),存储着每个成员函数的地址偏移量。类成员函数指针引用 vtable 中的某个条目,因此它也只存储偏移值。这一原则也使[动态分发][6]成为可能。
|
||||
|
||||
Because the signature of the `sum(…)` method matches your pointer's declaration, you can assign the signature to it:
|
||||
因为 `sum(...)` 函数的签名与您的指针声明匹配,您可以将其赋值给该指针:
|
||||
|
||||
|
||||
```cpp
methodPtrOne = &MyClass::sum;
```
|
||||
|
||||
#### 3\. Invoke the method
|
||||
#### 3\. 调用成员函数
|
||||
|
||||
If you want to invoke the method with the pointer, you have to provide an instance of the class type:
|
||||
如果您想通过指针调用成员函数,必须提供一个该类类型的实例:
|
||||
|
||||
|
||||
```cpp
MyClass clsInstance;
int result = (clsInstance.*methodPtrOne)(2,3);
```
|
||||
|
||||
You can access the instance with the `.` operator, dereference the pointer with a `*`, and thus call the method by providing two integers as arguments. Ugly, right? But you can still go a step further.
|
||||
您可以使用 `.` 操作符访问实例,使用 `*` 对指针解引用,并以两个整数作为参数来调用该函数。很丑陋,对吧?不过您还可以更进一步。
|
||||
|
||||
### Using method pointers within a class
|
||||
### 在类内使用类成员函数指针
|
||||
|
||||
Assume you are creating an application with a [client/server][7] principle architecture with a backend and a frontend. You don't care about the backend for now; instead, you will focus on the frontend, which is based on a C++ class. The frontend's complete initialization relies on data provided by the backend, so you need an additional initialization mechanism. Also, you want to implement this mechanism generically so that you can extend your frontend with other initialization methods in the future (maybe dynamically).
|
||||
假设您正在创建一个采用 [客户端/服务器][7] 架构、带有后端和前端的应用程序。您暂时不必关心后端,而是专注于基于 C++ 类实现的前端。前端的完整初始化依赖于后端提供的数据,所以您需要一个额外的初始化机制。同时,您希望以通用的方式实现该机制,以便将来(也许是动态地)用其他初始化函数来扩展前端。
|
||||
|
||||
First, define a data type that can store a method pointer to an initialization method (`init`) and the information describing when this method should be called (`ticks`):
|
||||
首先,定义一个数据类型,用来存储指向初始化函数(`init`)的成员函数指针,以及描述应在何时调用该函数的信息(`ticks`):
|
||||
|
||||
|
||||
```cpp
template<typename T>
struct DynamicInitCommand {
    void (T::*init)();  // 指向额外的初始化函数
    unsigned int ticks; // 经过多少个 tick 后调用 init()
};
```
|
||||
|
||||
Here is what the `Frontend` class looks like:
|
||||
下面是 `Frontend` 类的示例代码:
|
||||
|
||||
|
||||
```cpp
class Frontend
{
public:

    Frontend(){
        DynamicInitCommand<Frontend> init1, init2, init3;

        init1 = { &Frontend::dynamicInit1, 5};
        init2 = { &Frontend::dynamicInit2, 10};
        init3 = { &Frontend::dynamicInit3, 15};

        m_dynamicInit.push_back(init1);
        m_dynamicInit.push_back(init2);
        m_dynamicInit.push_back(init3);
    }

    void tick(){
        std::cout << "tick: " << ++m_ticks << std::endl;

        /* 检查延迟初始化 */
        std::vector<DynamicInitCommand<Frontend>>::iterator it = m_dynamicInit.begin();

        while (it != m_dynamicInit.end()){
            if (it->ticks < m_ticks){

                if(it->init)
                    ((*this).*(it->init))(); // 这里是具体调用

                it = m_dynamicInit.erase(it);

            } else {
                it++;
            }
        }
    }

    unsigned int m_ticks{0};

private:

    void dynamicInit1(){
        std::cout << "dynamicInit1 called" << std::endl;
    }

    void dynamicInit2(){
        std::cout << "dynamicInit2 called" << std::endl;
    }

    void dynamicInit3(){
        std::cout << "dynamicInit3 called" << std::endl;
    }

    unsigned int m_initCnt{0};
    std::vector<DynamicInitCommand<Frontend> > m_dynamicInit;
};
```
|
||||
|
||||
After `Frontend` is instantiated, the `tick()` method is called at fixed intervals by the backend. For example, you can call it every 200ms:
|
||||
在 `Frontend` 实例化之后,后端会以固定的时间间隔调用 `tick()` 函数。例如,您可以每 200 毫秒调用一次:
|
||||
|
||||
```cpp
int main(int argc, char* argv[]){
    Frontend frontendInstance;

    while(true){
        frontendInstance.tick(); // 仅用于模拟目的
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}
```
|
||||
|
||||
`Frontend` has three additional initialization methods that must be called based on the value of `m_ticks`. The information about which initialization method to call at which tick is stored in the vector `m_dynamicInit`. In the constructor (`Frontend()`), append this information to the vector so that the additional initialization functions are called after five, 10, and 15 ticks. When the backend calls the `tick()` method, the value `m_ticks` is incremented, and you iterate over the vector `m_dynamicInit` to check whether an initialization method has to be called.
|
||||
|
||||
If this is the case, the method pointer must be dereferenced by referring to `this`:
|
||||
`Frontend` 有三个额外的初始化函数,必须根据 `m_ticks` 的值来调用。哪个初始化函数在哪个 tick 被调用的信息存储在向量 `m_dynamicInit` 中。在构造函数(`Frontend()`)中,将这些信息追加到向量里,以便在 5、10 和 15 个 tick 后调用相应的初始化函数。当后端调用 `tick()` 函数时,`m_ticks` 的值递增,同时遍历向量 `m_dynamicInit`,检查是否需要调用某个初始化函数。
|
||||
|
||||
如果需要调用,则必须通过 `this` 指针来解引用该成员函数指针:
|
||||
|
||||
```cpp
((*this).*(it->init))()
```
|
||||
|
||||
### Summary
|
||||
### 总结
|
||||
|
||||
Method pointers can get a bit complicated if you're not familiar with them. I did a lot of trial and error, and it took time to find the correct syntax. However, once you understand the general principle, method pointers become less terrifying.
|
||||
如果您不熟悉成员函数指针,它们可能会显得有些复杂。我经历了大量的反复试错,花了不少时间才找到正确的语法。不过,一旦理解了基本原理,成员函数指针就不再那么可怕了。
|
||||
|
||||
This is the most complex syntax I have found in C++ so far. Do you know something even worse? Post it in the comments!
|
||||
|
||||
这是迄今为止我在 C++ 中见过的最复杂的语法。您还知道更复杂的吗?请在评论中分享!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
@ -1,268 +0,0 @@
|
||||
[#]: subject: (Identify Linux performance bottlenecks using open source tools)
|
||||
[#]: via: (https://opensource.com/article/21/3/linux-performance-bottlenecks)
|
||||
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Identify Linux performance bottlenecks using open source tools
|
||||
======
|
||||
Not long ago, identifying hardware bottlenecks required deep expertise.
|
||||
Today's open source GUI performance monitors make it pretty simple.
|
||||
![Lightning in a bottle][1]
|
||||
|
||||
Computers are integrated systems that only perform as fast as their slowest hardware component. If one component is less capable than the others—if it falls behind and can't keep up—it can hold your entire system back. That's a _performance bottleneck_. Removing a serious bottleneck can make your system fly.
|
||||
|
||||
This article explains how to identify hardware bottlenecks in Linux systems. The techniques apply to both personal computers and servers. My emphasis is on PCs—I won't cover server-specific bottlenecks in areas such as LAN management or database systems. Those often involve specialized tools.
|
||||
|
||||
I also won't talk much about solutions. That's too big a topic for this article. Instead, I'll write a follow-up article with performance tweaks.
|
||||
|
||||
I'll use only open source graphical user interface (GUI) tools to get the job done. Most articles on Linux bottlenecking are pretty complicated. They use specialized commands and delve deep into arcane details.
|
||||
|
||||
The GUI tools that open source offers make identifying many bottlenecks simple. My goal is to give you a quick, easy approach that you can use anywhere.
|
||||
|
||||
### Where to start
|
||||
|
||||
A computer consists of six key hardware resources:
|
||||
|
||||
* Processors
|
||||
* Memory
|
||||
* Storage
|
||||
* USB ports
|
||||
* Internet connection
|
||||
* Graphics processor
|
||||
|
||||
|
||||
|
||||
Should any one resource perform poorly, it can create a performance bottleneck. To identify a bottleneck, you must monitor these six resources.
|
||||
|
||||
Open source offers a plethora of tools to do the job. I'll use the [GNOME System Monitor][2]. Its output is easy to understand, and you can find it in most repositories.
|
||||
|
||||
Start it up and click on the **Resources** tab. You can identify many performance problems right off.
|
||||
|
||||
![System Monitor - Resources Panel ][3]
|
||||
|
||||
Fig. 1. System Monitor spots problems. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
The **Resources** panel displays three sections: **CPU History**, **Memory and Swap History**, and **Network History**. A quick glance tells you immediately whether your processors are swamped, or your computer is out of memory, or you're using up all your internet bandwidth.
|
||||
|
||||
I'll explore these problems below. For now, check the System Monitor first when your computer slows down. It instantly clues you in on the most common performance problems.
|
||||
|
||||
Now let's explore how to identify bottlenecks in specific areas.
|
||||
|
||||
### How to identify processor bottlenecks
|
||||
|
||||
To spot a bottleneck, you must first know what hardware you have. Open source offers several tools for this purpose. I like [HardInfo][5] because its screens are easy to read and it's widely popular.
|
||||
|
||||
Start up HardInfo. Its **Computer -> Summary** panel identifies your CPU and tells you about its cores, threads, and speeds. It also identifies your motherboard and other computer components.
|
||||
|
||||
![HardInfo Summary Panel][6]
|
||||
|
||||
Fig. 2. HardInfo shows hardware details. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
HardInfo reveals that this computer has one physical CPU chip. That chip contains two processors, or cores. Each core supports two threads, or logical processors. That's a total of four logical processors—exactly what System Monitor's CPU History section showed in Fig. 1.
|
||||
|
||||
A _processor bottleneck_ occurs when processors can't respond to requests for their time. They're already busy.
|
||||
|
||||
You can identify this when System Monitor shows logical processor utilization at over 80% or 90% for a sustained period. Here's an example where three of the four logical processors are swamped at 100% utilization. That's a bottleneck because it doesn't leave much CPU for any other work.
|
||||
|
||||
![System Monitor processor bottleneck][7]
|
||||
|
||||
Fig. 3. A processor bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
#### Which app is causing the problem?
|
||||
|
||||
You need to find out which program(s) is consuming all that CPU. Click on System Monitor's **Processes** tab. Then click on the **% CPU** header to sort the processes by how much CPU they're consuming. You'll see which apps are throttling your system.
|
||||
|
||||
![System Monitor Processes panel][8]
|
||||
|
||||
Fig. 4. Identifying the offending processes. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
The top three processes each consume 24% of the _total_ CPU resource. Since there are four logical processors, this means each consumes an entire processor. That's just as Fig. 3 shows.
|
||||
|
||||
The **Processes** panel identifies a program named **analytical_AI** as the culprit. You can right-click on it in the panel to see more details on its resource consumption, including memory use, the files it has open, its input/output details, and more.
|
||||
|
||||
If your login has administrator privileges, you can manage the process. You can change its priority and stop, continue, end, or kill it. So, you could immediately resolve your bottleneck here.
|
||||
|
||||
![System Monitor managing a process][9]
|
||||
|
||||
Fig. 5. Right-click on a process to manage it. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
How do you fix processing bottlenecks? Beyond managing the offending process in real time, you could prevent the bottleneck from happening. For example, you might substitute another app for the offender, work around it, change your behavior when using that app, schedule the app for off-hours, address an underlying memory issue, performance-tweak the app or your system software, or upgrade your hardware. That's too much to cover here, so I'll explore those options in my next article.
|
||||
|
||||
#### Common processor bottlenecks
|
||||
|
||||
You'll encounter several common bottlenecks when monitoring your CPUs with System Monitor.
|
||||
|
||||
Sometimes one logical processor is bottlenecked while all the others are at low utilization. This means you have an app that's not coded smartly enough to take advantage of more than one logical processor, and it's maxed out the one it's using. That app will take longer to finish than it would if it used more processors. On the other hand, at least it leaves your other processors free for other work and doesn't take over your computer.
|
||||
|
||||
You might also see a logical processor stuck forever at 100% utilization. Either it's very busy, or a process is hung. The way to tell if it's hung is if the process never does any disk activity (as the System Monitor **Processes** panel will show).
|
||||
|
||||
Finally, you might notice that when all your processors are bottlenecked, your memory is fully utilized, too. Out-of-memory conditions sometimes cause processor bottlenecks. In this case, you want to solve the underlying memory problem, not the symptomatic CPU issue.
|
||||
|
||||
### How to identify memory bottlenecks
|
||||
|
||||
Given the large amount of memory in modern PCs, memory bottlenecks are much less common than they once were. Yet you can still run into them if you run memory-intensive programs, especially if you have a computer that doesn't contain much random access memory (RAM).
|
||||
|
||||
Linux [uses memory][10] both for programs and to cache disk data. The latter speeds up disk data access. Linux can reclaim that memory any time it needs it for program use.
|
||||
|
||||
The System Monitor's **Resources** panel displays your total memory and how much of it is used. In the **Processes** panel, you can see individual processes' memory use.
|
||||
|
||||
Here's the portion of the System Monitor **Resources** panel that tracks aggregate memory use:
|
||||
|
||||
![System Monitor memory bottleneck][11]
|
||||
|
||||
Fig. 6. A memory bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
To the right of Memory, you'll notice [Swap][12]. This is disk space Linux uses when it runs low on memory. It writes memory to disk to continue operations, effectively using swap as a slower extension to your RAM.
|
||||
|
||||
The two memory performance problems you'll want to look out for are:
|
||||
|
||||
> 1. Memory appears largely used, and you see frequent or increasing activity on the swap space.
|
||||
> 2. Both memory and swap are largely used up.
|
||||
>
|
||||
|
||||
|
||||
Situation 1 means slower performance because swap is always slower than memory. Whether you consider it a performance problem depends on many factors (e.g., how active your swap space is, its speed, your expectations, etc.). My opinion is that anything more than token swap use is unacceptable for a modern personal computer.
|
||||
|
||||
Situation 2 is where both memory and swap are largely in use. This is a _memory bottleneck._ The computer becomes unresponsive. It could even fall into a state of _thrashing_, where it accomplishes little more than memory management.
|
||||
|
||||
Fig. 6 above shows an old computer with only 2GB of RAM. As memory use surpassed 80%, the system started writing to swap. Responsiveness declined. This screenshot shows over 90% memory use, and the computer is unusable.
|
||||
|
||||
The ultimate answer to memory problems is to either use less of it or buy more. I'll discuss solutions in my follow-up article.
|
||||
|
||||
### How to identify storage bottlenecks
|
||||
|
||||
Storage today comes in several varieties of solid-state and mechanical hard disks. Device interfaces include PCIe, SATA, Thunderbolt, and USB. Regardless of which type of storage you have, you use the same procedure to identify disk bottlenecks.
|
||||
|
||||
Start with System Monitor. Its **Processes** panel displays the input/output rates for individual processes. So you can quickly identify which processes are doing the most disk I/O.
|
||||
|
||||
But the tool doesn't show the _aggregate data transfer rate per disk._ You need to see the total load on a specific disk to determine if that disk is a storage bottleneck.
|
||||
|
||||
To do so, use the [atop][13] command. It's available in most Linux repositories.
|
||||
|
||||
Just type `atop` at the command-line prompt. The output below shows that device `sdb` is `busy 101%`. Clearly, it's reached its performance limit and is restricting how fast your system can get work done.
|
||||
|
||||
![atop disk bottleneck][14]
|
||||
|
||||
Fig. 7. The atop command identifies a disk bottleneck. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
Notice that one of the CPUs is waiting on the disk to do its job 85% of the time (`cpu001 w 85%`). This is typical when a storage device becomes a bottleneck. In fact, many look first at CPU I/O waits to spot storage bottlenecks.
|
||||
|
||||
So, to easily identify a storage bottleneck, use the `atop` command. Then use the **Processes** panel on System Monitor to identify the individual processes that are causing the bottleneck.
|
||||
|
||||
### How to identify USB port bottlenecks
|
||||
|
||||
Some people use their USB ports all day long. Yet, they never check if those ports are being used optimally. Whether you plug in an external disk, a memory stick, or something else, you'll want to verify that you're getting maximum performance from your USB-connected devices.
|
||||
|
||||
This chart shows why. Potential USB data transfer rates vary _enormously_.
|
||||
|
||||
![USB standards][15]
|
||||
|
||||
Fig. 8. USB speeds vary a lot. (Howard Fosdick, based on figures provided by [Tripplite][16] and [Wikipedia][17], [CC BY-SA 4.0][4])
|
||||
|
||||
HardInfo's **USB Devices** tab displays the USB standards your computer supports. Most computers offer more than one speed. How can you tell the speed of a specific port? Vendors color-code them, as shown in the chart. Or you can look in your computer's documentation.
|
||||
|
||||
To see the actual speeds you're getting, test by using the open source [GNOME Disks][18] program. Just start up GNOME Disks, select its **Benchmark Disk** feature, and run a benchmark. That tells you the maximum real speed you'll get for a port with the specific device plugged into it.
|
||||
|
||||
You may get different transfer speeds for a port, depending on which device you plug into it. Data rates depend on the particular combination of port and device.
|
||||
|
||||
For example, a device that could fly at 3.1 speed will use a 2.0 port—at 2.0 speed—if that's what you plug it into. (And it won't tell you it's operating at the slower speed!) Conversely, if you plug a USB 2.0 device into a 3.1 port, it will work, but at the 2.0 speed. So to get fast USB, you must ensure both the port and the device support it. GNOME Disks gives you the means to verify this.
|
||||
|
||||
To identify a USB processing bottleneck, use the same procedure you did for solid-state and hard disks. Run the `atop` command to spot a USB storage bottleneck. Then, use System Monitor to get the details on the offending process(es).
|
||||
|
||||
### How to identify internet bandwidth bottlenecks
|
||||
|
||||
The System Monitor **Resources** panel tells you in real time what internet connection speed you're experiencing (see Fig. 1).
|
||||
|
||||
There are [great Python tools out there][19] to test your maximum internet speed, but you can also test it on websites like [Speedtest][20], [Fast.com][21], and [Speakeasy][22]. For best results, close everything and run _only_ the speed test; turn off your VPN; run tests at different times of day; and compare the results from several testing sites.
|
||||
|
||||
Then compare your results to the download and upload speeds that your vendor claims you're getting. That way, you can confirm you're getting the speeds you're paying for.
|
||||
|
||||
If you have a separate router, test with and without it. That can tell you if your router is a bottleneck. If you use WiFi, test with it and without it (by directly cabling your laptop to the modem). I've often seen people complain about their internet vendor when what they actually have is a WiFi bottleneck they could fix themselves.
|
||||
|
||||
If some program is consuming your entire internet connection, you want to know which one. Find it by using the `nethogs` command. It's available in most repositories.
|
||||
|
||||
The other day, my System Monitor suddenly showed my internet access spiking. I just typed `nethogs` in the command line, and it instantly identified the bandwidth consumer as a Clamav antivirus update.
|
||||
|
||||
![Nethogs][23]
|
||||
|
||||
Fig. 9. Nethogs identifies bandwidth consumers. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
### How to identify graphics processing bottlenecks
|
||||
|
||||
If you plug your monitor into the motherboard in the back of your desktop computer, you're using _onboard graphics_. If you plug it into a card in the back, you have a dedicated graphics subsystem. Most call it a _video card_ or _graphics card._ For desktop computers, add-in cards are typically more powerful and more expensive than motherboard graphics. Laptops always use onboard graphics.
|
||||
|
||||
HardInfo's **PCI Devices** panel tells you about your graphics processing unit (GPU). It also displays the amount of dedicated video memory you have (look for the memory marked "prefetchable").
|
||||
|
||||
![Video Chipset Information][24]
|
||||
|
||||
Fig. 10. HardInfo provides graphics processing information. (Howard Fosdick, [CC BY-SA 4.0][4])
|
||||
|
||||
CPUs and GPUs work [very closely][25] together. To simplify, the CPU prepares frames for the GPU to render, then the GPU renders the frames.
|
||||
|
||||
A _GPU bottleneck_ occurs when your CPUs are waiting on a GPU that is 100% busy.
|
||||
|
||||
To identify this, you need to monitor CPU and GPU utilization rates. Open source monitors like [Conky][26] and [Glances][27] do this if their extensions work with your graphics chipset.
|
||||
|
||||
Take a look at this example from Conky. You can see that this system has a lot of available CPU. The GPU is only 25% busy. Imagine if that GPU number were instead near 100%. Then you'd know that the CPUs were waiting on the GPU, and you'd have a GPU bottleneck.
|
||||
|
||||
![Conky CPU and GPU monitoring][28]
|
||||
|
||||
Fig. 11. Conky displays CPU and GPU utilization. (Image courtesy of [AskUbuntu forum][29])
|
||||
|
||||
On some systems, you'll need a vendor-specific tool to monitor your GPU. They're all downloadable from GitHub and are described in this article on [GPU monitoring and diagnostic command-line tools][30].
|
||||
|
||||
### Summary
|
||||
|
||||
Computers consist of a collection of integrated hardware resources. Should any of them fall way behind the others in its workload, it creates a performance bottleneck. That can hold back your entire system. You need to be able to identify and correct bottlenecks to achieve optimal performance.
|
||||
|
||||
Not so long ago, identifying bottlenecks required deep expertise. Today's open source GUI performance monitors make it pretty simple.
|
||||
|
||||
In my next article, I'll discuss specific ways to improve your Linux PC's performance. Meanwhile, please share your own experiences in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/linux-performance-bottlenecks
|
||||
|
||||
作者:[Howard Fosdick][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/howtech
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightning.png?itok=wRzjWIlm (Lightning in a bottle)
|
||||
[2]: https://wiki.gnome.org/Apps/SystemMonitor
|
||||
[3]: https://opensource.com/sites/default/files/uploads/1_system_monitor_resources_panel.jpg (System Monitor - Resources Panel )
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://itsfoss.com/hardinfo/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/2_hardinfo_summary_panel.jpg (HardInfo Summary Panel)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/3_system_monitor_100_processor_utilization.jpg (System Monitor processor bottleneck)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/4_system_monitor_processes_panel.jpg (System Monitor Processes panel)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/5_system_monitor_manage_a_process.jpg (System Monitor managing a process)
|
||||
[10]: https://www.networkworld.com/article/3394603/when-to-be-concerned-about-memory-levels-on-linux.html
|
||||
[11]: https://opensource.com/sites/default/files/uploads/6_system_monitor_out_of_memory.jpg (System Monitor memory bottleneck)
|
||||
[12]: https://opensource.com/article/18/9/swap-space-linux-systems
|
||||
[13]: https://opensource.com/life/16/2/open-source-tools-system-monitoring
|
||||
[14]: https://opensource.com/sites/default/files/uploads/7_atop_storage_bottleneck.jpg (atop disk bottleneck)
|
||||
[15]: https://opensource.com/sites/default/files/uploads/8_usb_standards_speeds.jpg (USB standards)
|
||||
[16]: https://www.samsung.com/us/computing/memory-storage/solid-state-drives/
|
||||
[17]: https://en.wikipedia.org/wiki/USB
|
||||
[18]: https://wiki.gnome.org/Apps/Disks
|
||||
[19]: https://opensource.com/article/20/1/internet-speed-tests
|
||||
[20]: https://www.speedtest.net/
|
||||
[21]: https://fast.com/
|
||||
[22]: https://www.speakeasy.net/speedtest/
|
||||
[23]: https://opensource.com/sites/default/files/uploads/9_nethogs_bandwidth_consumers.jpg (Nethogs)
|
||||
[24]: https://opensource.com/sites/default/files/uploads/10_hardinfo_video_card_information.jpg (Video Chipset Information)
|
||||
[25]: https://www.wepc.com/tips/cpu-gpu-bottleneck/
|
||||
[26]: https://itsfoss.com/conky-gui-ubuntu-1304/
|
||||
[27]: https://opensource.com/article/19/11/monitoring-linux-glances
|
||||
[28]: https://opensource.com/sites/default/files/uploads/11_conky_cpu_and_gup_monitoring.jpg (Conky CPU and GPU monitoring)
|
||||
[29]: https://askubuntu.com/questions/387594/how-to-measure-gpu-usage
|
||||
[30]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-gpu-monitoring-and-diagnostic-commands/
|
@ -1,202 +0,0 @@
|
||||
[#]: subject: (4 essential characteristics of successful APIs)
|
||||
[#]: via: (https://opensource.com/article/21/5/successful-apis)
|
||||
[#]: author: (Tom Wilson https://opensource.com/users/tomwillson4014)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (ywxgod)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
4 essential characteristics of successful APIs
|
||||
======
|
||||
An API needs to do much more than "just work."
|
||||
![Looking at a map][1]
|
||||
|
||||
If you are building an application that uses some variation of a client/server model, you need an application programming interface (API). An API is a clearly defined boundary between one process and another. A common boundary in web applications is a REST/JSON API.
|
||||
|
||||
While developers may be mainly focused on making the API work (or function), there are some "non-functional" requirements that need their attention. Four _must-have_ non-functional requirements for all APIs are:
|
||||
|
||||
* Security
|
||||
* Documentation
|
||||
* Validation
|
||||
* Testing
|
||||
|
||||
|
||||
|
||||
### Security
|
||||
|
||||
Security is an essential requirement in software development. There are four areas for API developers to include regarding security:
|
||||
|
||||
1. HTTPS/SSL certificates
|
||||
2. Cross-origin resource sharing
|
||||
3. Authentication and JSON Web Tokens
|
||||
4. Authorizations and scopes
|
||||
|
||||
|
||||
|
||||
#### 1\. HTTPS/SSL certificates
|
||||
|
||||
The gold standard for the web is HTTPS using SSL certificates, and [Let's Encrypt][2] can help you achieve this. It is a free, automated, and open certificate authority from the non-profit Internet Security Research Group (ISRG).
|
||||
|
||||
Let's Encrypt's software generates central authority certificates for your domain. These certificates ensure payloads of data from your API to the client are encrypted from point to point.
|
||||
|
||||
Let's Encrypt supports several deployment options for certificate management; check out its [documentation][3] to find the right solution for your needs.
|
||||
|
||||
#### 2\. Cross-origin resource sharing
|
||||
|
||||
CORS is a browser-specific security policy preflight check. If your API server is not in the same domain as the requesting client's domain, you will need to deal with CORS. For example, if your server is running on **api.domain-a.com** and gets a client request from **domain-b.com**, CORS sends an HTTP preflight request to see if your API service will accept client-side requests from the client's domain.
|
||||
|
||||
[According to MDN][4]:
|
||||
|
||||
> "Cross-origin resource sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any other origins (domain, scheme, or port) than its own from which a browser should permit loading of resources."
|
||||
|
||||
![CORS principles][5]
|
||||
|
||||
([MDN Web Docs][4], [CC-BY-SA 2.5][6])
|
||||
|
||||
There are many helper libraries for [Node.js][7] to [help API developers with CORS][8].
|
||||
|
||||
#### 3\. Authentication and JSON Web Tokens
|
||||
|
||||
There are several approaches to validate an authenticated user in your API, but one of the best ways is to use JSON Web Tokens (JWT). These tokens are signed using various types of well-known cryptographic libraries.
|
||||
|
||||
When a client logs in, an identity-management service provides the client with a JWT. The client can then use this token to make requests to the API. The API has access to a public key or a secret that it uses to verify the token.
|
||||
|
||||
There are several libraries available to help verify tokens, including [jsonwebtoken][9]. For more information about JWT and the libraries that support it in every language, check out [JWT.io][10].
|
||||
|
||||
![JWT verification example][11]
|
||||
|
||||
(Tom Wilson, [Hyper63 blog][12])
|
||||
|
||||
#### 4\. Authorizations and scopes
|
||||
|
||||
Authentication (or identity verification) is important, but so is authorization, i.e., _does the verified client have the privilege to execute this request?_ This is where **scopes** are valuable. When the client authenticates with the identity-management server and a JWT is created, the identity-management service can include the authenticated client's scopes in the token. The API service can then determine whether a verified client request is permitted without an additional, costly lookup against an access-control list.
|
||||
|
||||
A scope is a text block (usually space-delimited) that describes the access capability of an API endpoint. Scopes are normally broken down into resources and actions. This pattern works well for REST/JSON APIs since they are structured very similarly, in a RESOURCE:ACTION format (e.g., ARTICLE:WRITE or ARTICLE:READ, where ARTICLE is the resource and READ and WRITE are the actions).
|
||||
|
||||
This allows the API to focus on function and not roles or users. The identity access management service can relate roles and users to scopes, then provide the scopes to the client in a verified JWT.
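With scopes delivered in the token, the authorization check itself can be tiny. This sketch assumes the identity service put a space-delimited `scope` claim into the verified JWT; the claim and scope names are illustrative:

```javascript
// Check a RESOURCE:ACTION scope against the claims of a verified JWT.
// Assumes a space-delimited `scope` claim, e.g. "ARTICLE:READ ARTICLE:WRITE".
function hasScope(claims, required) {
  const granted = (claims.scope || "").split(" ");
  return granted.includes(required);
}

// Hypothetical usage in a request handler:
// if (!hasScope(claims, "ARTICLE:WRITE")) return res.writeHead(403).end();
```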
|
||||
|
||||
#### Summary
|
||||
|
||||
When building and deploying APIs, security should always be one of the most important requirements. While security is a broad topic, addressing these four areas will position your API well for production environments.
|
||||
|
||||
### Documentation
|
||||
|
||||
_What's worse than no documentation? Outdated documentation._
|
||||
|
||||
Developers have a love–hate relationship with documentation. Still, documentation is a crucial part of an API's definition of success. Developers need to know how to use the API, and the documentation you create plays a huge role in educating developers on how to best use it.
|
||||
|
||||
There are three areas to focus on in API documentation:
|
||||
|
||||
1. Developer onboarding (READMEs)
|
||||
2. Technical reference (Specifications)
|
||||
3. Usage (Getting started and other guides)
|
||||
|
||||
|
||||
|
||||
#### 1\. Developer onboarding
|
||||
|
||||
When building an API service, you need to specify things like: What does the API do? How do you set up a developer environment? How do you test the service? How do you submit an issue? How do you deploy it?
|
||||
|
||||
The usual way to answer these questions is with a README file. It is the file in your code repository that gives developers a starting point for working with your project.
|
||||
|
||||
A README should contain:
|
||||
|
||||
* A description of the API
|
||||
* Links to technical references and guides
|
||||
* How to set up the project as a developer
|
||||
* How to test the project
|
||||
* How to deploy the project
|
||||
* Dependency management
|
||||
* Contribution guide
|
||||
* Code of conduct
|
||||
* License
|
||||
* Gratitude
|
||||
|
||||
|
||||
|
||||
Be concise in your README; you do not have to explain every aspect, but give enough information that developers can drill deeper as they become familiar with your project.
|
||||
|
||||
#### 2\. Technical reference
|
||||
|
||||
In a REST/JSON API, every endpoint is a specific function with a purpose. It is important to have technical documentation that specifies each endpoint; defines the description, inputs, and outputs that can occur; and provides examples for various clients.
|
||||
|
||||
REST/JSON has a specification standard called [OpenAPI][13], which can guide you through the details required to document an API. OpenAPI can also generate presentation documentation for your API.
|
||||
|
||||
#### 3\. Usage
|
||||
|
||||
Your API's users want more than just technical specifications. They want to know how to use your API in specific situations or cases. Most potential users have a problem and they are looking to your API to solve it.
|
||||
|
||||
A great way to introduce users to your API is with a "getting started" page. This can walk the user through a simple use case that gets them up to speed quickly on the benefits of your API.
|
||||
|
||||
#### Summary
|
||||
|
||||
Documentation is a key component of any successful API. When creating documentation, think about the three areas of focus—onboarding, technical, and usage—cover those bases, and you will have a well-documented API.
|
||||
|
||||
### Validation
|
||||
|
||||
One of the most often overlooked aspects of API development is validation. Validation is the process of verifying input from external sources. These sources might be a client sending JSON or a service responding to your request. More than just checking types, ensuring that the data is what it is supposed to be can eliminate many potential problems. Understanding your boundaries and what you do and don't have control over is an important aspect of validation.
|
||||
|
||||
The best strategy is to validate at the edges before your logic takes place. When a client sends your API some data, apply validation before you do anything else with that data. Make sure an email is an actual email address, a date is properly formatted, and a string meets length requirements.
|
||||
|
||||
This simple check will add safety and consistency to your application. Also, when you receive data from a service, like a database or a cache, revalidate it to make sure the returned result meets your data checks.
|
||||
|
||||
You can always validate by hand or use utility function libraries like [Lodash][14] or [Ramda][15]. These work great for small data objects. Validation libraries like [Joi][16], [Yup][17], or [Zod][18] work even better, as they contain common validations that can save time and effort and create a very readable schema. If you need something language-agnostic, look at [JSON Schema][19].
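As a hand-rolled illustration of validating at the edge, the sketch below checks an email, a date, and a string length for a hypothetical signup payload; a schema library like Joi, Yup, or Zod would express the same rules declaratively:

```javascript
// Dependency-free edge validation for a hypothetical signup payload.
// A schema library (Joi/Yup/Zod) would replace this with a declarative schema.
function validateSignup(input) {
  const errors = [];
  if (typeof input.email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input.email)) {
    errors.push("email must be a valid email address");
  }
  if (typeof input.birthDate !== "string" || Number.isNaN(Date.parse(input.birthDate))) {
    errors.push("birthDate must be a properly formatted date");
  }
  if (typeof input.username !== "string" || input.username.length < 3 || input.username.length > 32) {
    errors.push("username must be 3 to 32 characters");
  }
  return { ok: errors.length === 0, errors };
}
```

The same function can also re-check data coming back from a database or cache before it enters your business logic.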
|
||||
|
||||
#### Summary
|
||||
|
||||
Validation is not sexy, but it can save a ton of time that would otherwise be spent troubleshooting and writing data-migration scripts. Don't make the mistake of trusting your client to send clean data; you don't want bad data leaking into your business logic or persistent data store. Take the time to validate your API endpoints and service responses. While it may cause some frustration upfront, it is much easier to loosen the reins later than to tighten them.
|
||||
|
||||
### Testing
|
||||
|
||||
Testing is a best practice for software development and should be a primary non-functional requirement. Defining a test strategy can be a challenge for any project, including APIs. Always understand your constraints and define your strategy accordingly.
|
||||
|
||||
Integration testing is one of the most effective methods for testing APIs. In this pattern, the development team creates a test to cover some part of the application flow, from one specific point to another. A great integration test flow includes testing the API's entry point and mocking the outbound request to the backing service. By picking those two points, you cover the entire logic from the beginning of the API request to the service request, and the mock gives you a response to hand back in the API response.
|
||||
|
||||
Although it uses mocks, this method allows you to focus on the code in the logic layer and not depend on back-end services or presentation logic to run the test. Having no dependencies makes running the test much more reliable, easier to automate, and simpler to include in your continuous integration pipeline.
|
||||
|
||||
One setup I use for integration testing uses [Tape][20], [Test-server][21], and [Fetch-mock][22]. These libraries enable me to run isolated tests against API endpoints from the request to the response, with Fetch-mock catching the outbound request to the persistence layer.
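The boundary-mocking idea can be sketched without any framework by injecting the outbound `fetch` as a dependency; Tape and Fetch-mock supply the equivalent plumbing in a real project. The service URL and data shapes below are illustrative:

```javascript
// Integration-test sketch: the handler receives its outbound fetch as a
// dependency, so a test can mock the persistence layer at the boundary.
// The service URL is hypothetical.
async function getArticleHandler(id, fetchFn) {
  const res = await fetchFn(`https://data.example.com/articles/${id}`);
  const article = await res.json();
  return { status: 200, body: article };
}

// A minimal stand-in for the persistence service's response.
function mockFetch(fixture) {
  return async () => ({ json: async () => fixture });
}
```

In a real suite, Fetch-mock can intercept the global `fetch` instead, so production code does not need an explicit injection seam.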
|
||||
|
||||
#### Summary
|
||||
|
||||
While all types of testing and type checking are beneficial to APIs, integration testing offers the largest benefit in terms of effectiveness vs. time to build and manage. Using tools like Fetch-mock can provide clean mocking scenarios at the service boundary.
|
||||
|
||||
### Focus on the fundamentals
|
||||
|
||||
As you design and build your application and API, make sure to include these four fundamentals. These are not the only non-functional requirements to consider; others include application monitoring, logging, and API management. Even so, security, documentation, validation, and testing are crucial focus points for designing and building a successful API for any use case.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/5/successful-apis
|
||||
|
||||
作者:[Tom Wilson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tomwillson4014
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
|
||||
[2]: https://letsencrypt.org/
|
||||
[3]: https://letsencrypt.org/docs/
|
||||
[4]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
|
||||
[5]: https://opensource.com/sites/default/files/uploads/cors_principle_1.png (CORS principles)
|
||||
[6]: https://creativecommons.org/licenses/by-sa/2.5/
|
||||
[7]: https://nodejs.org
|
||||
[8]: https://www.npmjs.com/search?q=CORS
|
||||
[9]: https://github.com/auth0/node-jsonwebtoken
|
||||
[10]: https://jwt.io
|
||||
[11]: https://opensource.com/sites/default/files/uploads/jwt-verify-example.png (JWT verification example)
|
||||
[12]: https://blog.hyper63.com/content/images/2021/03/jwt-verify-example.png
|
||||
[13]: https://spec.openapis.org/oas/v3.1.0
|
||||
[14]: https://lodash.com
|
||||
[15]: https://ramdajs.com/
|
||||
[16]: https://joi.dev/
|
||||
[17]: https://github.com/jquense/yup
|
||||
[18]: https://github.com/colinhacks/zod/tree/v3
|
||||
[19]: https://json-schema.org/
|
||||
[20]: https://github.com/substack/tape
|
||||
[21]: https://github.com/twilson63/test-server
|
||||
[22]: http://www.wheresrhys.co.uk/fetch-mock/
|
|
||||
[#]: subject: (Manage your Raspberry Pi with Cockpit)
|
||||
[#]: via: (https://opensource.com/article/21/5/raspberry-pi-cockpit)
|
||||
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (ShuyRoy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Manage your Raspberry Pi with Cockpit
|
||||
======
|
||||
Establish central control over your Raspberry Pis with Cockpit.
|
||||
![Neon colorized Raspberry Pi cluster with LEGOs][1]
|
||||
|
||||
Last year, I wrote about using [Cockpit to manage my Linux servers][2]. It is a web-based tool that gives you a clean, powerful interface for managing multiple servers and their associated services and applications. It also eases regular day-to-day administrative tasks.
|
||||
|
||||
In this article, I'll describe how to install the Cockpit web console for Linux servers on the Raspberry Pi operating system (OS), the standard OS provided by the Raspberry Pi Foundation. I'll also provide brief descriptions of its features.
|
||||
|
||||
### Installing Cockpit on Raspberry Pi OS
|
||||
|
||||
Log into your Raspberry Pi system over secure shell (SSH) using an account with sudo privileges. Set up such an account if you haven't already done so:
|
||||
|
||||
|
||||
```
|
||||
$ ssh pibox
|
||||
alan@pibox's password:
|
||||
Linux pibox.someplace.org 5.10.17-v7+ #1403 SMP Mon Feb 22 11:29:51 GMT 2021 armv7l
|
||||
|
||||
The programs included with the Debian GNU/Linux system are free software;
|
||||
the exact distribution terms for each program are described in the
|
||||
individual files in /usr/share/doc/*/copyright.
|
||||
|
||||
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
|
||||
permitted by applicable law.
|
||||
Last login: Tue May 4 09:55:57 2021 from 172.1.4.5
|
||||
alan@pibox:~ $
|
||||
```
|
||||
|
||||
The command to install the Cockpit web console is as simple on Raspberry Pi OS as it is on Linux servers:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install cockpit`
|
||||
```
|
||||
|
||||
Cockpit only requires 60.4 kB of disk space. Together with its several package dependencies, total usage is 115 MB.
|
||||
|
||||
The installation process will take care of setting up and starting the services. You can verify the status by using the `systemctl` command:
|
||||
|
||||
|
||||
```
|
||||
$ systemctl status cockpit.socket
|
||||
● cockpit.socket - Cockpit Web Service Socket
|
||||
Loaded: loaded (/lib/systemd/system/cockpit.socket; enabled; vendor preset: enabled)
|
||||
Active: active (listening) since Tue 2021-05-04 10:24:43 EDT; 35s ago
|
||||
Docs: man:cockpit-ws(8)
|
||||
Listen: 0.0.0.0:9090 (Stream)
|
||||
Process: 6563 ExecStartPost=/usr/share/cockpit/motd/update-motd localhost (code=exited, status=0/SUCCESS)
|
||||
Process: 6570 ExecStartPost=/bin/ln -snf active.motd /run/cockpit/motd (code=exited, status=0/SUCCESS)
|
||||
Tasks: 0 (limit: 2181)
|
||||
CGroup: /system.slice/cockpit.socket
|
||||
```
|
||||
|
||||
### Using Cockpit
|
||||
|
||||
#### Connecting
|
||||
|
||||
The default listening port is 9090. Open your favorite web browser and enter the address, e.g., `https://pibox:9090`.
|
||||
|
||||
![Cockpit home page][3]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
You can now log in with your regular user account. Again, it is helpful to have sudo privileges on this account—most likely the same one you use to SSH and run Apt. Be sure to check the box for "Reuse my password for privileged tasks".
|
||||
|
||||
#### Managing your Pi
|
||||
|
||||
Cockpit's initial screen opens to **System** and provides details and graphs of current CPU and memory usage.
|
||||
|
||||
![Initial Cockpit screen][5]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
You can view hardware details from this screen.
|
||||
|
||||
![Cockpit hardware details][6]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
Explore the column on the left by clicking each item (Logs, Storage, Services, and so on). These are the standard Cockpit sections and are fairly self-explanatory. Let me quickly describe each.
|
||||
|
||||
#### Logs
|
||||
|
||||
This section shows the logs. They can be filtered by date and severity.
|
||||
|
||||
#### Storage
|
||||
|
||||
The storage section shows the physical drives and RAID devices that are installed. Details such as size and serial number are shown, along with graphs of read/write activity and actual space usage. Storage-specific logs are presented at the bottom.
|
||||
|
||||
#### Networking
|
||||
|
||||
This section displays send and receive activity, IP addresses, and network-specific logs. You can also add more networking devices, such as bonds, bridges, and VLANs, using the respective buttons.
|
||||
|
||||
#### Accounts
|
||||
|
||||
Existing accounts are shown here. Click each one to manage it, or use the _Create New Account_ button to add users. Accounts can also be deleted here.
|
||||
|
||||
#### Services
|
||||
|
||||
This section allows the administrator to view the status of all of the system services. Clicking any service takes you to a screen with the standard tasks of start, restart, and disable.
|
||||
|
||||
#### Applications
|
||||
|
||||
Normally, this screen provides various applications for managing functions such as the 389 Directory Server or creating Podman containers. On my Raspberry Pi OS, though, this screen only displayed the message, "No applications installed or available." Perhaps this had not yet been implemented at the time of writing, although you do have to wonder whether these types of processes would be too heavy for Raspberry Pi hardware.
|
||||
|
||||
#### Software Updates
|
||||
|
||||
Keeping software up to date is one of the most important tasks for any system administrator. Cockpit's Software Updates section checks and applies updates.
|
||||
|
||||
![Software updates in Cockpit][7]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
#### Terminal
|
||||
|
||||
One of Cockpit's neatest features is the terminal. You can use it instead of opening a separate terminal emulator and using SSH. I used the terminal to install [ScreenFetch][8]:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install screenfetch`
|
||||
```
|
||||
|
||||
And I used ScreenFetch to produce this screenshot:
|
||||
|
||||
![Terminal in Cockpit][9]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
### Centralized control with Cockpit
|
||||
|
||||
Cockpit behaves on Raspberry Pi just like it does on any other Linux system. You can add it to a dashboard for centralized control. It allows organizations to integrate Raspberry Pi-based services and systems into their overall Linux infrastructure anywhere Cockpit is used as a management dashboard solution. This is highly convenient, given that Pis are often run headless in high-density racked data centers that generally lack KVM access.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/5/raspberry-pi-cockpit
|
||||
|
||||
作者:[Alan Formy-Duval][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[RiaXu](https://github.com/ShuyRoy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/alanfdoss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi_kuberenetes_cluster_lead2_0.jpeg?itok=kx0Zc0NK (Neon colorized Raspberry Pi cluster with LEGOs)
|
||||
[2]: https://opensource.com/article/20/11/cockpit-server-management
|
||||
[3]: https://opensource.com/sites/default/files/uploads/cockpit_homepage.png (Cockpit home page)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/cockpit_initialscreen.png (Initial Cockpit screen)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/hardware_details.png (Cockpit hardware details)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/software_updates.png (Software updates in Cockpit)
|
||||
[8]: https://opensource.com/article/20/1/screenfetch-neofetch
|
||||
[9]: https://opensource.com/sites/default/files/uploads/pi_cockpit_terminal.png (Terminal in Cockpit)
|
|
||||
[#]: subject: (How to Install and Use XRDP on Ubuntu for Remote Desktop Connection)
|
||||
[#]: via: (https://itsfoss.com/xrdp-ubuntu/)
|
||||
[#]: author: (Hunter Wittenborn https://itsfoss.com/author/hunter/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
How to Install and Use XRDP on Ubuntu for Remote Desktop Connection
|
||||
======
|
||||
|
||||
_**Brief: This is a beginner’s guide that shows the steps you need to follow for setting up XRDP on Ubuntu-based Linux distributions. With that, you can access your Ubuntu system from a different computer and use it graphically.**_
|
||||
|
||||
[Microsoft Remote Desktop Protocol][1] (RDP) is a protocol that allows for graphical remote desktop connections from one computer to another. RDP works by having a main machine run software that allows several other computers to connect to it.
|
||||
|
||||
[XRDP][2] is an open-source implementation of RDP, removing the need to run any proprietary programs. XRDP not only tries to follow in the direction of RDP, but is also compatible with regular RDP clients such as [Remmina][3] and [GNOME Boxes][4].
|
||||
|
||||
Here’s what the XRDP connection screen looks like.
|
||||
|
||||
![][5]
|
||||
|
||||
### Things to keep in mind about using XRDP
|
||||
|
||||
While XRDP works great for getting remote access to a machine, it's important to know what XRDP _**isn't**_ good at.
|
||||
|
||||
#### Do **not** use XRDP if you need a secure connection
|
||||
|
||||
Connections made over XRDP can be viewed and modified by attackers, and should thus be avoided for any sensitive information. This can be alleviated through the use of an SSH connection or certificates, but both require a more complex setup and won’t be covered here.
|
||||
|
||||
#### XRDP doesn’t work well with theming by default
|
||||
|
||||
In my testing, XRDP didn’t ever seem to apply the theming [Ubuntu][6] comes with by default. Instructions for fixing this are available at the end of the article.
|
||||
|
||||
#### You need a desktop environment installed on the remote computer
|
||||
|
||||
You’ll need a graphical environment installed on the machine everything will connect to for any of this to work. If you are using a desktop Linux to be accessed remotely, it’s all good.
|
||||
|
||||
But if you are using a server operating system, it won’t work. Of course, [you can install GUI on your Ubuntu server][7] but you’ll be a lot better using SSH to use the remote system via command line.
|
||||
|
||||
### Using XRDP to connect to an Ubuntu Linux system remotely
|
||||
|
||||
Here’s the setup you need for this remote connection setup to work properly.
|
||||
|
||||
* A Linux system with XRDP server installed on it. This is the system which will be accessed remotely.
|
||||
* The remote system should either be on the same network as yours or it should have a public IP address.
|
||||
* You need to know the username and password of the remote Linux system, obviously.
|
||||
* Another system (be it Linux, macOS or Windows) with an RDP client installed on it.
|
||||
|
||||
|
||||
|
||||
![][8]
|
||||
|
||||
The process is really simple. Let’s see it in steps.
|
||||
|
||||
#### Step 1: Install XRDP on the ‘remote computer’
|
||||
|
||||
I am calling it remote computer for reference only. Of course, you need to have access to it in the first place for installing the XRDP package.
|
||||
|
||||
XRDP is included in the repositories of most distributions. On Ubuntu, you can find it in the universe repository and install it using this command:
|
||||
|
||||
```
|
||||
sudo apt install xrdp
|
||||
```
|
||||
|
||||
#### Step 2: Get the IP address of the ‘remote computer’
|
||||
|
||||
You’ll need the IP address of the remote system in order to connect to it. You can [get the IP address in Linux][9] using the ip command:
|
||||
|
||||
```
|
||||
ip address
|
||||
```
|
||||
|
||||
As you can see in the output below, the system in this example has the IP address 192.168.0.107. This is a private address on the local subnet, of course.
|
||||
|
||||
```
|
||||
[email protected]:~$ ip address
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
valid_lft forever preferred_lft forever
|
||||
2: wlp0s20f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
|
||||
link/ether dc:46:b9:fb:7a:c5 brd ff:ff:ff:ff:ff:ff
|
||||
inet 192.168.0.107/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp0s20f3
|
||||
valid_lft 6183sec preferred_lft 6183sec
|
||||
```
|
||||
|
||||
#### Step 3: Connecting to a XRDP machine from ‘local computer’
|
||||
|
||||
The good news is that XRDP works right out of the box!
|
||||
|
||||
To connect to the machine you installed XRDP on, you’ll first need to install an RDP client on your local system (from where you are trying to connect to the remote system).
|
||||
|
||||
I’ll be using GNOME Boxes in this tutorial, which can be installed with the following:
|
||||
|
||||
```
|
||||
sudo apt install gnome-boxes
|
||||
```
|
||||
|
||||
GNOME Boxes is primarily used for virtual machines but it is also a good XRDP client. You may use other tools like Remmina.
|
||||
|
||||
Start the GNOME Boxes application. Click on the + sign and select “**Connect to a Remote Computer…**“.
|
||||
|
||||
![][10]
|
||||
|
||||
Next, enter the IP address of the machine you’re connecting to, prefixed with `rdp://`, and then connect as shown below:
|
||||
|
||||
![][11]
|
||||
|
||||
In the above example, I deployed an Ubuntu server on Linode cloud server. I also installed GNOME desktop on it. This server has a public IP address that can be accessed from anywhere. I have used the public IP address.
|
||||
|
||||
You should then be presented with a login screen. Keep “Session” set to “Xorg”, and just enter your username and password, then click “OK”:
|
||||
|
||||
![][5]
|
||||
|
||||
After, you should be presented with your desktop:
|
||||
|
||||
![][12]
|
||||
|
||||
And now you’re good to go! Everything will (mostly – more on that below) behave just the same as if the machine was right in front of you.
|
||||
|
||||
### Troubleshooting: Fixing theming issues with XRDP connection
|
||||
|
||||
In my testing on Ubuntu 20.04, the default Yaru theme didn't seem to apply when connecting over XRDP. This can be fixed with some effort.
|
||||
|
||||
First, run this command on the **remote computer**:
|
||||
|
||||
```
|
||||
sudo apt install gnome-tweaks gnome-shell-extensions dconf-editor -y
|
||||
```
|
||||
|
||||
Next, open the Extensions app, and turn on the toggles shown below:
|
||||
|
||||
![][13]
|
||||
|
||||
Next, close your remote desktop session and log back in. Now, open up Tweaks and configure everything per the screenshot below:
|
||||
|
||||
![][14]
|
||||
|
||||
Lastly, open up dconf Editor, and navigate to `/org/gnome/shell/extensions/dash-to-dock/`. Set the values that are shown below:
|
||||
|
||||
* `custom-theme-shrink`: On
|
||||
* `dock-fixed`: On
|
||||
* `transparency-mode`: FIXED
|
||||
|
||||
|
||||
|
||||
And there you go, everything is good to go!
|
||||
|
||||
### Wrapping up
|
||||
|
||||
This should help you get started with XRDP on Ubuntu and other Linux systems. It is a convenient tool for connecting to remote systems, especially on the same network.
|
||||
|
||||
If something didn’t work quite right, or you just have any questions or comments, feel free to leave them below. I’ll try to help you out.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/xrdp-ubuntu/
|
||||
|
||||
作者:[Hunter Wittenborn][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/hunter/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Remote_Desktop_Protocol
|
||||
[2]: https://en.wikipedia.org/wiki/Xrdp
|
||||
[3]: https://remmina.org/
|
||||
[4]: https://wiki.gnome.org/Apps/Boxes
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_connected_login.png?resize=716%2C582&ssl=1
|
||||
[6]: https://ubuntu.com/
|
||||
[7]: https://itsfoss.com/install-gui-ubuntu-server/
|
||||
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp-ubuntu.png?resize=800%2C450&ssl=1
|
||||
[9]: https://itsfoss.com/check-ip-address-ubuntu/
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_gnome-boxes_connect-begin.png?resize=744%2C580&ssl=1
|
||||
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_gnome-boxes_rdp-connect.png?resize=757%2C514&ssl=1
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_connected_homescreen.png?resize=711%2C595&ssl=1
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_extensions.png?resize=800%2C557&ssl=1
|
||||
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/xrdp_tweaks.png?resize=800%2C550&ssl=1
|
|
||||
[#]: via: (https://opensource.com/article/21/5/monitor-greenhouse-open-source)
|
||||
[#]: author: (Darin London https://opensource.com/users/dmlond)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (alim0x)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
|
||||
[#]: subject: (Get started with FreeDOS)
|
||||
[#]: via: (https://opensource.com/article/21/6/get-started-freedos)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Get started with FreeDOS
|
||||
======
|
||||
It looks like retro computing, but it's a modern OS you can use to get
|
||||
stuff done.
|
||||
![Old UNIX computer][1]
|
||||
|
||||
Throughout the 1980s and into the 1990s, I was primarily a DOS user. I loved the command line environment offered in DOS, which became more powerful with each successive release. I even learned how to write my own DOS programs in the C programming language so I could extend the DOS command line, and write more powerful replacements for the standard DOS commands. I'd experimented with Microsoft's Windows—but if you remember Windows 3 from that time, you know it was slow and tended to crash. But I preferred the command line anyway, so I stuck to DOS.
|
||||
|
||||
That all changed in 1994. Popular tech magazines talked about an upcoming version of Windows that would completely do away with DOS. I didn't want to be forced onto Windows. On the discussion boards I visited on Usenet, others felt the same. I decided that if we wanted to keep DOS, we needed to write our own. So [on 29 June 1994][2], I announced a small project that would become [The FreeDOS Project][3].
|
||||
|
||||
Since then, we've released several full distributions of FreeDOS. We started with the alpha series from 1994 to 1997 and the beta series from 1998 to 2005, before finally releasing the FreeDOS 1.0 distribution in 2006. Progress has been slow but steady since then. We haven't really rushed to release each new version after 1.0, because DOS stopped being a moving target in 1995.
|
||||
|
||||
Each FreeDOS distribution since 1.0 has been a continual re-imagining of what a modern DOS might look like. We've included lots of compilers and assemblers so developers can write software. We also provide lots of "power tools" so you can do real work. And we offer a variety of editors because everyone has their favorite.
|
||||
|
||||
We recently released the FreeDOS 1.3 RC4 distribution. This is technically a release candidate towards our upcoming FreeDOS 1.3 distribution, but it's a full-featured distribution. I'm very excited about all the great features in FreeDOS 1.3 RC4.
|
||||
|
||||
### Run FreeDOS without installing FreeDOS

In all our previous FreeDOS distributions, we focused on _installing_ FreeDOS to a computer. But we recognize that most users don't actually run FreeDOS on actual hardware anymore—they run FreeDOS in [a virtual machine like QEMU or VirtualBox][4]. So in FreeDOS 1.3 RC4, we improved the "LiveCD" environment.

With FreeDOS 1.3 RC4, you can just boot the LiveCD image in your favorite virtual machine, and start using FreeDOS right away. That's how I run FreeDOS now; I have a small virtual hard drive image where I store all my files, but I boot and run FreeDOS from the LiveCD.

![Booting the FreeDOS 1.3 RC4 LiveCD on QEMU][5]

Booting the FreeDOS 1.3 RC4 LiveCD (Jim Hall, [CC-BY SA 4.0][6])
### Installing is really easy

If you don't want to run FreeDOS from the LiveCD, you can also install it on your hard drive. We updated the installer in FreeDOS so it's not really a "program" per se, but instead a very smart DOS "batch" file that detects all sorts of things and takes the appropriate action, such as creating a new disk partition for FreeDOS if none exists already.

Older FreeDOS distributions used to prompt you for everything, even selecting individual programs to install. The new installer is very streamlined. It asks you a few questions to get started, then does everything else on its own. Installing FreeDOS on an empty virtual machine takes only a few minutes.

![Installing FreeDOS 1.3 RC4][7]

Installing FreeDOS 1.3 RC4 (Jim Hall, [CC-BY SA 4.0][6])
### You can install it from floppy

Not everyone prefers to run FreeDOS in a virtual machine. There's a retrocomputing community out there that collects and lovingly restores classic PC hardware like Pentium or '486 systems. You can even find some XT (8088) or AT (80286) systems out there, kept running by a dedicated user community.

And while we consider FreeDOS a _modern_ DOS, we wouldn't be "DOS" if we didn't also run on the older PC hardware too. So with FreeDOS 1.3, we include a Floppy-Only Edition! This edition should run on any hardware that can run FreeDOS and has EGA or better graphics.

Are you running a '286 or another classic system without a CD-ROM drive? Boot from these floppies to install FreeDOS. Do you have just one hard drive and no CD or floppy drive? Just copy the contents of the floppies to a temporary directory and run the installer from there. Want to perform a "headless" install to a different DOS directory? It's easy with the command-line options.

The Floppy-Only Edition uses a completely different installer and contains a limited set of FreeDOS programs that are more useful on classic PC hardware.

![Installing the FreeDOS Floppy-Only Edition][8]

Installing the FreeDOS Floppy-Only Edition (Jim Hall, [CC-BY SA 4.0][6])
### Filled with open source apps and games

FreeDOS isn't a _free_ DOS if it's a closed source DOS. We want everyone to be able to use and study FreeDOS, including its source code. As we planned the FreeDOS 1.3 distribution, we took a close look at every license in every package and focused on including only _open source_ programs. (A few programs in previous FreeDOS distributions were not quite "open source," and one or two programs didn't include source code but were otherwise "free to use and distribute." In this release, everything is open source, using the Open Source Definition as our model.)

And what a great collection of open source apps and games. The games are my favorite addition to FreeDOS 1.3 RC4. Many people use FreeDOS to play classic DOS games, but we wanted to provide our own open source games for people to play.

You can find two games already installed in the LiveCD: Simple Senet (a board game dating to ancient Egypt) and Floppy Bird (a version of the Flappy Bird game). If you install FreeDOS, you'll also find lots of other games to try, including Sudoku86 (a sudoku game), Wing (a space shooter), and Bolitaire (solitaire card game).

![Playing the Floppy Bird game][9]

Playing the Floppy Bird game (Jim Hall, [CC-BY SA 4.0][6])

![The ancient game of Senet][10]

The ancient game of Senet (Jim Hall, [CC-BY SA 4.0][6])
### Try FreeDOS 1.3 RC4 now

You can find the new FreeDOS 1.3 RC4 on the FreeDOS website's [Downloads][11] page. To install FreeDOS, you'll need 20MB of free disk space for a plain FreeDOS system, or 250MB to install everything, including applications and games. To install the source code too, you'll need up to 450MB of free space.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/get-started-freedos

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/retro_old_unix_computer.png?itok=SYAb2xoW (Old UNIX computer)
[2]: https://groups.google.com/g/comp.os.msdos.apps/c/oQmT4ETcSzU/m/O1HR8PE2u-EJ
[3]: https://www.freedos.org/
[4]: https://opensource.com/article/20/8/virt-tools
[5]: https://opensource.com/sites/default/files/freedos-livecd.png (Booting the FreeDOS 1.3 RC4 LiveCD)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/install6.png (Installing FreeDOS 1.3 RC4)
[8]: https://opensource.com/sites/default/files/freedos-floppy.png (Installing the FreeDOS Floppy-Only Edition)
[9]: https://opensource.com/sites/default/files/floppy-bird.png (Playing the Floppy Bird game)
[10]: https://opensource.com/sites/default/files/simple-senet.png (The ancient game of Senet)
[11]: https://www.freedos.org/download/
[#]: subject: (Start monitoring your Kubernetes cluster with Prometheus and Grafana)
[#]: via: (https://opensource.com/article/21/6/chaos-grafana-prometheus)
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Start monitoring your Kubernetes cluster with Prometheus and Grafana
======
Before you can measure chaos, you need to know what your system's steady
state looks like. Learn how in the second article in this series about
chaos engineering.
![A ship wheel with someone steering][1]
In my introductory [article about chaos engineering][2], one of the main things I covered was the importance of getting the steady state of your working Kubernetes cluster. Before you can start causing chaos, you need to know what the cluster looks like in a steady state.

This article will cover how to [get those metrics using Prometheus][3] and [Grafana][4]. This walkthrough also uses Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19.
### Configure Minikube

[Install Minikube][5] in whatever way makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power:

```
$ minikube config set memory 8192
❗  These changes will take effect upon a minikube delete and then a minikube start
$ minikube config set cpus 6
❗  These changes will take effect upon a minikube delete and then a minikube start
```
Then start and check your system's status:

```
$ minikube start
😄  minikube v1.14.2 on Debian bullseye/sid
🎉  minikube 1.19.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.19.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=6, Memory=8192MB) ...
🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```
### Install Prometheus

Once the cluster is set up, start your installations. Install [Prometheus][6] first by following the instructions below.

First, add the repository in Helm:

```
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
```
Then install your Prometheus Helm chart. You should see:

```
$ helm install prometheus prometheus-community/prometheus
NAME: prometheus
LAST DEPLOYED: Sun May  9 11:37:19 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.default.svc.cluster.local
```
Get the Prometheus server URL by running these commands in the same shell:

```
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
```

You can access the Prometheus Alertmanager via port 80 on this DNS name from within your cluster:

```
prometheus-alertmanager.default.svc.cluster.local
```
Get the Alertmanager URL by running these commands in the same shell:

```
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9093
#################################################################################
######   WARNING: Pod Security Policy has been moved to a global property. #####
######            use .Values.podSecurityPolicy.enabled with pod-based     #####
######            annotations                                              #####
######            (e.g. .Values.nodeExporter.podSecurityPolicy.annotations)#####
#################################################################################
```

You can access the Prometheus PushGateway via port 9091 on this DNS name from within your cluster:

```
prometheus-pushgateway.default.svc.cluster.local
```

Get the PushGateway URL by running these commands in the same shell:

```
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
```
Check to confirm your pods are running:

```
$ kubectl get pods -n default
NAME                                             READY   STATUS    RESTARTS   AGE
prometheus-alertmanager-ccf8f68cd-hcrqr          2/2     Running   0          3m22s
prometheus-kube-state-metrics-685b975bb7-mhv54   1/1     Running   0          3m22s
prometheus-node-exporter-mfcwj                   1/1     Running   0          3m22s
prometheus-pushgateway-74cb65b858-7ffhs          1/1     Running   0          3m22s
prometheus-server-d9fb67455-2g2jw                2/2     Running   0          3m22s
```
Next, expose your port on the Prometheus server pod so that you can see the Prometheus web interface. To do this, you need the service name and port. You also need to come up with a name to open the service using the Minikube service command.

Get the service name for `prometheus-server`:

```
$ kubectl get svc -n default
NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes                      ClusterIP   10.96.0.1        <none>        443/TCP    13m
prometheus-alertmanager         ClusterIP   10.106.68.12     <none>        80/TCP     8m22s
prometheus-kube-state-metrics   ClusterIP   10.104.167.239   <none>        8080/TCP   8m22s
prometheus-node-exporter        ClusterIP   None             <none>        9100/TCP   8m22s
prometheus-pushgateway          ClusterIP   10.99.90.233     <none>        9091/TCP   8m22s
prometheus-server               ClusterIP   10.103.195.104   <none>        9090/TCP   8m22s
```
Expose the service as type `NodePort`. Provide a target port of `9090` and a name you want to call the server. The node port is the server's listening port. This is an extract of the Helm chart:

```
## Port for Prometheus Service to listen on
##
port: 9090
```
The command is:

```
$ kubectl expose service prometheus-server --type=NodePort --target-port=9090 --name=prom-server
service/prom-server exposed
```
Next, you need Minikube to open the service and browser:

```
jess@Athena:~$ minikube service prom-server
|-----------|-------------|-------------|---------------------------|
| NAMESPACE |    NAME     | TARGET PORT |            URL            |
|-----------|-------------|-------------|---------------------------|
| default   | prom-server |          80 | http://192.168.49.2:32169 |
|-----------|-------------|-------------|---------------------------|
🎉  Opening service default/prom-server in default browser...
```
Your browser should open and show you the Prometheus service.

![Prometheus interface][7]

(Jess Cherry, [CC BY-SA 4.0][8])

Congratulations! You now have Prometheus installed on your cluster.
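Before moving on, you can sanity-check the data Prometheus is scraping by typing a query into the expression box of the web interface. The metric names below come from the node-exporter and kube-state-metrics pods the chart deployed by default; if you customized the chart values, your metric set may differ:

```
# Per-CPU usage rate on the node, averaged over the last 5 minutes
rate(node_cpu_seconds_total{mode!="idle"}[5m])

# Number of pods in each phase, as reported by kube-state-metrics
sum(kube_pod_status_phase) by (phase)
```

If these return data points, the collectors are up and you already have a first numeric view of your steady state.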
### Install Grafana

Next, install Grafana and configure it to work with Prometheus. Follow the steps below to expose a service to configure Grafana and collect data from Prometheus to gather your steady state.

Start with getting your Helm chart:

```
$ helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
```

Search for your chart:
```
$ helm search repo grafana
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
bitnami/grafana                 5.2.11          7.5.5           Grafana is an open source, feature rich metrics...
bitnami/grafana-operator        0.6.5           3.10.0          Kubernetes Operator based on the Operator SDK f...
grafana/grafana                 6.9.0           7.5.5           The leading tool for querying and visualizing t...
stable/grafana                  5.5.7           7.1.1           DEPRECATED - The leading tool for querying and ...
```

Since stable/grafana is deprecated, install bitnami/grafana. Then install your chart:
```
helm install grafana bitnami/grafana
NAME: grafana
LAST DEPLOYED: Sun May  9 12:09:53 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
```

1. Get the application URL by running:

```
echo "Browse to http://127.0.0.1:8080"
kubectl port-forward svc/grafana 8080:3000 &
```

2. Get the admin credentials:

```
echo "User: admin"
echo "Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 --decode)"
```
As you can see in the Helm installation output, the target port for Grafana is 3000, so you will use that port for exposing the service to see Grafana's web frontend. Before exposing the service, confirm your services are running:

```
$ kubectl get pods -A
NAMESPACE   NAME                       READY   STATUS    RESTARTS   AGE
default     grafana-6b84bbcd8f-xt6vd   1/1     Running   0          4m21s
```

Expose the service:

```
$ kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-server
service/grafana-server exposed
```
Enable the service to open a browser with a Minikube service:

```
jess@Athena:~$ minikube service grafana-server
|-----------|----------------|-------------|---------------------------|
| NAMESPACE |      NAME      | TARGET PORT |            URL            |
|-----------|----------------|-------------|---------------------------|
| default   | grafana-server |        3000 | http://192.168.49.2:30549 |
|-----------|----------------|-------------|---------------------------|
🎉  Opening service default/grafana-server in default browser...
```
You will see the welcome screen where you can log in.

![Grafana welcome screen][9]

(Jess Cherry, [CC BY-SA 4.0][8])

Set up credentials to log into Grafana using kubectl. The commands appeared in the installation's output; here are the commands in use:

```
$ echo "User: admin"
User: admin
$ echo "Password: $(kubectl get secret grafana-admin --namespace default -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 --decode)"
Password: G6U5VeAejt
```
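The `base64 --decode` step is needed because Kubernetes stores Secret values base64-encoded. Here is a minimal, cluster-free sketch of what that pipeline does, using the example password from above as the value:

```
# Encode a value the way Kubernetes stores it in a Secret...
encoded=$(printf '%s' 'G6U5VeAejt' | base64)
echo "stored in the Secret: $encoded"

# ...and decode it the way the kubectl command above does
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "usable password: $decoded"
```

Note that base64 is an encoding, not encryption: anyone who can read the Secret object can recover the password.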
Log in with your new credentials, and you will see the Grafana dashboard.

![Grafana dashboard][10]

(Jess Cherry, [CC BY-SA 4.0][8])

Congratulations! You now have a working Grafana installation in your Minikube cluster with the ability to log in. The next step is to configure Grafana to work with Prometheus to gather data and show your steady state.

### Configure Grafana with Prometheus
Now that you can log in to your Grafana instance, you need to set up the data collection and dashboard. Since this is an entirely web-based configuration, I will go through the setup using screenshots. Start by adding your Prometheus data collection. Click the **gear icon** on the left-hand side of the display to open the **Configuration** settings, then select **Data Source**.

![Configure data source option][11]

(Jess Cherry, [CC BY-SA 4.0][8])

On the next screen, click **Add data source**.

![Add data source option][12]

(Jess Cherry, [CC BY-SA 4.0][8])

Select **Prometheus**.

![Select Prometheus data source][13]

(Jess Cherry, [CC BY-SA 4.0][8])

Because you configured your Prometheus instance to be exposed on port 80, use the service name **prometheus-server** and the server **port 80**.

![Configuring Prometheus data source][14]

(Jess Cherry, [CC BY-SA 4.0][8])

Save and test your new data source by scrolling to the bottom of the screen and clicking **Save and Test**. You should see a green banner that says **Data source is working**.

![Confirming Data source is working][15]

(Jess Cherry, [CC BY-SA 4.0][8])

Return to the top of the page and click **Dashboards**.

![Select Dashboards option][16]

(Jess Cherry, [CC BY-SA 4.0][8])

Import all three dashboard options.

![Import three dashboards][17]

(Jess Cherry, [CC BY-SA 4.0][8])

Click the **magnifying glass** icon on the left-hand side to confirm all three dashboards have been imported.

![Confirming dashboard import][18]

(Jess Cherry, [CC BY-SA 4.0][8])

Now that everything is configured, click **Prometheus 2.0 Stats**, and you should see something similar to this.

![Prometheus 2.0 Stats][19]

(Jess Cherry, [CC BY-SA 4.0][8])

Congratulations! You have set up basic data collection from Prometheus about your cluster.
### Import more monitoring dashboards

You can import additional detailed dashboards from Grafana Labs' [community dashboards][20] collection. I picked two of my favorites, [Dash-minikube][21] and [Kubernetes Cluster Monitoring][22], for this quick walkthrough.

To import a dashboard, you need its ID from the dashboards collection. First, click the plus (**+**) sign on the left-hand side to create a dashboard, then click **Import** in the dropdown list, and enter the ID. For Dash-minikube, it's ID 10219.

![Import Dash-minikube dashboard][23]

(Jess Cherry, [CC BY-SA 4.0][8])

![Import Dash-minikube dashboard][24]

(Jess Cherry, [CC BY-SA 4.0][8])

Click **Load**, and enter the data source on the next screen. Since this uses Prometheus, enter your Prometheus data source.

![Import Dash-minikube][25]

(Jess Cherry, [CC BY-SA 4.0][8])

Click **Import**, and the new dashboard will appear.

![Import Dash-minikube dashboard][26]

(Jess Cherry, [CC BY-SA 4.0][8])

Now you have a new dashboard to keep track of your Minikube stats. If you follow the same steps using Kubernetes Cluster Monitoring (ID 2115), you will see a more verbose monitoring dashboard.

![Kubernetes Cluster Monitoring dashboard][27]

(Jess Cherry, [CC BY-SA 4.0][8])

Now you can keep track of your steady state with Grafana and Prometheus data collections and visuals.

### Final thoughts

With these open source tools, you can collect your cluster's steady state and maintain a good pulse on it. This is important in chaos engineering because it lets you observe the system while it is in a destructive, unstable state and use that data to test your hypotheses about what could happen to the cluster during an outage.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/chaos-grafana-prometheus

作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cherrybomb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv (A ship wheel with someone steering)
[2]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos
[3]: https://opensource.com/article/19/11/introduction-monitoring-prometheus
[4]: http://grafana.com
[5]: https://minikube.sigs.k8s.io/docs/start/
[6]: http://prometheus.io
[7]: https://opensource.com/sites/default/files/uploads/prometheus-interface.png (Prometheus interface)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/grafana_welcome.png (Grafana welcome screen)
[10]: https://opensource.com/sites/default/files/uploads/grafana_dashboard.png (Grafana dashboard)
[11]: https://opensource.com/sites/default/files/uploads/grafana_datasource.png (Configure data source option)
[12]: https://opensource.com/sites/default/files/uploads/grafana_adddatasource.png (Add data source option)
[13]: https://opensource.com/sites/default/files/uploads/grafana_prometheusdatasource.png (Select Prometheus data source)
[14]: https://opensource.com/sites/default/files/uploads/grafana_configureprometheusdatasource.png (Configuring Prometheus data source)
[15]: https://opensource.com/sites/default/files/uploads/datasource_save-test.png (Confirming Data source is working)
[16]: https://opensource.com/sites/default/files/uploads/dashboards.png (Select Dashboards option)
[17]: https://opensource.com/sites/default/files/uploads/importdatasources.png (Import three dashboards)
[18]: https://opensource.com/sites/default/files/uploads/importeddashboard.png (Confirming dashboard import)
[19]: https://opensource.com/sites/default/files/uploads/prometheus2stats.png (Prometheus 2.0 Stats)
[20]: https://grafana.com/grafana/dashboards
[21]: https://grafana.com/grafana/dashboards/10219
[22]: https://grafana.com/grafana/dashboards/2115
[23]: https://opensource.com/sites/default/files/uploads/importdashminikube.png (Import Dash-minikube dashboard)
[24]: https://opensource.com/sites/default/files/uploads/importdashminikube2.png (Import Dash-minikube dashboard)
[25]: https://opensource.com/sites/default/files/uploads/importdashminikube3.png (Import Dash-minikube)
[26]: https://opensource.com/sites/default/files/uploads/importdashminikube4.png (Import Dash-minikube dashboard)
[27]: https://opensource.com/sites/default/files/uploads/kubernetesclustermonitoring-dashboard.png (Kubernetes Cluster Monitoring dashboard)
[#]: subject: (Convert Images to ASCII Art in Linux Terminal With This Nifty Little Tool)
[#]: via: (https://itsfoss.com/ascii-image-converter/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Convert Images to ASCII Art in Linux Terminal With This Nifty Little Tool
======
Want to do some fun stuff in the Linux terminal? How about converting a regular image into ASCII art?

You know [what's ASCII][1]? It's a standard that assigns letters, numbers and other characters to the 256 slots available in an 8-bit code. ASCII art is a graphic composed of the printable ASCII characters. Basically, it is made up of a bunch of letters, numbers and special characters.
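You can see this character-to-number mapping from the shell itself; a POSIX `printf` trick converts a character to its code and back:

```
# Character -> ASCII code (the leading quote is POSIX printf syntax)
printf '%d %d %d\n' "'A" "'a" "'0"    # prints: 65 97 48

# ASCII code -> character, going through octal
printf "\\$(printf '%03o' 65)\n"      # prints: A
```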
You might have seen people [displaying their distribution's logo in ASCII format][2] like this:

![][3]

That's cool, right? How about converting a normal picture into ASCII art? That's what you are going to explore in this article.
### Ascii Image Converter

As the name suggests, [Ascii Image Converter][4] is a tool that converts an image into ASCII art. It is a command-line tool written in Go, and it prints the ASCII version of the image supplied to it.

You probably won't recognize me, but that's me in ASCII in the image below. That's my 8-bit avatar.

![][5]

The tool supports input images in the following formats:

  * JPEG/JPG
  * PNG
  * BMP
  * WEBP
  * TIFF/TIF

Let's see about installing and using it.
### Installing Ascii Image Converter on Linux

This nifty tool is also available on Windows, but I am not going that way. Let's stick to Linux in this tutorial.

If you have [Snap enabled in your distribution][6], you can easily install its snap package using the following command:

```
sudo snap install ascii-image-converter
```

You may also download the Linux executable file from its release page and put the executable in the /usr/local/bin/ directory. This way, you'll be able to run it like a regular Linux command. If you wonder why, learn about the [Linux directory hierarchy][7].
### Using Ascii Image Converter

The usage is simple. Once installed, you just have to provide the path of the image you want to convert.

```
ascii-image-converter path_to_image
```

You may also provide the URL of the image to convert an image into ASCII directly from the web.

Here is my profile picture converted into ASCII. I have put my original photo for the reference.

![][8]

You may also have a colored ASCII conversion.

```
ascii-image-converter -C path_to_image
```

![][9]

You may convert multiple images into ASCII by providing their paths. It will print the ASCII version one after another on the terminal display.

There is also an option to save the generated ASCII art, though as a text file rather than an image. The command below saves the ASCII art by appending "-ascii-art.txt" to the image name, in the directory path passed to the flag.

```
ascii-image-converter path_to_image -s .
```

There are a few more options available, such as giving the output specific dimensions, using more ASCII characters, or using your own set of characters for printing the ASCII art. You can read about them in the [project's repository][4].
### Like it?

Do you like more ASCII stuff? How about [playing ASCII games on Linux][10]? Yes, you can totally do that.

If you like experimenting in the terminal, you may like this tool. Though I wonder what could be a good practical use of an ASCII converted image. Any ideas?
--------------------------------------------------------------------------------

via: https://itsfoss.com/ascii-image-converter/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.computerhope.com/jargon/a/ascii.htm
[2]: https://itsfoss.com/display-linux-logo-in-ascii/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-mate-focal-neofetch.png?resize=800%2C543&ssl=1
[4]: https://github.com/TheZoraiz/ascii-image-converter
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/abhishek-prakash-in-ascii.png?resize=800%2C445&ssl=1
[6]: https://itsfoss.com/enable-snap-support-linux-mint/
[7]: https://linuxhandbook.com/linux-directory-structure/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/abhishek-prakash-ascii-converted.png?resize=800%2C437&ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/abhishek-colored-ascii.png?resize=800%2C429&ssl=1
[10]: https://itsfoss.com/best-ascii-games/
[#]: subject: (Establish an SSH connection between Windows and Linux)
[#]: via: (https://opensource.com/article/21/6/ssh-windows)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Establish an SSH connection between Windows and Linux
======
Use the open source tool PuTTY to establish an SSH connection from a
Windows machine to a Linux system.
![clouds in windows][1]
The secure shell protocol (SSH) is the most common method for controlling remote machines over the command line in the Linux world. SSH is a true Linux original, and it is also gaining popularity in the Windows world. There is even official [Windows documentation for SSH][2], which covers controlling Windows machines using [OpenSSH][3].

This article describes how to establish an SSH connection from a Windows machine to a Fedora 33 Linux system using the popular open source tool [PuTTY][4].
### Ways to use SSH
|
||||
|
||||
SSH uses a client-server architecture, where an SSH client establishes a connection to an SSH server. The SSH server is usually running as a system daemon, so it is often called SSHD. You can hardly find a Linux distribution that does not come with the SSH daemon. In Fedora 33, the SSH daemon is installed but not activated.
|
||||
|
||||
You can use SSH to control almost any Linux machine, whether it's running as a virtual machine or as a physical device on your network. A common use case is the headless configuration of embedded devices, including the Raspberry Pi. SSH can also be used to tunnel other network services. Because SSH traffic is encrypted, you can use SSH as a transport layer for any protocol that does not provide encryption by default.
|
||||
|
||||
In this article, I'll explain four ways to use SSH: 1. how to configure the SSH daemon on the Linux side, 2. how to set up a remote console connection, 3. how to copy files over the network, and 4. how to tunnel a certain protocol over SSH.
|
||||
|
||||
### 1\. Configure SSHD
|
||||
|
||||
The Linux system (Fedora 33 in my case) acts as the SSH server that allows the PuTTY SSH client to connect. First, check the daemon's SSH configuration. The configuration file is located at `/etc/ssh/sshd_config` and contains a lot of switches that can be activated by commenting out related lines:

```
# $OpenBSD: sshd_config,v 1.100 2016/08/15 12:32:04 naddy Exp $

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options override the
# default value.

Include /etc/ssh/sshd_config.d/*.conf

#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
```

The default configuration, where no line is uncommented, should work for this example. Check whether the SSH daemon is already running by typing `systemctl status sshd`:

```
$ systemctl status sshd
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-06-22 11:12:05 UTC; 2 years 11 months ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 577 (sshd)
    Tasks: 1 (limit: 26213)
   CGroup: /system.slice/sshd.service
           └─577 /usr/sbin/sshd -D -oCiphers=aes256-gcm@openssh.com,chacha20-[...]
```

If it's inactive, start it with the `systemctl start sshd` command.

### 2\. Set up a remote console

On Windows, [download the PuTTY installer][6], then install and open it. You should see a window like this:

![PuTTY configuration screen][7]

(Stephan Avenwedde, [CC BY-SA 4.0][8])

In the **Host Name (or IP address)** input field, enter the connection information for your Linux system. In this example, I set up a Fedora 33 virtual machine with a bridged network adapter that I can use to contact the system at the IP address `192.168.1.60`. Click **Open**, and a window like this should open:

![PuTTY security alert][9]

(Stephan Avenwedde, [CC BY-SA 4.0][8])

This is an SSH security mechanism to prevent a [man-in-the-middle attack][10]. The fingerprint in the message should match the key on the Linux system at `/etc/ssh/ssh_host_ed25519_key.pub`. PuTTY prints the key as an [MD5 hash][11]. To check its authenticity, switch to the Linux system, open a command shell, and enter:

```
ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ed25519_key.pub
```

The output should match the fingerprint shown by PuTTY:

```
$ ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ed25519_key.pub
256 MD5:E4:5F:01:05:D0:F7:DC:A6:32 no comment (ED25519)
```
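
For the curious: that fingerprint is nothing magical. It is simply the MD5 digest of the base64-decoded key blob, printed as colon-separated hex pairs. A short Python sketch reproduces the format (the sample key blob below is invented for illustration, not a real key):

```python
import base64
import hashlib

def md5_fingerprint(pubkey_line):
    """MD5 fingerprint of an OpenSSH public key line ("ssh-ed25519 AAAA... comment").

    The fingerprint is the MD5 digest of the base64-decoded key blob,
    formatted as colon-separated hex pairs -- the format PuTTY displays.
    """
    blob = base64.b64decode(pubkey_line.split()[1])  # second field is the blob
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# An invented key blob, purely to demonstrate the output format:
sample = "ssh-ed25519 " + base64.b64encode(b"not-a-real-key").decode() + " demo"
print(md5_fingerprint(sample))
```

Real tools hash the actual key material, so run `ssh-keygen` as shown above to get the fingerprint you compare against PuTTY.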

Confirm the PuTTY Security Alert by clicking **Yes**. The host system's fingerprint is now in PuTTY's trust list, which is located in the Windows registry under:

```
HKEY_CURRENT_USER\SOFTWARE\SimonTatham\PuTTY\SshHostKeys
```

Enter your correct login credentials, and you should be on the console in your home directory:

![Logged in to SSH][12]

(Stephan Avenwedde, [CC BY-SA 4.0][8])

### 3\. Copy files over the network

In addition to the remote console, you can use PuTTY to transfer files via SSH. Look in the installation folder under `C:\Program Files (x86)\PuTTY` and find `pscp.exe`. You can use this to copy files to and from a Linux system.

Open a command prompt with **Windows + R** and enter **cmd**. Copy the file `MyFile.txt` from your Linux user home directory to your Windows home directory by entering:

```
C:\"Program Files (x86)"\PuTTY\pscp.exe stephan@192.168.1.60:/home/stephan/MyFile.txt .
```

To copy a file from the Windows home directory to the Linux user home directory, enter:

```
C:\"Program Files (x86)"\PuTTY\pscp.exe MyFile.txt stephan@192.168.1.60:/home/stephan/
```

As you may have already figured out, the copy command's general structure is:

```
pscp.exe <source> <target>
```
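
If you copy files regularly, you can script `pscp.exe` like any other command-line tool. Here is a minimal Python sketch of such a wrapper (the install path is the default from this article, and the commented example reuses the host and file names from above; adjust both to your system):

```python
import subprocess

# Default install location used in this article; adjust if PuTTY lives elsewhere.
PSCP = r"C:\Program Files (x86)\PuTTY\pscp.exe"

def pscp_copy(source, target, pscp=PSCP):
    """Invoke `pscp <source> <target>`; raise CalledProcessError on failure."""
    subprocess.run([pscp, source, target], check=True)

# Example (hypothetical paths from the article):
# pscp_copy("stephan@192.168.1.60:/home/stephan/MyFile.txt", ".")
```

Because `check=True` raises on a nonzero exit status, a failed transfer stops your script instead of passing silently.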

### 4\. Tunnel a protocol

Imagine you have a Linux machine that is running an HTTP-based service for some arbitrary application. You want to access this HTTP service from your Windows machine over the internet. Of course, you cannot expose the related TCP port to the public because:

  1. The server is running HTTP, not HTTPS
  2. There is no user management nor login at all

At first glance, it looks like an impossible task to set up this architecture without producing a horrible security flaw. But SSH makes it relatively easy to set up a safe solution for this scenario.

I will demonstrate this procedure with my software project [Pythonic][13]. Running as a container, Pythonic exposes two TCP ports: TCP port 7000 (main editor) and TCP port 8000 (the [code-server][14] source-code editor).

To install Pythonic on a Linux machine, run:

```
podman pull pythonicautomation/pythonic
podman run -d -p 7000:7000 -p 8000:8000 pythonic
```

Switch to your Windows machine, open PuTTY, and navigate to **Connection -> SSH -> Tunnels**. Add the two TCP ports you want to forward:

  * Source: `7000` / Destination: `localhost:7000`
  * Source: `8000` / Destination: `localhost:8000`

![Port forwarding in PuTTY][15]

(Stephan Avenwedde, [CC BY-SA 4.0][8])

Then go back to the **Session** section, and establish an SSH connection as you did before. Open a browser and navigate to `http://localhost:7000`; you should see a screen like this:

![Pythonic][16]

(Stephan Avenwedde, [CC BY-SA 4.0][8])

You have successfully configured port forwarding!
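
Under the hood, local port forwarding is just a relay: PuTTY accepts connections on the source port and shuttles the bytes to the destination through the encrypted SSH channel. A stripped-down Python sketch of the relay part (without any SSH or encryption — purely to illustrate the mechanism, not how PuTTY is implemented) looks like this:

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes from src to dst until src closes its end."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def start_forwarder(listen_port, dest_host, dest_port):
    """Listen on 127.0.0.1:listen_port and relay each connection to dest.

    Returns the bound port, so listen_port may be 0 for "any free port".
    """
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen(5)

    def accept_loop():
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection((dest_host, dest_port))
            # One thread per direction keeps the relay full-duplex.
            threading.Thread(target=_pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return server.getsockname()[1]
```

In PuTTY's setup, the relay's far end lives inside the SSH session on the Linux host, which is why `Destination: localhost:7000` refers to the *server's* localhost.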

**Warning**: If you expose TCP port 22 to the public, don't use easy-to-guess login credentials. You will receive login attempts from all over the world trying to access your Linux machine with common, standard credentials. Instead, permit only known clients to log in. This login restriction can be achieved using [public-key cryptography][17], which uses a key pair in which the public key is stored on the SSH host machine, and the private key remains at the client.

### Debugging

If you are struggling to connect to your Linux machine, you can follow the processes in your SSH daemon with:

```
journalctl -f -u sshd
```

This is what an ordinary login process looks like with `LogLevel DEBUG`:

![LogLevel DEBUG output][18]

(Stephan Avenwedde, [CC BY-SA 4.0][8])

### Learn more

This article barely scratches the surface of the ways you can use SSH. If you are looking for information about a specific use case, you can probably find it among the tons of SSH tutorials on the internet. I use PuTTY heavily at work because its easy configuration and good interoperability between operating systems make it a Swiss Army knife tool for connectivity solutions.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/ssh-windows

作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k (clouds in windows)
[2]: https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_overview
[3]: https://www.openssh.com/
[4]: https://www.putty.org/
[5]: mailto:aes256-gcm@openssh.com
[6]: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
[7]: https://opensource.com/sites/default/files/uploads/putty_connection_settings.png (PuTTY configuration screen)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/putty_host_key.png (PuTTY security alert)
[10]: https://en.wikipedia.org/wiki/Man-in-the-middle_attack
[11]: https://en.wikipedia.org/wiki/MD5
[12]: https://opensource.com/sites/default/files/uploads/ssh_successfull_login.png (Logged in to SSH)
[13]: https://github.com/hANSIc99/Pythonic
[14]: https://github.com/cdr/code-server
[15]: https://opensource.com/sites/default/files/uploads/ssh_port_forwarding.png (Port forwarding in PuTTY)
[16]: https://opensource.com/sites/default/files/uploads/pythonic_screen.png (Pythonic)
[17]: https://opensource.com/article/21/4/encryption-decryption-openssl
[18]: https://opensource.com/sites/default/files/uploads/sshd_debug_log.png (LogLevel DEBUG output)

[#]: subject: (How to navigate FreeDOS with CD and DIR)
[#]: via: (https://opensource.com/article/21/6/navigate-freedos-cd-dir)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

How to navigate FreeDOS with CD and DIR
======
Armed with just two commands, DIR and CD, you can navigate your FreeDOS
system from the command line.
![4 different color terminal windows with code][1]

FreeDOS is an open source DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.

But if you've never used DOS, you might be confused about how to navigate the system. FreeDOS is primarily a command-line interface; there is no default graphical user interface (GUI) in FreeDOS. You need to type every command at the command line.

Two commands help you find your way around FreeDOS: `CD` and `DIR`. I've written those commands in all uppercase, but DOS is actually _case insensitive_, so you can type your commands using either uppercase or lowercase letters. DOS doesn't care.

Let's start with the `DIR` command. This command name is short for _directory_ and is similar to the `ls` command on Linux systems. You can run `DIR` anywhere on your system to see what files you have. Just type the command `DIR` to get a list of files and directories:

![DIR listing of the D: drive][2]

Jim Hall, CC BY-SA 4.0

The output from `DIR` is very utilitarian. At the top, `DIR` prints the "volume name" of the current drive. Then `DIR` shows all the files and directories. In the screenshot, you can see the directory listing of the FreeDOS 1.3 RC4 LiveCD. It contains several directories, including the `FREEDOS` directory, which contains all of the core FreeDOS programs and utilities. You can also see several files, starting with the `COMMAND.COM` shell, which is similar to Bash on Linux—except much simpler. The FreeDOS kernel itself is the `KERNEL.SYS` file further down the list.

At the top level of any drive, before you go into a directory, you are at the _root directory_. DOS uses the `\` ("backslash") character to separate directories in a path, which is slightly different from the `/` ("slash") character in Linux systems.

To navigate into a directory, you can use the `CD` command. Like `cd` on Linux, this stands for _change directory_. The `CD` command sets the new _working directory_ to wherever you want to go. For example, you might go into the `GAMES` directory and use `DIR` to list its contents:

![Use CD to change your working directory][3]

Jim Hall, CC BY-SA 4.0

You can also specify a path to `CD` to jump to a specific directory elsewhere on your system. If I wanted to change to the `FREEDOS` directory, I could simply specify the full path relative to the root directory. In this case, that's the `\FREEDOS` directory. From there, I can run another `DIR` command to see the files and directories stored there:

![Specify a full path to change to another working directory][4]

Jim Hall, CC BY-SA 4.0

Like Linux, DOS also uses `.` and `..` to represent a _relative path_. The `.` directory is the current directory, and `..` is the directory one level above it, the _parent_ directory. Using `..` allows you to "back up" one directory with the `CD` command, so you don't need to specify a full path.
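
As an aside for readers following along from Linux: Python's standard library implements the DOS/Windows path rules in its `ntpath` module, so you can experiment with `.` and `..` resolution without booting FreeDOS at all:

```python
import ntpath  # the DOS/Windows flavor of os.path, usable from any OS

# "CD ..\DEVEL" from \FREEDOS resolves one level up, then into DEVEL:
print(ntpath.normpath(r"\FREEDOS\..\DEVEL"))   # \DEVEL

# "." is the current directory, so it simply resolves away:
print(ntpath.normpath(r"\FREEDOS\.\GAMES"))    # \FREEDOS\GAMES
```

This is only a sandbox for the path arithmetic; FreeDOS itself needs nothing but `CD` and `DIR`.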

From the first `DIR` screenshot, we can see the root directory also contains a `DEVEL` directory. If we're already in the `\FREEDOS` directory, we can navigate to `DEVEL` by "backing up" one directory level and "going into" the `..\DEVEL` directory via a relative path:

![Use .. to navigate using a relative path][5]

Jim Hall, CC BY-SA 4.0

Armed with just two commands, `DIR` and `CD`, you can navigate your FreeDOS system from the command line. Try it on your FreeDOS system to locate files and execute programs.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/navigate-freedos-cd-dir

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos.png?itok=aOBLy7Ky (4 different color terminal windows with code)
[2]: https://opensource.com/sites/default/files/uploads/dir1.png (DIR listing of the D: drive)
[3]: https://opensource.com/sites/default/files/uploads/cd-games2.png (Use CD to change your working directory)
[4]: https://opensource.com/sites/default/files/uploads/cd-freedos3.png (Specify a full path to change to another working directory)
[5]: https://opensource.com/sites/default/files/uploads/cd-devel4.png (Use .. to navigate using a relative path)

[#]: subject: (New ways to learn about open organizations)
[#]: via: (https://opensource.com/open-organization/21/6/celebrate-sixth-anniversary)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

New ways to learn about open organizations
======
Celebrate the Open Organization community's sixth anniversary by getting
involved in two exciting new projects.
![][1]

The Open Organization community celebrates its sixth anniversary on June 2. That's six years of articles ([hundreds][2]), books (an [evolving series][3]), conversations ([always inspiring][4]), teaching (we [love it][5]), and learning. We're so proud to be a vibrant community of open experts and leaders working to bring [open principles][6] to organizations large and small. In fact, many of the [Open Organization Ambassadors][7] have made careers out of helping others become more open, and our community remains dedicated to helping leaders across various industries integrate open mindsets and behaviors into their communities and contexts.

[Last year][8] was a period of [growth][9] and [renewal][10] for the Open Organization project. And this year, we're building on that momentum. Today, we're proud to introduce two new initiatives—and, of course, invite you to participate.

### Turn on, tune in, open up

First, we're excited to announce a brand new venue for our community's work: [OpenOrgTV][11]. It's more than a new platform. It's an experiment in another medium: video.

On our channel, we'll be hosting all kinds of conversations—from in-depth book reviews to community roundtables. To get started, check out the "[Open Leadership Conversations][12]" series, which features interviews with insightful leaders offering their perspectives on what it means to lead according to open principles. Or watch "[Ask the Ambassadors][13]," our Q&A-style write-in show starring community experts answering _your_ questions about organizational culture and design. Want to be part of the show? Submit your questions to community members in our [new dedicated forum][14].

All month long, we'll be featuring introductions to the [Open Organization Ambassadors][15], so you can finally see the faces and hear the voices behind the stories, case studies, and interviews you've been reading for years.

### Defining open leadership

Since we released it several years ago, the [Open Organization Definition][16] has become a guiding framework for organizations looking to better understand the nature of open organizational culture and design (and we've done lots to [teach others about it][17]). Over time, we even developed [a maturity model][18] that operationalizes the definition, so organizations can assess their own levels of openness and make concrete plans to become even _more_ open.

Now we think it's time to take that work a step further.

Inspired by our own experience, pre-existing frameworks from open organizations like [Red Hat][19] and [Mozilla][20], years of studying and interviewing open leaders in the field, and a desire to better understand how open leadership _really_ works, we're pleased to unveil an early draft of a brand new document: the Open Leadership Definition.

This document outlines the mindsets and behaviors unique to the kinds of leaders who build open organizations and make them places where open-minded people can grow and thrive. It builds on the Open Organization Definition, explaining how open leaders embody and champion open organization characteristics—like transparency, inclusivity, adaptability, collaboration, and community.

And we're keen to share it with the world.

Beginning today (and continuing for the next two weeks), we're collecting _your_ insights and comments on our draft document. We're eager to hear your ideas, and will take them _en masse_ or in snippets. You can comment on individual parts of the document, or the entire thing. Just see the links below. We look forward to hearing from you.

* * *

![Open Leadership Definition word cloud][21]

_Open Leadership Definition word cloud by Laura Hilliger (CC BY-SA)_

#### The Open Leadership Definition

[Open Leadership: Introduction][22]

[Open Leadership: Transparency][23]

[Open Leadership: Inclusivity][24]

[Open Leadership: Adaptability][25]

[Open Leadership: Collaboration][26]

[Open Leadership: Community][27]

[Read the entire thing][28] in our shared folder.

* * *

### Let's connect

And of course, you can still find our community in all the usual places, like:

  * [Our project website][29], your portal to the entire Open Organization project and community
  * [Our conversation hub][4], where you can interact with community members, ask questions, learn about new projects, find resources, and help others
  * [Our GitHub organization][30], where we're always working on new materials in the open and invite you to join us
  * [Our publication channel at Opensource.com][2], where we're publishing the latest analyses, case studies, interviews, and resources for practitioners in various regions and industries
  * Our [Twitter][31] and [LinkedIn][32] platforms, where we're sharing our latest updates and fostering new conversations

But the Open Organization community is more than any combination of platforms, tools, or projects. It's _people_, all working enthusiastically together to help spread open principles and practices. Those people are what makes our community so great.

That's been the case for six years now. And it always will be.

### By the numbers

![][33]

_Infographic via Jen Kelchner_

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/21/6/celebrate-sixth-anniversary

作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openorg_sixth_anniversary.png?itok=3RWyEk5S
[2]: https://opensource.com/open-organization
[3]: https://theopenorganization.org/books
[4]: https://www.theopenorganization.community/
[5]: https://www.youtube.com/watch?v=Snf6vICDbzw&list=PLLIYDJHuxOkaPEH76mIJe-HHplsiSAVej
[6]: https://theopenorganization.org/definition
[7]: https://theopenorganization.org/about
[8]: https://opensource.com/open-organization/20/6/scaling-energetic-community
[9]: https://opensource.com/open-organization/20/7/evolving-project-governance
[10]: https://opensource.com/open-organization/20/8/open-community-rebrands
[11]: http://theopenorganization.tv
[12]: https://www.youtube.com/watch?v=07YBs0ss9rU&list=PLLIYDJHuxOkYDTLbKRjcd9THTFtpnK8lh
[13]: https://www.youtube.com/watch?v=ukkZMYqRuUQ&list=PLLIYDJHuxOkY1gDbOFLDxGxwwmxeOATrI
[14]: https://www.theopenorganization.community/c/ask-community/19
[15]: http://theopenorganization.org/roster/
[16]: https://theopenorganization.org/definition/
[17]: https://youtu.be/NYngFYGgxro
[18]: https://github.com/open-organization/open-org-maturity-model
[19]: https://github.com/red-hat-people-team/red-hat-multiplier
[20]: https://mozilla.github.io/open-leadership-framework/framework/#the-open-leadership-framework
[21]: https://opensource.com/sites/default/files/images/open-org/open_leadership_word_cloud.png (Open Leadership Definition word cloud)
[22]: https://docs.google.com/document/d/1blmf94ED_p4BHGv0luU_XrU26aF7tCzV6WTmh_v-PDY/edit?usp=sharing
[23]: https://docs.google.com/document/d/14ssBBL0h2vxU0WZoMnWs6eo_8oRfJhnAr5yr-fAiLGU/edit?usp=sharing
[24]: https://docs.google.com/document/d/1lRutADes5E0mcwtc6GR_Qw06PuJLc9-wUK5W1Gcf_BA/edit?usp=sharing
[25]: https://docs.google.com/document/d/1RcwWTpkT42bgkf6EPiECt8LyAJ1XZjNGhzk0cQuBB7c/edit?usp=sharing
[26]: https://docs.google.com/document/d/1hTvnpqQkOc76-0UJbV6tAvRxOE--bdt96mqGmAKGqiI/edit?usp=sharing
[27]: https://docs.google.com/document/d/1Zl1smi-4jDZNNWd0oNY8qRH-GDi9q5VfvgyZ7YLkvm4/edit?usp=sharing
[28]: https://drive.google.com/drive/folders/1e1N_0p5lJEwAo_s6hQ3OK0KaJIfc7fgF?usp=sharing
[29]: http://theopenorganization.org/
[30]: https://github.com/open-organization
[31]: https://twitter.com/openorgproject
[32]: https://www.linkedin.com/company/the-open-organization/
[33]: https://opensource.com/sites/default/files/images/open-org/openorgproject_6_anniversary_stats.png

[#]: subject: (Test Kubernetes cluster failures and experiments in your terminal)
[#]: via: (https://opensource.com/article/21/6/kubernetes-litmus-chaos)
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Test Kubernetes cluster failures and experiments in your terminal
======
Litmus is an effective tool to cause chaos to test how your system will
respond to failure.
![Science lab with beakers][1]

Do you know how your system will respond to an arbitrary failure? Will your application fail? Will anything survive after a loss? If you're not sure, it's time to see if your system passes the [Litmus][2] test, a detailed way to cause chaos at random with many experiments.

In the first article in this series, I explained [what chaos engineering is][3], and in the second article, I demonstrated how to get your [system's steady state][4] so that you can compare it against a chaos state. This third article will show you how to install and use Litmus to test arbitrary failures and experiments in your Kubernetes cluster. In this walkthrough, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19.

### Configure Minikube

If you haven't already, [install Minikube][5] in whatever way makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power:

```
$ minikube config set memory 8192
❗ These changes will take effect upon a minikube delete and then a minikube start
$ minikube config set cpus 6
❗ These changes will take effect upon a minikube delete and then a minikube start
```

Then start and check your system's status:

```
$ minikube start
😄 minikube v1.14.2 on Debian bullseye/sid
🎉 minikube 1.19.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.19.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'

✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=6, Memory=8192MB) ...
🐳 Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" by default
jess@Athena:~$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```

### Install Litmus

As outlined on [Litmus' homepage][6], the steps to install Litmus are: add your repo to Helm, create your Litmus namespace, then install your chart:

```
$ helm repo add litmuschaos https://litmuschaos.github.io/litmus-helm/
"litmuschaos" has been added to your repositories

$ kubectl create ns litmus
namespace/litmus created

$ helm install chaos litmuschaos/litmus --namespace=litmus
NAME: chaos
LAST DEPLOYED: Sun May 9 17:05:36 2021
NAMESPACE: litmus
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
```

### Verify the installation

You can run the following commands if you want to verify all the desired components are installed correctly.

Check if the **api-resources** for chaos are available:

```
root@demo:~# kubectl api-resources | grep litmus
chaosengines       litmuschaos.io   true   ChaosEngine
chaosexperiments   litmuschaos.io   true   ChaosExperiment
chaosresults       litmuschaos.io   true   ChaosResult
```

Check if the Litmus chaos operator deployment is running successfully:

```
root@demo:~# kubectl get pods -n litmus
NAME                      READY   STATUS    RESTARTS   AGE
litmus-7d998b6568-nnlcd   1/1     Running   0          106s
```

### Start running chaos experiments

With this out of the way, you are good to go! Refer to Litmus' [chaos experiment documentation][7] to start executing your first experiment.

To confirm your installation is working, check that the pod is up and running correctly:

```
jess@Athena:~$ kubectl get pods -n litmus
NAME                      READY   STATUS    RESTARTS   AGE
litmus-7d6f994d88-2g7wn   1/1     Running   0          115s
```

Confirm the Custom Resource Definitions (CRDs) are also installed correctly:

```
jess@Athena:~$ kubectl get crds | grep chaos
chaosengines.litmuschaos.io       2021-05-09T21:05:33Z
chaosexperiments.litmuschaos.io   2021-05-09T21:05:33Z
chaosresults.litmuschaos.io       2021-05-09T21:05:33Z
```

Finally, confirm your API resources are also installed:

```
jess@Athena:~$ kubectl api-resources | grep chaos
chaosengines       litmuschaos.io   true   ChaosEngine
chaosexperiments   litmuschaos.io   true   ChaosExperiment
chaosresults       litmuschaos.io   true   ChaosResult
```

That's what I call easy installation and confirmation. The next step is setting up deployments for chaos.

### Prep for destruction

To test for chaos, you need something to test against. Add a new namespace:

```
$ kubectl create namespace more-apps
namespace/more-apps created
```

Then add a deployment to the new namespace:

```
$ kubectl create deployment ghost --namespace more-apps --image=ghost:3.11.0-alpine
deployment.apps/ghost created
```

Finally, scale your deployment up so that you have more than one pod in your deployment to test against:

```
$ kubectl scale deployment/ghost --namespace more-apps --replicas=4
deployment.apps/ghost scaled
```

For Litmus to cause chaos, you need to add an [annotation][8] to your deployment to mark it ready for chaos. Currently, annotations are available for deployments, StatefulSets, and DaemonSets. Add the annotation `litmuschaos.io/chaos="true"` to your deployment:

```
$ kubectl annotate deploy/ghost litmuschaos.io/chaos="true" -n more-apps
deployment.apps/ghost annotated
```
|
||||
|
||||
Make sure the experiments you will install have the correct permissions to work in the "more-apps" namespace.
|
||||
|
||||
Make a new **rbac.yaml** file for the prepper bindings and permissions:
|
||||
|
||||
|
||||
```
|
||||
`$ touch rbac.yaml`
|
||||
```
|
||||
|
||||
Then add permissions for the generic testing by copying and pasting the code below into your **rbac.yaml** file. These are just basic, minimal permissions to kill pods in your namespace and give Litmus permissions to delete a pod for a namespace you provide:

```
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-delete-sa
  namespace: more-apps
  labels:
    name: pod-delete-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-delete-sa
  namespace: more-apps
  labels:
    name: pod-delete-sa
rules:
- apiGroups: [""]
  resources: ["pods","events"]
  verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
  resources: ["pods/exec","pods/log","replicationcontrollers"]
  verbs: ["create","list","get"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["apps"]
  resources: ["deployments","statefulsets","daemonsets","replicasets"]
  verbs: ["list","get"]
- apiGroups: ["apps.openshift.io"]
  resources: ["deploymentconfigs"]
  verbs: ["list","get"]
- apiGroups: ["argoproj.io"]
  resources: ["rollouts"]
  verbs: ["list","get"]
- apiGroups: ["litmuschaos.io"]
  resources: ["chaosengines","chaosexperiments","chaosresults"]
  verbs: ["create","list","get","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-delete-sa
  namespace: more-apps
  labels:
    name: pod-delete-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-delete-sa
subjects:
- kind: ServiceAccount
  name: pod-delete-sa
  namespace: more-apps
```

Apply the **rbac.yaml** file:

```
$ kubectl apply -f rbac.yaml
serviceaccount/pod-delete-sa created
role.rbac.authorization.k8s.io/pod-delete-sa created
rolebinding.rbac.authorization.k8s.io/pod-delete-sa created
```

The next step is to prepare your chaos engine to delete pods. Create a **chaosengine.yaml** file and copy the information below into it. The chaos engine connects the experiment you need to your application instance: it binds the experiment to your namespace and to the service account with the role bindings you created above.

This chaos engine file only specifies the pod to delete during chaos testing:

```
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: moreapps-chaos
  namespace: more-apps
spec:
  appinfo:
    appns: 'more-apps'
    applabel: 'app=ghost'
    appkind: 'deployment'
  # It can be true/false
  annotationCheck: 'true'
  # It can be active/stop
  engineState: 'active'
  #ex. values: ns1:name=percona,ns2:run=more-apps
  auxiliaryAppInfo: ''
  chaosServiceAccount: pod-delete-sa
  # It can be delete/retain
  jobCleanUpPolicy: 'delete'
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            # set chaos duration (in sec) as desired
            - name: TOTAL_CHAOS_DURATION
              value: '30'

            # set chaos interval (in sec) as desired
            - name: CHAOS_INTERVAL
              value: '10'

            # pod failures without '--force' & default terminationGracePeriodSeconds
            - name: FORCE
              value: 'false'
```

Don't apply this file until you install the experiments in the next section.

### Add new experiments for causing chaos

Now that you have an entirely new environment with deployments, roles, and the chaos engine to test against, you need some experiments to run. Since Litmus has a large community, you can find some great experiments in the [Chaos Hub][9].

In this walkthrough, I'll use the generic experiment of [killing a pod][10].

Run a kubectl command to install the generic experiments into your cluster. Install them in your `more-apps` namespace; you will see the tests created when you run it:

```
$ kubectl apply -f https://hub.litmuschaos.io/api/chaos/1.13.3?file=charts/generic/experiments.yaml -n more-apps
chaosexperiment.litmuschaos.io/pod-network-duplication created
chaosexperiment.litmuschaos.io/node-cpu-hog created
chaosexperiment.litmuschaos.io/node-drain created
chaosexperiment.litmuschaos.io/docker-service-kill created
chaosexperiment.litmuschaos.io/node-taint created
chaosexperiment.litmuschaos.io/pod-autoscaler created
chaosexperiment.litmuschaos.io/pod-network-loss created
chaosexperiment.litmuschaos.io/node-memory-hog created
chaosexperiment.litmuschaos.io/disk-loss created
chaosexperiment.litmuschaos.io/pod-io-stress created
chaosexperiment.litmuschaos.io/pod-network-corruption created
chaosexperiment.litmuschaos.io/container-kill created
chaosexperiment.litmuschaos.io/node-restart created
chaosexperiment.litmuschaos.io/node-io-stress created
chaosexperiment.litmuschaos.io/disk-fill created
chaosexperiment.litmuschaos.io/pod-cpu-hog created
chaosexperiment.litmuschaos.io/pod-network-latency created
chaosexperiment.litmuschaos.io/kubelet-service-kill created
chaosexperiment.litmuschaos.io/k8-pod-delete created
chaosexperiment.litmuschaos.io/pod-delete created
chaosexperiment.litmuschaos.io/node-poweroff created
chaosexperiment.litmuschaos.io/k8-service-kill created
chaosexperiment.litmuschaos.io/pod-memory-hog created
```

Verify the experiments installed correctly:

```
$ kubectl get chaosexperiments -n more-apps
NAME                      AGE
container-kill            72s
disk-fill                 72s
disk-loss                 72s
docker-service-kill       72s
k8-pod-delete             72s
k8-service-kill           72s
kubelet-service-kill      72s
node-cpu-hog              72s
node-drain                72s
node-io-stress            72s
node-memory-hog           72s
node-poweroff             72s
node-restart              72s
node-taint                72s
pod-autoscaler            72s
pod-cpu-hog               72s
pod-delete                72s
pod-io-stress             72s
pod-memory-hog            72s
pod-network-corruption    72s
pod-network-duplication   72s
pod-network-latency       72s
pod-network-loss          72s
```

### Run the experiments

Now that everything is installed and configured, use your **chaosengine.yaml** file to run the pod-deletion experiment you defined. Apply your chaos engine file:

```
$ kubectl apply -f chaosengine.yaml
chaosengine.litmuschaos.io/moreapps-chaos created
```

Confirm the engine started by getting all the pods in your namespace; you should see `pod-delete` being created:

```
$ kubectl get pods -n more-apps
NAME                      READY   STATUS              RESTARTS   AGE
ghost-5bdd4cdcc4-blmtl    1/1     Running             0          53m
ghost-5bdd4cdcc4-z2lnt    1/1     Running             0          53m
ghost-5bdd4cdcc4-zlcc9    1/1     Running             0          53m
ghost-5bdd4cdcc4-zrs8f    1/1     Running             0          53m
moreapps-chaos-runner     1/1     Running             0          17s
pod-delete-e443qx-lxzfx   0/1     ContainerCreating   0          7s
```

Next, you need to be able to observe your experiments using Litmus. The following command uses the ChaosResult CRD and provides a large amount of output:

```
$ kubectl describe chaosresult moreapps-chaos-pod-delete -n more-apps
Name:         moreapps-chaos-pod-delete
Namespace:    more-apps
Labels:       app.kubernetes.io/component=experiment-job
              app.kubernetes.io/part-of=litmus
              app.kubernetes.io/version=1.13.3
              chaosUID=a6c9ab7e-ff07-4703-abe4-43e03b77bd72
              controller-uid=601b7330-c6f3-4d9b-90cb-2c761ac0567a
              job-name=pod-delete-e443qx
              name=moreapps-chaos-pod-delete
Annotations:  <none>
API Version:  litmuschaos.io/v1alpha1
Kind:         ChaosResult
Metadata:
  Creation Timestamp:  2021-05-09T22:06:19Z
  Generation:          2
  Managed Fields:
    API Version:  litmuschaos.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .:
          f:app.kubernetes.io/component:
          f:app.kubernetes.io/part-of:
          f:app.kubernetes.io/version:
          f:chaosUID:
          f:controller-uid:
          f:job-name:
          f:name:
      f:spec:
        .:
        f:engine:
        f:experiment:
      f:status:
        .:
        f:experimentStatus:
        f:history:
    Manager:         experiments
    Operation:       Update
    Time:            2021-05-09T22:06:53Z
  Resource Version:  8406
  Self Link:         /apis/litmuschaos.io/v1alpha1/namespaces/more-apps/chaosresults/moreapps-chaos-pod-delete
  UID:               08b7e3da-d603-49c7-bac4-3b54eb30aff8
Spec:
  Engine:      moreapps-chaos
  Experiment:  pod-delete
Status:
  Experiment Status:
    Fail Step:                 N/A
    Phase:                     Completed
    Probe Success Percentage:  100
    Verdict:                   Pass
  History:
    Failed Runs:   0
    Passed Runs:   1
    Stopped Runs:  0
Events:
  Type    Reason  Age   From                     Message
  ----    ------  ----  ----                     -------
  Normal  Pass    104s  pod-delete-e443qx-lxzfx  experiment: pod-delete, Result: Pass
```

You can see the pass or fail output from your testing as you run the chaos engine definitions.
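
If you only need the verdict rather than the full description, a `jsonpath` query is a compact alternative. The field path below follows the ChaosResult structure shown in the output above:

```
$ kubectl get chaosresult moreapps-chaos-pod-delete -n more-apps \
    -o jsonpath='{.status.experimentStatus.verdict}'
Pass
```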

Congratulations on your first (and hopefully not last) chaos engineering test! Now you have a powerful tool to help your environment grow.

### Final thoughts

You might be thinking, "I can't run this manually every time I want to run chaos. How far can I take this, and how can I set it up for the long term?"

Litmus' best part (aside from the Chaos Hub) is its [scheduler][11] function. You can use it to define the times, dates, and repetitions—recurring or sporadic—on which experiments run. This is a great tool for detail-oriented admins who have been working with Kubernetes for a while and are ready to create some chaos. I suggest staying up to date on Litmus and using this tool for regular chaos engineering. Happy pod hunting!
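
As a sketch of what scheduling looks like, a ChaosSchedule resource can wrap the same engine spec used above and run it on a recurring basis. This example is illustrative only; check the scheduler documentation for the exact fields your Litmus version supports:

```
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosSchedule
metadata:
  name: schedule-pod-delete
  namespace: more-apps
spec:
  schedule:
    repeat:
      properties:
        # run the experiment at most once every two hours (assumed interval)
        minChaosInterval: "2h"
      workDays:
        includedDays: "Mon,Tue,Wed,Thu,Fri"
  engineTemplateSpec:
    appinfo:
      appns: 'more-apps'
      applabel: 'app=ghost'
      appkind: 'deployment'
    annotationCheck: 'true'
    engineState: 'active'
    chaosServiceAccount: pod-delete-sa
    jobCleanUpPolicy: 'delete'
    experiments:
      - name: pod-delete
```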

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/kubernetes-litmus-chaos

作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cherrybomb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/science_experiment_beaker_lab.png?itok=plKWRhlU (Science lab with beakers)
[2]: https://github.com/litmuschaos/litmus
[3]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos
[4]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus
[5]: https://minikube.sigs.k8s.io/docs/start/
[6]: https://litmuschaos.io/
[7]: https://docs.litmuschaos.io
[8]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[9]: https://hub.litmuschaos.io/
[10]: https://docs.litmuschaos.io/docs/pod-delete/
[11]: https://docs.litmuschaos.io/docs/scheduling/
189
sources/tech/20210603 FreeDOS commands for Linux fans.md
Normal file
@@ -0,0 +1,189 @@

[#]: subject: (FreeDOS commands for Linux fans)
[#]: via: (https://opensource.com/article/21/6/freedos-linux-users)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( shipsw )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

FreeDOS commands for Linux fans
======

If you're already familiar with the Linux command line, try these commands to help ease into FreeDOS.

![FreeDOS fish logo and command prompt on computer][1]

If you've tried FreeDOS, you might have been stymied by the command line. The DOS commands are slightly different from how you might use the Linux command line, so getting around on the command line requires learning a few new commands.

But it doesn't have to be an "all new" experience for Linux users. We've always included some standard Unix commands in FreeDOS, in addition to the DOS commands that are already similar to Linux. So if you're already familiar with the Linux command line, try these commands to help ease into FreeDOS:

### Getting Around

Use the `cd` command to _change directory_ in the FreeDOS filesystem. The usage is basically the same on FreeDOS as it is on Linux. To change into a subdirectory called `apps`, type `cd apps`. To go back to the previous directory, type `cd ..`.

The only difference when navigating through directories and paths is that on FreeDOS, the directory separator is `\` ("backslash") instead of the `/` ("forward slash") you use on Linux. For example, let's say you were in the `\devel` directory and you wanted to move to the `\fdos` directory. Both of those are at the same "level" relative to the _root_ directory. So you could type `cd ..\fdos` to "back up" one directory level (with `..`) and then "go into" the `fdos` directory.

To change to a new directory, you could instead give the full path with the leading backslash. This is handy if you are already deep into another path and just want to switch immediately to the new location. For example, to change to the `\temp` directory, you can type `cd \temp`.

```
C:\>cd apps
C:\APPS>cd ..
C:\>cd devel
C:\DEVEL>cd ..\fdos
C:\FDOS>cd \temp
C:\TEMP>_
```

In FreeDOS, like most DOS systems, you can see your current path as part of the DOS prompt. On Linux, your prompt is probably something like `$`. On FreeDOS, the prompt lists the current drive, the current path within that drive, then `>` as the prompt (taking the place of `$` on Linux).

### Listing and Displaying Files

On Linux, the standard command to list files in the current directory is the `ls` command. On FreeDOS, it's a different command: `dir`. But you can get similar behavior to `ls` by creating an _alias_.

To create an alias to another command, use the built-in `alias` command. For example, use this command to define an alias for `ls` that will display a directory listing in a similar way to using `ls` on Linux:

```
C:\>alias ls=dir /one /w /b /l
C:\>ls
[apps]         command.com    [devel]        fdauto.bat     fdconfig.sys
[fdos]         kernel.sys     [src]          [temp]
C:\>
```

The command option format is slightly different on FreeDOS than on Linux. On Linux, you start options with a hyphen character (`-`), but on FreeDOS, options start with a forward slash. The `alias` command above uses the slash character; those are options to `dir`. The `/one` option tells `dir` to order (o) in a certain way: sort any files and directories by name (n) and then by extension (e). Using `/w` says to use a "wide" directory listing, `/b` uses a "bare" display without the other information `dir` usually provides, and `/l` instructs `dir` to display files and directories in lowercase.

Note that the command-line options for the FreeDOS `dir` command are quite different from the options to Linux `ls`, so you can't use this `ls` alias exactly like you would on Linux. For example, typing `ls -l` with this alias on FreeDOS will result in a "File not found" error, because the underlying FreeDOS `dir` command will be unable to find a file called `-l`. But for basic "see what files I have on my system" use, this `ls` alias is good enough to help Linux users get started with FreeDOS.

Similarly, you can create an alias for the FreeDOS `type` command to act like the Linux `cat` command. Both programs display the contents of a text file. While `type` doesn't support the command-line options you might use under Linux, the basic usage to display a single file is the same.

```
C:\FDOS>alias cat=type
C:\FDOS>cat version.fdi
PLATFORM=FreeDOS
VERSION=1.3-RC4
RELEASE=2021-04-30
C:\FDOS>
```

### Other Unix-like Commands

FreeDOS includes a selection of other common Unix-like commands, so Linux users will feel more at home. To use these Linux commands on FreeDOS, you may need to install the **Unix Like Tools** package from the **FreeDOS Installer - My Package List Editor Software** (FDIMPLES) package manager.

![Installing the Unix-like package set][2]

Jim Hall, CC BY-SA 4.0

Not all of the Unix-like utilities work _exactly_ like their Linux counterparts. That's why we call them _Unix-like_. You might want to check the compatibility if you're using some esoteric command-line options, but typical usage should be fine. Start with these common Unix-like commands on FreeDOS:

The `cal` command is the standard Unix calendar program. For example, to display the calendar for the current month, just type `cal`. To view a specific month, give the month and year as arguments:

```
C:\>cal 6 1994

     June 1994
Su Mo Tu We Th Fr Sa
          1  2  3  4
 5  6  7  8  9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30
```

View your disk usage with the `du` command. This is a simple version of the Linux _disk usage_ command and doesn't support any command-line options other than a path.

```
C:\>du -s apps
usage: du (start path)
C:\>du apps
158784  C:\APPS\FED
0       C:\APPS
Total from C:\APPS is 158784
C:\>
```

The `head` command displays the first few lines of a file. For example, this is a handy way to determine if a file contains the correct data.

```
C:\>head fdauto.bat
@ECHO OFF
set DOSDIR=C:\FDOS
set LANG=EN
set TZ=UTC
set PATH=%dosdir%\BIN
if exist %dosdir%\LINKS\NUL set PATH=%path%;%dosdir%\LINKS
set NLSPATH=%dosdir%\NLS
set HELPPATH=%dosdir%\HELP
set TEMP=%dosdir%\TEMP
set TMP=%TEMP%
C:\>
```

To view an entire file, use the `more` command, the default file viewer on FreeDOS. This displays a file one screenful at a time, then prints a prompt to press a key before displaying the next screenful of information. The `more` command is a very simple file viewer; for a more full-featured viewer like you might use on Linux, try the `less` command. The `less` command provides the ability to scroll "backwards" through a file, in case you missed something. You can also search for specific text.

```
C:\>less fdauto.bat
@ECHO OFF
set DOSDIR=C:\FDOS
set LANG=EN
set TZ=UTC
set PATH=%dosdir%\BIN
if exist %dosdir%\LINKS\NUL set PATH=%path%;%dosdir%\LINKS
set NLSPATH=%dosdir%\NLS
set HELPPATH=%dosdir%\HELP
set TEMP=%dosdir%\TEMP
set TMP=%TEMP%
[...]
```

If you have a lot of directories in your program path variable (`PATH`) and aren't sure where a certain program is running from, you can use the `which` command. This scans the program path variable and prints the full location of the program you are looking for.

```
C:\>which less
less C:\FDOS\BIN\LESS.EXE
C:\>_
```

FreeDOS 1.3 RC4 includes other Unix-like commands that you might use in other, more specific situations. These include:

  * **bc**: Arbitrary-precision numeric processing language
  * **sed**: Stream editor
  * **grep** and **xgrep**: Search a text file using regular expressions
  * **md5sum**: Generate an MD5 signature of a file
  * **nro**: Simple typesetting using nroff macros
  * **sleep**: Pause the system for a few seconds
  * **tee**: Save a copy of a command-line stream
  * **touch**: Modify a file's timestamp
  * **trch**: Translate single characters (like Linux `tr`)
  * **uptime**: Report how long your FreeDOS system has been running
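
For instance, `tee` combines with DOS pipes much as it does on Linux. Assuming the Unix Like Tools package is installed, something like the following should save a copy of a bare directory listing to a file while still printing it to the screen (the file name is illustrative):

```
C:\>dir /b | tee filelist.txt
```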

### FreeDOS at your command

FreeDOS, like Linux and BSD, is open source. Whether you want to challenge yourself by learning a new style of command-line interaction, or you want to fall back on the comfort of familiar Unix-like tools, FreeDOS is a fun and fresh operating system to explore. Give it a try!

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/freedos-linux-users

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos-fish-laptop-color.png?itok=vfv_Lpph (FreeDOS fish logo and command prompt on computer)
[2]: https://opensource.com/sites/default/files/uploads/unix-like.png (Installing the Unix-like package set)
@@ -0,0 +1,280 @@

[#]: subject: (Get started with Kustomize for Kubernetes configuration management)
[#]: via: (https://opensource.com/article/21/6/kustomize-kubernetes)
[#]: author: (Brent Laster https://opensource.com/users/bclaster)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Get started with Kustomize for Kubernetes configuration management
======

Modify your Kubernetes manifests without losing control of what's in the original versions.

![Ship captain sailing the Kubernetes seas][1]

Preparing to run a new (or convert an existing) application in [Kubernetes][2] takes work. Working with Kubernetes requires defining and creating multiple "manifests" for the different types of objects in your application. Even a simple microservice is likely to have a deployment.yaml, service.yaml, configmap.yaml, and other files. These declarative YAML files for Kubernetes are usually known as "manifests." You might also have to set up secrets, ingresses, persistent volumes, and other supporting pieces.

Once those are created, you're done with managing your manifests, _right_? Well, it depends. What happens if someone else needs to work with your manifest but needs a slightly (or significantly) different version? Or what happens if someone wants to leverage your manifests for different stages or environments? You need to handle reuse and updates for the different use cases without losing track of your original version.

### Typical approaches for reusing manifests

Approaches for reusing manifests have typically been rather brute force. You make a copy, modify it in whatever way is appropriate, and save it with a different name or location. This process works for the immediate use case, but things can quickly get out of sync or become unwieldy to manage.

#### The copy approach

Suppose you want to change a manifest to add a new resource or update a value in a manifest copy. Someone or something will have to monitor the original, figure out the differences, and merge them into the copy.

The problem becomes even worse if other people make their own copies and change them to suit their particular use cases. Very quickly, the content diverges. People might miss important or significant updates to the original manifests, and they might end up using confusing variations of similar files.

Over time, the situation can worsen, and a significant amount of time can be spent just trying to keep things up to date. If copies of the copies are made, you can end up with something that diverges significantly from the original and even lose track of what was in the original. This, in turn, can dramatically affect usability and maintainability.

#### The parameterization approach

Another approach is to create parameterized templates from the files. That is, to make the manifests into generic "templates" by replacing static, hardcoded values with placeholders that can be filled in with any value. Values are usually supplied at deployment time, with placeholders replaced by values passed in from a command line or read in from a data file. The resulting templates with the values filled in are rendered as valid manifests for Kubernetes.

This is the approach the well-known tool [Helm][3] takes. However, this also has challenges. Removing values and using placeholders fundamentally changes and adds complexity to the manifests, which are now templates. They are no longer usable on their own; they require an application or process like Helm to find or derive and fill in the values. And, as templates, the original files are no longer easily parsable by anyone who looks at them.

The templates are also still susceptible to the issues that copies of the files have. In fact, the problem can be compounded when using templates due to copies having more placeholders and separate data values stored elsewhere. Functions and pipes that join functions can also be added. At some level, this can turn the templates into sort of "programmed YAML" files. At the extreme, this may make the files unusable and unreadable unless you use Helm to render them with the data values into a form that people (and Kubernetes) can understand and use.
|
||||
|
||||
### Kustomize's alternative approach
|
||||
|
||||
Ideally, you would be able to keep using your existing files in their original forms and produce variations without making permanent changes or copies that can easily diverge from the original and each other. And you would keep the differences between versions small and simple.
|
||||
|
||||
These are the basic tenets of [Kustomize][4]'s approach. It's an Apache 2.0-licensed tool that generates custom versions of manifests by "overlaying" declarative specifications on top of existing ones. "Declarative" refers to the standard way to describe resources in Kubernetes: declaring what you want a resource to be and how to look and behave, in contrast to "imperative," which defines the process to create it.
|
||||
|
||||
"Overlaying" describes the process where separate files are layered over (or "stacked on top of") each other to create altered versions. Kustomize applies specific kinds of overlays to the original manifest. The changes to be made in the rendered versions are declared in a separate, dedicated file named kustomization.yaml, while leaving the original files intact.
|
||||
|
||||
Kustomize reads the kustomization.yaml file to drive its behavior. One section of the kustomization.yaml file, titled Resources, lists the names (and optionally the paths) of the original manifests to base changes on. After loading the resources, Kustomize applies the overlays and renders the result.
|
||||
|
||||
You can think of this as applying the specified customizations "on top of" a temporary copy of the original manifest. These operations produce a "customized" copy of the manifest that, if you want, can be fed directly into Kubernetes via a `kubectl apply` command.
|
||||
|
||||
The types of functions built into Kustomize "transform" your Kubernetes manifests, given a simple set of declarative rules. These sets of rules are called "transformers."
|
||||
|
||||
The simplest kind of transformer applies a common identifier to the same set of resources, as Figure 1 demonstrates.
|
||||
|
||||
![A simple example][5]
|
||||
|
||||
Figure 1: Example structure and content for basic Kustomize use (Brent Laster, [CC BY-SA 4.0][6])
|
||||
|
||||
This example has a simple directory with a set of YAML files for a web app with a MySQL backend. The files are:
|
||||
|
||||
* `roar-web-deploy.yaml` is the Kubernetes deployment manifest for the web app part of an app.
|
||||
* `roar-web-svc.yaml` is the Kubernetes service manifest for the web app part of an app.
|
||||
* `kustomization.yaml` is the Kustomize input file that declares the type of transformations you want to make to the manifests.
|
||||
|
||||
|
||||
|
||||
In the kustomization.yaml file in Figure 1, the `commonLabels` section (the bottom center) is an example of a transformer. As the name implies, this transformer's intent is to make the designated label common in the files after the transformation.
The kustomization.yaml file also includes a `resources` section, which lists the files to be included and possibly customized or transformed (highlighted in Figure 2).

![Resources section in kustomization.yaml][7]

Figure 2: The Kustomize resources section denotes which original manifests to include (Brent Laster, [CC BY-SA 4.0][6])

The kustomization.yaml file is a simple set of declarations about the manifests you want to change and how you want to change them; it is a specification of resources plus customizations. The modifications happen when you run the `kustomize build` command. The build operation reads the kustomization.yaml file, pulls in the resources, and applies the appropriate transformers to each file. This example pulls in the two `roar-web` YAML files, produces copies of them, and adds the requested label to the metadata section of each one.

By default, the transformed files are not saved anywhere, and the original files are not overwritten. The transformed content can be piped directly to a `kubectl apply` command or redirected and saved to another file if you want. However, it's generally not a good idea to save generated files, because it becomes too easy for them to get out of sync with their sources. Think of the output from the `kustomize build` step as generated content.

Instead, you should save the associated original files and the kustomization.yaml file. Since the kustomization.yaml file pulls in the original files and transforms them for rendering, the originals can stay the same and be reused in their original form.
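As a concrete sketch of the ideas above, a minimal kustomization.yaml combining a `resources` list with a `commonLabels` transformer might look like this (the file names match the example project; the label name and value are illustrative assumptions):

```yaml
# kustomization.yaml - declare the inputs and the transformation to apply
resources:
  - roar-web-deploy.yaml
  - roar-web-svc.yaml

# transformer: add this label to every resource listed above
commonLabels:
  app: roar-web
```

Running `kustomize build .` in this directory would then emit both manifests with the label added to each metadata section, leaving the files on disk untouched.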
### Other transformations

Kustomize provides a set of transformers that you can apply to a set of resources. These include:

  * `commonLabels` adds a common label (name: value) to each Kubernetes (K8s) resource.
  * `commonAnnotations` adds an annotation to all K8s resources.
  * `namePrefix` adds a common prefix to all resource names.

Figure 3 shows examples of other types of common changes.
![commonAnnotations and namePrefix transformers][8]

Figure 3: Some common transformations provided by Kustomize (Brent Laster, [CC BY-SA 4.0][6])

#### Image transformers

As the name implies, image transformers produce a version of a manifest with a different `newName` or `newTag` for an image spec, such as a container or an init container. The `name` value must match the image value in the original resource.

Figure 4 shows an example of a kustomization.yaml file with changes for an image.

![kustomization.yaml file for an image transformer][9]

Figure 4: Updating image selection with Kustomize (Brent Laster, [CC BY-SA 4.0][6])
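A kustomization.yaml using an image transformer, along the lines of Figure 4, might be sketched as follows (the image names and tag here are illustrative, not taken from the example project):

```yaml
# kustomization.yaml - change which image the deployment uses
resources:
  - roar-web-deploy.yaml

images:
  - name: roar-web                # must match the image value in the original manifest
    newName: myregistry/roar-web  # optional: substitute a different image name
    newTag: "2.0"                 # optional: substitute a different tag
```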
While it's useful to do these kinds of transformations, a more strategic feature is creating separate versions for different environments from a single set of resources. In Kustomize, these are called "variants."
#### Variants

In Kubernetes, it's common to need multiple variations (variants) of a set of resources and the manifests that declare them. A simple example is building on top of a set of Kubernetes resources to create different variants for different stages of product development, such as dev, stage, and prod.

To facilitate these sorts of changes, Kustomize uses the concepts of "overlays" and "bases." A "base" declares the things variants have in common, and an "overlay" declares the differences between variants. Both bases and overlays are represented by kustomization.yaml files. Figure 5 shows an example of this structure. It has the original resource manifests and a base kustomization.yaml file in the root of the tree. The kustomization.yaml files in the prod and stage subdirectories define variants as a set of overlays.

![base/overlay approach][10]

Figure 5: Example structure for Kustomize with bases and overlays to implement variants (Brent Laster, [CC BY-SA 4.0][6])
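An overlay for one variant, such as stage, might be sketched like this (the directory layout and the values are assumptions based on the structure in Figure 5):

```yaml
# overlays/stage/kustomization.yaml - the "stage" variant
resources:
  - ../../base        # pull in the base kustomization and the resources it declares

namePrefix: stage-    # distinguish this variant's resource names
commonLabels:
  env: stage
```

Building from the stage directory renders the shared base resources with the stage-specific prefix and label applied, without duplicating any manifests.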
Variants can also apply patches. Patches in Kustomize are a partial spec or a "delta" for a K8s object. They describe what a section should look like after it changes and how it should be modified when Kustomize renders an updated version. They represent a more "surgical" approach to targeting one or more specific sections in a resource.
The next set of figures demonstrates the Kustomize patching functionality. Going back to an earlier example, you have a set of core resource files (a deployment and a service) and the associated kustomization.yaml files for them (Figures 6a and 6b). There are two parts to the app: a database portion and a web app portion. The patch in this example renames the database service.

![Patching database content][11]

Figure 6a: Patching content in the database portion of the project (Brent Laster, [CC BY-SA 4.0][6])

![Renaming database service][12]

Figure 6b: The service definition for the database resource (Brent Laster, [CC BY-SA 4.0][6])

Figures 7a through 7d highlight the patch portion within the kustomization.yaml file associated with the service. Line 12 defines the type of patch, a "replace" in this example. Lines 13 and 14 identify a "location" in the YAML hierarchy to find the value you want to patch and the replacement value to use. Lines 15-17 identify the specific item in the K8s resources you wish to change.

![Patch block][13]

Figure 7a: Example patch block in kustomization.yaml (Brent Laster, [CC BY-SA 4.0][6])

![Patch to apply][14]

Figure 7b: More detail on the type of patch (Brent Laster, [CC BY-SA 4.0][6])

![Target location][15]

Figure 7c: More detail on the location in the hierarchy in the base files and the replacement value (Brent Laster, [CC BY-SA 4.0][6])

![value to modify][16]

Figure 7d: More detail on the exact item to search for and replace (per the "op" setting) (Brent Laster, [CC BY-SA 4.0][6])
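Putting Figures 7a through 7d together, the patch block might be sketched like this (the field names follow the JSON 6902 patch style Kustomize supports; the exact layout in the article's project may differ slightly):

```yaml
# kustomization.yaml - surgically rename the database service
resources:
  - roar-db-svc.yaml

patches:
  - target:              # which K8s object to patch
      kind: Service
      name: roar-db
    patch: |-
      - op: replace      # type of patch operation
        path: /metadata/name
        value: mysql     # replacement value
```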
When you execute the `kustomize build` command against this set of files, Kustomize first locates the K8s resource you're interested in (the service) and then finds the path identified in the patch block (`metadata.name`). Then it renders a version of the spec with the value `roar-db` replaced with `mysql`. Figures 8a through 8f illustrate this process.

![Locating the initial named object][17]

Figure 8a: Navigating the YAML structure in the original file (Brent Laster, [CC BY-SA 4.0][6])

![Locating the initial named object][18]

Figure 8b: Finding the correct item via `name` (Brent Laster, [CC BY-SA 4.0][6])

![Locating the target section in the hierarchy][19]

Figure 8c: Finding the target section (Brent Laster, [CC BY-SA 4.0][6])

![Identifying the path][20]

Figure 8d: Identifying the path (Brent Laster, [CC BY-SA 4.0][6])

![Substituting the desired value][21]

Figure 8e: The substitution (Brent Laster, [CC BY-SA 4.0][6])

![Rendering the result][22]

Figure 8f: The rendered file with the change (Brent Laster, [CC BY-SA 4.0][6])

Kustomize supports patching via a "strategic merge patch" (illustrated above) or via JSON patches.
### Kustomization hierarchies

The patch scenario illustrates another useful concept when working with Kustomize: multiple kustomization.yaml files in a project hierarchy. This example project has two subprojects: one for a database and another for a web app.

The database piece has a customization to update the service name with the patch functionality, as described above.

The web piece simply has a file to include its resources.

At the base level, there is a kustomization.yaml file that pulls in resources from both parts of the project and a simple file to create a namespace. It also applies a common label to the different elements.
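The base-level kustomization.yaml in such a hierarchy might be sketched as follows (the file and directory names are assumptions matching the project layout described above):

```yaml
# kustomization.yaml (project root) - compose the two subprojects
resources:
  - namespace.yaml      # the simple file that creates the namespace
  - database/           # each subdirectory contains its own kustomization.yaml
  - web/

commonLabels:
  app: roar
```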
### Generators

Kustomize also includes "generators" to automatically update related Kubernetes resources when a resource they depend on is updated. A generator establishes a connection between two resources by generating an identifier (a hash of the generated resource's content) and using it as a common suffix on the objects' names.

This can be beneficial for configmaps and secrets: if the data in them changes, the corresponding deployment will automatically be regenerated and updated. Figure 9 shows an example specification for a Kustomize generator.
![Kustomize generator spec][23]

Figure 9: Example of a Kustomize generator spec (Brent Laster, [CC BY-SA 4.0][6])
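A configMapGenerator of the kind shown in Figure 9 might be sketched as follows (the map name and data keys are illustrative):

```yaml
# kustomization.yaml - generate a configmap whose name includes a content hash
configMapGenerator:
  - name: roar-config   # deployments reference this name; Kustomize rewrites it
    literals:
      - LOG_LEVEL=info
      - DB_HOST=mysql
```

After a `kustomize build`, the rendered ConfigMap is named something like `roar-config-<hash>`, and references to it in the other resources are rewritten to match; changing a literal changes the hash, which in turn changes the deployment spec.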
When run through a Kustomize build operation, the new objects produced will have the generated name applied and included in the specs, as shown in Figure 10.

![Objects and specs from a Kustomize generator][24]

Figure 10: Objects and specs resulting from using a Kustomize generator (Brent Laster, [CC BY-SA 4.0][6])

If you then change the configmap associated with the generator (as Figure 11 shows)...

![Objects and specs from Kustomize generator][25]

Figure 11: Making a change to the configMapGenerator (Brent Laster, [CC BY-SA 4.0][6])

... Kustomize will generate new values that are incorporated into the specs and objects (Figure 12a). Then, if you take the build output and apply it, the deployment will be updated because the associated configmap was updated (Figure 12b).

![Changes after configMapGenerator update and Kustomize build][26]

Figure 12a: Changes after the configMapGenerator is updated and a Kustomize build is run (Brent Laster, [CC BY-SA 4.0][6])

![Deployment changes after configmap changes][27]

Figure 12b: Changes to the deployment based on changes to the configmap (Brent Laster, [CC BY-SA 4.0][6])

In summary, a `kubectl apply` operation on the build's results causes the configmap and any dependent items to reference the new hash value of the updated configmap, updating them in the cluster.
### Kubernetes integration

Kustomize has been integrated into Kubernetes. There are two integration points:

  1. To view the resources in a directory with a kustomization file, you can run:
`$ kubectl kustomize <directory>`
  2. To apply those resources, you can use the `-k` option on `kubectl apply`:
`$ kubectl apply -k <directory>`

If you are using an older version of Kubernetes, it might not include an updated version of Kustomize. In most cases, this isn't a problem unless you need a particular feature or bug fix available only in a current version of Kustomize.
### Conclusion

Kustomize is another way to facilitate the reuse of Kubernetes manifests. Unlike most other approaches, it leaves the original files intact and generates changed versions on the fly with its `build` command. The changes to make are defined in a kustomization.yaml file and can include adding various common attributes, applying patches on top of the original content, or even generating unique identifiers to tie together items like configmaps and deployments.

All in all, Kustomize provides a unique and simple way to deliver variations of Kubernetes manifests once you are comfortable with the setup and function of its various ways to transform files. It is significantly different from the traditional reuse approach taken by Helm, the other main tool for reuse. I'll explore those differences in a future article.
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/kustomize-kubernetes

作者:[Brent Laster][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bclaster
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://helm.sh/
[4]: https://kustomize.io/
[5]: https://opensource.com/sites/default/files/uploads/kustomize-1_simple-example.png (A simple example)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/kustomize-2_resources.png (Resources section in kustomization.yaml)
[8]: https://opensource.com/sites/default/files/uploads/kustomize-3_transformers.png (commonAnnotations and namePrefix transformers)
[9]: https://opensource.com/sites/default/files/uploads/kustomize-4_image-transformer.png (kustomization.yaml file for an image transformer)
[10]: https://opensource.com/sites/default/files/uploads/kustomize-5_base-overlay.png (base/overlay approach)
[11]: https://opensource.com/sites/default/files/uploads/kustomize-6a_patch1.png (Patching database content)
[12]: https://opensource.com/sites/default/files/uploads/kustomize-6b_patch2.png (Renaming database service)
[13]: https://opensource.com/sites/default/files/uploads/kustomize-7a_patchblock.png (Patch block)
[14]: https://opensource.com/sites/default/files/uploads/kustomize-7b_patch_0.png (Patch to apply)
[15]: https://opensource.com/sites/default/files/uploads/kustomize-7c_targetlocation.png (Target location)
[16]: https://opensource.com/sites/default/files/uploads/kustomize-7d_valuemodify.png (value to modify)
[17]: https://opensource.com/sites/default/files/uploads/kustomize-8a_service.png (Locating the initial named object)
[18]: https://opensource.com/sites/default/files/uploads/kustomize-8b_name.png (Locating the initial named object)
[19]: https://opensource.com/sites/default/files/uploads/kustomize-8c_metadata.png (Locating the target section in the hierarchy)
[20]: https://opensource.com/sites/default/files/uploads/kustomize-8d_name.png (Identifying the path)
[21]: https://opensource.com/sites/default/files/uploads/kustomize-8e_name.png (Substituting the desired value)
[22]: https://opensource.com/sites/default/files/uploads/kustomize-8f_newname.png (Rendering the result)
[23]: https://opensource.com/sites/default/files/uploads/kustomize-9a_kustomizegenerator.png (Kustomize generator spec)
[24]: https://opensource.com/sites/default/files/uploads/kustomize-9b_hashadded.png (Objects and specs from a Kustomize generator)
[25]: https://opensource.com/sites/default/files/uploads/kustomize-9c_commonlabel.png (Objects and specs from Kustomize generator)
[26]: https://opensource.com/sites/default/files/uploads/kustomize-9d_hashchanged.png (Changes after configMapGenerator update and Kustomize build)
[27]: https://opensource.com/sites/default/files/uploads/kustomize-9e_updates.png (Deployment changes after configmap changes)
[#]: subject: (Test your Kubernetes experiments with an open source web interface)
[#]: via: (https://opensource.com/article/21/6/chaos-mesh-kubernetes)
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Test your Kubernetes experiments with an open source web interface
======
Chaos Mesh enables chaos engineering with a web frontend. Learn more in
the fourth article in this series.
![Digital creative of a browser on the internet][1]
Have you wanted to cause chaos to test your systems but prefer to use visual tools rather than the terminal? Well, this article is for you, my friend. In the first article in this series, I explained [what chaos engineering is][2]; in the second article, I demonstrated how to get your [system's steady state][3] so that you can compare it against a chaos state; and in the third, I showed how to [use Litmus to test][4] arbitrary failures and experiments in your Kubernetes cluster.

This fourth article introduces [Chaos Mesh][5], an open source chaos orchestrator with a web user interface (UI) that anyone can use. It allows you to create experiments and display statistics in a web UI for presentations or visual storytelling. The [Cloud Native Computing Foundation][6] hosts the Chaos Mesh project, which makes it a good choice for Kubernetes. So let's get started! In this walkthrough, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19.
### Configure Minikube

If you haven't already, [install Minikube][7] in whatever way makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power:

```
$ minikube config set memory 8192
❗  These changes will take effect upon a minikube delete and then a minikube start
$ minikube config set cpus 6
❗  These changes will take effect upon a minikube delete and then a minikube start
```

Then start and check the status of your system:

```
$ minikube start
😄  minikube v1.14.2 on Debian bullseye/sid
🎉  minikube 1.19.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.19.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=6, Memory=8192MB) ...
🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```
#### Install Chaos Mesh

Start installing Chaos Mesh by adding the repository to Helm:

```
$ helm repo add chaos-mesh https://charts.chaos-mesh.org
"chaos-mesh" has been added to your repositories
```

Then search for your Helm chart:

```
$ helm search repo chaos-mesh
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
chaos-mesh/chaos-mesh   v0.5.0          v1.2.0          Chaos Mesh® is a cloud-native Chaos Engineering...
```
Once you find your chart, you can begin the installation steps, starting with creating a `chaos-testing` namespace:

```
$ kubectl create ns chaos-testing
namespace/chaos-testing created
```

Next, install your Chaos Mesh chart in this namespace and name it `chaos-mesh`:

```
$ helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing
NAME: chaos-mesh
LAST DEPLOYED: Mon May 10 10:08:52 2021
NAMESPACE: chaos-testing
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Make sure chaos-mesh components are running
   kubectl get pods --namespace chaos-testing -l app.kubernetes.io/instance=chaos-mesh
```

As the output instructs, check that the Chaos Mesh components are running:

```
$ kubectl get pods --namespace chaos-testing -l app.kubernetes.io/instance=chaos-mesh
NAME                                       READY   STATUS    RESTARTS   AGE
chaos-controller-manager-bfdcb99fd-brkv7   1/1     Running   0          85s
chaos-daemon-4mjq2                         1/1     Running   0          85s
chaos-dashboard-865b778d79-729xw           1/1     Running   0          85s
```
Now that everything is running correctly, you can set up the services to see the Chaos Mesh dashboard and make sure the `chaos-dashboard` service is available:

```
$ kubectl get svc -n chaos-testing
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
chaos-daemon                    ClusterIP   None            <none>        31767/TCP,31766/TCP           3m42s
chaos-dashboard                 NodePort    10.99.137.187   <none>        2333:30029/TCP                3m42s
chaos-mesh-controller-manager   ClusterIP   10.99.118.132   <none>        10081/TCP,10080/TCP,443/TCP   3m42s
```

Now that you know the service is running, go ahead and expose it, rename it, and open the dashboard using `minikube service`:

```
$ kubectl expose service chaos-dashboard --namespace chaos-testing --type=NodePort --target-port=2333 --name=chaos
service/chaos exposed

$ minikube service chaos --namespace chaos-testing
|---------------|-------|-------------|---------------------------|
|   NAMESPACE   | NAME  | TARGET PORT |            URL            |
|---------------|-------|-------------|---------------------------|
| chaos-testing | chaos |        2333 | http://192.168.49.2:32151 |
|---------------|-------|-------------|---------------------------|
🎉  Opening service chaos-testing/chaos in default browser...
```

When the browser opens, you'll see a token-generator window. Check the box next to **Cluster scoped**, and follow the directions on the screen.

![Token generator][8]

(Jess Cherry, [CC BY-SA 4.0][9])

Then you can log into Chaos Mesh and see the dashboard.

![Chaos Mesh Dashboard][10]

(Jess Cherry, [CC BY-SA 4.0][9])

You have installed your Chaos Mesh instance and can start working toward chaos testing!
### Get meshy in your cluster

Now that everything is up and running, you can set up some new experiments to try. The documentation offers some predefined experiments, and I'll choose [StressChaos][11] from the options. In this walkthrough, you will create something in a new namespace to stress against and scale it up so that the experiment has more than one pod to stress against.

Create the namespace:

```
$ kubectl create ns app-demo
namespace/app-demo created
```

Then create the deployment in your new namespace:

```
$ kubectl create deployment nginx --image=nginx --namespace app-demo
deployment.apps/nginx created
```

Scale the deployment up to eight pods:

```
$ kubectl scale deployment/nginx --replicas=8 --namespace app-demo
deployment.apps/nginx scaled
```

Finally, confirm everything is up and working correctly by checking your pods in the namespace:

```
$ kubectl get pods -n app-demo
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-7kphn   1/1     Running   0          69s
nginx-6799fc88d8-82p8t   1/1     Running   0          69s
nginx-6799fc88d8-dfrlz   1/1     Running   0          69s
nginx-6799fc88d8-kbf75   1/1     Running   0          69s
nginx-6799fc88d8-m25hs   1/1     Running   0          2m44s
nginx-6799fc88d8-mg4tb   1/1     Running   0          69s
nginx-6799fc88d8-q9m2m   1/1     Running   0          69s
nginx-6799fc88d8-v7q4d   1/1     Running   0          69s
```
Now that you have something to test against, you can begin working on the definition for your experiment. Start by creating `chaos-test.yaml`:

```
$ touch chaos-test.yaml
```

Next, create the definition for the chaos test. Just copy and paste this experiment definition into your `chaos-test.yaml` file:

```
apiVersion: chaos-mesh.org/v1alpha1
kind: StressChaos
metadata:
  name: burn-cpu
  namespace: chaos-testing
spec:
  mode: one
  selector:
    namespaces:
      - app-demo
    labelSelectors:
      app: "nginx"
  stressors:
    cpu:
      workers: 1
  duration: '30s'
  scheduler:
    cron: '@every 2m'
```

This test will burn 1 CPU for 30 seconds every 2 minutes on pods in the `app-demo` namespace. Finally, apply the YAML file to start the experiment and view what happens in your dashboard.

Apply the experiment file:

```
$ kubectl apply -f chaos-test.yaml
stresschaos.chaos-mesh.org/burn-cpu created
```
Then go to your dashboard and click **Experiments** to see the stress test running. You can pause the experiment by pressing the **Pause** button on the right-hand side of the experiment.

![Chaos Mesh Experiments interface][12]

(Jess Cherry, [CC BY-SA 4.0][9])

Click **Dashboard** to see the state, with a count of total experiments, the state graph, and a timeline of running events or previously run tests.

![Chaos Mesh Dashboard][13]

(Jess Cherry, [CC BY-SA 4.0][9])

Choose **Events** to see the timeline, with details of the experiments below it.

![Chaos Mesh Events interface][14]

(Jess Cherry, [CC BY-SA 4.0][9])

![Chaos Mesh Events timeline details][15]

(Jess Cherry, [CC BY-SA 4.0][9])

Congratulations on completing your first test! Now that you have this working, I'll share more details about what else you can do with your experiments.

### But wait, there's more

Other things you can do with this experiment using the command line include:

  * Updating the experiment to change how it works
  * Pausing the experiment if you need to return the cluster to a steady state
  * Resuming the experiment to continue testing
  * Deleting the experiment if you no longer need it for testing
#### Updating the experiment

As an example, update the experiment in your cluster to increase the duration between tests. Go back to your `chaos-test.yaml` and edit the scheduler to change 2 minutes to 20 minutes:

Before:

```
  scheduler:
    cron: '@every 2m'
```

After:

```
  scheduler:
    cron: '@every 20m'
```

Save and reapply your file; the output should show the new stress test configuration:

```
$ kubectl apply -f chaos-test.yaml
stresschaos.chaos-mesh.org/burn-cpu configured
```

If you look in the dashboard, the experiment should show the new cron configuration.

![New cron configuration][16]

(Jess Cherry, [CC BY-SA 4.0][9])
#### Pausing and resuming the experiment

Manually pausing the experiment on the command line requires adding an [annotation][17] to the experiment. Resuming the experiment requires removing the annotation.

To add the annotation, you will need the kind, name, and namespace of the experiment from your YAML file.

**Pause an experiment:**

```
$ kubectl annotate stresschaos burn-cpu experiment.chaos-mesh.org/pause=true -n chaos-testing
stresschaos.chaos-mesh.org/burn-cpu annotated
```

The web UI shows it is paused.

![Paused experiment][18]

(Jess Cherry, [CC BY-SA 4.0][9])

**Resume an experiment:**

You need the same information to resume your experiment. However, rather than setting the annotation to `true`, you append a dash to the annotation key to remove it, which lifts the pause.

```
$ kubectl annotate stresschaos burn-cpu experiment.chaos-mesh.org/pause- -n chaos-testing
stresschaos.chaos-mesh.org/burn-cpu annotated
```

Now you can see the experiment has resumed in the web UI.

![Resumed experiment][19]

(Jess Cherry, [CC BY-SA 4.0][9])
#### Remove an experiment

Removing an experiment altogether requires a simple `delete` command with the file name:

```
$ kubectl delete -f chaos-test.yaml
stresschaos.chaos-mesh.org "burn-cpu" deleted
```

Once again, you should see the desired result in the web UI.

![All experiments deleted][20]

(Jess Cherry, [CC BY-SA 4.0][9])

Many of these tasks were done with the command line, but you can also create your own experiments using the UI or import experiments you created as YAML files. This helps many people become more comfortable with creating new experiments. There is also a Download button for each experiment, so you can see the YAML file you created by clicking a few buttons.

### Final thoughts

Now that you have this new tool, you can get meshy with your environment. Chaos Mesh allows more user-friendly interaction, which means more people can join the chaos team. I hope you've learned enough here to expand on your chaos engineering. Happy pod hunting!
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/chaos-mesh-kubernetes

作者:[Jessica Cherry][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/cherrybomb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos
[3]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus
[4]: https://opensource.com/article/21/5/total-chaos-litmus
[5]: https://chaos-mesh.org/
[6]: https://www.cncf.io/
[7]: https://minikube.sigs.k8s.io/docs/start/
[8]: https://opensource.com/sites/default/files/uploads/tokengenerator.png (Token generator)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/sites/default/files/uploads/chaosmesh_dashboard.png (Chaos Mesh Dashboard)
[11]: https://chaos-mesh.org/docs/chaos_experiments/stresschaos_experiment
[12]: https://opensource.com/sites/default/files/uploads/chaosmesh_experiments.png (Chaos Mesh Experiments interface)
[13]: https://opensource.com/sites/default/files/uploads/chaosmesh_experiment-dashboard.png (Chaos Mesh Dashboard)
[14]: https://opensource.com/sites/default/files/uploads/chaosmesh_events.png (Chaos Mesh Events interface)
[15]: https://opensource.com/sites/default/files/uploads/chaosmesh_event-details.png (Chaos Mesh Events timeline details)
[16]: https://opensource.com/sites/default/files/uploads/newcron.png (New cron configuration)
[17]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[18]: https://opensource.com/sites/default/files/uploads/pausedexperiment.png (Paused experiment)
[19]: https://opensource.com/sites/default/files/uploads/resumedexperiment.png (Resumed experiment)
[20]: https://opensource.com/sites/default/files/uploads/deletedexperiment.png (All experiments deleted)
[#]: subject: (Optimize Java serverless functions in Kubernetes)
[#]: via: (https://opensource.com/article/21/6/java-serverless-functions-kubernetes)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Optimize Java serverless functions in Kubernetes
======
Achieve faster startup and a smaller memory footprint to run serverless
functions on Kubernetes.
![Ship captain sailing the Kubernetes seas][1]
|
||||
|
||||
A faster startup and smaller memory footprint always matter in [Kubernetes][2] due to the expense of running thousands of application pods and the cost savings of doing it with fewer worker nodes and other resources. Memory is more important than throughput on containerized microservices on Kubernetes because:
|
||||
|
||||
  * It's more expensive because memory is held for the life of the pod (unlike CPU cycles)
  * Microservices multiply the overhead cost
  * One monolith application becomes _N_ microservices (e.g., 20 microservices ≈ 20GB)
|
||||
|
||||
|
||||
|
||||
This significantly impacts serverless function development and the Java deployment model, because many enterprise developers chose alternatives such as Go, Python, and Node.js to overcome the performance bottleneck. That changed with [Quarkus][3], a Kubernetes-native Java stack. This article explains how to optimize Java performance to run serverless functions on Kubernetes using Quarkus.
|
||||
|
||||
### Container-first design
|
||||
|
||||
Traditional frameworks in the Java ecosystem come at a cost in terms of the memory and startup time required to initialize them, including configuration processing, classpath scanning, class loading, annotation processing, and building a metamodel of the world that the framework requires to operate. This cost is multiplied for every framework an application uses.
|
||||
|
||||
Quarkus helps fix these Java performance issues by "shifting left" almost all of the overhead to the build phase. By performing code and framework analysis, bytecode transformation, and dynamic metamodel generation only once, at build time, it produces a highly optimized runtime executable that starts up very fast and doesn't require all the memory of a traditional startup, because the work is done once, in the build phase.
|
||||
|
||||
![Quarkus Build phase][4]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][5])
|
||||
|
||||
More importantly, Quarkus allows you to build a native executable file that provides [performance advantages][6], including amazingly fast boot time and incredibly small resident set size (RSS) memory, for instant scale-up and high-density memory utilization compared to the traditional cloud-native Java stack.
|
||||
|
||||
![Quarkus RSS and Boot Time Metrics][7]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][5])
|
||||
|
||||
Here is a quick example of how you can build the native executable with a [Java serverless][8] function project using Quarkus.
|
||||
|
||||
### 1\. Create the Quarkus serverless Maven project
|
||||
|
||||
This command generates a Quarkus project (e.g., `quarkus-serverless-native`) to create a simple function:
|
||||
|
||||
|
||||
```
$ mvn io.quarkus:quarkus-maven-plugin:1.13.4.Final:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=quarkus-serverless-native \
    -DclassName="org.acme.getting.started.GreetingResource"
```
|
||||
|
||||
### 2\. Build a native executable
|
||||
|
||||
You need GraalVM to build a native executable for a Java application. You can choose any GraalVM distribution, such as [Oracle GraalVM Community Edition (CE)][9] or [Mandrel][10] (the downstream distribution of Oracle GraalVM CE). Mandrel is designed to support building Quarkus-native executables on OpenJDK 11.
|
||||
|
||||
Open `pom.xml`, and you will find this `native` profile. You'll use it to build a native executable:
|
||||
|
||||
|
||||
```
<profiles>
  <profile>
    <id>native</id>
    <properties>
      <quarkus.package.type>native</quarkus.package.type>
    </properties>
  </profile>
</profiles>
```
|
||||
|
||||
> **Note:** You can install the GraalVM or Mandrel distribution locally. You can also download the Mandrel container image to build with (as I did), in which case you need a container engine (e.g., Docker) running locally.
|
||||
|
||||
Assuming you have started your container runtime already, run one of the following Maven commands.
|
||||
|
||||
For [Docker][11]:
|
||||
|
||||
|
||||
```
$ ./mvnw package -Pnative \
    -Dquarkus.native.container-build=true \
    -Dquarkus.native.container-runtime=docker
```
|
||||
|
||||
For [Podman][12]:
|
||||
|
||||
|
||||
```
$ ./mvnw package -Pnative \
    -Dquarkus.native.container-build=true \
    -Dquarkus.native.container-runtime=podman
```
|
||||
|
||||
The output should end with `BUILD SUCCESS`.
|
||||
|
||||
![Native Build Logs][13]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][5])
|
||||
|
||||
Run the native executable directly, without a Java Virtual Machine (JVM):
|
||||
|
||||
|
||||
```
$ target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
|
||||
|
||||
The output will look like:
|
||||
|
||||
|
||||
```
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
INFO  [io.quarkus] (main) quarkus-serverless-native 1.0.0-SNAPSHOT native
(powered by Quarkus xx.xx.xx.) Started in 0.019s. Listening on: http://0.0.0.0:8080
INFO  [io.quarkus] (main) Profile prod activated.
INFO  [io.quarkus] (main) Installed features: [cdi, kubernetes, resteasy]
```
|
||||
|
||||
Supersonic! That's _19 milliseconds_ to start up. The time might differ in your environment.
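If you want to pull that number out of the boot log programmatically (for example, to compare builds), a simple `grep` over the log line works. This is just a sketch; the sample line below mirrors the output above:

```
# Extract the "Started in N.NNNs" figure from a Quarkus boot log line.
line='(powered by Quarkus xx.xx.xx.) Started in 0.019s. Listening on: http://0.0.0.0:8080'
printf '%s\n' "$line" | grep -oE 'Started in [0-9.]+s'
```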
|
||||
|
||||
It also has extremely low memory usage, as the Linux `ps` utility reports. While the app is running, run this command in another terminal:
|
||||
|
||||
|
||||
```
$ ps -o pid,rss,command -p $(pgrep -f runner)
```
|
||||
|
||||
You should see something like:
|
||||
|
||||
|
||||
```
  PID   RSS COMMAND
10246 11360 target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
|
||||
|
||||
This process is using around _11MB_ of memory (RSS). Pretty compact!
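As a sanity check on that figure: on Linux, `ps -o rss` reports the resident set size in kibibytes, so the `11360` value above works out to roughly 11MB:

```
# ps reports RSS in KiB on Linux; divide by 1024 for an approximate MB figure.
rss_kb=11360
echo "$(( rss_kb / 1024 )) MB"
```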
|
||||
|
||||
> **Note:** The RSS and memory usage of any app, including Quarkus, will vary depending on your specific environment and will rise as the application experiences load.
|
||||
|
||||
You can also access the function through its REST API; the output should be `Hello RESTEasy`:
|
||||
|
||||
|
||||
```
$ curl localhost:8080/hello
Hello RESTEasy
```
|
||||
|
||||
### 3\. Deploy the functions to Knative service
|
||||
|
||||
If you haven't already, [create a namespace][14] (e.g., `quarkus-serverless-native`) on [OKD][15] (OpenShift Kubernetes Distribution) to deploy this native executable as a serverless function. Then add a `quarkus-openshift` extension for Knative service deployment:
|
||||
|
||||
|
||||
```
$ ./mvnw -q quarkus:add-extension -Dextensions="openshift"
```
|
||||
|
||||
Append the following variables to `src/main/resources/application.properties` to configure the Knative and Kubernetes resources:
|
||||
|
||||
|
||||
```
quarkus.container-image.group=quarkus-serverless-native
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.native.container-build=true
quarkus.kubernetes-client.trust-certs=true
quarkus.kubernetes.deployment-target=knative
quarkus.kubernetes.deploy=true
quarkus.openshift.build-strategy=docker
```
|
||||
|
||||
Build the native executable, then deploy it to the OKD cluster directly:
|
||||
|
||||
|
||||
```
$ ./mvnw clean package -Pnative
```
|
||||
|
||||
> **Note:** Make sure to log in to the right project (e.g., `quarkus-serverless-native`) using the `oc login` command ahead of time.
|
||||
|
||||
The output should end with `BUILD SUCCESS`. It will take a few minutes to complete the native binary build and deploy a new Knative service. After the service is created successfully, you should see a Knative service (KSVC) and revision (REV) using either the `kubectl` or `oc` command-line tool:
|
||||
|
||||
|
||||
```
$ kubectl get ksvc
NAME                        URL   [...]
quarkus-serverless-native   http://quarkus-serverless-native-[...].SUBDOMAIN   True

$ kubectl get rev
NAME                              CONFIG NAME                 K8S SERVICE NAME                  GENERATION   READY   REASON
quarkus-serverless-native-00001   quarkus-serverless-native   quarkus-serverless-native-00001   1            True
```
|
||||
|
||||
### 4\. Access the native executable function
|
||||
|
||||
Retrieve the serverless function's endpoint by running this `kubectl` command:
|
||||
|
||||
|
||||
```
$ kubectl get rt/quarkus-serverless-native
```
|
||||
|
||||
The output should look like:
|
||||
|
||||
|
||||
```
NAME                        URL                                                                      READY   REASON
quarkus-serverless-native   http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN   True
```
|
||||
|
||||
Access the route `URL` with a `curl` command:
|
||||
|
||||
|
||||
```
$ curl http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN/hello
```
|
||||
|
||||
In less than one second, you will get the same result as you got locally:
|
||||
|
||||
|
||||
```
Hello RESTEasy
```
|
||||
|
||||
When you access the Quarkus running pod's logs in the OKD cluster, you will see the native executable is running as the Knative service.
|
||||
|
||||
![Native Quarkus Log][16]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][5])
|
||||
|
||||
### What's next?
|
||||
|
||||
You can optimize Java serverless functions with GraalVM distributions and deploy them as serverless functions on Knative with Kubernetes. Quarkus enables this performance optimization through simple configuration in ordinary microservices.
|
||||
|
||||
The next article in this series will show you how to make functions portable across multiple serverless platforms with no code changes.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/6/java-serverless-functions-kubernetes
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
|
||||
[2]: https://opensource.com/article/19/6/reasons-kubernetes
|
||||
[3]: https://quarkus.io/
|
||||
[4]: https://opensource.com/sites/default/files/uploads/quarkus-build.png (Quarkus Build phase)
|
||||
[5]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[6]: https://quarkus.io/blog/runtime-performance/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/quarkus-boot-metrics.png (Quarkus RSS and Boot Time Metrics)
|
||||
[8]: https://opensource.com/article/21/5/what-serverless-java
|
||||
[9]: https://www.graalvm.org/community/
|
||||
[10]: https://github.com/graalvm/mandrel
|
||||
[11]: https://www.docker.com/
|
||||
[12]: https://podman.io/
|
||||
[13]: https://opensource.com/sites/default/files/uploads/native-build-logs.png (Native Build Logs)
|
||||
[14]: https://docs.okd.io/latest/applications/projects/configuring-project-creation.html
|
||||
[15]: https://docs.okd.io/latest/welcome/index.html
|
||||
[16]: https://opensource.com/sites/default/files/uploads/native-quarkus-log.png (Native Quarkus Log)
|
|
||||
[#]: subject: (Set and use environment variables in FreeDOS)
|
||||
[#]: via: (https://opensource.com/article/21/6/freedos-environment-variables)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Set and use environment variables in FreeDOS
|
||||
======
|
||||
Environment variables are helpful in almost every command-line
|
||||
environment, including FreeDOS.
|
||||
![Looking at a map for career journey][1]
|
||||
|
||||
A useful feature in almost every command-line environment is the _environment variable_. Some of these variables allow you to control the behavior or features of the command line, and other variables simply allow you to store data that you might need to reference later. Environment variables are also used in FreeDOS.
|
||||
|
||||
### Variables on Linux
|
||||
|
||||
On Linux, you may already be familiar with several of these important environment variables. In the [Bash][2] shell on Linux, the `PATH` variable identifies where the shell can find programs and commands. For example, on my Linux system, I have this `PATH` value:
|
||||
|
||||
|
||||
```
|
||||
bash$ echo $PATH
|
||||
/home/jhall/bin:/usr/lib64/ccache:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
|
||||
```
|
||||
|
||||
That means when I type a command name like `cat`, Bash will check each of the directories listed in my `PATH` variable, in order:
|
||||
|
||||
1. `/home/jhall/bin`
|
||||
2. `/usr/lib64/ccache`
|
||||
3. `/usr/local/bin`
|
||||
4. `/usr/local/sbin`
|
||||
5. `/usr/bin`
|
||||
6. `/usr/sbin`
|
||||
|
||||
|
||||
|
||||
And in my case, the `cat` command is located in the `/usr/bin` directory, so the full path to that command is `/usr/bin/cat`.
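You can also ask the shell to perform this lookup for you: `command -v` prints the first `PATH` entry that resolves a given name (the exact path will vary from system to system):

```
# Print the full path the shell resolves for a command name.
command -v cat
```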
|
||||
|
||||
To set an environment variable on Linux, you type the name of the variable, then an equals sign (`=`) and then the value to store in the variable. To reference that value later using Bash, you type a dollar sign (`$`) in front of the variable name.
|
||||
|
||||
|
||||
```
|
||||
bash$ var=Hello
|
||||
bash$ echo $var
|
||||
Hello
|
||||
```
|
||||
|
||||
### Variables on FreeDOS
|
||||
|
||||
On FreeDOS, environment variables serve a similar function. Some variables control the behavior of the DOS system, and others are useful to store some temporary value.
|
||||
|
||||
To set an environment variable on FreeDOS, you need to use the `SET` keyword. FreeDOS is _case insensitive_, so you can type that using either uppercase or lowercase letters. Then set the variable as you might on Linux, using the variable name, an equals sign (`=`), and the value you want to store.
|
||||
|
||||
However, referencing or _expanding_ an environment variable's value in FreeDOS is quite different from how you do it on Linux. You can't use the dollar sign (`$`) to reference a variable in FreeDOS. Instead, you need to surround the variable's name with percent signs (`%`).
|
||||
|
||||
![Use % \(not $\) to reference a variable's value][3]
|
||||
|
||||
(Jim Hall, [CC-BY SA 4.0][4])
|
||||
|
||||
It's important to use the percent signs both before and after the name because that's how FreeDOS knows where the variable name begins and ends. This is very useful, as it allows you to reference a variable's value while immediately appending (or prepending) other text to the value. Let me demonstrate this by setting a new variable called `reply` with the value `yes`, then referencing that value with the text "11" before and "22" after it:
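On the FreeDOS command line, that session looks roughly like this (a sketch reconstructed from the description above, not a captured transcript):

```
C:\>SET reply=yes

C:\>ECHO 11%reply%22
11yes22
```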
|
||||
|
||||
![Set and reference an environment variable][5]
|
||||
|
||||
(Jim Hall, [CC-BY SA 4.0][4])
|
||||
|
||||
Because FreeDOS is case insensitive, you can use uppercase or lowercase letters for the variable name, as well as for the `SET` keyword. However, the variable's value keeps the letter case exactly as you typed it on the command line.
|
||||
|
||||
Finally, you can see a list of all the environment variables currently defined in FreeDOS. Without any arguments, the `SET` keyword will display all variables, so you can see everything at a glance:
|
||||
|
||||
![Show all variables at once with SET][6]
|
||||
|
||||
(Jim Hall, [CC-BY SA 4.0][4])
|
||||
|
||||
Environment variables are a useful staple in command-line environments, and the same applies to FreeDOS. You can set your own variables to serve your own needs, but be careful about changing some of the variables that FreeDOS uses. These can change the behavior of your running FreeDOS system:
|
||||
|
||||
* **DOSDIR**: The location of the FreeDOS installation directory, usually `C:\FDOS`
|
||||
* **COMSPEC**: The current instance of the FreeDOS shell, usually `C:\COMMAND.COM` or `%DOSDIR%\BIN\COMMAND.COM`
|
||||
* **LANG**: The user's preferred language
|
||||
* **NLSPATH**: The location of the system's language files, usually `%DOSDIR%\NLS`
|
||||
* **TZ**: The system's time zone
|
||||
* **PATH**: A list of directories where FreeDOS can find programs to run, such as `%DOSDIR%\BIN`
|
||||
* **HELPPATH**: The location of the system's documentation files, usually `%DOSDIR%\HELP`
|
||||
* **TEMP**: A temporary directory where FreeDOS stores output from each command as it "pipes" data between programs on the command line
|
||||
* **DIRCMD**: A variable that controls how the `DIR` command displays files and directories, typically set to `/OGNE` to order (O) the contents by grouping (G) directories first, then sorting entries by name (N) then extension (E)
|
||||
|
||||
|
||||
|
||||
If you accidentally change any of the FreeDOS "internal" variables, you could prevent some parts of FreeDOS from working properly. In that case, simply reboot your computer, and FreeDOS will reset the variables from the system defaults.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/6/freedos-environment-variables
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jim-hall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
|
||||
[2]: https://opensource.com/article/19/8/using-variables-bash
|
||||
[3]: https://opensource.com/sites/default/files/uploads/env-path.png (Use % (not $) to reference a variable's value)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/env-vars.png (Set and reference an environment variable)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/env-set.png (Show all variables at once with SET)
|
|
||||
[#]: subject: (15 Useful Visual Studio Code Keyboard Shortcuts to Increase Productivity)
|
||||
[#]: via: (https://itsfoss.com/vs-code-shortcuts/)
|
||||
[#]: author: (Sarvottam Kumar https://itsfoss.com/author/sarvottam/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (ywxgod)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
15 Useful Visual Studio Code Keyboard Shortcuts to Increase Productivity
|
||||
======
|
||||
|
||||
There is no doubt that Microsoft's [VS Code is one of the best open source code editors][1] out there. Unlike the legendary Vim, VS Code doesn't require you to be a keyboard ninja, and it has tons of features that developers swear by.
|
||||
|
||||
But that doesn't mean you cannot, or should not, use keyboard shortcuts in Visual Studio Code.
|
||||
|
||||
Do you hate breaking your coding flow and moving your hand to the mouse to perform an action like toggling the terminal in the Visual Studio Code (VS Code) editor? If so, you should familiarize yourself with, and memorize, these useful keyboard shortcuts for VS Code.
|
||||
|
||||
Not only will this help you ditch the mouse, it will also make you far more productive and efficient.
|
||||
|
||||
So, let’s get to know how you can code fast by quickly navigating through the code editor using keyboard shortcuts.
|
||||
|
||||
### Useful VS Code Keyboard Shortcuts
|
||||
|
||||
Just a disclaimer. These keyboard shortcuts are what I find most useful when working in VS Code. You may explore more of them based on your needs.
|
||||
|
||||
I have also mentioned keyboard shortcuts for macOS users.
|
||||
|
||||
#### 1\. Show All Commands
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + SHIFT + P or F1 | SHIFT + ⌘ + P or F1
|
||||
|
||||
Starting with the most helpful shortcut of all: it opens the Command Palette, which provides access to all of VS Code's functionality.
|
||||
|
||||
![Command Palette][2]
|
||||
|
||||
It is a very important VS Code shortcut because even if you forget, or don't want to remember, any other shortcut, you can still perform various operations from the Command Palette, such as creating a new file, opening settings, changing the theme, and viewing all keyboard shortcuts.
|
||||
|
||||
#### 2\. Split VS Code Editor Vertically Or Horizontally
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + \ | ⌘ + \
|
||||
|
||||
If you don’t have a multi-monitor setup for high productivity, you can still view codes of multiple files at once by splitting the editor either horizontally or vertically.
|
||||
|
||||
![Split VS Code][3]
|
||||
|
||||
To change focus between editor groups, you can use either numbers or the arrow keys:
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + 1/2/3 | ⌘ + 1/2/3
|
||||
CTRL + K CTRL + ←/→ | ⌘ + K ⌘ + ←/→
|
||||
|
||||
#### 3\. Toggle Integrated Terminal
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + ` | ⌘ + `
|
||||
|
||||
The integrated terminal in VS Code is a very convenient feature that lets you execute tasks quickly without switching windows. This keyboard shortcut comes in very handy for hiding and unhiding the terminal in the editor.
|
||||
|
||||
![Integrated Terminal][4]
|
||||
|
||||
However, like me, if you find pressing "CTRL+`" awkward due to the backtick key's corner location, you can instead open the Command Palette and execute the `View: Toggle Terminal` command.
|
||||
|
||||
![Toggle Terminal Using Command Palette][5]
|
||||
|
||||
#### 4\. Go To File
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + P | ⌘ + P
|
||||
|
||||
As the project grows, looking for a file can become a difficult task. Hence, even if you normally use a mouse, this command can save you a lot of time when searching for and navigating to a file in a repository.
|
||||
|
||||
![Go to file][6]
|
||||
|
||||
#### 5\. Go To Line
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + G | ^ + G
|
||||
|
||||
Once you've found a file, you may want to jump to a specific line to add or edit code. If a file contains thousands of lines of code, scrolling can definitely eat up your time. The CTRL+G or ^+G keyboard shortcut takes you straight to the line you want.
|
||||
|
||||
![Go to line][7]
|
||||
|
||||
Alternatively, you can use the fourth shortcut, 'Go To File,' where appending a colon (`:`) and a line number in the input box works as 'Go To Line.'
|
||||
|
||||
#### 6\. Search Complete Project
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + SHIFT + F | ⌘ + SHIFT + F
|
||||
|
||||
You may also want to search for text, a variable, or a function across your whole project. In that case, this command is very convenient: it opens a search input in the sidebar.
|
||||
|
||||
![Search project][8]
|
||||
|
||||
You can also add filters to your search using ALT+C to match case, ALT+W to match the whole word, and ALT+R to use regular expression.
|
||||
|
||||
#### 7\. Zen Mode
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + K Z | ⌘ + K Z
|
||||
|
||||
Want to work in a distraction-free environment and stay focused? Zen mode is a feature in VS Code that hides all UI (the Status Bar, Activity Bar, Panel, and Sidebar) and displays only the editor in full screen.
|
||||
|
||||
![Zen Mode][9]
|
||||
|
||||
To enable Zen Mode, you can either use the above shortcut or open the Command Palette and execute "View: Toggle Zen Mode." To exit Zen mode, press the `Esc` key twice.
|
||||
|
||||
#### 8\. Add Selection To Next Find Match
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + D | ⌘ + D
|
||||
|
||||
This command enables you to select the next occurrence of a selected text for editing. It comes in very handy when the next match is located far away from the first one.
|
||||
|
||||
![Next find match][10]
|
||||
|
||||
#### 9\. Toggle Line Comment
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + / | ⌘ + /
|
||||
|
||||
The struggle of reaching the start of a line and adding a double forward slash to comment it out can be replaced with this quick keyboard shortcut.
|
||||
|
||||
![Comment out code][11]
|
||||
|
||||
Even if you want to comment out multiple lines, you can select all lines using `SHIFT+UP/Down` and then press `CTRL+/`.
|
||||
|
||||
#### 10\. Jump To The Beginning Or End Of File
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + HOME/END | ⌘ + ↑/↓
|
||||
|
||||
If you get lost in the middle of your code, this command can help you quickly reach the start or end of the file.
|
||||
|
||||
#### 11\. Code Folding Or Unfolding
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + SHIFT + [ or ] | ⌥ + ⌘ + [ or ]
|
||||
|
||||
It is one of the most useful shortcuts, helping you collapse or expand a region of code. This way, you can hide unnecessary code and view only the required section at a time, to focus more and code faster.
|
||||
|
||||
![Collapse a region of code][12]
|
||||
|
||||
#### 12\. Peek Implementation
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + SHIFT + F12 | ⌘ + SHIFT + F12
|
||||
|
||||
This shortcut will most likely help you during code analysis or bug fixing, when you want to understand how functions and variables work without leaving your current location.
|
||||
|
||||
![Peek Implementation][13]
|
||||
|
||||
#### 13\. Delete Current Line
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + SHIFT + K | SHIFT + ⌘ + K
|
||||
|
||||
A single quick command combines the two tasks of selecting the current line and pressing the delete/backspace button.
|
||||
|
||||
#### 14\. Find And Replace
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + F (find), CTRL + H (replace) | ⌘ + F (find), ⌥ + ⌘ + F (replace)
|
||||
|
||||
What is the best way to replace all occurrences of a text in a file with new text? If you do it one by one, manually scrolling down through the code, it can take a very long time when there are many occurrences.
|
||||
|
||||
![Find and replace][14]
|
||||
|
||||
Find and Replace does the same task within seconds. It has two shortcuts: one opens the input box for finding text, and the other for replacing it.
|
||||
|
||||
#### 15\. VS Code Keyboard Shortcuts
|
||||
|
||||
Windows/Linux | macOS
|
||||
---|---
|
||||
CTRL + K CTRL + S | ⌘ + K ⌘ + S
|
||||
|
||||
Finally, if you still struggle to remember all the above keyboard shortcuts, don't worry: you can view all available commands for your editor using this shortcut.
|
||||
|
||||
![Keyboard Shortcuts][15]
|
||||
|
||||
Here you can also edit a command's keybinding to suit your preference.
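Custom keybindings are stored in your `keybindings.json` file. As a hypothetical example (the key combination here is just an illustration; the command ID is VS Code's built-in terminal toggle), rebinding the terminal toggle from tip #3 might look like this:

```
// keybindings.json (example only)
[
    {
        "key": "ctrl+alt+t",
        "command": "workbench.action.terminal.toggleTerminal"
    }
]
```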
|
||||
|
||||
### Want More Keyboard Shortcuts For VS Code?
|
||||
|
||||
If you want to have complete knowledge of VS Code keyboard shortcuts, you can check out the [documentation][16] of Visual Studio Code.
|
||||
|
||||
Or, if you want all the available shortcuts on a single piece of paper, get the cheat sheet for [Linux][17], [macOS][18], or [Windows][19] and take a quick look whenever you forget one.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/vs-code-shortcuts/
|
||||
|
||||
作者:[Sarvottam Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/sarvottam/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/
|
||||
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/Command-Palette.jpg?resize=800%2C418&ssl=1
|
||||
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Split-VS-Code.png?resize=800%2C405&ssl=1
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/Integrated-Terminal.png?resize=800%2C221&ssl=1
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Toggle-Terminal-Using-Command-Palette.png?resize=686%2C118&ssl=1
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Go-to-file.jpg?resize=800%2C388&ssl=1
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/Go-to-line.jpg?resize=800%2C99&ssl=1
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/Search-project.jpg?resize=381%2C450&ssl=1
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/Zen-Mode.png?resize=800%2C450&ssl=1
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Next-find-match.jpg?resize=800%2C313&ssl=1
|
||||
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/Comment-out-code.jpg?resize=800%2C313&ssl=1
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/Collapse-a-region-of-code.jpg?resize=800%2C287&ssl=1
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Peek-Implementation.png?resize=800%2C339&ssl=1
|
||||
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/Find-and-replace.png?resize=800%2C223&ssl=1
|
||||
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/Keyboard-Shortcuts.png?resize=800%2C406&ssl=1
|
||||
[16]: https://code.visualstudio.com/docs/getstarted/keybindings
|
||||
[17]: https://code.visualstudio.com/shortcuts/keyboard-shortcuts-linux.pdf
|
||||
[18]: https://code.visualstudio.com/shortcuts/keyboard-shortcuts-macos.pdf
|
||||
[19]: https://code.visualstudio.com/shortcuts/keyboard-shortcuts-windows.pdf
|
|
||||
[#]: subject: (5 handy guides to open source for teachers)
|
||||
[#]: via: (https://opensource.com/article/21/6/open-source-guides-teachers)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
5 handy guides to open source for teachers
|
||||
======
|
||||
To help you get the most out of your summer, but also satiate the real
|
||||
need to plan for the coming school year, we've collected some of our
|
||||
favorite concise guides to help you plan.
|
||||
![Looking at a map][1]
|
||||
|
||||
For some teachers, summer is here, and with it a long (hopefully relaxing) break. All the teachers I know are proud lifelong learners, though, and at the end of the summer break there's a new school year awaiting. To help you get the most out of your summer, but also satiate the real need to plan for the coming school year, we've collected some of our favorite _concise_ guides to help you plan.
|
||||
|
||||
### How to make your school pandemic-ready

By going [all-in on Linux][2], teacher Robert Maynord ensured his school was ready for remote learning—even before it needed to be. We still don't know what the rest of the year has in store, but if there's anything the pandemic has shown the world, it's that [digital transformation][3] (the integration of digital technology into all areas of education) is not only possible but beneficial to both teachers and students. You may not have the authority to change the way your classroom operates on a technological level, but there are many small changes you can make to create a more agile learning experience for your pupils.

### The ultimate guide to open source for teachers

With this article, you can learn how to [incorporate open source principles][4] in your classroom. Open source is about more than just technology. It's about sharing knowledge, collaborating, and working together toward a common goal. You can transform your classroom into a shared space where students learn from each other just as much as they do from you. Read it, put it into practice, and encourage it.

### 8 WordPress plugins for virtual classrooms

The WordPress web platform is a powerful tool for building websites. In the classroom, [it can serve as a great tool][5] to teach both web technology and creative or academic writing. It can also be used to enable remote learning, or to integrate everyday schoolwork with the digital realm. Get the most out of WordPress for educational purposes by mastering its many [add-on features][6].

### Teach kids Python (interactive gaming)

Open source tools can help anyone get started learning Python in an easy and fun way—making games. Python is a big topic, of course, but we have a curriculum that takes you from installing Python and taking your first steps with code, through simple text and "turtle" drawing games, all the way to intermediate game development.
1. Start out by installing Python and getting used to how code works with our [Python 101 article][7]. This article alone can probably serve as the basis for two or three distinct classroom lessons.
2. If you're familiar with [Jupyter][8], then learn to [program a simple game with Python and Jupyter][9].
3. You can also learn [game development with this free Python ebook][10], which teaches you how to use Git, Python, and PyGame. Once you've learned the basics, check out [this collection of cool creations from the book's "playtesters"][11].

If Python is too advanced for you or your students, take a look at [Twine][12], a simple HTML-based interactive storytelling tool.
### Teach kids the Raspberry Pi (programming)

This article in our guide to [getting started with the Raspberry Pi][13] explores resources for helping kids learn to program. The Raspberry Pi has the unique quality of costing only $35 USD while also being a full-powered Linux computer that can be used for anything from basic Python lessons to actual web servers, so it's full of potential for education. It's a reasonable goal to have a Pi per child in your classroom, or you can have a single Pi for the classroom to explore together (Linux is a multi-user OS, so with the right setup all of your students can use one Pi at the same time until you sell their parents or your principal on the value of purchasing more).
### Learn together

Part of an open classroom is being brave enough to learn alongside your students. As a teacher, you might be used to having all the answers, but the digital world is ever-changing and evolving. Don't be afraid to learn Python, Linux, the Raspberry Pi, and anything else _with_ your students. Work together to learn new fundamentals, new tricks, and new ways of solving problems. Open source is a proven and successful methodology, so don't just teach it—make it happen in your classroom.

--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/open-source-guides-teachers

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
[2]: https://opensource.com/article/21/5/linux-school-servers
[3]: https://enterprisersproject.com/what-is-digital-transformation
[4]: https://opensource.com/article/20/7/open-source-teachers
[5]: https://opensource.com/article/20/3/wordpress-education
[6]: https://opensource.com/article/20/5/wordpress-plugins-education
[7]: https://opensource.com/article/17/10/python-101
[8]: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
[9]: https://opensource.com/article/20/5/python-games
[10]: https://opensource.com/article/20/10/learn-python-ebook
[11]: https://github.com/MakerBox-NZ?q=pygame&type=&language=&sort=
[12]: https://opensource.com/article/18/2/twine-gaming
[13]: https://opensource.com/article/19/3/teach-kids-program-raspberry-pi
[#]: subject: (Automate tasks with BAT files on FreeDOS)
[#]: via: (https://opensource.com/article/21/6/automate-tasks-bat-files-freedos)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Automate tasks with BAT files on FreeDOS
======
Here's a helpful guide to batch files under FreeDOS.
![Tips and gears turning][1]
Even if you haven't used DOS before, you are probably aware of its command-line shell, named simply `COMMAND.COM`. The `COMMAND.COM` shell has become synonymous with DOS, so it's no surprise that FreeDOS also implements a similar shell called "FreeCOM"—but named `COMMAND.COM`, just as on other DOS systems.

But the FreeCOM shell can do more than just provide a command-line prompt where you run commands. If you need to automate tasks on FreeDOS, you can do that using _batch files_, also called "BAT files" because these scripts use the `.BAT` extension.

Batch files are much simpler than scripts you might write on Linux. That's because when this feature was originally added to DOS, long ago, it was meant as a way for DOS users to "batch up" certain commands. There's not much flexibility for conditional branching, and batch files do not support more advanced features such as arithmetic expansion, separate redirection for standard output and error messages, background processes, tests, loops, and other scripting structures that are common in Linux scripts.

Here's a helpful guide to batch files under FreeDOS. Remember to reference environment variables by wrapping the variable name in percent signs (`%`), such as `%PATH%`. However, note that `FOR` loops use a slightly different construct, for historical reasons.
### Printing output

Your batch file might need to print messages to the user to let them know what's going on. Use the `ECHO` statement to print messages. For example, a batch file might indicate it is done with a task with this statement:

```
`ECHO Done`
```

You don't need quotes in the `ECHO` statement. The FreeCOM `ECHO` statement does not treat quotes in any special way and prints them just like regular text.
Normally, FreeDOS prints every line in the batch file as it executes it. This is usually not a problem in a very short batch file that only defines a few environment variables for the user. But for longer batch files that do more work, this constant display of the batch lines can become bothersome. To suppress this output, use the `OFF` keyword with the `ECHO` statement:

```
`ECHO OFF`
```

To resume displaying the batch lines as FreeDOS runs them, use the `ON` keyword instead:

```
`ECHO ON`
```

Most batch files include an `ECHO OFF` statement on the first line to suppress messages. But the shell will still print `ECHO OFF` to the screen as it executes that statement. To hide that message, batch files often use an at sign (`@`) in front. This special character at the start of any line in a batch file suppresses printing of that line, even if `ECHO` is turned on.

```
`@ECHO OFF`
```
### Comments

When writing any long batch file, most programmers prefer to use _comments_ to remind themselves what the batch file is meant to do. To enter a comment in a batch file, use the `REM` (for _remark_) keyword. Anything after `REM` gets ignored by the FreeCOM shell.

```
@ECHO OFF
REM This is a comment
```

### Executing a "secondary" batch file

Normally, FreeCOM only runs one batch file at a time. However, you might need to use another batch file to do certain things, such as set environment variables that are common across several batch files.

If you simply call the second batch file from a "running" batch file, FreeCOM switches entirely to that second batch file and stops processing the first one. To instead run the second batch file "inside" the first batch file, you need to tell the FreeCOM shell to _call_ the second batch file with the `CALL` keyword.

```
@ECHO OFF
CALL SETENV.BAT
```
### Conditional evaluation

Batch files do support a simple conditional evaluation structure with the `IF` statement. This has three basic forms:

1. Testing the return status of the previous command
2. Testing if a variable is equal to a value
3. Testing if a file exists
A common use of the `IF` statement is to test if a program returned successfully to the operating system. Most programs return a zero value if they completed normally, or some other value in case of an error. In DOS, this is called the _error level_, and it is a special case of the `IF` test.

To test if a program called `MYPROG` exited successfully, you actually want to examine if the program returned a "zero" error level. Use the `ERRORLEVEL` keyword to test the value. Note that `IF ERRORLEVEL n` is true when the error level is _n or greater_, so the usual way to test for success is to check that the error level never reached 1:

```
@ECHO OFF
MYPROG
IF NOT ERRORLEVEL 1 ECHO Success
```
Testing the error level with `ERRORLEVEL` is a clunky way to examine the exit status of a program. A more useful way to examine the different possible return codes of a DOS program is with a special variable FreeDOS defines for you, called `ERRORLEVEL`. This stores the error level of the most recently executed program, and you can test for different values using the `==` test.

You can test if a variable is equal to a value using the `==` test with the `IF` statement. As in some programming languages, you use `==` to directly compare two values. Usually, you will reference an environment variable on one side and a value on the other, but you could also compare the values of two variables to see if they are the same. For example, you could rewrite the above `ERRORLEVEL` code with this batch file:

```
@ECHO OFF
MYPROG
IF %ERRORLEVEL%==0 ECHO Success
```
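Since this article repeatedly contrasts batch files with Linux scripts, a comparison may help readers coming from (or headed to) Linux. This is a sketch of the closest POSIX shell analog, not FreeDOS syntax: the exit status of the last command lives in the special parameter `$?`:

```shell
# POSIX shell analog: the last command's exit status is in $?
true                               # stands in for MYPROG; exits with status 0
[ "$?" -eq 0 ] && echo Success     # prints "Success" because the status was 0
```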
Another common use of the `IF` statement is to test if a file exists, and take action if so. You can test for a file with the `EXIST` keyword. For example, to delete a temporary file called `TEMP.DAT`, you might use this line in your batch file:

```
@ECHO OFF
IF EXIST TEMP.DAT DEL TEMP.DAT
```

With any of the `IF` statements, you can use the `NOT` keyword to _negate_ a test. To print a message if a file _does not_ exist, you could write:

```
@ECHO OFF
IF NOT EXIST TEMP.DAT ECHO No file
```
### Branched execution

One way to leverage the `IF` test is to jump to an entirely different part of the batch file, depending on the outcome of a previous test. In the simplest case, you might want to skip to the end of the batch file if a key command fails. Or you might want to execute other statements if certain environment variables are not set up correctly.

You can skip around to different parts of a batch file using the `GOTO` instruction. This jumps to a specific line, called a _label_, in the batch file. Note that this is a strict "go-to" jump; batch file execution picks up at the new label.

Let's say a program needs an existing empty file to store temporary data. If the file does not exist, you need to create one before running the program. You might add these lines to a batch file, so your program always has a temporary file to work with:
```
@ECHO OFF
IF EXIST temp.dat GOTO prog
ECHO Creating temp file...
TOUCH temp.dat
:prog
ECHO Running the program...
MYPROG
```

Of course, this is a very simple example. For this one case, you might instead rewrite the batch file to create the temporary file as part of the `IF` statement:

```
@ECHO OFF
IF NOT EXIST temp.dat TOUCH temp.dat
ECHO Running the program...
MYPROG
```
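For comparison with Linux scripting, the same create-if-missing guard in a POSIX shell is a sketch like this (`temp.dat` is simply the placeholder file name from the example above):

```shell
# Create temp.dat only if it does not already exist,
# the shell analog of: IF NOT EXIST temp.dat TOUCH temp.dat
[ -e temp.dat ] || touch temp.dat
echo "Running the program..."
```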
### Iteration

What if you need to perform the same task over a set of files? You can _iterate_ over a set of files with the `FOR` loop. This is a one-line loop that runs a single command with a different file each time.

The `FOR` loop uses a special syntax for the iteration variable, which works differently than other DOS environment variables. To loop through a set of text files so you can edit each one in turn, use this statement in your batch file:
```
@ECHO OFF
FOR %%F IN (*.TXT) DO EDIT %%F
```

Note that the iteration variable is specified with only one percent sign (`%`) if you run this loop at the command line, without a batch file:

```
`C:\> FOR %F IN (*.TXT) DO EDIT %F`
```
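For readers more familiar with Linux, here is a sketch of the equivalent POSIX shell loop; `echo` stands in for the editor so the loop's behavior stays visible:

```shell
# Visit every .txt file in turn, the shell analog of:
# FOR %F IN (*.TXT) DO EDIT %F
for f in *.txt; do
    echo "editing $f"    # replace echo with your editor command
done
```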
### Command-line processing

FreeDOS provides a simple method to evaluate any command-line options the user might have provided when running a batch file. FreeDOS parses the command line and stores the first nine batch file options in the special variables `%1`, `%2`, and so on through `%9`. Notice that the tenth option (and beyond) is not directly accessible this way. (The special variable `%0` stores the name of the batch file.)

If your batch file needs to process more than nine options, you can use the `SHIFT` statement to remove the first option and _shift_ every option down by one value. So the second option becomes `%1`, and the tenth option becomes `%9`.

Most batch files need to shift by one value. But if you need to shift by some other increment, you can provide that parameter to the `SHIFT` statement, such as:

```
`SHIFT 2`
```
Here's a simple batch file that demonstrates shifting by one:

```
@ECHO OFF
ECHO %1 %2 %3 %4 %5 %6 %7 %8 %9
ECHO Shift by one ..
SHIFT 1
ECHO %1 %2 %3 %4 %5 %6 %7 %8 %9
```

Executing this batch file with ten arguments shows how the `SHIFT` statement shifts the command-line options, so the batch file can now access the tenth argument as `%9`:

```
C:\SRC>args 1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9
Shift by one ..
2 3 4 5 6 7 8 9 10
C:\SRC>
```
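POSIX shells have the same mechanism, also named `shift`. This sketch mirrors the batch file above (stopping at `$9`, since positional parameters past nine need braces in a shell):

```shell
# Shell analog of the SHIFT demonstration above.
set -- 1 2 3 4 5 6 7 8 9 10   # simulate ten command-line arguments
echo "$1 $2 $3 $4 $5 $6 $7 $8 $9"
echo "Shift by one .."
shift 1
echo "$1 $2 $3 $4 $5 $6 $7 $8 $9"
```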
--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/automate-tasks-bat-files-freedos

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[#]: subject: (Comparing Linux Mint and Fedora: Which One Should You Use?)
[#]: via: (https://itsfoss.com/linux-mint-vs-fedora/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Comparing Linux Mint and Fedora: Which One Should You Use?
======
Linux Mint is a [popular Linux distribution tailored for beginners][1] that provides a familiar experience to former Windows users. In fact, it does a [few things better than Ubuntu][2], which makes it a suitable choice for every type of user.

It is completely community-powered, with Ubuntu as its base.

On the other hand, Fedora is a cutting-edge distribution that focuses on incorporating exciting changes that eventually make their way into Red Hat Enterprise Linux (RHEL).

Unlike Linux Mint, Fedora does not exclusively focus on personal use (or non-developers). Even though it offers a Workstation edition, it aims at developers and experienced Linux users.

### Fedora or Mint: What Should You Go For?

While we know that Fedora is not exactly geared toward Linux newbies, many users love using Fedora as their daily driver. So, in this article, we shall shed light on some differences between the two to help you pick one to use on your desktop machine.
#### System Requirements & Hardware Compatibility

![][3]

Before choosing any Linux distribution, you should always go through the system requirements and check the hardware compatibility.

Here, both Linux Mint and Fedora require at least 2 GB of RAM, 20 GB of disk space, and a 1024 x 768 resolution display for an entry-level experience.

Yes, the official documentation may mention 1 GB of RAM to start with, but let's be practical about your use case. Unless you have a vintage computer that you want to revive for a specific purpose, that is out of the equation.

![Linux Mint Resource Usage][4]

Technically, both support modern and old hardware. You will only know how well it works, along with the software/driver support, once you have it installed. Unless you have a special peripheral or hardware component with specific features, hardware support may not be a big deal.

The Linux Mint 19 series still provides support for 32-bit systems, which you can use until April 2023. Fedora doesn't support 32-bit systems anymore.
#### Software Update Cycle

![Linux Mint Update Manager][5]

Linux Mint focuses on long-term support (LTS) releases with five years of support. It is maintained the same way as Ubuntu, but there is no paid extension like Ubuntu offers.

Fedora does not offer an LTS release but instead pushes a new release every six months. Every version gets software support for 13 months, and you have the option to skip one version if you want.

If you just want a Linux distro installed for years without needing the latest technology/features in an update, Linux Mint is the way to go.

But if you want the latest and greatest (which can also break your computing experience in some rare cases) and are willing to adapt to the major changes Fedora pushes, Fedora can be a choice.
#### Desktop Environment Choices

![Linux Mint Cinnamon Edition][6]

Linux Mint provides three different [desktop environments][7] — **MATE, Cinnamon, and Xfce**. All of them have the same update cycle and are supported for five years from their release.

Even though Fedora does not offer LTS releases, you get a variety of desktop choices in the form of Fedora spins. You get KDE, LXQt, MATE, Cinnamon, LXDE, and an edition with the i3 tiling window manager baked in.

![Fedora 34 with GNOME 40][8]

So, if you want more choices out of the box, Fedora can be a quite exciting option.
#### Software Availability

![Linux Mint’s Software Center and Package Manager][9]

The default repositories of Linux Mint (or Ubuntu's) offer a wide range of software to install, while Fedora's default repository sticks only to open-source applications.

Not just that: Linux Mint also comes packed with the [Synaptic Package Manager][10], an impressive lightweight tool for installing software.

Even though you can [enable third-party repositories in Fedora][11], it is an additional step. Also, the RPM Fusion repository may not be as huge as Ubuntu's universe repository.

![Fedora 34 Software Center][12]

So, with Linux Mint, you get more packages available to install, and more ways to install software, out of the box.
#### Ease of Use & Installation

For an entirely new user, Ubuntu or any Ubuntu-based distribution generally fares well to start with.

From the [installation experience in Ubuntu][13] to the ease of [installing software][14], along with the option of an LTS release, a beginner finds it all handy.

Linux Mint naturally presents the same benefits as Ubuntu with the Ubiquity installer – hence, it offers a minimal learning curve and is easy to install and easy to use.

Fedora is not complex by definition, but its installation options, package manager, and lack of software in the default repositories may prove time-consuming for a newcomer.

If you've never tried it, I suggest you go through our [Fedora installation guide for VirtualBox][15]. It is a good way to test the installation experience before trying it out on a production system of any sort.
#### Out of the Box Experience

The most hassle-free experience is usually the pleasant option. Well, for most people.

Now, you need to understand that, depending on the hardware configuration, every user might end up having a different “out-of-the-box” experience.

For reference, let me give you my own experience with both Fedora and Linux Mint.

Considering I'm rocking an NVIDIA GPU on my PC, I need to install the proprietary drivers for the best performance.

![][16]

When I booted up Linux Mint, installing the drivers was pretty easy using the **Driver Manager** app.

But with Fedora, even though I followed our guide on [installing Nvidia drivers in Fedora][17], I was presented with an error when I rebooted.

![Installing NVIDIA drivers in Fedora][18]

Not just that: for some reason, my wired network did not seem to be active – hence, I had no internet connectivity.

Yes, you should always try to troubleshoot when you run into issues, but I did not need to do that for Linux Mint. So, from my experience, I recommend Linux Mint for a better out-of-the-box experience.
#### Documentation

I would recommend [Fedora’s documentation][19] if you rely on official resources and want to challenge yourself with a decent learning experience along the way.

You will find up-to-date information for the recent and latest Fedora releases, which is a good thing.

On the other hand, [Linux Mint’s documentation][20] is not updated as regularly, but it is useful when you want to dig deeper.

#### Community Support

You get good community support with both. The [Linux Mint forums][21] are a basic platform that is easy to use and gets the job done.

[Fedora’s forum][22] is powered by Discourse, which happens to be one of the most [popular modern open-source forum software][23] options.
#### Corporate vs Community Angle

Fedora is backed by the biggest open-source company, [Red Hat][24] – so you get a good level of constant innovation and support for the long run.

However, because Fedora is not built with everyday computer users in mind, the choices made with every release may affect your user experience entirely.

On the other hand, Linux Mint is completely backed by a passionate Linux community focused on making Linux easier and more reliable for everyday use. Of course, it depends on Ubuntu as the base, but Linux Mint does make bold changes if the community does not like something from upstream.

For instance, Linux Mint disables snaps by default, unlike the official Ubuntu distribution. So, you will have to [enable snaps in Linux Mint][25] if you want to use them.
### Wrapping Up

If you want a no-nonsense and easy-to-use operating system for your home computer, Linux Mint would be my suggestion. But if you want to experience the latest and greatest, while taking a little adventure in your Linux learning experience, Fedora can be a good pick.

While every operating system requires some form of troubleshooting, and nothing can guarantee zero issues with your hardware, I think Linux Mint will present almost no issues for the majority of users.

In any case, you can revisit the comparison points mentioned above to see what matters most for your computer.

What do you think? Would you pick Fedora over Mint? And why? Let me know in the comments below.
--------------------------------------------------------------------------------

via: https://itsfoss.com/linux-mint-vs-fedora/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-beginners/
[2]: https://itsfoss.com/linux-mint-vs-ubuntu/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/fedora-34-about.png?resize=1020%2C709&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/linux-mint-resources.png?resize=800%2C293&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/linux-mint-update-manager.png?resize=819%2C612&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/linux-mint-cinnamon-desktop.png?resize=800%2C450&ssl=1
[7]: https://itsfoss.com/best-linux-desktop-environments/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/fedora-34-desktop.png?resize=800%2C478&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/05/linux-mint-software-sources.png?resize=800%2C385&ssl=1
[10]: https://itsfoss.com/synaptic-package-manager/
[11]: https://itsfoss.com/fedora-third-party-repos/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/05/fedora-34-software.png?resize=1055%2C691&ssl=1
[13]: https://itsfoss.com/install-ubuntu/
[14]: https://itsfoss.com/remove-install-software-ubuntu/
[15]: https://itsfoss.com/install-fedora-in-virtualbox/
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/linux-mint-driver-manager.jpg?resize=800%2C548&ssl=1
[17]: https://itsfoss.com/install-nvidia-drivers-fedora/
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/fedora-nvidia-driver-installation.png?resize=706%2C516&ssl=1
[19]: https://docs.fedoraproject.org/en-US/docs/
[20]: https://linuxmint.com/documentation.php
[21]: https://forums.linuxmint.com
[22]: https://ask.fedoraproject.org
[23]: https://itsfoss.com/open-source-forum-software/
[24]: https://www.redhat.com/en
[25]: https://itsfoss.com/enable-snap-support-linux-mint/
[#]: subject: (Identify security properties on Linux using checksec)
[#]: via: (https://opensource.com/article/21/6/linux-checksec)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Identify security properties on Linux using checksec
======
Learn how to use checksec to identify an executable's security
properties, understand what they mean, and know how to use them.
![Target practice][1]
Compiling source code produces a binary. During compilation, you can provide flags to the compiler to enable or disable certain properties on the binary. Some of these properties are relevant to security.

Checksec is a nifty little tool (and shell script) that, among other functions, identifies the security properties that were built into a binary when it was compiled. A compiler might enable some of these properties by default, and you might have to provide specific flags to enable others.

This article explains how to use checksec to identify the security properties on a binary, including:

1. The underlying commands checksec uses to find information on the security properties
2. How to enable security properties using the GNU Compiler Collection (GCC) when compiling a sample binary
## Install checksec

To install checksec on Fedora and other RPM-based systems, use:

```
`$ sudo dnf install checksec`
```

For Debian-based distros, use the equivalent `apt` command.
## The shell script

Checksec is a single-file shell script, albeit a rather large one. An advantage is that you can read through the script quickly and understand all the system commands it runs to find information about binaries or executables:

```
$ file /usr/bin/checksec
/usr/bin/checksec: Bourne-Again shell script, ASCII text executable, with very long lines

$ wc -l /usr/bin/checksec
2111 /usr/bin/checksec
```
Take checksec for a test drive with a binary you probably run daily: the ubiquitous `ls` command. The command's format is `checksec --file=` followed by the absolute path of the `ls` binary:

```
$ checksec --file=/usr/bin/ls
RELRO           STACK CANARY      NX            PIE             RPATH      RUNPATH      Symbols         FORTIFY Fortified       Fortifiable     FILE
Full RELRO      Canary found      NX enabled    PIE enabled     No RPATH   No RUNPATH   No Symbols      Yes     5       17      /usr/bin/ls
```

When you run this in a terminal, you see color-coding that shows what is good and what probably isn't. I say "probably" because even if something is in red, it doesn't necessarily mean things are horrible—it might just mean the distro vendors made some tradeoffs when compiling the binaries.

The first line lists the various security properties that are usually available for binaries, like `RELRO`, `STACK CANARY`, `NX`, and so on (I explain each in detail below). The second line shows the status of these properties for the given binary (`ls`, in this case). For example, `NX enabled` means the no-execute protection is enabled for this binary.
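Because checksec is a plain shell script, you can reproduce its checks by hand. As one sketch (the exact commands vary between checksec versions), the `NX` column is derived from the binary's `GNU_STACK` program header, which you can inspect directly with `readelf`; flags of `RW` without `E` mean the stack is not executable:

```shell
# Show the GNU_STACK program header for a binary.
# Flags "RW " (no E) indicate a non-executable stack, i.e., NX enabled.
readelf -W -l /usr/bin/ls | grep -A1 GNU_STACK
```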
|
||||
|
||||
## A sample binary

For this tutorial, I'll use the following "hello world" program as the sample binary.

```
#include <stdio.h>

int main()
{
    printf("Hello World\n");
    return 0;
}
```

Note that I did not provide `gcc` with any additional flags during compilation:

```
$ gcc hello.c -o hello

$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=014b8966ba43e3ae47fab5acae051e208ec9074c, for GNU/Linux 3.2.0, not stripped

$ ./hello
Hello World
```

Run the binary through checksec. Some of the properties are different from those of the `ls` command above (on your screen, these may be displayed in red):

```
$ checksec --file=./hello
RELRO           STACK CANARY      NX            PIE             RPATH      RUNPATH      Symbols         FORTIFY  Fortified  Fortifiable  FILE
Partial RELRO   No canary found   NX enabled    No PIE          No RPATH   No RUNPATH   85 Symbols      No       0          0            ./hello
$
```

## Changing the output format

Checksec allows various output formats, which you can specify with `--output`. I'll choose the JSON format and pipe the output to the `jq` utility for pretty printing.

To follow along, [ensure you have `jq` installed][3] because this tutorial uses this output format to quickly grep for specific properties from the output and report `yes` or `no` on each:

```
$ checksec --file=./hello --output=json | jq
{
  "./hello": {
    "relro": "partial",
    "canary": "no",
    "nx": "yes",
    "pie": "no",
    "rpath": "no",
    "runpath": "no",
    "symbols": "yes",
    "fortify_source": "no",
    "fortified": "0",
    "fortify-able": "0"
  }
}
```
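
Because the JSON output is machine-readable, you can build small audits on top of it. Here is a minimal Python sketch (the helper and its baseline table are my own illustration, not part of checksec) that takes checksec's JSON and lists which hardening properties fall short of a chosen baseline:

```python
import json

# Baseline of desired values for a few checksec properties
# (an assumption for this example, not checksec's own policy).
EXPECTED = {"relro": "full", "canary": "yes", "nx": "yes", "pie": "yes"}

def missing_protections(checksec_json: str) -> dict:
    """Map each file in checksec's JSON output to the list of
    properties that do not match the EXPECTED baseline."""
    report = {}
    for path, props in json.loads(checksec_json).items():
        report[path] = [name for name, want in EXPECTED.items()
                        if props.get(name) != want]
    return report

# Sample trimmed from the checksec output shown above
sample = '{"./hello": {"relro": "partial", "canary": "no", "nx": "yes", "pie": "no"}}'
print(missing_protections(sample))  # {'./hello': ['relro', 'canary', 'pie']}
```

You could feed it real data with `checksec --file=./hello --output=json | python3 audit.py`, reading the JSON from standard input.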

## Walking through the security properties

The binary above includes several security properties. I'll compare that binary against the `ls` binary above to examine what is enabled and explain how checksec found this information.

### 1. Symbols

I'll start with the easy one first. During compilation, certain symbols are included in the binary, mostly for debugging. These symbols are needed while you are developing software and going through multiple cycles of debugging and fixing things.

These symbols are usually stripped (removed) from the final binary before it's released for general use. This does not affect the binary's execution in any way; it will run just as it would with the symbols. Stripping is often done to save space, as the binary is somewhat lighter once the symbols have been stripped. In closed-source or proprietary software, symbols are often removed because having these symbols in a binary makes it somewhat easy to infer the software's inner workings.

According to checksec, symbols are present in this binary, but they were not in the `ls` binary. You can also find this information by running the `file` command on the program—you see `not stripped` in the output towards the end:

```
$ checksec --file=/bin/ls --output=json | jq | grep symbols
    "symbols": "no",

$ checksec --file=./hello --output=json | jq | grep symbols
    "symbols": "yes",

$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=014b8966ba43e3ae47fab5acae051e208ec9074c, for GNU/Linux 3.2.0, not stripped
```

How did checksec find this information? Well, it provides a handy `--debug` option to show which functions ran. Therefore, running the following command should show you which functions ran within the shell script:

```
$ checksec --debug --file=./hello
```

In this tutorial, I'm looking for the underlying commands used to find this information. Since it's a shell script, you can always utilize Bash features. This command will output every command that runs from within the shell script:

```
$ bash -x /usr/bin/checksec --file=./hello
```

If you scroll through the output, you should see an `echo_message` followed by the security property's category. Here is what checksec reports about whether the binary contains symbols:

```
+ readelf -W --symbols ./hello
+ grep -q '\.symtab'
+ echo_message '\033[31m96) Symbols\t\033[m ' Symbols, ' symbols="yes"' '"symbols":"yes",'
```

To simplify this, checksec utilizes the `readelf` utility to read the binary, with its special `--symbols` flag that lists all the symbols within the binary. Then it greps for a special value, `.symtab`, which also reports a count of the entries (symbols) it finds. You can try out the following commands on the test binary you compiled above:

```
$ readelf -W --symbols ./hello
$ readelf -W --symbols ./hello | grep -i symtab
```

#### How to strip symbols

You can strip symbols after compilation or during compilation.

* **Post compilation:** After compilation, you can use the `strip` utility on the binary to remove the symbols. Confirm it worked using the `file` command, which now shows the output as `stripped`:

```
$ gcc hello.c -o hello
$
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=322037496cf6a2029dcdcf68649a4ebc63780138, for GNU/Linux 3.2.0, not stripped
$
$ strip hello
$
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=322037496cf6a2029dcdcf68649a4ebc63780138, for GNU/Linux 3.2.0, stripped
$
```

#### How to strip symbols during compilation

Instead of stripping symbols manually after compilation, you can ask the compiler to do it for you by providing the `-s` argument:

```
$ gcc -s hello.c -o hello
$
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=247de82a8ad84e7d8f20751ce79ea9e0cf4bd263, for GNU/Linux 3.2.0, stripped
$
```

After rerunning checksec, you can see that `symbols` are shown as `no`:

```
$ checksec --file=./hello --output=json | jq | grep symbols
    "symbols": "no",
$
```

### 2. Canary

Canaries are known values that are placed between a buffer and control data on the _stack_ to monitor buffer overflows. When an application executes, two kinds of memory are assigned to it. One of them is a _stack_, which is simply a data structure with two operations: `push`, which puts data onto the stack, and `pop`, which removes data from the stack in reverse order. Specially crafted malicious input could overflow or corrupt the stack and cause the program to crash:

```
$ checksec --file=/bin/ls --output=json | jq | grep canary
    "canary": "yes",
$
$ checksec --file=./hello --output=json | jq | grep canary
    "canary": "no",
$
```
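
If the canary mechanism is new to you, this toy Python model may help build intuition. It is only an analogy (real canaries are inserted by the compiler and checked in the function epilogue): a known sentinel sits just past a fixed-size buffer, and any write that runs past the buffer changes the sentinel and is detected.

```python
# Toy model of a stack canary (an analogy, not how C memory works):
# a sentinel value sits just past a fixed-size buffer; if an
# unchecked copy runs past the buffer, the sentinel changes and
# the corruption is detected before it can do further damage.
CANARY = b"\xde\xad\xbe\xef"
BUF_SIZE = 16

def copy_with_canary(data: bytes) -> bytes:
    frame = bytearray(BUF_SIZE) + bytearray(CANARY)
    # Unchecked copy, like an unsafe strcpy(): no length check
    frame[:len(data)] = data
    if bytes(frame[BUF_SIZE:BUF_SIZE + len(CANARY)]) != CANARY:
        raise RuntimeError("*** stack smashing detected ***")
    return bytes(frame[:BUF_SIZE])

copy_with_canary(b"ok")            # fits within the buffer: no complaint
try:
    copy_with_canary(b"A" * 24)    # overflows: the canary catches it
except RuntimeError as err:
    print(err)                     # *** stack smashing detected ***
```

This mirrors what `__stack_chk_fail` does in a real binary: abort on a clobbered canary rather than continue with corrupted control data.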

How does checksec find out whether the binary has a canary enabled? Using the method above, you can narrow it down to this command, which runs within the shell script:

```
$ readelf -W -s ./hello | grep -E '__stack_chk_fail|__intel_security_cookie'
```

#### Enable canary

To protect against these cases, the compiler provides the `-fstack-protector-all` flag, which adds extra code to the binary to check for such buffer overflows:

```
$ gcc -fstack-protector-all hello.c -o hello

$ checksec --file=./hello --output=json | jq | grep canary
    "canary": "yes",
```

Checksec shows that the property is now enabled. You can also verify this with:

```
$ readelf -W -s ./hello | grep -E '__stack_chk_fail|__intel_security_cookie'
     2: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND __stack_chk_fail@GLIBC_2.4 (3)
    83: 0000000000000000     0 FUNC    GLOBAL DEFAULT  UND __stack_chk_fail@@GLIBC_2.4
$
```

### 3. PIE

PIE stands for position-independent executable. As the name suggests, it's code that can be placed somewhere in memory for execution regardless of its absolute address:

```
$ checksec --file=/bin/ls --output=json | jq | grep pie
    "pie": "yes",

$ checksec --file=./hello --output=json | jq | grep pie
    "pie": "no",
```

Often, PIE is enabled only for libraries and not for standalone command-line programs. In the output below, `hello` is shown as `LSB executable`, whereas the `libc` standard library (`.so`) file is marked `LSB shared object`:

```
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=014b8966ba43e3ae47fab5acae051e208ec9074c, for GNU/Linux 3.2.0, not stripped

$ file /lib64/libc-2.32.so
/lib64/libc-2.32.so: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=4a7fb374097fb927fb93d35ef98ba89262d0c4a4, for GNU/Linux 3.2.0, not stripped
```

Checksec tries to find this information with:

```
$ readelf -W -h ./hello | grep EXEC
  Type:                              EXEC (Executable file)
```

If you try the same command on a shared library, you will see `DYN` instead of `EXEC`:

```
$ readelf -W -h /lib64/libc-2.32.so | grep DYN
  Type:                              DYN (Shared object file)
```
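
The `Type:` field that `readelf -h` prints comes straight from the ELF header. As a sketch of what is being read (assuming a little-endian ELF, as on x86-64 Linux), this Python snippet pulls the 16-bit `e_type` field at offset 16 and maps it to the same names:

```python
import struct

# e_type values from the ELF specification
E_TYPES = {1: "REL", 2: "EXEC", 3: "DYN", 4: "CORE"}

def elf_type(path: str) -> str:
    """Return the ELF object type (EXEC, DYN, ...) for a file,
    assuming a little-endian ELF as found on x86-64 Linux."""
    with open(path, "rb") as f:
        header = f.read(18)
    if header[:4] != b"\x7fELF":
        raise ValueError(f"{path} is not an ELF file")
    (e_type,) = struct.unpack_from("<H", header, 16)
    return E_TYPES.get(e_type, f"unknown ({e_type})")

# Example: elf_type("./hello") should return "EXEC" for a non-PIE
# build and "DYN" for a PIE build or a shared library.
```

This is why checksec can classify PIE so cheaply: one field in the first few bytes of the file decides it.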

#### Enable PIE

To enable PIE on a test program, pass the following arguments to the compiler:

```
$ gcc -pie -fpie hello.c -o hello
```

You can verify PIE is enabled using checksec:

```
$ checksec --file=./hello --output=json | jq | grep pie
    "pie": "yes",
$
```

It should show as a PIE executable with the type changed from `EXEC` to `DYN`:

```
$ file hello
hello: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=bb039adf2530d97e02f534a94f0f668cd540f940, for GNU/Linux 3.2.0, not stripped

$ readelf -W -h ./hello | grep DYN
  Type:                              DYN (Shared object file)
```

### 4. NX

NX stands for "non-executable." It's often enabled at the CPU level, so an operating system with NX enabled can mark certain areas of memory as non-executable. Often, buffer-overflow exploits put code on the stack and then try to execute it. However, making this writable area non-executable can prevent such attacks. This property is enabled by default during regular compilation using `gcc`:

```
$ checksec --file=/bin/ls --output=json | jq | grep nx
    "nx": "yes",

$ checksec --file=./hello --output=json | jq | grep nx
    "nx": "yes",
```

Checksec determines this information with the command below. `RW` towards the end means the stack is readable and writable; since there is no `E`, it's not executable:

```
$ readelf -W -l ./hello | grep GNU_STACK
  GNU_STACK      0x000000 0x0000000000000000 0x0000000000000000 0x000000 0x000000 RW  0x10
```
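
This check is easy to script yourself. As an illustrative sketch (the helper is my own, not checksec's code), the following Python parses the `GNU_STACK` line from `readelf -W -l` output and reports whether the `E` flag is present:

```python
# Illustrative helper (my own, not checksec's code): parse the
# GNU_STACK program header line from `readelf -W -l` output and
# report whether the stack is executable, i.e., the flags contain E.
def stack_is_executable(readelf_output: str) -> bool:
    for line in readelf_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "GNU_STACK":
            # For GNU_STACK the flags ("RW" or "RWE") are the
            # second-to-last field, followed by the alignment
            return "E" in fields[-2]
    raise ValueError("no GNU_STACK program header found")

nx_ok = "  GNU_STACK  0x000000 0x0000000000000000 0x0000000000000000 0x000000 0x000000 RW  0x10"
print(stack_is_executable(nx_ok))   # False
```

In practice you would pipe `readelf -W -l ./hello` into this function via `subprocess`; the sample line above matches the output shown in this section.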

#### Disable NX for demo purposes

It's not recommended, but you can disable `NX` when compiling a program by using the `-z execstack` argument:

```
$ gcc -z execstack hello.c -o hello

$ checksec --file=./hello --output=json | jq | grep nx
    "nx": "no",
```

Upon compilation, the stack becomes executable (`RWE`), which allows malicious code to execute:

```
$ readelf -W -l ./hello | grep GNU_STACK
  GNU_STACK      0x000000 0x0000000000000000 0x0000000000000000 0x000000 0x000000 RWE 0x10
```

### 5. RELRO

RELRO stands for Relocation Read-Only. An Executable and Linkable Format (ELF) binary uses a Global Offset Table (GOT) to resolve functions dynamically. When enabled, this security property makes the GOT within the binary read-only, which prevents some forms of relocation attack:

```
$ checksec --file=/bin/ls --output=json | jq | grep relro
    "relro": "full",

$ checksec --file=./hello --output=json | jq | grep relro
    "relro": "partial",
```

Checksec finds this information by using the commands below. Here, only one of the two RELRO checks succeeds (the `GNU_RELRO` segment is present, but `BIND_NOW` is not), so the binary shows "partial" when verified via checksec:

```
$ readelf -W -l ./hello | grep GNU_RELRO
  GNU_RELRO      0x002e10 0x0000000000403e10 0x0000000000403e10 0x0001f0 0x0001f0 R   0x1

$ readelf -W -d ./hello | grep BIND_NOW
```

#### Enable full RELRO

To enable full RELRO, use the following command-line arguments when compiling with `gcc`:

```
$ gcc -Wl,-z,relro,-z,now hello.c -o hello

$ checksec --file=./hello --output=json | jq | grep relro
    "relro": "full",
```

Now, the second property is also enabled, making the program full RELRO:

```
$ readelf -W -l ./hello | grep GNU_RELRO
  GNU_RELRO      0x002dd0 0x0000000000403dd0 0x0000000000403dd0 0x000230 0x000230 R   0x1

$ readelf -W -d ./hello | grep BIND_NOW
 0x0000000000000018 (BIND_NOW)
```

### 6. Fortify

Fortify is another security property, but it's out of scope for this article. I will leave learning how checksec verifies fortify in binaries and how it's enabled with `gcc` as an exercise for you to tackle.

```
$ checksec --file=/bin/ls --output=json | jq | grep -i forti
    "fortify_source": "yes",
    "fortified": "5",
    "fortify-able": "17"

$ checksec --file=./hello --output=json | jq | grep -i forti
    "fortify_source": "no",
    "fortified": "0",
    "fortify-able": "0"
```

## Other checksec features

The topic of security is never-ending, and while it's not possible to cover everything here, I do want to mention a few more features of the `checksec` command that make it a pleasure to work with.

### Run against multiple binaries

You don't have to provide each binary to checksec individually. Instead, you can provide a directory path where multiple binaries reside, and checksec will verify all of them for you in one go:

```
$ checksec --dir=/usr
```
[#]: subject: (Test arbitrary pod failures on Kubernetes with kube-monkey)
[#]: via: (https://opensource.com/article/21/6/chaos-kubernetes-kube-monkey)
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

Test arbitrary pod failures on Kubernetes with kube-monkey
======
Kube-monkey offers an easy way to stress-test your systems by scheduling
random pod terminations in your cluster.
![Parts, modules, containers for software][1]

I have covered multiple chaos engineering tools in this series. The first article in this series explained [what chaos engineering is][2]; the second demonstrated how to get your [system's steady state][3] so that you can compare it against a chaos state; the third showed how to [use Litmus to test][4] arbitrary failures and experiments in your Kubernetes cluster; and the fourth article got into [Chaos Mesh][5], an open source chaos orchestrator with a web user interface.

In this fifth article, I want to talk about arbitrary pod failure. [Kube-monkey][6] offers an easy way to stress-test your systems by scheduling random pod terminations in your cluster. This aims to encourage and validate the development of failure-resilient services. As in the previous walkthroughs, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, and Kubernetes 1.19.

### Configure Minikube

If you haven't already, [install Minikube][7] in whatever way makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power:

```
$ minikube config set memory 8192
❗  These changes will take effect upon a minikube delete and then a minikube start
$ minikube config set cpus 6
❗  These changes will take effect upon a minikube delete and then a minikube start
```

Then start and check the status of your system:

```
$ minikube start
😄  minikube v1.14.2 on Debian bullseye/sid
🎉  minikube 1.19.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.19.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=6, Memory=8192MB) ...
🐳  Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```

### Preconfiguring with deployments

Start by adding some small deployments to run chaos against. These deployments will need some special labels, so you need to create a new Helm chart. The following labels tell kube-monkey that the app has opted in to chaos and control the details of how its pods get killed:

* **kube-monkey/enabled**: This setting opts you in to starting the chaos.
* **kube-monkey/mtbf**: This stands for mean time between failure (in days). For example, if it's set to 3, the Kubernetes (K8s) app expects to have a pod killed approximately every third weekday.
* **kube-monkey/identifier**: This is a unique identifier for the K8s apps; in this example, it will be "nginx."
* **kube-monkey/kill-mode**: Kube-monkey's default behavior is to kill only one pod in the cluster, but you can change it to add more:
  * **kill-all:** Kill every pod, no matter what is happening with a pod
  * **fixed:** Pick a number of pods you want to kill
  * **fixed-percent:** Kill a fixed percent of pods (e.g., 50%)
* **kube-monkey/kill-value**: This is where you can specify a value for kill-mode:
  * **fixed:** The number of pods to kill
  * **random-max-percent:** The maximum number from 0–100 that kube-monkey can kill
  * **fixed-percent:** The percentage, from 0–100 percent, of pods to kill
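
To make the kill-mode and kill-value combinations concrete, here is a hedged Python sketch of how these labels could translate into a number of pods to terminate. The function and its rounding behavior are my own illustration, not kube-monkey's actual implementation:

```python
import math

def pods_to_kill(kill_mode: str, kill_value: str, running_pods: int) -> int:
    """Illustrative mapping from the kube-monkey labels above to a
    kill count (my own sketch, not kube-monkey's code)."""
    if kill_mode == "kill-all":
        return running_pods
    if kill_mode == "fixed":
        # Never report more kills than there are pods
        return min(int(kill_value), running_pods)
    if kill_mode == "fixed-percent":
        return math.ceil(running_pods * int(kill_value) / 100)
    raise ValueError(f"unknown kill-mode: {kill_mode}")

print(pods_to_kill("fixed", "1", 8))           # 1 (this article's setup)
print(pods_to_kill("fixed-percent", "50", 8))  # 4
print(pods_to_kill("kill-all", "", 8))         # 8
```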

Now that you have this background info, you can start [creating a basic Helm chart][8].

I named this Helm chart `nginx`. I'll show only the changes to the Helm chart deployment labels below. You need to change the deployment YAML file, which lives in `nginx/templates` in this example:

```
$ /chaos/kube-monkey/helm/nginx/templates$ ls -la
total 40
drwxr-xr-x 3 jess jess 4096 May 15 14:46 .
drwxr-xr-x 4 jess jess 4096 May 15 14:46 ..
-rw-r--r-- 1 jess jess 1826 May 15 14:46 deployment.yaml
-rw-r--r-- 1 jess jess 1762 May 15 14:46 _helpers.tpl
-rw-r--r-- 1 jess jess  910 May 15 14:46 hpa.yaml
-rw-r--r-- 1 jess jess 1048 May 15 14:46 ingress.yaml
-rw-r--r-- 1 jess jess 1735 May 15 14:46 NOTES.txt
-rw-r--r-- 1 jess jess  316 May 15 14:46 serviceaccount.yaml
-rw-r--r-- 1 jess jess  355 May 15 14:46 service.yaml
drwxr-xr-x 2 jess jess 4096 May 15 14:46 tests
```

In your `deployment.yaml` file, find this section:

```
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "nginx.selectorLabels" . | nindent 8 }}
```

And make these changes:

```
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "nginx.selectorLabels" . | nindent 8 }}
        kube-monkey/enabled: enabled
        kube-monkey/identifier: monkey-victim
        kube-monkey/mtbf: '2'
        kube-monkey/kill-mode: "fixed"
        kube-monkey/kill-value: '1'
```

Move back one directory and find the `values` file:

```
$ /chaos/kube-monkey/helm/nginx/templates$ cd ../
$ /chaos/kube-monkey/helm/nginx$ ls
charts  Chart.yaml  templates  values.yaml
```

You need to change one line in the values file, from:

```
replicaCount: 1
```

to:

```
replicaCount: 8
```

This will give you eight different pods to test chaos against.

Move back one more directory and install the new Helm chart:

```
$ /chaos/kube-monkey/helm/nginx$ cd ../
$ /chaos/kube-monkey/helm$ helm install nginxtest nginx
NAME: nginxtest
LAST DEPLOYED: Sat May 15 14:53:47 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=nginxtest" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
```

Then check the labels in your Nginx pods:

```
$ /chaos/kube-monkey/helm$ kubectl get pods -n default
NAME                       READY   STATUS    RESTARTS   AGE
nginxtest-8f967857-88zv7   1/1     Running   0          80s
nginxtest-8f967857-8qb95   1/1     Running   0          80s
nginxtest-8f967857-dlng7   1/1     Running   0          80s
nginxtest-8f967857-h7mmc   1/1     Running   0          80s
nginxtest-8f967857-pdzpq   1/1     Running   0          80s
nginxtest-8f967857-rdpnb   1/1     Running   0          80s
nginxtest-8f967857-rqv2w   1/1     Running   0          80s
nginxtest-8f967857-tr2cn   1/1     Running   0          80s
```

Choose the first pod to describe and confirm the labels are in place:

```
$ /chaos/kube-monkey/helm$ kubectl describe pod nginxtest-8f967857-88zv7 -n default
Name:         nginxtest-8f967857-88zv7
Namespace:    default
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Sat, 15 May 2021 15:11:37 -0400
Labels:       app.kubernetes.io/instance=nginxtest
              app.kubernetes.io/name=nginx
              kube-monkey/enabled=enabled
              kube-monkey/identifier=monkey-victim
              kube-monkey/kill-mode=fixed
              kube-monkey/kill-value=1
              kube-monkey/mtbf=2
              pod-template-hash=8f967857
```

### Configure and install kube-monkey

To install kube-monkey using Helm, you first need to run `git clone` on the [kube-monkey repository][6]:

```
$ /chaos$ git clone https://github.com/asobti/kube-monkey
Cloning into 'kube-monkey'...
remote: Enumerating objects: 14641, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 14641 (delta 18), reused 22 (delta 8), pack-reused 14594
Receiving objects: 100% (14641/14641), 30.56 MiB | 39.31 MiB/s, done.
Resolving deltas: 100% (6502/6502), done.
```

Change to the `kube-monkey/helm` directory:

```
$ /chaos$ cd kube-monkey/helm/
$ /chaos/kube-monkey/helm$
```

Then go into the Helm chart and find the `values.yaml` file:

```
$ /chaos/kube-monkey/helm$ cd kubemonkey/
$ /chaos/kube-monkey/helm/kubemonkey$ ls
Chart.yaml  README.md  templates  values.yaml
```

Below, I will show just the sections of the `values.yaml` file you need to change. The changes disable dry-run mode by setting it to `false` in the config section, then add the `default` namespace to the whitelist so that kube-monkey can kill the pods you deployed. You must keep the `blacklistedNamespaces` value, or you could cause severe damage to your system.

Change this:

```
config:
  dryRun: true
  runHour: 8
  startHour: 10
  endHour: 16
  blacklistedNamespaces:
    - kube-system
  whitelistedNamespaces: []
```

To this:

```
config:
  dryRun: false
  runHour: 8
  startHour: 10
  endHour: 16
  blacklistedNamespaces:
    - kube-system
  whitelistedNamespaces: ["default"]
```

In the debug section, set `enabled` and `schedule_immediate_kill` to `true`. This will show the pods being killed.

Change this:

```
debug:
  enabled: false
  schedule_immediate_kill: false
```

To this:

```
debug:
  enabled: true
  schedule_immediate_kill: true
```

Run a `helm install`:

```
$ /chaos/kube-monkey/helm$ helm install chaos kubemonkey
NAME: chaos
LAST DEPLOYED: Sat May 15 13:51:59 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Wait until the application is rolled out:
  kubectl -n default rollout status deployment chaos-kube-monkey
2. Check the logs:
  kubectl logs -f deployment.apps/chaos-kube-monkey -n default
```

Check the kube-monkey logs and see that the pods are being terminated:

```
$ /chaos/kube-monkey/helm$ kubectl logs -f deployment.apps/chaos-kube-monkey -n default

        ********** Today's schedule **********
        k8 Api Kind     Kind Name       Termination Time
        -----------     ---------       ----------------
        v1.Deployment   nginxtest       05/15/2021 15:15:22 -0400 EDT
        ********** End of schedule **********
I0515 19:15:22.343202       1 kubemonkey.go:70] Termination successfully executed for v1.Deployment nginxtest
I0515 19:15:22.343216       1 kubemonkey.go:73] Status Update: 0 scheduled terminations left.
I0515 19:15:22.343220       1 kubemonkey.go:76] Status Update: All terminations done.
I0515 19:15:22.343278       1 kubemonkey.go:19] Debug mode detected!
I0515 19:15:22.343283       1 kubemonkey.go:20] Status Update: Generating next schedule in 30 sec
```

You can also use [K9s][9] and watch the pods die.

![Pods dying in K9s][10]

(Jess Cherry, [CC BY-SA 4.0][11])

Congratulations! You now have a running chaos test with arbitrary failures. Anytime you want, you can change your applications to be tested on a certain day of the week and at a certain time of day.

### Final thoughts

While kube-monkey is a great chaos engineering tool, it does require heavy configuration. Therefore, it isn't the best starter chaos engineering tool for someone new to Kubernetes. Another drawback is that you have to edit your application's Helm chart for chaos testing to run.

This tool would be best positioned in a staging environment to watch how applications respond to arbitrary failure regularly. This gives you a long-term way to keep track of unsteady states using cluster monitoring tools. It also keeps notes that you can use for recovery of your internal applications in production.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/6/chaos-kubernetes-kube-monkey

Author: [Jessica Cherry][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/cherrybomb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
[2]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos
[3]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus
[4]: https://opensource.com/article/21/5/total-chaos-litmus
[5]: https://opensource.com/article/21/5/get-meshy-chaos-mesh
[6]: https://github.com/asobti/kube-monkey
[7]: https://minikube.sigs.k8s.io/docs/start/
[8]: https://opensource.com/article/20/5/helm-charts
[9]: https://opensource.com/article/20/5/kubernetes-administration
[10]: https://opensource.com/sites/default/files/uploads/podsdying.png (Pods dying in K9s)
[11]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[#]: subject: (Analyze community health metrics with this open source tool)
|
||||
[#]: via: (https://opensource.com/article/21/6/health-metrics-cauldron)
|
||||
[#]: author: (Georg Link https://opensource.com/users/georglink)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Analyze community health metrics with this open source tool
|
||||
======
|
||||
Cauldron makes it easier for anyone to use GrimoireLab to learn more
|
||||
about open source communities.
|
||||
![Open source doctor.][1]
|
||||
|
||||
Community managers, maintainers, and foundations seek metrics and insights about open source communities. Because each open source project works differently, its data needs to be analyzed differently. Yet, all projects share common challenges with getting data and creating visualizations. This presents an ideal use case for an open source project to solve this problem generically with the capability to customize it to users' needs.
|
||||
|
||||
The open source GrimoireLab project has been working on ways to [measure the health of open source communities][2]. In addition to powering large-scale open source metrics solutions, it also serves as the backbone of the new [Cauldron][3] platform.
|
||||
|
||||
GrimoireLab solves some hard problems related to retrieving and curating data. It was designed to be a flexible metrics solution for analyzing open source communities. [LibreOffice][4] and [Mautic][5] are among the communities using GrimoireLab's open source tools to generate community health metrics.
|
||||
|
||||
![LibreOffice's GrimoireLab dashboard][6]
|
||||
|
||||
LibreOffice's GrimoireLab dashboard (Georg Link, [CC BY-SA 4.0][7])
|
||||
|
||||
GrimoireLab satisfies the need for metrics, but two challenges have prevented wider adoption. First, it is difficult to deploy and secure. Its setup is more difficult than many expect, especially those who just want to have metrics without manually editing configuration files. Second, it does not scale well if you have many users trying to analyze different projects; every user must deploy their own GrimoireLab instance.
|
||||
|
||||
Two platforms have solved these challenges to offer community metrics as a service, with GrimoireLab working under the hood. First, the Linux Foundation leveraged GrimoireLab to bootstrap its [LFX Insights platform][8]. It gives the foundation's open source projects a great deal of insight into their communities, some of which goes beyond GrimoireLab's core features. LFX Insights is not open source and is available only from the Linux Foundation.
|
||||
|
||||
![LFX Insights dashboard][9]
|
||||
|
||||
LFX Insights dashboard showing metrics about the Kubernetes project (Georg Link, [CC BY-SA 4.0][7])
|
||||
|
||||
The other choice is [Cauldron][10], which is open source. It's designed to abstract the difficulty of using GrimoireLab's metrics and create a smooth user experience. Anyone can use Cauldron for their open source communities for free at [Cauldron.io][3]. Cauldron provides metrics without having to deploy software, which resolves the challenge of deploying and securing GrimoireLab.
|
||||
|
||||
![Cauldron dashboard][11]
|
||||
|
||||
Cauldron dashboard showing metrics about the Kubernetes project (Georg Link, [CC BY-SA 4.0][7])
|
||||
|
||||
Cauldron solves the scalability challenge by collecting data about an open source community centrally and making it available to all platform users. This reduces the time needed for new reports if the data was previously collected. It also minimizes the issue of API rate limits that could restrict collecting data at scale.
|
||||
|
||||
To mitigate privacy concerns, Cauldron anonymizes all data by default. Should you want to know who your contributors (or the companies in your communities) are, you will need a private Cauldron instance, either by deploying it yourself or by using the [Cauldron Cloud service][12].
|
||||
|
||||
These design choices enable a new way of working with this data. Instead of limiting analysis to individual projects, anyone can define reports and include anything from a single project's repository to hundreds of repositories from a group of projects. This makes it possible to analyze trends, like the rise in blockchain projects, by looking at data across many projects.
|
||||
|
||||
Many people want to be able to compare data about multiple open source projects. In Cauldron, a user can create a report for each project, then use the Comparison feature to display each project's data side by side with graphs.
|
||||
|
||||
![A Cauldron dashboard comparing Ansible, Ethereum, and Kubernetes][13]
|
||||
|
||||
Cauldron dashboard comparing Ansible, Ethereum, and Kubernetes (Georg Link, [CC BY-SA 4.0][7])
|
||||
|
||||
The high demand for open source within the enterprise and increasing interest in community health and metrics are leading solution providers to improve usability. GrimoireLab continues to focus on retrieving data about open source communities. Downstream projects like LFX Insights and Cauldron leverage GrimoireLab to provide easy-to-use metrics.
|
||||
|
||||
On a related note, the CHAOSS Project offers a Community Health Report. The report is created using the two CHAOSS projects, Augur and GrimoireLab. You can [request your Community Health Report][14] on the CHAOSS website or see the same metrics and visualizations under the [CHAOSS tab][15] in Cauldron.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/6/health-metrics-cauldron
|
||||
|
||||
作者:[Georg Link][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/georglink
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourcedoctor.png?itok=fk79NwpC (Open source doctor.)
|
||||
[2]: https://opensource.com/article/20/3/grimoirelab
|
||||
[3]: https://cauldron.io/
|
||||
[4]: https://dashboard.documentfoundation.org/
|
||||
[5]: https://dashboard.mautic.org/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/libreoffice_grimoirelab-dashboard.png (LibreOffice's GrimoireLab dashboard)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://lfx.linuxfoundation.org/tools/insights
|
||||
[9]: https://opensource.com/sites/default/files/uploads/lfx-insights.png (LFX Insights dashboard)
|
||||
[10]: https://gitlab.com/cauldronio/cauldron/
|
||||
[11]: https://opensource.com/sites/default/files/uploads/cauldron-dashboard.png (Cauldron dashboard)
|
||||
[12]: http://cloud.cauldron.io/
|
||||
[13]: https://opensource.com/sites/default/files/uploads/compare-projects.png (A Cauldron dashboard comparing Ansible, Ethereum, and Kubernetes)
|
||||
[14]: https://chaoss.community/community-reports/
|
||||
[15]: https://cauldron.io/project/372?tab=chaoss
|
76
sources/tech/20210608 How FreeDOS boots.md
Normal file
76
sources/tech/20210608 How FreeDOS boots.md
Normal file
@ -0,0 +1,76 @@
|
||||
[#]: subject: (How FreeDOS boots)
|
||||
[#]: via: (https://opensource.com/article/21/6/freedos-boots)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
How FreeDOS boots
|
||||
======
|
||||
An overview of how your computer boots up and starts a simple operating
|
||||
system like FreeDOS.
|
||||
![Code going into a computer.][1]
|
||||
|
||||
One thing I appreciate from growing up with DOS computers is that the boot process is relatively easy to understand. There aren't a lot of moving parts in DOS. And today, I'd like to share an overview of how your computer boots up and starts a simple operating system like FreeDOS.
|
||||
|
||||
### Initial bootstrapping
|
||||
|
||||
When you turn on the power to your computer, the system performs several self-checks, such as verifying the memory and other components. This is called the **Power On Self Test** (POST). After the POST, the computer uses a hard-coded instruction that tells it where to find the code that loads the operating system. This is the "boot loader," and usually it tries to locate a Master Boot Record (MBR) on the hard drive. The MBR then loads the primary operating system; in this case, that's FreeDOS.
|
||||
|
||||
This process of locating one piece of information so the computer can load the next part of the operating system is called "bootstrapping," from the old expression of "picking yourself up by your bootstraps." It is from this usage that we adopted the term "boot" to mean starting up your computer.
|
||||
|
||||
### The kernel
|
||||
|
||||
When the computer loads the FreeDOS kernel, one of the first things the kernel does is identify any parameters the user has asked it to use. These are stored in a file called `FDCONFIG.SYS`, located in the same root directory as the kernel. If `FDCONFIG.SYS` does not exist, the FreeDOS kernel looks for an alternate file called `CONFIG.SYS`.
|
||||
|
||||
If you used DOS in the 1980s or 1990s, you may be familiar with the `CONFIG.SYS` file. Since 1999, FreeDOS looks for `FDCONFIG.SYS` first in case you have a DOS system that is _dual booting_ FreeDOS with some other DOS, such as MS-DOS. Note that MS-DOS only uses the `CONFIG.SYS` file. So if you use the same hard drive to boot both FreeDOS and MS-DOS, MS-DOS uses `CONFIG.SYS` to configure itself, and FreeDOS uses `FDCONFIG.SYS` instead. That way, each can use its own configuration.
|
||||
|
||||
`FDCONFIG.SYS` can contain a number of configuration settings, one of which is `SHELL=` or `SHELLHIGH=`. Either one will instruct the kernel to load this program as the interactive shell for the user.
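For example, a minimal `FDCONFIG.SYS` might look like the following sketch. The exact paths and values here are illustrative; the file the FreeDOS installer generates on your system may differ:

```
SHELLHIGH=C:\FREEDOS\BIN\COMMAND.COM C:\FREEDOS\BIN /E:1024 /P=C:\FDAUTO.BAT
DOS=HIGH,UMB
FILES=40
BUFFERS=20
```

The `SHELLHIGH=` line names the shell program, and the `/P=` option points the shell at its startup batch file, described below.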
|
||||
|
||||
If neither `FDCONFIG.SYS` nor `CONFIG.SYS` exists, the kernel assumes several default values, including where to find the shell. If you see the message "Bad or missing Command Interpreter" when you boot your FreeDOS system, it means `SHELL=` or `SHELLHIGH=` points to a shell program that doesn't exist on your system.
|
||||
|
||||
![Bad or missing Command Interpreter][2]
|
||||
|
||||
Jim Hall, CC-BY SA 4.0
|
||||
|
||||
You might debug this by looking at the `SHELL=` or `SHELLHIGH=` lines. Failing that, make sure you have a program called `COMMAND.COM` in the root directory of your FreeDOS system. This is the _shell_, which I'll talk about next.
|
||||
|
||||
### The shell
|
||||
|
||||
The term "shell" on a DOS system usually means a command-line interpreter: an interactive program that reads instructions from the user and then executes them. In this way, the FreeDOS shell is similar to the Bash shell on Linux.
|
||||
|
||||
Unless you've asked the kernel to load a different shell using `SHELL=` or `SHELLHIGH=`, the standard command-line shell on DOS is called `COMMAND.COM`. And as `COMMAND.COM` starts up, it also looks for a file to configure itself. By default, `COMMAND.COM` will look for a file called `AUTOEXEC.BAT` in the root directory. `AUTOEXEC.BAT` is a "batch file" that contains a set of instructions that run at startup, and is roughly analogous to the `~/.bashrc` "resource file" that Bash reads when it starts up on Linux.
|
||||
|
||||
You can change the shell, and the startup file for the shell, in the `FDCONFIG.SYS` file with `SHELL=` or `SHELLHIGH=`. The FreeDOS 1.3 RC4 installer sets up the system to read `FDAUTO.BAT` instead of `AUTOEXEC.BAT`. This is for the same reason the kernel reads an alternate configuration file: you can dual-boot FreeDOS on a hard drive with another DOS. FreeDOS will use `FDAUTO.BAT` while MS-DOS uses `AUTOEXEC.BAT`.
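As an illustration, a simple startup batch file might set a path and a prompt. These lines are only a sketch of what such a file can contain, not the file the installer actually generates:

```
@ECHO OFF
REM Illustrative AUTOEXEC.BAT (or FDAUTO.BAT) startup file
SET PATH=C:\FREEDOS\BIN
SET TEMP=C:\TEMP
PROMPT $P$G
ECHO Welcome to FreeDOS!
```

Each line is an ordinary DOS command, run in order when the shell starts, just as Bash runs the commands in `~/.bashrc`.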
|
||||
|
||||
Without a startup file like `AUTOEXEC.BAT`, the shell will simply prompt the user to enter the date and time.
|
||||
|
||||
![Without AUTOEXEC.BAT, the shell will prompt for date and time][3]
|
||||
|
||||
Jim Hall, CC-BY SA 4.0
|
||||
|
||||
And that's it. Once FreeDOS has loaded the kernel, and the kernel has loaded the shell, FreeDOS is ready for the user to type commands.
|
||||
|
||||
![FreeDOS is ready for you to enter your first command][4]
|
||||
|
||||
Jim Hall, CC-BY SA 4.0
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/6/freedos-boots
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jim-hall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/bad-missing-command.png (Bad or missing Command Interpreter)
|
||||
[3]: https://opensource.com/sites/default/files/uploads/no-autoexec.png (Without AUTOEXEC.BAT, the shell will prompt for date and time)
|
||||
[4]: https://opensource.com/sites/default/files/uploads/freedos-boot.png (FreeDOS is ready for you to enter your first command)
|
232
sources/tech/20210608 Play Doom on Kubernetes.md
Normal file
232
sources/tech/20210608 Play Doom on Kubernetes.md
Normal file
@ -0,0 +1,232 @@
|
||||
[#]: subject: (Play Doom on Kubernetes)
|
||||
[#]: via: (https://opensource.com/article/21/6/kube-doom)
|
||||
[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Play Doom on Kubernetes
|
||||
======
|
||||
Terminate pods while having fun by playing Kube DOOM.
|
||||
![A cat under a keyboard.][1]
|
||||
|
||||
Do you ever feel nostalgic for Doom and other blocky video games, the ones that didn't require much more than a mouse and the hope that you could survive on a LAN with your friends? You know what I'm talking about: the days when your weekends were consumed with figuring out how you could travel with your desktop and how many Mountain Dews you could fit in your cargo pants pockets? If this memory puts a warm feeling in your heart, well, this article is for you.
|
||||
|
||||
Get ready to play Doom again, only this time you'll be playing for a legitimate work reason: doing chaos engineering. I'll be using my [fork of Kube DOOM][2] (with a new Helm chart because that's how I sometimes spend my weekends). I also have a pull request with the [original Kube DOOM][3] creator that I'm waiting to hear about.
|
||||
|
||||
The first article in this series explained [what chaos engineering is][4], and the second demonstrated how to get your [system's steady state][5] so that you can compare it against a chaos state. In the next few articles, I introduced some chaos engineering tools you can use: [Litmus for testing][6] arbitrary failures and experiments in your Kubernetes cluster; [Chaos Mesh][7], an open source chaos orchestrator with a web user interface; and [Kube-monkey][8] for stress-testing your systems by scheduling random termination pods in your cluster.
|
||||
|
||||
In this sixth article, I'll use Pop!_OS 20.04, Helm 3, Minikube 1.14.2, a VNC viewer, and Kubernetes 1.19.
|
||||
|
||||
### Configure Minikube
|
||||
|
||||
If you haven't already, [install Minikube][9] in whatever way that makes sense for your environment. If you have enough resources, I recommend giving your virtual machine a bit more than the default memory and CPU power:
|
||||
|
||||
|
||||
```
|
||||
$ minikube config set memory 8192
|
||||
❗ These changes will take effect upon a minikube delete and then a minikube start
|
||||
$ minikube config set cpus 6
|
||||
❗ These changes will take effect upon a minikube delete and then a minikube start
|
||||
```
|
||||
|
||||
Then start and check the status of your system:
|
||||
|
||||
|
||||
```
|
||||
$ minikube start
|
||||
😄 minikube v1.14.2 on Debian bullseye/sid
|
||||
🎉 minikube 1.19.0 is available! Download it: <https://github.com/kubernetes/minikube/releases/tag/v1.19.0>
|
||||
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
|
||||
|
||||
✨ Using the docker driver based on user configuration
|
||||
👍 Starting control plane node minikube in cluster minikube
|
||||
🔥 Creating docker container (CPUs=6, Memory=8192MB) ...
|
||||
🐳 Preparing Kubernetes v1.19.0 on Docker 19.03.8 ...
|
||||
🔎 Verifying Kubernetes components...
|
||||
🌟 Enabled addons: storage-provisioner, default-storageclass
|
||||
🏄 Done! kubectl is now configured to use "minikube" by default
|
||||
$ minikube status
|
||||
minikube
|
||||
type: Control Plane
|
||||
host: Running
|
||||
kubelet: Running
|
||||
apiserver: Running
|
||||
kubeconfig: Configured
|
||||
```
|
||||
|
||||
### Preinstall pods with Helm
|
||||
|
||||
Before moving forward, you'll need to deploy some pods into your cluster. To do this, I generated a simple Helm chart and changed the replicas in my values file from 1 to 8.
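For reference, that is a one-line change in the chart's `values.yaml`; the key shown here is the one the standard `helm create` scaffold generates:

```
# values.yaml (excerpt)
replicaCount: 8
```

Eight replicas give Kube DOOM a healthy supply of enemies to represent as pods.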
|
||||
|
||||
If you need to generate a Helm chart, you can read my article on [creating a Helm chart][10] for guidance. I created a Helm chart named `nginx` and created a namespace to install my chart into using the commands below.
|
||||
|
||||
Create a namespace:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl create ns nginx
|
||||
```
|
||||
|
||||
Install the chart in your new namespace with a name:
|
||||
|
||||
|
||||
```
|
||||
$ helm install chaos-pods nginx -n nginx
|
||||
|
||||
NAME: chaos-pods
|
||||
LAST DEPLOYED: Sun May 23 10:15:52 2021
|
||||
NAMESPACE: nginx
|
||||
STATUS: deployed
|
||||
REVISION: 1
|
||||
NOTES:
|
||||
1\. Get the application URL by running these commands:
|
||||
export POD_NAME=$(kubectl get pods --namespace nginx -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=chaos-pods" -o jsonpath="{.items[0].metadata.name}")
|
||||
export CONTAINER_PORT=$(kubectl get pod --namespace nginx $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
|
||||
echo "Visit <http://127.0.0.1:8080> to use your application"
|
||||
kubectl --namespace nginx port-forward $POD_NAME 8080:$CONTAINER_PORT
|
||||
```
|
||||
|
||||
### Install Kube DOOM
|
||||
|
||||
You can use any [Virtual Network Computing][11] (VNC) viewer you want; I installed [TigerVNC][12] on my Linux box. There are several ways to set up Kube DOOM. Before I generated my Helm chart, you could set it up with [kind][13] or run it locally with Docker, and the [README][14] contains instructions for those uses.
|
||||
|
||||
Get started with a `git clone`:
|
||||
|
||||
|
||||
```
|
||||
$ git clone git@github.com:Alynder/kubedoom.git
|
||||
Cloning into 'kubedoom'...
|
||||
```
|
||||
|
||||
Then change directory into the `kubedoom/helm` folder:
|
||||
|
||||
|
||||
```
|
||||
$ cd kubedoom/helm/
|
||||
```
|
||||
|
||||
Since the base values file is already set up correctly, you just need to run a single install command:
|
||||
|
||||
|
||||
```
|
||||
$ helm install kubedoom kubedoom/ -n kubedoom
|
||||
NAME: kubedoom
|
||||
LAST DEPLOYED: Mon May 31 11:16:58 2021
|
||||
NAMESPACE: kubedoom
|
||||
STATUS: deployed
|
||||
REVISION: 1
|
||||
NOTES:
|
||||
1\. Get the application URL by running these commands:
|
||||
export NODE_PORT=$(kubectl get --namespace kubedoom -o jsonpath="{.spec.ports[0].nodePort}" services kubedoom-kubedoom-chart)
|
||||
export NODE_IP=$(kubectl get nodes --namespace kubedoom -o jsonpath="{.items[0].status.addresses[0].address}")
|
||||
echo http://$NODE_IP:$NODE_PORT
|
||||
```
|
||||
|
||||
Everything should be installed, set up, and ready to go.
|
||||
|
||||
### Play with Kube DOOM
|
||||
|
||||
Now you just need to get in there, run a few commands, and start playing your new chaos video game. The first command is a port forward, followed by the VNC viewer connection command. The VNC viewer connection needs a password, which is `idbehold`.
|
||||
|
||||
Find your pod for the port forward:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl get pods -n kubedoom
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
kubedoom-kubedoom-chart-676bcc5c9c-xkwpp 1/1 Running 0 68m
|
||||
```
|
||||
|
||||
Run the `port-forward` command using your pod name:
|
||||
|
||||
|
||||
```
|
||||
$ kubectl port-forward kubedoom-kubedoom-chart-676bcc5c9c-xkwpp 5900:5900 -n kubedoom
|
||||
Forwarding from 127.0.0.1:5900 -> 5900
|
||||
Forwarding from [::1]:5900 -> 5900
|
||||
```
|
||||
|
||||
Everything is ready to play, so you just need to run the VNC viewer command (shown below with output):
|
||||
|
||||
|
||||
```
|
||||
$ vncviewer localhost:5900
|
||||
|
||||
TigerVNC Viewer 64-bit v1.10.1
|
||||
Built on: 2020-04-09 06:49
|
||||
Copyright (C) 1999-2019 TigerVNC Team and many others (see README.rst)
|
||||
See <https://www.tigervnc.org> for information on TigerVNC.
|
||||
|
||||
Mon May 31 11:33:23 2021
|
||||
DecodeManager: Detected 64 CPU core(s)
|
||||
DecodeManager: Creating 4 decoder thread(s)
|
||||
CConn: Connected to host localhost port 5900
|
||||
```
|
||||
|
||||
Next, you'll see the password request, so enter it (`idbehold`, as given above).
|
||||
|
||||
![VNC authentication][16]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][17])
|
||||
|
||||
Once you are logged in, you should be able to walk around and see your enemies with pod names.
|
||||
|
||||
![Kube Doom pods][18]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][17])
|
||||
|
||||
I'm terrible at this game, so I use some cheats to have a little more fun:
|
||||
|
||||
* Type `idspispopd` to walk straight through a wall to get to your army of pods.
|
||||
* Can't handle the gun? That's cool; I'm bad at it, too. If you type `idkfa` and press the number **5**, you'll get a better weapon.
|
||||
|
||||
|
||||
|
||||
This is what it looks like when you kill something (I used [k9s][19] for this view).
|
||||
|
||||
![Killing pods in Kube DOOM][20]
|
||||
|
||||
(Jess Cherry, [CC BY-SA 4.0][17])
|
||||
|
||||
### Final notes
|
||||
|
||||
Because this application requires a cluster-admin role, you have to really pay attention to the names of the pods—you might run into a kube-system pod, and you'd better run away. If you kill one of those pods, you will kill an important part of the system.
|
||||
|
||||
I love this application because it's the quickest gamified way to do chaos engineering. It did remind me of how bad I was at this video game, but it was hilarious to try it. Happy hunting!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/6/kube-doom
|
||||
|
||||
作者:[Jessica Cherry][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/cherrybomb
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-lead_cat-keyboard.png?itok=fuNmiGV- (A cat under a keyboard.)
|
||||
[2]: https://github.com/Alynder/kubedoom
|
||||
[3]: https://github.com/storax/kubedoom
|
||||
[4]: https://opensource.com/article/21/5/11-years-kubernetes-and-chaos
|
||||
[5]: https://opensource.com/article/21/5/get-your-steady-state-chaos-grafana-and-prometheus
|
||||
[6]: https://opensource.com/article/21/5/total-chaos-litmus
|
||||
[7]: https://opensource.com/article/21/5/get-meshy-chaos-mesh
|
||||
[8]: https://opensource.com/article/21/6/chaos-kubernetes-kube-monkey
|
||||
[9]: https://minikube.sigs.k8s.io/docs/start/
|
||||
[10]: https://opensource.com/article/20/5/helm-charts
|
||||
[11]: https://en.wikipedia.org/wiki/Virtual_Network_Computing
|
||||
[12]: https://tigervnc.org/
|
||||
[13]: https://kind.sigs.k8s.io/
|
||||
[14]: https://github.com/Alynder/kubedoom/blob/master/README.md
|
||||
[15]: mailto:git@github.com
|
||||
[16]: https://opensource.com/sites/default/files/uploads/vnc-password.png (VNC authentication)
|
||||
[17]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[18]: https://opensource.com/sites/default/files/uploads/doom-pods.png (Kube Doom pods)
|
||||
[19]: https://opensource.com/article/20/5/kubernetes-administration
|
||||
[20]: https://opensource.com/sites/default/files/uploads/doom-pods_kill.png (Killing pods in Kube DOOM)
|
@ -0,0 +1,111 @@
|
||||
[#]: subject: (Subtitld: A Cross-Platform Open-Source Subtitle Editor)
|
||||
[#]: via: (https://itsfoss.com/subtitld/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Subtitld: A Cross-Platform Open-Source Subtitle Editor
|
||||
======
|
||||
|
||||
Subtitles make the experience of watching a video seamless. You do not necessarily need to understand the language of the video; subtitles help you figure out what's happening through a text version of the audio in your preferred language.
|
||||
|
||||
Streaming platforms provide subtitles for most of their content, but you might have to add subtitles yourself for some videos in your local collection.
|
||||
|
||||
While you can do that by simply downloading SRT files and loading them in your video player, how do you edit them, remove them, or transcribe a video? Subtitld is an open-source subtitle editor that comes to the rescue.
|
||||
|
||||
### Subtitld: Create, Remove, Slice, and Transcribe Subtitles
|
||||
|
||||
Subtitld is a free and open-source project that lets you make the most out of your subtitles.
|
||||
|
||||
![][1]
|
||||
|
||||
If you do not have a subtitle, create one; if you need to edit one, go ahead. With this open-source tool, you get many options for working with subtitles.
|
||||
|
||||
In other words, it is a full-fledged subtitle editor and, as far as I have come across, one of a kind.
|
||||
|
||||
Let me highlight some key features before you decide to try it.
|
||||
|
||||
### Features of Subtitld
|
||||
|
||||
![][2]
|
||||
|
||||
It offers a great many functions. Not everyone needs all of them, but if you regularly create, edit, and work with subtitles, it should come in pretty handy.
|
||||
|
||||
Here’s a list of them:
|
||||
|
||||
* Create subtitles
|
||||
* Edit subtitles
|
||||
* Move subtitles using a timeline to sync manually
|
||||
* Zoom in/out function to help with a crowded timeline
|
||||
* Supports saving to SRT file format
|
||||
* Supports various other formats to import and export (SSA, TTML, SBV, DFXP, VTT, XML, SCC and SAMI)
|
||||
* Easy to resize or adjust the duration of a subtitle from the timeline
|
||||
* Merge with other subtitles or just slice a subtitle in a project
|
||||
* Ability to enable grids to visualize by frames, scenes, or seconds
|
||||
* Playback in the editor to check how the subtitles work
|
||||
* Snap the subtitles in the timeline to avoid overlapping
|
||||
  * Add or remove individual subtitles within a project
|
||||
  * Enable safety margins to keep subtitles within the safe display area
|
||||
* Adjust the playback speed
|
||||
* Keyboard shortcuts available
|
||||
* Auto-transcribe
|
||||
* Export videos with subtitles burned in
|
||||
* Unlimited undo
|
||||
|
||||
|
||||
|
||||
In addition to these features, the visual cues from the audio waveform also help with timing.
|
||||
|
||||
![][3]
|
||||
|
||||
Overall, you can do many things with it, including using it professionally if you transcribe videos and want to edit the subtitles in one go.
|
||||
|
||||
**Recommended Read:**
|
||||
|
||||
![][4]
|
||||
|
||||
#### [App Highlight: Penguin Subtitle Player for Adding Subtitles to Online Videos][5]
|
||||
|
||||
With the free and open source application, Penguin subtitle player, you can add subtitles to any online videos. Learn more about this nifty app.
|
||||
|
||||
### Installing Subtitld in Linux
|
||||
|
||||
While it is also available for Windows, you can easily install it on Linux using the [snap package][6]. You will not find any binary packages or Flatpak available, but you should be able to install it on any Linux distribution [using snap on Linux][7].
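Assuming snap support is already enabled on your distribution, installation should be a single command (the package name comes from the Snap Store listing linked above):

```
sudo snap install subtitld
```

Once the snap is installed, Subtitld should appear in your application menu like any other desktop app.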
|
||||
|
||||
[Subtitld][8]
|
||||
|
||||
You can find the source code on [GitLab][9] if you want to explore.
|
||||
|
||||
### Closing Thoughts
|
||||
|
||||
Subtitld offers fine-grained settings for syncing or adding subtitles to a video; I just tested a few basic functions to import, export, add, and remove subtitles.
|
||||
|
||||
The auto-transcribe feature is still in beta (as of publishing this), and the user interface could use some improvements. For instance, when I hover over the buttons inside the editor, there are no tooltips to tell me what they do.
|
||||
|
||||
Overall, it is a useful tool to have available on Linux. What do you think about it? Please don’t hesitate to let me know your thoughts in the comments down below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/subtitld/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/subtitld-editor.png?resize=800%2C546&ssl=1
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/06/subtitld-export.png?resize=800%2C469&ssl=1
|
||||
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/subtitld-screenshot-1.png?resize=800%2C588&ssl=1
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/Add_subtitle_online_videos.png?fit=800%2C450&ssl=1
|
||||
[5]: https://itsfoss.com/penguin-subtitle-player/
|
||||
[6]: https://snapcraft.io/subtitld
|
||||
[7]: https://itsfoss.com/use-snap-packages-ubuntu-16-04/
|
||||
[8]: https://subtitld.jonata.org
|
||||
[9]: https://gitlab.com/jonata/subtitld
|
108
sources/tech/20210608 Tune your MySQL queries like a pro.md
Normal file
108
sources/tech/20210608 Tune your MySQL queries like a pro.md
Normal file
@ -0,0 +1,108 @@
|
||||
[#]: subject: (Tune your MySQL queries like a pro)
|
||||
[#]: via: (https://opensource.com/article/21/5/mysql-query-tuning)
|
||||
[#]: author: (Dave Stokes https://opensource.com/users/davidmstokes)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Tune your MySQL queries like a pro
|
||||
======
|
||||
Optimizing your queries isn't a dark art; it's just simple engineering.
|
||||
![woman on laptop sitting at the window][1]
|
||||
|
||||
Many people consider tuning database queries to be some mysterious "dark art" out of a Harry Potter novel; with the wrong incantation, your data turns from a valuable resource into a pile of mush.
|
||||
|
||||
In reality, tuning queries for a relational database system is simple engineering and follows easy-to-understand rules or heuristics. The query optimizer translates the query you send to a [MySQL][2] instance, and then it determines the best way to get the requested data using those heuristics combined with what it knows about your data. Reread the last part of that: _"what it knows about your data_." The less the query optimizer has to guess about where your data is located, the better it can create a plan to deliver your data.
|
||||
|
||||
To give the optimizer better insight about the data, you can use indexes and histograms. Used properly, they can greatly increase the speed of a database query. If you follow the recipe, you will get something you will like. But if you add your own ingredients to that recipe, you may not get what you want.
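For instance, assuming the `city` table used later in this article (with its `countrycode` join column), you could add an index and build a histogram like this; the index name and bucket count are illustrative choices:

```sql
-- Index the join column so the optimizer can seek instead of scan
CREATE INDEX idx_city_countrycode ON city (countrycode);

-- Build a histogram so the optimizer knows how values are distributed
ANALYZE TABLE city UPDATE HISTOGRAM ON countrycode WITH 100 BUCKETS;
```

Histograms are especially useful on columns that are not indexed, because they give the optimizer distribution statistics without the write overhead of maintaining an index.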
|
||||
|
||||
### Cost-based optimizer
|
||||
|
||||
Most modern relational databases use a cost-based optimizer to determine how to retrieve your data out of the database. That cost is based on reducing very expensive disk reads as much as possible. The query optimizer code inside the database server keeps statistics on getting that data as it is encountered, and it builds a historical model of what it took to get the data.
|
||||
|
||||
But historical data can be out of date. It's like going to the store to buy your favorite snack and being shocked at a sudden price increase or that the store closed. Your server's optimization process may make a bad assumption based on old information, and that will produce a poor query plan.
|
||||
|
||||
A query's complexity can work against optimization. The optimizer wants to deliver the lowest-cost plan among the available options. Joining five different tables means there are five-factorial, or 120, possible join orders. Heuristics are built into the code to shortcut evaluating all the possible plans. MySQL wants to generate a new query plan every time it sees a query, while other databases, such as Oracle, can have a query plan locked down. This is why giving the optimizer detailed information on your data is vital. For consistent performance, it really helps to have up-to-date information for the query optimizer to use when making query plans.
|
||||
|
||||
Also, rules are built into the optimizer with assumptions that probably do not match the reality of your data. The query optimizer will assume all the data in a column is evenly distributed among all the rows unless it has other information. And it will default to the smaller of two possible indexes if it sees no alternative. While the cost-based model for an optimizer can make a lot of good decisions, you can smack into cases where you will not get an optimal query plan.
|
||||
|
||||
### A query plan?
|
||||
|
||||
A query plan is what the optimizer generates for the server to execute from your query. To see the query plan, prepend the word `EXPLAIN` to your query. For example, the following query asks for the name of a city from the city table and the name of the corresponding country from the country table, with the two tables linked by the country's unique code. This case is interested only in the top five cities from the United Kingdom:
|
||||
|
||||
|
||||
```
|
||||
SELECT city.name AS 'City',
|
||||
country.name AS 'Country'
|
||||
FROM city
|
||||
JOIN country ON (city.countrycode = country.code)
|
||||
WHERE country.code = 'GBR'
|
||||
LIMIT 5;
|
||||
```
|
||||
|
||||
Prepending `EXPLAIN` in front of this query will give the query plan generated by the optimizer. Skipping over all but the end of the output, it is easy to see the optimized query:
|
||||
|
||||
|
||||
```
|
||||
SELECT `world`.`city`.`Name` AS `City`,
|
||||
'United Kingdom' AS `Country`
|
||||
FROM `world`.`city`
|
||||
JOIN `world`.`country`
|
||||
WHERE (`world`.`city`.`CountryCode` = 'GBR')
|
||||
LIMIT 5;
|
||||
```
|
||||
|
||||
The big changes are that `country.name AS 'Country'` was changed to `'United Kingdom' AS 'Country'` and the `WHERE` clause went from looking in the country table to the city table. The optimizer determined that these two changes would provide a faster result than the original query.
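
If you want to see not just the plan but how the query actually ran, MySQL 8.0.18 and later also support `EXPLAIN ANALYZE`, which executes the query and reports estimated versus measured row counts and timings for each step (shown here against the same sample `world` schema):

```
EXPLAIN ANALYZE
SELECT city.name AS 'City',
       country.name AS 'Country'
FROM city
JOIN country ON (city.countrycode = country.code)
WHERE country.code = 'GBR'
LIMIT 5;
```

Comparing the estimated and actual row counts in its output is a quick way to spot where the optimizer's statistics have gone stale.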
|
||||
|
||||
### Indexes
|
||||
|
||||
You will hear indexes and keys used interchangeably in the MySQL-verse. However, indexes are made up of keys, and keys are a way to identify a record, hopefully uniquely. If a column is designated as a key, the optimizer can search a list of those keys to find the desired record without reading the entire table. Without an index, the server has to start at the first row and read through every row of data. If the column was created as a unique index, then the server can go to that one row of data and ignore the rest. The more unique the values in the index (also known as its cardinality), the better. Remember, we are looking for faster ways of getting to the data.
|
||||
|
||||
The MySQL default InnoDB storage engine wants your table to have a primary key and will store your data in a B+ tree by that key. A recently added MySQL feature is invisible columns—columns that do not return data unless the column is explicitly named in the query. For example, `SELECT * FROM foo;` doesn't return any columns designated as invisible. This feature provides a way to add a primary key to older tables without recoding all the queries to include that new column.
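
As a sketch of that retrofit (the table name here is hypothetical, and invisible columns require MySQL 8.0.23 or later), you could add an invisible primary key like this:

```
ALTER TABLE legacy_orders
  ADD COLUMN id BIGINT UNSIGNED AUTO_INCREMENT INVISIBLE,
  ADD PRIMARY KEY (id);
```

Existing `SELECT *` queries keep returning the same columns they always did, while InnoDB now has a proper primary key to organize the table around.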
|
||||
|
||||
To make this even more complicated, there are many types of indexes, such as functional, spatial, and composite. There are even cases where you can create an index that will provide all the requested information for a query so that there is no need to access the data table.
|
||||
|
||||
Describing the various indexes is beyond the scope of this article, so just think of an index as a shortcut to the record or records you desire. You can create an index on one or more columns or part of those columns. My physician's system can look up my records by the first three letters of my last name and birthdate. Using multiple columns requires using the most unique field first, then the second most unique, and so forth. An index on year-month-day works for year-month-day, year-month, and year searches, but it doesn't work for day, month-day, or year-day searches. It helps to design your indexes around how you want to use your data.
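
As a sketch of that kind of design (the table and column names are hypothetical), a composite index for the physician-style lookup described above might look like:

```
CREATE INDEX idx_patient_lookup
  ON patients (last_name(3), birthdate);
```

This index serves searches on the first three letters of `last_name` alone, or on `last_name` plus `birthdate`, but not on `birthdate` by itself—the same leftmost-prefix behavior as the year-month-day example above.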
|
||||
|
||||
### Histograms
|
||||
|
||||
A histogram is a distribution of your data. If you were alphabetizing people by their last name, you could use a "logical bucket" for the folks with last names starting with the letters A to F, then another for G to J, and so forth. The optimizer assumes that the data is evenly distributed within the column, but this is rarely the case in practical use.
|
||||
|
||||
MySQL provides two types of histograms: equal-height, where the data is divided evenly among the buckets, and singleton, where a single value is in each bucket. You can have up to 1,024 buckets. The number of buckets to choose for your data column depends on many factors, including how many distinct values you have, how skewed your data is, and how accurate you really need to be. After a certain number of buckets, there are diminishing returns.
|
||||
|
||||
This command will create a histogram of 10 buckets on column c1 of table t:
|
||||
|
||||
|
||||
```
|
||||
ANALYZE TABLE t UPDATE HISTOGRAM ON c1 WITH 10 BUCKETS;
|
||||
```
|
||||
|
||||
Imagine you sell small, medium, and large socks, and each size has its own storage bin. To find the size you need, you go to the bin for that size. MySQL has had histograms since MySQL 8.0 was released three years ago, yet they are not as well known as indexes. Unlike an index, a histogram adds no overhead when you insert, update, or delete a record. Instead, to refresh a histogram, you must rerun the `ANALYZE TABLE ... UPDATE HISTOGRAM` command. Histograms are therefore a good fit when the data does not churn very much; frequent changes to the data reduce their efficiency.
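
Histograms live in the data dictionary and can be inspected or removed with ordinary SQL (using the same table `t` and column `c1` as above):

```
-- See the buckets MySQL built
SELECT HISTOGRAM
FROM information_schema.COLUMN_STATISTICS
WHERE TABLE_NAME = 't' AND COLUMN_NAME = 'c1';

-- Remove the histogram if the data starts churning
ANALYZE TABLE t DROP HISTOGRAM ON c1;
```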
|
||||
|
||||
### Indexes or histograms?
|
||||
|
||||
Use indexes for unique items where you need to access the data directly. There is overhead for updates, deletes, and inserts, but you get speedy access if your data is properly architected. Use histograms for data that does not get updated frequently, such as quarterly results for the last dozen years.
|
||||
|
||||
### Parting thoughts
|
||||
|
||||
This article grew out of a recent presentation at the [Open Source 101 conference][3]. And that presentation grew out of a workshop at a [PHP UK Conference][4]. Query tuning is a complex subject, and each time I present on indexes and histograms, I find ways to refine my presentation. But each presentation also shows that many folks in the software world are not well-versed in indexes and tend to use them incorrectly. Histograms have not been around long enough (I hope) to have been misused similarly.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/5/mysql-query-tuning
|
||||
|
||||
作者:[Dave Stokes][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/davidmstokes
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
|
||||
[2]: https://www.mysql.com/
|
||||
[3]: https://opensource101.com/
|
||||
[4]: https://www.phpconference.co.uk/
|
@ -0,0 +1,104 @@
|
||||
[#]: subject: (Helix: A Terminal Based Text Editor for Power Linux Users)
|
||||
[#]: via: (https://itsfoss.com/helix-editor/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Helix: A Terminal Based Text Editor for Power Linux Users
|
||||
======
|
||||
|
||||
When it comes to [terminal based text editors][1], it is usually Vim, Emacs and Nano that get the limelight.
|
||||
|
||||
That doesn’t mean there are not other such text editors. [Neovim][2], a modern enhancement to Vim, is one of many such examples.
|
||||
|
||||
Along the same lines, I would like to introduce yet another terminal-based text editor: the Helix Editor.
|
||||
|
||||
### Helix, a modern text editor written in Rust
|
||||
|
||||
![][3]
|
||||
|
||||
[Helix][4] is written in Rust and uses Tree-sitter for syntax highlighting. The developer claims it is faster than regex-based highlighting because Tree-sitter parses code into syntax trees, as a compiler does, which gives a lot more information about code structure.
|
||||
|
||||
You can track local variables, calculate indentation, and manipulate selections to select syntax nodes. It is robust enough to produce results even in the presence of syntax errors.
|
||||
|
||||
The main focus of Helix is on ‘multiple selection’. This is based on [Kakoune][5].
|
||||
|
||||
The built-in language server support provides context aware completion, diagnostics and code actions.
|
||||
|
||||
### Installing Helix on Linux
|
||||
|
||||
For Arch and Manjaro users, Helix is available in the AUR in two packages:
|
||||
|
||||
* [helix-bin][6]: contains prebuilt binary from GitHub releases
|
||||
  * [helix-git][7]: builds the master branch of the Helix repository
|
||||
|
||||
|
||||
|
||||
As an Arch user, you probably already know [how to install applications using the AUR][8].
|
||||
|
||||
For other Linux distributions, you have to use Cargo, the Rust package manager, which installs Rust packages. Consider it the Rust equivalent of Python's pip.
|
||||
|
||||
You should be able to install Cargo using your distribution’s package manager. On Ubuntu based distributions, install cargo like this:
|
||||
|
||||
```
|
||||
sudo apt install cargo
|
||||
```
|
||||
|
||||
Next, you clone the Helix repository:
|
||||
|
||||
```
|
||||
git clone --recurse-submodules --shallow-submodules -j8 https://github.com/helix-editor/helix
|
||||
```
|
||||
|
||||
Move to the cloned directory:
|
||||
|
||||
```
|
||||
cd helix
|
||||
```
|
||||
|
||||
And now use cargo to install Helix:
|
||||
|
||||
```
|
||||
cargo install --path helix-term --features "embed_runtime"
|
||||
```
|
||||
|
||||
One last step is to add the directory containing the hx binary to the PATH variable so that you can run the editor from anywhere. Add this line to your bashrc or bash profile:
|
||||
|
||||
```
|
||||
export PATH="$HOME/.cargo/bin:$PATH"
|
||||
```
|
||||
|
||||
Now that everything is set, you should be able to use the editor by typing `hx` in the terminal.
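
If the `hx` command is not found, a quick sanity check (assuming a Bash-like shell) is to confirm that Cargo's bin directory really is on your `PATH`:

```shell
# Add Cargo's bin directory for the current session...
export PATH="$HOME/.cargo/bin:$PATH"
# ...then count how many PATH entries point at it (should be at least 1)
echo "$PATH" | tr ':' '\n' | grep -c "^$HOME/.cargo/bin$"
```

If the count is zero, the export line never made it into your shell startup file.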
|
||||
|
||||
You can find the keyboard shortcuts for using Helix on its [documentation page][9]:
|
||||
|
||||
[Helix Keyboard Shortcuts][10]
|
||||
|
||||
How does it compare with Vim or Neovim? I cannot say. I can use Vim for basic editing, but I am no Vim ninja. If you are someone who swears by and lives in Vim (or Emacs), try Helix and judge it for yourself.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/helix-editor/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/command-line-text-editors-linux/
|
||||
[2]: https://neovim.io/
|
||||
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/helix-editor-screenshot.png?resize=800%2C515&ssl=1
|
||||
[4]: https://helix-editor.com/
|
||||
[5]: http://kakoune.org/
|
||||
[6]: https://aur.archlinux.org/packages/helix-bin/
|
||||
[7]: https://aur.archlinux.org/packages/helix-git/
|
||||
[8]: https://itsfoss.com/aur-arch-linux/
|
||||
[9]: https://docs.helix-editor.com/
|
||||
[10]: https://docs.helix-editor.com/keymap.html
|
@ -0,0 +1,66 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 requirements of cloud-native software)
|
||||
[#]: via: (https://opensource.com/article/20/1/cloud-native-software)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
|
||||
|
||||
6 requirements of cloud-native software
|
||||
======
|
||||
A checklist for developing and implementing cloud-native (container-first) software.
|
||||
![Team checklist][1]
|
||||
|
||||
For many years, the monolithic application was the standard enterprise architecture for achieving business requirements. But that changed significantly once cloud infrastructure began accelerating business at scale and speed. Application architectures also transformed to fit the cloud-native applications and [microservices][2], [serverless][3], and event-driven services that run on immutable infrastructure across hybrid and multicloud platforms.
|
||||
|
||||
### The cloud-native connection to Kubernetes
|
||||
|
||||
According to the [Cloud Native Computing Foundation][4] (CNCF):
|
||||
|
||||
> "Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach."
|
||||
>
|
||||
> "These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil."
|
||||
|
||||
|
||||
Container orchestration platforms like [Kubernetes][5] allow DevOps teams to build immutable infrastructure to develop, deploy, and manage application services. The speed at which rapid iteration happens now aligns with business needs. Developers building containers to run in Kubernetes need an effective place to do it.
|
||||
|
||||
### Requirements for cloud-native software
|
||||
|
||||
What capabilities are required to create a cloud-native application architecture, and what benefits will developers gain from it?
|
||||
|
||||
While there are many ways to build and architect cloud-native applications, here are a few ingredients to consider:
|
||||
|
||||
  * **Runtimes:** They are more likely to be written in container-first and/or Kubernetes-native languages, which means runtimes such as Java, Node.js, Go, Python, and Ruby.
|
||||
  * **Security:** Security is paramount when deploying and maintaining applications in a multicloud or hybrid cloud environment, and it should be part of the environment.
|
||||
  * **Observability:** Use tools such as Prometheus, Grafana, and Kiali, which enhance observability by providing realtime metrics and more information about how applications are being used and behave in the cloud.
|
||||
  * **Efficiency:** Focus on a tiny memory footprint, small artifact size, and fast startup time to make applications portable across hybrid/multicloud platforms.
|
||||
  * **Interoperability:** Integrate cloud-native apps with open source technologies that meet the requirements above, including Infinispan, MicroProfile, Hibernate, Kafka, Jaeger, Prometheus, and more, to build standard runtime architectures.
|
||||
  * **DevOps/DevSecOps:** These methodologies are designed for continuous deployment to production, in line with the minimum viable product (MVP) and with security as part of the tooling.
|
||||
|
||||
|
||||
|
||||
### Making cloud-native concrete
|
||||
|
||||
Cloud-native can seem like an abstract term, but reviewing the definition and thinking like a developer can make it more concrete. For cloud-native applications to succeed, they need to include a long, well-defined list of ingredients.
|
||||
|
||||
How are you planning your cloud-native application design? Share your thoughts in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/1/cloud-native-software
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist)
|
||||
[2]: https://opensource.com/resources/what-are-microservices
|
||||
[3]: https://opensource.com/article/18/11/open-source-serverless-platforms
|
||||
[4]: https://github.com/cncf/toc/blob/master/DEFINITION.md
|
||||
[5]: https://opensource.com/resources/what-is-kubernetes
|
@ -0,0 +1,202 @@
|
||||
[#]: subject: (4 essential characteristics of successful APIs)
|
||||
[#]: via: (https://opensource.com/article/21/5/successful-apis)
|
||||
[#]: author: (Tom Wilson https://opensource.com/users/tomwillson4014)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (ywxgod)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
4 essential characteristics of successful APIs
|
||||
======
|
||||
Creating an API (application programming interface) requires much more than making it "work."
|
||||
![Looking at a map][1]
|
||||
|
||||
If you are building an application based on the client-server model, you need an application programming interface (API). An API is a clear and definitive definition of the boundary between one process and another. A common boundary definition in web applications is a REST/JSON API.
|
||||
|
||||
While many developers may focus mainly on making an API work (or function), there are some "non-functional" requirements that also need their attention. There are four non-functional requirements that are _must-haves_ for all APIs:
|
||||
|
||||
  * Security
|
||||
  * Documentation
|
||||
  * Validation
|
||||
  * Testing
|
||||
|
||||
|
||||
|
||||
### Security
|
||||
|
||||
Security is an essential requirement in software development. For API developers, API security falls into four main areas:
|
||||
|
||||
  1. HTTPS/SSL certificates
|
||||
  2. Cross-origin resource sharing
|
||||
  3. Authentication and JSON Web Tokens
|
||||
  4. Authorization and scopes
|
||||
|
||||
|
||||
|
||||
#### 1\. HTTPS/SSL certificates
|
||||
|
||||
The gold standard for web applications is the HTTPS protocol, which requires an SSL certificate. [Let's Encrypt][2] can help you get one. Let's Encrypt is a free, automated, and open certificate authority from the nonprofit Internet Security Research Group (ISRG).
|
||||
|
||||
Let's Encrypt's software generates central-authority certificates for your domain. These certificates ensure that data from your API to the client is encrypted point to point.
|
||||
|
||||
Let's Encrypt supports several deployment options for certificate management. Check the [documentation][3] to figure out which one best fits your needs.
|
||||
|
||||
#### 2\. Cross-origin resource sharing
|
||||
|
||||
Cross-origin resource sharing (CORS) is a browser preflight check based on the browser's security policy. If your API server is not in the same domain as the requesting client, you will have to deal with CORS. For example, if your server runs on **api.domain-a.com** and receives a request from a client on **domain-b.com**, CORS makes the browser send an HTTP preflight request to see whether your API service will accept requests from that client domain.
|
||||
|
||||
[As MDN defines it][4]:
|
||||
|
||||
> "Cross-origin resource sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources."
|
||||
|
||||
![CORS principles][5]
|
||||
|
||||
([MDN documentation][4], [CC-BY-SA 2.5][6])
|
||||
|
||||
There are also many helper libraries for [Node.js][7] that [help API developers handle CORS][8].
|
||||
|
||||
#### 3\. Authentication and JSON Web Tokens
|
||||
|
||||
There are several ways to authenticate users in your API, but one of the best is to use JSON Web Tokens (JWT), which are signed with well-known cryptographic libraries.
|
||||
|
||||
When a client logs in, the identity-management service issues the client a JWT. The client can then use that token to make requests to the API. When the API receives a request, it reads the public or private key from the server to validate the token.
|
||||
|
||||
Existing libraries, including [jsonwebtoken][9], can help with token validation. For more information about JWT, and libraries that support it in various languages, see [JWT.io][10].
|
||||
|
||||
![JWT verification example][11]
|
||||
|
||||
(Tom Wilson, [Hyper63 blog][12])
|
||||
|
||||
#### 4\. Authorization and scopes
|
||||
|
||||
Authentication matters, but so does authorization: _does an authenticated client have the privilege to have the server execute a given request?_ That is where **scopes** are valuable. When the identity-management server authenticates a client and creates the JWT, it also supplies a scope, and that scope determines whether the client's API request can be executed by the server. This removes the need for the server to make unnecessary lookups against an access-control list.
|
||||
|
||||
A scope is usually a block of text (typically space-delimited) that describes an API's access capabilities. In general, scopes are divided into Resources and Actions. This pattern works well for REST/JSON APIs because they have a similar RESOURCE:ACTION structure (for example, ARTICLE:WRITE or ARTICLE:READ, where ARTICLE is the resource and READ and WRITE are the actions).
|
||||
|
||||
Scopes let your API focus on implementing features rather than reasoning about roles and users. The identity-management service can assign different scopes to different roles and users, then supply those scopes to the different clients it verifies with JWTs.
|
||||
|
||||
#### Recap
|
||||
|
||||
Security should always be a top requirement when developing and deploying APIs. While security is a broad topic, addressing these four areas will serve your API well in production.
|
||||
|
||||
### Documentation
|
||||
|
||||
_What's worse than no documentation? Outdated documentation._
|
||||
|
||||
Developers have a love/hate relationship with documentation. Nevertheless, documentation is a key part of an API's success. Developers need to learn from the documentation how to use the API, and the documentation you create plays a huge role in how well developers can use it.
|
||||
|
||||
When creating API documentation, focus on these three areas:
|
||||
|
||||
  1. Developer onboarding (README/introduction)
|
||||
  2. Technical reference (specification)
|
||||
  3. Usage (getting started and other guides)
|
||||
|
||||
|
||||
|
||||
#### 1\. Onboarding
|
||||
|
||||
When building an API service, you need to spell out things like: What does this API do? How do I set up the developer environment? How do I test the service? How do I report an issue? How do I deploy it?
|
||||
|
||||
You can usually answer these questions in a README file, which typically lives in your code repository and provides the most basic starting point and instructions for developers using your project.
|
||||
|
||||
A README file should contain:
|
||||
|
||||
  * A description of the API
|
||||
  * Links to the technical reference and guides
|
||||
  * How to set up the project as a developer
|
||||
  * How to test the project
|
||||
  * How to deploy the project
|
||||
  * Dependency management
|
||||
  * Contribution guidelines
|
||||
  * A code of conduct
|
||||
  * The license
|
||||
  * Acknowledgments
|
||||
|
||||
|
||||
|
||||
Your README should be concise; you do not have to explain every aspect, but provide enough information that developers can dig deeper once they are familiar with your project.
|
||||
|
||||
#### 2\. Technical reference
|
||||
|
||||
In a REST/JSON API, each endpoint maps to a specific function, so it is important that each endpoint has its own documentation defining the API's description, inputs, and possible outputs, with usage examples for various clients.
|
||||
|
||||
[OpenAPI][13] is a standard for creating REST/JSON documentation; it walks you through the details needed to write API documentation. OpenAPI can also generate demo documentation for your API.
|
||||
|
||||
#### 3\. Usage
|
||||
|
||||
A technical specification alone is not enough for your API's users. They also need to know how to use the API in specific situations and scenarios, and most potential users want to solve a problem they have with your API.
|
||||
|
||||
A good way to introduce users to your API is with a "getting started" page. A getting-started page walks users through a simple use case so they can quickly understand the benefits your API can bring them.
|
||||
|
||||
#### Recap
|
||||
|
||||
Documentation is a key component of any well-rounded API. When creating documentation, cover onboarding, the technical reference, and getting started, and your API's documentation will be complete.
|
||||
|
||||
### Validation
|
||||
|
||||
One often-overlooked aspect of API development is validation, the process of verifying input from external sources. These sources might be JSON data sent by a client or a response from a service you have requested. Checking not just the type of the data but also that it really is the data you expect can eliminate many potential problems. Knowing your boundaries, and what you can and cannot control, is an important aspect of API data validation.
|
||||
|
||||
The best strategy is to validate data at the edge of the boundary you control, before that data enters your logic. When a client sends data to your API, apply your validation before doing anything else with it: make sure an email is a real email address, a date is in the correct format, and a string meets its length requirements.
|
||||
|
||||
These simple checks add safety and consistency to your application. Also, when you receive data from a service, such as a database or a cache, revalidate it to make sure the returned result passes your data checks.
|
||||
|
||||
You can always hand-roll validation logic, and utility libraries such as [Lodash][14] and [Ramda][15] are great for small data objects. Validation libraries such as [Joi][16], [Yup][17], and [Zod][18] work even better, because they include common validations that save time and effort, and they create highly readable schemas. If you need something language-agnostic, look at [JSON Schema][19].
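
As a sketch of what hand-rolled validation at the boundary can look like (the field names and rules here are made up for illustration):

```javascript
// Hand-rolled validation at the API boundary (sketch; the signup fields are hypothetical).
function validateSignup(input) {
  const errors = [];
  // A deliberately simple email shape check: something@something.tld
  if (typeof input.email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input.email)) {
    errors.push('email must be a valid address');
  }
  // Enforce a length requirement on the name string
  if (typeof input.name !== 'string' || input.name.length < 1 || input.name.length > 64) {
    errors.push('name must be 1-64 characters');
  }
  return { ok: errors.length === 0, errors };
}

console.log(validateSignup({ email: 'a@b.co', name: 'Ada' }).ok);    // true
console.log(validateSignup({ email: 'not-an-email', name: '' }).ok); // false
```

A schema library replaces this boilerplate with a declarative description, but the principle is the same: reject bad data before it reaches your business logic.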
|
||||
|
||||
#### Recap
|
||||
|
||||
Validation is not glamorous, but it can save a ton of time that would otherwise be spent troubleshooting and writing data-migration scripts. Do not trust clients to send you clean data, and do not let bad data seep into your business logic or persistent data stores. Take the time to validate the data your API receives and the responses it requests. Although it may cause some frustration up front, it is much easier than tightening everything down and troubleshooting later.
|
||||
|
||||
### Testing
|
||||
|
||||
Testing is a best practice in software development and should be a primary non-functional requirement. Settling on a test strategy can be a challenge for any project, including APIs, because you have to balance the constraints you face in order to shape the strategy accordingly.
|
||||
|
||||
Integration testing is one of the most effective ways to test an API. In this mode, the team builds a test set that covers the application's flow from one point to another. A good integration-test flow includes testing the API's entry point and mocking the request to the backing service. By covering those two points, you cover the whole logic, from the start of the API request through the simulated server response and the data returned to the API.
|
||||
|
||||
Although it relies on mocks, this method lets us focus on the code-logic layer without depending on backend services or presentation logic. Tests with no dependencies are more reliable, easier to automate, and easier to plug into a continuous-integration pipeline.
|
||||
|
||||
For implementing integration tests, I use [Tape][20], [Test-server][21], and [Fetch-mock][22]. These libraries let us run isolated tests from the API request through the data response, and Fetch-mock can also capture outbound requests to the persistence layer.
|
||||
|
||||
#### Recap
|
||||
|
||||
While other kinds of testing and type checking also benefit an API, integration tests offer greater advantages in process efficiency and in build and maintenance time. A tool like Fetch-mock provides a clean mocking scenario at the service boundary.
|
||||
|
||||
### Focus on the fundamentals
|
||||
|
||||
Whether you are designing and building an application or an API, make sure you include the four fundamentals above. They are not the only non-functional requirements to consider; application monitoring, logging, and API management are others. Even so, security, documentation, validation, and testing are crucial focal points for building a successful API in any use case.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/5/successful-apis
|
||||
|
||||
作者:[Tom Wilson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[ywxgod](https://github.com/ywxgod)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/tomwillson4014
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tips_map_guide_ebook_help_troubleshooting_lightbulb_520.png?itok=L0BQHgjr (Looking at a map)
|
||||
[2]: https://letsencrypt.org/
|
||||
[3]: https://letsencrypt.org/docs/
|
||||
[4]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
|
||||
[5]: https://opensource.com/sites/default/files/uploads/cors_principle_1.png (CORS principles)
|
||||
[6]: https://creativecommons.org/licenses/by-sa/2.5/
|
||||
[7]: https://nodejs.org
|
||||
[8]: https://www.npmjs.com/search?q=CORS
|
||||
[9]: https://github.com/auth0/node-jsonwebtoken
|
||||
[10]: https://jwt.io
|
||||
[11]: https://opensource.com/sites/default/files/uploads/jwt-verify-example.png (JWT verification example)
|
||||
[12]: https://blog.hyper63.com/content/images/2021/03/jwt-verify-example.png
|
||||
[13]: https://spec.openapis.org/oas/v3.1.0
|
||||
[14]: https://lodash.com
|
||||
[15]: https://ramdajs.com/
|
||||
[16]: https://joi.dev/
|
||||
[17]: https://github.com/jquense/yup
|
||||
[18]: https://github.com/colinhacks/zod/tree/v3
|
||||
[19]: https://json-schema.org/
|
||||
[20]: https://github.com/substack/tape
|
||||
[21]: https://github.com/twilson63/test-server
|
||||
[22]: http://www.wheresrhys.co.uk/fetch-mock/
|
@ -0,0 +1,166 @@
|
||||
[#]: subject: (Manage your Raspberry Pi with Cockpit)
|
||||
[#]: via: (https://opensource.com/article/21/5/raspberry-pi-cockpit)
|
||||
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (ShuyRoy)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Manage your Raspberry Pi with Cockpit
|
||||
======
|
||||
Establish a control center for your Raspberry Pi with Cockpit.
|
||||
![Neon colorized Raspberry Pi cluster with LEGOs][1]
|
||||
|
||||
Last year, I wrote about [using Cockpit to manage my Linux servers][2]. It is a web-based tool that offers a clean, powerful interface for managing multiple servers and their associated services and applications. It also streamlines day-to-day administrative tasks.
|
||||
|
||||
In this article, I cover how to install the Cockpit web console on the standard operating system (OS) provided by the Raspberry Pi Foundation, and I briefly tour its features.
|
||||
|
||||
|
||||
### Installing Cockpit on the Raspberry Pi OS
|
||||
|
||||
Log in to your Raspberry Pi system over SSH with an account that has sudo privileges. Set one up first if you have not already:
|
||||
|
||||
|
||||
```
|
||||
$ ssh pibox
|
||||
alan@pibox's password:
|
||||
Linux pibox.someplace.org 5.10.17-v7+ #1403 SMP Mon Feb 22 11:29:51 GMT 2021 armv7l
|
||||
|
||||
The programs included with the Debian GNU/Linux system are free software;
|
||||
the exact distribution terms for each program are described in the
|
||||
individual files in /usr/share/doc/*/copyright.
|
||||
|
||||
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
|
||||
permitted by applicable law.
|
||||
Last login: Tue May 4 09:55:57 2021 from 172.1.4.5
|
||||
alan@pibox:~ $
|
||||
```
|
||||
|
||||
Installing the Cockpit web console on the Raspberry Pi OS is just as simple as on a Linux server:
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt install cockpit
|
||||
```
|
||||
|
||||
Cockpit itself requires only 60.4KB of disk space. With its few package dependencies, total usage comes to 115MB.
|
||||
|
||||
The installation process takes care of setting up and starting the service. You can verify its status with the `systemctl` command:
|
||||
|
||||
|
||||
```
|
||||
$ systemctl status cockpit.socket
|
||||
● cockpit.socket - Cockpit Web Service Socket
|
||||
Loaded: loaded (/lib/systemd/system/cockpit.socket; enabled; vendor preset: enabled)
|
||||
Active: active (listening) since Tue 2021-05-04 10:24:43 EDT; 35s ago
|
||||
Docs: man:cockpit-ws(8)
|
||||
Listen: 0.0.0.0:9090 (Stream)
|
||||
Process: 6563 ExecStartPost=/usr/share/cockpit/motd/update-motd localhost (code=exited, status=0/SUCCESS)
|
||||
Process: 6570 ExecStartPost=/bin/ln -snf active.motd /run/cockpit/motd (code=exited, status=0/SUCCESS)
|
||||
Tasks: 0 (limit: 2181)
|
||||
CGroup: /system.slice/cockpit.socket
|
||||
```
|
||||
|
||||
### Using Cockpit
|
||||
|
||||
#### Connecting
|
||||
|
||||
The default listening port is 9090. Open your favorite web browser and enter the address, for example: `https://pibox:9090`.
|
||||
|
||||
![Cockpit home page][3]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
You can now log in with your regular account. Again, it is handy for that account to have sudo privileges—much as when you use SSH and run apt. Be sure to check "Reuse my password for privileged tasks."
|
||||
|
||||
#### Managing your Pi
|
||||
|
||||
Cockpit's initial screen opens on **System**, with details and graphs of current CPU and memory usage.
|
||||
|
||||
|
||||
![Initial Cockpit screen][5]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
You can see hardware details from this screen.
|
||||
|
||||
![Cockpit hardware details][6]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
Expand each item in the left-hand column by clicking it (for example, Logs, Storage, Services, and so on). These are the standard Cockpit sections and are fairly self-explanatory.
|
||||
|
||||
#### Logs
|
||||
|
||||
This section shows the logs, which can be filtered by date and severity.
|
||||
|
||||
#### Storage
|
||||
|
||||
The Storage section shows the physical drives and RAID devices that are installed, with details such as size and serial number, plus graphs of read/write activity and actual space usage. Storage-specific logs appear at the bottom.
|
||||
|
||||
#### Networking
|
||||
|
||||
This section displays send and receive activity, IP addresses, and networking logs. You can also add network devices, such as bonds, bridges, and VLANs, using the respective buttons.
|
||||
|
||||
#### Accounts
|
||||
|
||||
Existing accounts are shown here. Click an account to manage it, or use the Create New Account button to add a user. Accounts can also be deleted.
|
||||
|
||||
#### Services
|
||||
|
||||
This section lets the administrator view the status of all of the system's services. Clicking any service takes you to a screen with the standard tasks of starting, restarting, and disabling it.
|
||||
|
||||
#### Applications
|
||||
|
||||
Normally, this screen offers various applications for managing functions, such as the 389 Directory Server or Podman container creation. On my Raspberry Pi OS, however, it only displayed "No applications installed or available." This feature may not be implemented for the Pi yet—although you do have to wonder whether processes of this type would be too heavy for Raspberry Pi hardware.
|
||||
|
||||
#### Software updates
|
||||
|
||||
Keeping software up to date is one of a system administrator's most important tasks. Cockpit's Software Updates section checks for and applies updates.
|
||||
|
||||
![Software updates in Cockpit][7]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
#### Terminal
|
||||
|
||||
One of Cockpit's neatest features is the terminal, which you can use instead of opening a separate terminal emulator and using SSH. I used the terminal to install [ScreenFetch][8]:
|
||||
|
||||
|
||||
```
|
||||
$ sudo apt install screenfetch
|
||||
```
|
||||
|
||||
That let me generate this screenshot with ScreenFetch:
|
||||
|
||||
![Terminal in Cockpit][9]
|
||||
|
||||
(Alan Formy-Duval, [CC BY-SA 4.0][4])
|
||||
|
||||
### Centralized control with Cockpit
|
||||
|
||||
Cockpit behaves on a Raspberry Pi just as it does on any other Linux system. You can add a Pi to a dashboard for centralized control, which lets organizations integrate Raspberry Pi-based services and systems into their overall Linux infrastructure wherever Cockpit is the management-dashboard solution. That is very convenient, because Pis often run headless in high-density rack data centers, which generally lack KVM access.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/5/raspberry-pi-cockpit
|
||||
|
||||
作者:[Alan Formy-Duval][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[RiaXu](https://github.com/ShuyRoy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/alanfdoss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi_kuberenetes_cluster_lead2_0.jpeg?itok=kx0Zc0NK (Neon colorized Raspberry Pi cluster with LEGOs)
|
||||
[2]: https://opensource.com/article/20/11/cockpit-server-management
|
||||
[3]: https://opensource.com/sites/default/files/uploads/cockpit_homepage.png (Cockpit home page)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/cockpit_initialscreen.png (Initial Cockpit screen)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/hardware_details.png (Cockpit hardware details)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/software_updates.png (Software updates in Cockpit)
|
||||
[8]: https://opensource.com/article/20/1/screenfetch-neofetch
|
||||
[9]: https://opensource.com/sites/default/files/uploads/pi_cockpit_terminal.png (Terminal in Cockpit)
|
102
translated/tech/20210601 Get started with FreeDOS.md
Normal file
@ -0,0 +1,102 @@
|
||||
[#]: subject: (Get started with FreeDOS)
|
||||
[#]: via: (https://opensource.com/article/21/6/get-started-freedos)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Get started with FreeDOS
|
||||
======
|
||||
It may look like retro computing, but it is a modern operating system you can use to get things done.
|
||||
![Old UNIX computer][1]
|
||||
|
||||
Throughout the 1980s and 1990s, I was mostly a DOS user. I loved the command-line environment DOS offered, which grew more powerful with each successive release. I even learned how to write my own DOS programs in C so that I could extend the DOS command line and write more powerful replacements for the standard DOS commands. I experimented with Microsoft's Windows, but if you remember Windows 3 from that era, you know it was slow and prone to crashes. I preferred the command line anyway, so I stuck with DOS.
|
||||
|
||||
That all changed in 1994. Popular tech magazines were talking about an upcoming version of Windows that would do away with DOS entirely. I did not want to be forced into Windows. On the Usenet discussion boards I visited, others felt the same way. So on [June 29, 1994][2], I decided that if we wanted to keep DOS, we would need to write our own. That day, I announced the small project that would become the [FreeDOS Project][3].
|
||||
|
||||
Since then, we have released several full FreeDOS distributions. We started with the alpha series from 1994 to 1997, followed by the beta series from 1998 to 2005, and finally the FreeDOS 1.0 release in 2006. Progress has been slow but steady since then. We have not really rushed new releases after 1.0, because DOS stopped being a moving target in 1995.
|
||||
|
||||
Every FreeDOS release since 1.0 has been a continued reimagining of what a modern DOS can be. We include lots of compilers and assemblers so developers can write software. We also provide many "power tools" so you can do real work, and a variety of editors because everyone has a favorite.
|
||||
|
||||
We recently released the FreeDOS 1.3 RC4 distribution. Technically, it is a release candidate for the upcoming FreeDOS 1.3 distribution, but it is a full-featured release. I am very excited about everything in FreeDOS 1.3 RC4.
|
||||
|
||||
### Run FreeDOS without installing FreeDOS
|
||||
|
||||
In all of our previous FreeDOS distributions, we focused on _installing_ FreeDOS onto a computer. But we recognize that most users no longer actually run FreeDOS on real hardware; they run it in [a virtual machine like QEMU or VirtualBox][4]. So in FreeDOS 1.3 RC4, we improved the "LiveCD" environment.
|
||||
|
||||
With FreeDOS 1.3 RC4, you can boot the LiveCD image in your favorite virtual machine and start using FreeDOS right away. That is how I run FreeDOS now: I have a small virtual hard-disk image where I keep all my files, but I boot and run FreeDOS from the LiveCD.
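
For example, with QEMU, booting the LiveCD alongside a small data disk might look like this (the image file names here are hypothetical; adjust them to whatever you downloaded or created):

```
qemu-system-i386 -m 32 -boot order=d \
  -cdrom FD13-LiveCD.iso -hda freedos-data.img
```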
|
||||
|
||||
![Booting the FreeDOS 1.3 RC4 LiveCD on QEMU][5]
|
||||
|
||||
Booting the FreeDOS 1.3 RC4 LiveCD (Jim Hall, [CC-BY SA 4.0][6])
|
||||
|
||||
### Installation is really easy
|
||||
|
||||
If you would rather not run FreeDOS from the LiveCD, you can install it on your hard drive instead. We updated the FreeDOS installer, so it is not really a "program" per se but a very clever DOS "batch" file that detects various conditions and takes the appropriate action, such as creating a new disk partition for FreeDOS if none exists.
|
||||
|
||||
Older FreeDOS distributions used to prompt you for everything, down to selecting individual programs to install. The new installer is very streamlined. It asks only a few questions to get started, then does the rest on its own. Installing FreeDOS in an empty virtual machine takes just a few minutes.
|
||||
|
||||
![Installing FreeDOS 1.3 RC4][7]
|
||||
|
||||
Installing FreeDOS 1.3 RC4 (Jim Hall, [CC-BY SA 4.0][6])
|
||||
|
||||
### You can install it from floppy disks
|
||||
|
||||
Not everyone likes to run FreeDOS in a virtual machine. There is a retro-computing community out there that collects and lovingly restores classic PC hardware, such as Pentium and 486 systems. You can even find XT (8088) and AT (80286) systems there, kept running by a dedicated community of users.
|
||||
|
||||
While we think of FreeDOS as a modern DOS, we would not really be "DOS" if we did not run on old PC hardware. So FreeDOS 1.3 includes a Floppy-Only Edition! This edition should run on any hardware that can run FreeDOS and has EGA or better graphics.
|
||||
|
||||
你在运行 286 或其他没有 CD-ROM 驱动器的经典系统吗?从这些软盘安装 FreeDOS。你是否只有一个硬盘而没有 CD 或软盘驱动器?只要把软盘的内容复制到一个临时目录,然后从那里运行安装程序。想执行“无头”安装到不同的 DOS 目录吗?用命令行选项就可以了。
|
||||
|
||||
纯软盘版使用一个完全不同的安装程序,并包含一套有限的 FreeDOS 程序,它们在经典的 PC 硬件上更有用。
|
||||
|
||||
![Installing the FreeDOS Floppy-Only Edition][8]
|
||||
|
||||
安装FreeDOS纯软盘版 (Jim Hall, [CC-BY SA 4.0][6])
|
||||
|
||||
### 充满了开源应用和游戏
|
||||
|
||||
如果 FreeDOS 是一个闭源的 DOS,它就不是一个_自由_的 DOS。我们希望每个人都能使用和研究 FreeDOS,包括其源代码。当我们计划 FreeDOS 1.3 发行版时,我们仔细检查了每个软件包中的每一个许可证,并专注于只包括_开源_程序。(在以前的 FreeDOS 发行版中,有几个程序并不完全是 "开源",还有一两个程序没有包括源码,但是可以“自由使用和发布”。在这个版本中,所有的东西都是开源的,以开源定义作为我们的模型。)
|
||||
|
||||
而且,这是一个多么棒的开源应用和游戏的集合。游戏是 FreeDOS 1.3 RC4 中我最喜欢的内容。许多人使用 FreeDOS 来玩经典的 DOS 游戏,但我们想提供我们自己的开源游戏给人们玩。
|
||||
|
||||
你可以发现 LiveCD 中已经安装了两个游戏:Simple Senet(可以追溯到古埃及的棋盘游戏)和 Floppy Bird(Flappy Bird 游戏的一个版本)。如果你安装了 FreeDOS,你还会发现很多其他游戏可以尝试,包括 Sudoku86(一个数独游戏)、Wing(一个太空射击游戏)和 Bolitaire(单人纸牌游戏)。
|
||||
|
||||
![Playing the Floppy Bird game][9]
|
||||
|
||||
玩 Floppy Bird 游戏 (Jim Hall, [CC-BY SA 4.0][6])
|
||||
|
||||
![The ancient game of Senet][10]
|
||||
|
||||
古老的 Senet 游戏 (Jim Hall, [CC-BY SA 4.0][6])
|
||||
|
||||
### 现在就试试 FreeDOS 1.3 RC4
|
||||
|
||||
你可以在 FreeDOS 的[下载][11]页面上找到新的 FreeDOS 1.3 RC4,。要安装 FreeDOS,你需要至少 20MB 的可用磁盘空间:20MB 用来安装一个普通的 FreeDOS 系统,或者 250MB 用来安装所有,包括应用和游戏。要安装源码,你将需要高达 450MB 的可用空间。
|
||||
|
||||
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/get-started-freedos

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/retro_old_unix_computer.png?itok=SYAb2xoW (Old UNIX computer)
[2]: https://groups.google.com/g/comp.os.msdos.apps/c/oQmT4ETcSzU/m/O1HR8PE2u-EJ
[3]: https://www.freedos.org/
[4]: https://opensource.com/article/20/8/virt-tools
[5]: https://opensource.com/sites/default/files/freedos-livecd.png (Booting the FreeDOS 1.3 RC4 LiveCD)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/install6.png (Installing FreeDOS 1.3 RC4)
[8]: https://opensource.com/sites/default/files/freedos-floppy.png (Installing the FreeDOS Floppy-Only Edition)
[9]: https://opensource.com/sites/default/files/floppy-bird.png (Playing the Floppy Bird game)
[10]: https://opensource.com/sites/default/files/simple-senet.png (The ancient game of Senet)
[11]: https://www.freedos.org/download/
[#]: subject: (Explore the Kubernetes ecosystem in 2021)
[#]: via: (https://opensource.com/article/21/6/kubernetes-ebook)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

在 2021 年探索 Kubernetes 生态系统
======

这份可下载的指南充满了有用的教程,让 SRE 和系统管理员使用 Kubernetes 获得便利。

![A ship wheel with someone steering][1]

Kubernetes 是容器编排的事实标准,在基础设施管理和应用开发方面,已经迅速发展成为容器环境的主导平台。作为一个拥有庞大的爱好者和专业人士社区的开源平台,以及云原生计算基金会的一部分,Kubernetes 本身不仅成为一个强大而令人印象深刻的编排系统,还催生了一个庞大的相关工具和服务生态系统,使其更易于使用,并通过更强大、更复杂的组件扩展其功能。

在这本新的电子书 _[给 SRE 和系统管理员的 Kubernetes 指南][2]_ 中,[Jess Cherry][3](Ben Finkel 也有贡献)介绍了一系列用于管理和整合 Kubernetes 的相关工具和服务。Cherry 和 Finkel 提供了一些有用的 _入门_ 指南,涵盖 Kubernetes 本身和一些周边工具。他们甚至还分享了面试问题,以帮助读者为在这个快速增长的大规模生态系统中工作做好准备。

### 了解 Kubernetes

如果你刚开始接触 Kubernetes 和容器,Ben Finkel 的 _[Kubernetes 入门][4]_ 既是恰当的标题,也是对你需要了解的相关概念的出色介绍。它还是一本轻量级的快速入门指南,教你设置并使用单节点集群进行测试。没有什么比亲身体验技术、直接上手学习更好的方法了。什么是 Pod?如何在集群上部署一个应用?Ben 将为你一一介绍。

与集群交互的主要方式是 [**kubectl**][5] 命令,它是一个 CLI 工具,为人类提供了一种与管理集群的 API 服务器交互的便捷方式。例如,你可以使用 **kubectl get** 来列出上述的 Pod 和部署,但正如你对 Kubernetes 这样复杂的系统所期望的那样,它的 CLI 界面功能强大且非常灵活。Jess Cherry 的 [_系统管理员需要知道的 9 个 kubectl 命令_][6] 速查表是开始使用 **kubectl** 的好方法。

同样,Cherry 的 _[给初学者的 Kubernetes 命名空间][7]_ 也很好地解释了什么是命名空间,以及它们在 Kubernetes 中的使用方式。

### 简化与 Kubernetes 相关的工作

在一个复杂的系统中工作是很困难的,尤其是使用像 **kubectl** 这样功能强大但极简的 CLI 工具时。幸运的是,在 Kubernetes 周边的生态系统中,有许多工具可以简化工作,使扩展服务和管理集群变得更容易。

**kubectl** 命令可用于在 Kubernetes 上部署和维护应用和服务,主要使用 YAML 和 JSON。然而,一旦你管理的应用超过几个,维护大量重复的 YAML 就会变得既繁琐又乏味。一个好的解决方案是采用模板化的系统来处理你的部署。[Helm][8] 就是这样一个工具,它被称为 _Kubernetes 的包管理器_,提供了一种方便的方式来打包和共享应用。Cherry 写过很多关于 Helm 的有用文章:如何创建有效的 [Helm chart][9],以及有用的 [Helm 命令][10]。
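为了直观说明这种重复性,下面是一个最小的 Kubernetes Deployment YAML 片段(其中的名称和镜像只是假设的示例):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app              # 假设的应用名
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.21   # 示例镜像
```

在 Helm chart 中,`name`、`replicas`、`image` 这类字段通常会写成 `{{ .Values.name }}` 这样的模板变量,从而避免在多个环境之间复制粘贴几乎相同的 YAML。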

**kubectl** 还能为你提供很多关于集群本身的信息:集群上运行着什么,以及正在发生哪些事件。这些信息可以通过 **kubectl** 查看和交互,但有时一个更直观的 GUI 会很有帮助。[K9s][11] 兼顾了两者的优点:虽然它仍然是一个终端应用,但它提供了可视化的反馈,以及一种无需冗长 **kubectl** 命令即可与集群交互的方式。Cherry 也写了一份很好的 [k9s 入门][12]指南。

### 建立在 Kubernetes 的强大和灵活性之上的扩展

幸运的是,Kubernetes 虽然复杂而强大,却惊人地灵活,而且是开源的。它专注于自己的核心优势:容器编排,并允许围绕它的爱好者和专业人士社区扩展其能力,以承担不同类型的工作负载。其中一个例子是 [Knative][13],它在 Kubernetes 之上提供组件,为无服务器和事件驱动的服务提供工具,并利用 Kubernetes 的编排能力在容器中运行最小化的微服务。事实证明,这样做非常高效:既能获得在容器中开发小型、易于测试和维护的应用的好处,又能获得仅在需要时运行这些应用的成本优势,应用在特定事件发生时被触发,而在其他时候处于休眠状态。
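作为示意,一个最小的 Knative Service 定义大致如下(改写自 Knative 文档中的 helloworld 示例,具体字段请以你所用版本的 Knative 文档为准):

```
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go                 # 假设的服务名
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # Knative 官方示例镜像
          env:
            - name: TARGET
              value: "World"
```

Knative 会根据请求量自动对这样的服务进行扩缩容,在没有流量时可以缩容到零,这正是上文所说的“仅在需要时运行”的成本优势。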

在这本电子书中,Cherry 介绍了 Knative 和它的事件系统,以及为什么值得你自己研究使用 Knative。

### 有一个完整的世界可以探索

通过 Jess Cherry 和 Ben Finkel 的这本新的[电子书][2],可以开始了解 Kubernetes 和围绕它的生态系统。除了上述主题外,书中还有一些关于有用的 Kubernetes 扩展和第三方工具的文章。

--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/kubernetes-ebook

作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv (A ship wheel with someone steering)
[2]: https://opensource.com/downloads/kubernetes-sysadmin
[3]: https://opensource.com/users/cherrybomb
[4]: https://opensource.com/article/17/11/getting-started-kubernetes
[5]: https://kubernetes.io/docs/reference/kubectl/kubectl/
[6]: https://opensource.com/article/20/5/kubectl-cheat-sheet
[7]: https://opensource.com/article/19/12/kubernetes-namespaces
[8]: https://helm.sh/
[9]: https://opensource.com/article/20/5/helm-charts
[10]: https://opensource.com/article/20/2/kubectl-helm-commands
[11]: https://k9scli.io/
[12]: https://opensource.com/article/20/5/kubernetes-administration
[13]: https://cloud.google.com/knative/
[#]: subject: (How to Install Code Blocks IDE on Ubuntu Linux)
[#]: via: (https://itsfoss.com/install-code-blocks-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

如何在 Ubuntu Linux 上安装 Code Blocks IDE
======

Code Blocks 是一个用 C++ 编写的开源 IDE,非常适合 C、C++ 和 Fortran 开发。它是跨平台的,可以在 Linux、macOS 和 Windows 上运行。

Code Blocks 轻量而快速。它支持工作区、多目标项目,以及工作区内的项目间依赖关系。

你可以得到语法高亮、代码折叠、标签式界面、类浏览器、智能缩进等功能。你还可以通过插件扩展这个 IDE 的功能。

在本教程中,你将学习如何在基于 Ubuntu 的 Linux 发行版上安装 Code Blocks。

注意

Code Blocks 也可以在 Ubuntu 软件中心找到。然而,从 Ubuntu 21.04 开始,通过 Ubuntu 软件中心以图形方式安装 Code Blocks,安装的是 codeblocks-common 软件包,而不是图形化的 IDE,因此你在系统上看不到可运行的 Code Blocks。出于这个原因,我建议通过终端在 Ubuntu 上安装 Code Blocks。

### 在基于 Ubuntu 的 Linux 发行版上安装 Code Blocks

[Code Blocks IDE][1] 在所有 Ubuntu 版本的 universe 仓库中都有。虽然它通常是默认启用的,但先[启用 universe 仓库][2]也无妨:

```
sudo add-apt-repository universe
```

更新软件包缓存,这样系统就能知道新添加的仓库中额外软件包的可用性:

```
sudo apt update
```

最后,你可以使用 apt install 命令在基于 Ubuntu 的发行版上安装 Code Blocks:

```
sudo apt install codeblocks
```

![][3]

建议你也安装额外的插件,以便从 Code Blocks IDE 中获得更多功能。你可以通过 codeblocks-contrib 包来安装它们:

```
sudo apt install codeblocks-contrib
```

### 如何使用 Code Blocks

在系统菜单中搜索 Code Blocks。这是它在 Ubuntu 默认的 GNOME 版本中的样子:

![][4]

当你第一次启动 Code Blocks 时,它会寻找你系统中所有可用的编译器,并将其添加到路径中,这样你就不用自己去配置了。

在我的例子中,我的 Ubuntu 系统上已经安装了 gcc,Code Blocks 很好地识别了它。
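这种检测的本质,就是在 PATH 中查找已知的编译器可执行文件。下面用一个简化的 shell 脚本示意这个思路(假设性示例,并非 Code Blocks 的实际实现):

```shell
#!/bin/sh
# 依次检查几个常见的编译器是否出现在 PATH 中,
# 模拟 IDE 首次启动时的自动检测过程。
detect() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "missing: $1"
    fi
}

for c in gcc g++ clang; do
    detect "$c"
done
```

在真实的 IDE 中,检测到的每个编译器会被登记为一个可选的工具链,供项目的构建配置使用。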

![][5]

Code Blocks 的用户界面确实算不上现代,但请记住,这个 IDE 是轻量级的,它消耗的内存还不到 50MB。

如果你使用过像 Eclipse 这样的其他 IDE,就不会觉得使用 Code Blocks 有什么困难。你可以编写代码,并把它们组织在项目中。

“构建”、“运行”以及“构建并运行”按钮都位于顶部。

![][6]

当你运行代码时,它会打开一个新的终端窗口来显示输出。

![][7]

这就是你开始使用 Code Blocks 所需的最少信息。接下来就交给你了,你可以通过浏览它的 [wiki][8] 和[用户手册][9]来进一步探索。

拥有一个 IDE 可以让[在 Linux 上运行 C 或 C++ 程序][10]变得更容易。Eclipse 也是一个很好的 IDE,但它比 Code Blocks 消耗更多的系统资源。当然,最终重要的还是你自己的选择。

--------------------------------------------------------------------------------
via: https://itsfoss.com/install-code-blocks-ubuntu/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://www.codeblocks.org/
[2]: https://itsfoss.com/ubuntu-repositories/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/install-code-blocks-ubuntu.png?resize=800%2C445&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/06/code-blocks-ubuntu.jpg?resize=800%2C231&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/05/code-blocks-ide-first-run.png?resize=800%2C529&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/code-blocks-ide.png?resize=800%2C543&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/06/code-blocks-code-run-terminal.png?resize=504%2C371&ssl=1
[8]: https://wiki.codeblocks.org/index.php/Main_Page
[9]: https://www.codeblocks.org/user-manual/
[10]: https://itsfoss.com/c-plus-plus-ubuntu/