Merge pull request #2 from LCTT/master

更新译文
This commit is contained in:
Kevin3599 2021-05-06 08:28:41 +08:00 committed by GitHub
commit 11d31ebf7d
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
114 changed files with 8269 additions and 2877 deletions

View File

@ -0,0 +1,208 @@
[#]: subject: "Access Python package index JSON APIs with requests"
[#]: via: "https://opensource.com/article/21/3/python-package-index-json-apis-requests"
[#]: author: "Ben Nuttall https://opensource.com/users/bennuttall"
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13356-1.html"
使用 requests 访问 Python 包索引PyPI的 JSON API
======
> PyPI 的 JSON API 是一种机器可直接使用的数据源,你可以访问和你浏览网站时相同类型的数据。
![](https://img.linux.net.cn/data/attachment/album/202105/03/111943du0lgbjj6br6sruu.jpg)
PyPIPython 软件包索引)提供了有关其软件包信息的 JSON API。本质上它是机器可以直接使用的数据源与你在网站上直接访问的是同一类数据。例如作为人类我可以在浏览器中打开 [NumPy][2] 项目页面,点击左侧相关链接,查看有哪些版本、哪些文件可用,以及发行日期和支持的 Python 版本等内容:
![NumPy project page][3]
但是,如果我想编写一个程序来访问此数据,则可以使用 JSON API而不必在这些页面上抓取和解析 HTML。
顺便说一句:在旧的 PyPI 网站(还托管在 `pypi.python.org` 时NumPy 的项目页面位于 `pypi.python.org/pypi/numpy`,访问其 JSON API 很简单,只需要在最后面添加一个 `/json`。现在PyPI 网站托管在 `pypi.org`NumPy 的项目页面是 `pypi.org/project/numpy`,不能再简单地在项目页面 URL 后添加 `/json`。不过JSON API 仍像以前一样工作,你只需记住它的 URL`https://pypi.org/pypi/numpy/json`。
你可以在浏览器中打开 NumPy 的 JSON API URLFirefox 很好地渲染了数据:
![JSON rendered in Firefox][5]
你可以查看 `info`、`releases` 和 `urls` 这几个键的内容。或者,你可以将其加载到 Python Shell 中,以下是几行入门代码:
```
import requests
url = "https://pypi.org/pypi/numpy/json"
r = requests.get(url)
data = r.json()
```
获得数据后(调用 `.json()` 提供了该数据的 [字典][6]),你可以对其进行查看:
![Inspecting data][7]
查看 `releases` 中的键:
![Inspecting keys in releases][8]
这表明 `releases` 是一个以版本号为键的字典。选择一个版本并查看其内容:
![Inspecting version][9]
每个版本都对应一个列表,这个版本包含 24 项。但每一项是什么?由于它是一个列表,因此你可以索引第一项并进行查看:
![Indexing an item][10]
这是一个字典,其中包含有关特定文件的详细信息。因此,列表中的 24 个项目中的每一个都与此特定版本号关联的文件相关,即在 <https://pypi.org/project/numpy/1.20.1/#files> 列出的 24 个文件。
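可以用一小段独立的 Python 来示意这个结构(下面的 `data` 是为演示而构造的最小模拟数据,字段名与 PyPI JSON API 一致,并非真实内容;真实数据需用 `requests` 获取):

```python
# 模拟 PyPI JSON API 返回结构的最小示例数据(文件名为虚构)
data = {
    "releases": {
        "1.20.0": [
            {"filename": "numpy-1.20.0.zip", "packagetype": "sdist"},
        ],
        "1.20.1": [
            {"filename": "numpy-1.20.1.zip", "packagetype": "sdist"},
            {"filename": "numpy-1.20.1-demo.whl", "packagetype": "bdist_wheel"},
        ],
    },
}

def count_files(data):
    # releases 是一个以版本号为键的字典,每个版本对应一个文件列表
    return {version: len(files) for version, files in data["releases"].items()}

print(count_files(data))
```

要对真实数据运行,把 `data` 换成 `requests.get("https://pypi.org/pypi/numpy/json").json()` 即可。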
你可以编写一个脚本在可用数据中查找内容。例如,以下的循环查找带有 sdist源代码包的版本它们指定了 `requires_python` 属性,并将其打印出来:
```
for version, files in data['releases'].items():
    for f in files:
        if f.get('packagetype') == 'sdist' and f.get('requires_python'):
            print(version, f['requires_python'])
```
![sdist files with requires_python attribute][11]
### piwheels
去年,我在 piwheels 网站上[实现了类似的 API][12]。[piwheels.org][13] 是一个 Python 软件包索引,为树莓派架构提供了 wheel预编译的二进制软件包。它本质上是 PyPI 软件包的镜像,但带有 Arm wheel而不是软件包维护者上传到 PyPI 的文件。
由于 piwheels 模仿了 PyPI 的 URL 结构,因此你可以将项目页面 URL 的 `pypi.org` 部分更改为 `piwheels.org`。它将向你显示类似的项目页面,其中详细说明了构建的版本和可用的文件。由于我喜欢旧站点允许你在 URL 末尾添加 `/json` 的方式所以我也支持这种方式。NumPy 在 PyPI 上的项目页面为 [pypi.org/project/numpy][14],在 piwheels 上,它是 [piwheels.org/project/numpy][15],而 JSON API 是 [piwheels.org/project/numpy/json][16] 页面。
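这种 URL 的对应关系可以用一个简单的 Python 函数来示意(函数名为演示而设,并非官方提供的 API

```python
def piwheels_json_url(pypi_project_url):
    # 把 PyPI 项目页面 URL 转换为对应的 piwheels JSON API URL
    # 例如 https://pypi.org/project/numpy -> https://www.piwheels.org/project/numpy/json
    return pypi_project_url.replace("pypi.org", "www.piwheels.org").rstrip("/") + "/json"

print(piwheels_json_url("https://pypi.org/project/numpy"))
```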
没有必要重复 PyPI API 的内容,所以我们提供了 piwheels 上可用内容的信息,包括所有已知发行版的列表,一些基本信息以及我们拥有的文件列表:
![JSON files available in piwheels][17]
与之前的 PyPI 例子类似,你可以创建一个脚本来分析 API 内容。例如,对于每个 NumPy 版本,其中有多少 piwheels 文件:
```
import requests
url = "https://www.piwheels.org/project/numpy/json"
package = requests.get(url).json()
for version, info in package['releases'].items():
    if info['files']:
        print('{}: {} files'.format(version, len(info['files'])))
    else:
        print('{}: No files'.format(version))
```
此外,每个文件都包含一些元数据:
![Metadata in JSON files in piwheels][18]
`apt_dependencies` 字段很实用,它列出了使用该库所需的 Apt 软件包。要使用本例中的 NumPy 文件(或者通过 `pip` 安装 NumPy你还需要使用 Debian 的 `apt` 包管理器安装 `libatlas3-base` 和 `libgfortran`。
以下是一个示例脚本,显示了程序包的 Apt 依赖关系:
```
import requests
def get_install(package, abi):
    url = 'https://piwheels.org/project/{}/json'.format(package)
    r = requests.get(url)
    data = r.json()
    for version, release in sorted(data['releases'].items(), reverse=True):
        for filename, file in release['files'].items():
            if abi in filename:
                deps = ' '.join(file['apt_dependencies'])
                print("sudo apt install {}".format(deps))
                print("sudo pip3 install {}=={}".format(package, version))
                return
get_install('opencv-python', 'cp37m')
get_install('opencv-python', 'cp35m')
get_install('opencv-python-headless', 'cp37m')
get_install('opencv-python-headless', 'cp35m')
```
我们还为软件包列表提供了一个通用的 API 入口,其中包括每个软件包的下载统计:
```python
import requests
url = "https://www.piwheels.org/packages.json"
packages = requests.get(url).json()
packages = {
    pkg: (d_month, d_all)
    for pkg, d_month, d_all, *_ in packages
}
package = 'numpy'
d_month, d_all = packages[package]
print(package, "has had", d_month, "downloads in the last month")
print(package, "has had", d_all, "downloads in total")
```
### pip search
`pip search` 因为其 XMLRPC 接口过载而被禁用,因此人们一直在寻找替代方法。你可以使用 piwheels 的 JSON API 来搜索软件包名称,因为软件包的集合是相同的:
```
#!/usr/bin/python3
import sys
import requests
PIWHEELS_URL = 'https://www.piwheels.org/packages.json'
r = requests.get(PIWHEELS_URL)
packages = {p[0] for p in r.json()}
def search(term):
    for pkg in packages:
        if term in pkg:
            yield pkg
if __name__ == '__main__':
    if len(sys.argv) == 2:
        results = search(sys.argv[1].lower())
        for res in results:
            print(res)
    else:
        print("Usage: pip_search TERM")
```
有关更多信息,参考 piwheels 的 [JSON API 文档][19]。
* * *
_本文最初发表在 Ben Nuttall 的 [Tooling Tuesday 博客上][20]经许可转载使用。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/python-package-index-json-apis-requests
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_programming_question.png?itok=cOeJW-8r "Python programming language logo with question marks"
[2]: https://pypi.org/project/numpy/
[3]: https://opensource.com/sites/default/files/uploads/numpy-project-page.png "NumPy project page"
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/pypi-json-firefox.png "JSON rendered in Firefox"
[6]: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
[7]: https://opensource.com/sites/default/files/uploads/pypi-json-notebook.png "Inspecting data"
[8]: https://opensource.com/sites/default/files/uploads/pypi-json-releases.png "Inspecting keys in releases"
[9]: https://opensource.com/sites/default/files/uploads/pypi-json-inspect.png "Inspecting version"
[10]: https://opensource.com/sites/default/files/uploads/pypi-json-release.png "Indexing an item"
[11]: https://opensource.com/sites/default/files/uploads/pypi-json-requires-python.png "sdist files with requires_python attribute "
[12]: https://blog.piwheels.org/requires-python-support-new-project-page-layout-and-a-new-json-api/
[13]: https://www.piwheels.org/
[14]: https://pypi.org/project/numpy
[15]: https://www.piwheels.org/project/numpy
[16]: https://www.piwheels.org/project/numpy/json
[17]: https://opensource.com/sites/default/files/uploads/piwheels-json.png "JSON files available in piwheels"
[18]: https://opensource.com/sites/default/files/uploads/piwheels-json-numpy.png "Metadata in JSON files in piwheels"
[19]: https://www.piwheels.org/json.html
[20]: https://tooling.bennuttall.com/accessing-python-package-index-json-apis-with-requests/

View File

@ -0,0 +1,154 @@
[#]: subject: (A tool to spy on your DNS queries: dnspeep)
[#]: via: (https://jvns.ca/blog/2021/03/31/dnspeep-tool/)
[#]: author: (Julia Evans https://jvns.ca/)
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13353-1.html)
dnspeep监控 DNS 查询的工具
======
![](https://img.linux.net.cn/data/attachment/album/202105/02/191521i4ycjm7veln426vy.jpg)
在过去的几天中,我编写了一个叫作 [dnspeep][1] 的小工具,它能让你看到你电脑中正进行的 DNS 查询,并且还能看得到其响应。它现在只有 [250 行 Rust 代码][2]。
我会讨论如何去尝试它、能做什么、为什么我要编写它,以及当我在开发时所遇到的问题。
### 如何尝试
我构建了一些二进制文件,因此你可以快速尝试一下。
对于 Linuxx86
```
wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-linux.tar.gz
tar -xf dnspeep-linux.tar.gz
sudo ./dnspeep
```
对于 Mac
```
wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-macos.tar.gz
tar -xf dnspeep-macos.tar.gz
sudo ./dnspeep
```
它需要以<ruby>超级用户<rt>root</rt></ruby>身份运行,因为它需要访问计算机正在发送的所有 DNS 数据包。这与 `tcpdump` 需要以超级用户身份运行的原因相同:它使用了 `libpcap`,即 `tcpdump` 所使用的同一个库。
如果你不想在超级用户下运行下载的二进制文件,你也能在 <https://github.com/jvns/dnspeep> 查看源码并且自行编译。
### 输出结果是什么样的
以下是输出结果。每行都是一次 DNS 查询和响应:
```
$ sudo dnspeep
query name server IP response
A firefox.com 192.168.1.1 A: 44.235.246.155, A: 44.236.72.93, A: 44.236.48.31
AAAA firefox.com 192.168.1.1 NOERROR
A bolt.dropbox.com 192.168.1.1 CNAME: bolt.v.dropbox.com, A: 162.125.19.131
```
这些查询来自于我在浏览器中访问 `neopets.com`,而 `bolt.dropbox.com` 查询是因为我正在运行 Dropbox 代理,我猜它会不时在后台运行,因为需要同步。
### 为什么我要开发又一个 DNS 工具?
之所以这样做,是因为我认为当你不太了解 DNS 时DNS 似乎真的很神秘!
你的浏览器(和你电脑上的其他软件)一直在进行 DNS 查询,我认为当你能真正看到请求和响应时,似乎会有更多的“真实感”。
我写这个工具也是为了把它当作一个调试工具。当我想知道“这是不是 DNS 的问题?”时,往往很难回答。我的印象是,人们在尝试检查问题是否由 DNS 引起时,经常使用试错法或猜测,而不是直接查看计算机实际获得的 DNS 响应。
### 你可以看到哪些软件在“秘密”使用互联网
我喜欢该工具的一方面是,它让我可以感知到我电脑上有哪些程序正使用互联网!例如,我发现在我电脑上,某些软件出于某些理由不断地向 `ping.manjaro.org` 发送请求,可能是为了检查我是否已经连上互联网了。
实际上,我的一个朋友用这个工具发现,他的电脑上安装了一些以前工作时的企业监控软件,但他忘记了卸载,因此你甚至可能发现一些你想要删除的东西。
### 如果你不习惯的话,`tcpdump` 会令人感到困惑
当我试图向人们展示他们的计算机正在进行的 DNS 查询时,我的第一反应是“好吧,使用 `tcpdump`”!`tcpdump` 确实可以解析 DNS 数据包!
例如,下方是一次对 `incoming.telemetry.mozilla.org.` 的 DNS 查询结果:
```
11:36:38.973512 wlp3s0 Out IP 192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)
11:36:38.996060 wlp3s0 In IP 192.168.1.1.53 > 192.168.1.181.42281: 56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)
```
绝对可以学着去阅读理解一下,例如,让我们分解一下查询:
`192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)`
* `A?` 意味着这是一次 A 类型的 DNS **查询**
* `incoming.telemetry.mozilla.org.` 是被查询的名称
* `56271` 是 DNS 查询的 ID
* `192.168.1.181.42281` 是源 IP/端口
* `192.168.1.1.53` 是目的 IP/端口
* `(48)` 是 DNS 报文长度
在响应报文中,我们可以这样分解:
`56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)`
  * `3/0/0` 是响应报文中的记录数3 个回答记录、0 个权威记录、0 个附加记录。我认为 tcpdump 甚至只打印出回答记录。
* `CNAME telemetry-incoming.r53-2.services.mozilla.com`、`CNAME prod.data-ingestion.prod.dataops.mozgcp.net.` 和 `A 35.244.247.133` 是三个响应记录。
* `56271` 是响应报文 ID和查询报文的 ID 相对应。这就是你如何知道它是对前一行请求的响应。
我认为,这种格式最难处理的是(作为一个只想查看一些 DNS 流量的人),你必须手动匹配请求和响应,而且它们并不总是相邻的行。这就是计算机擅长的事情!
因此,我决定编写一个小程序(`dnspeep`)来进行匹配,并排除一些我认为多余的信息。
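按报文 ID 配对请求和响应的思路,可以用几行独立的 Python 来示意这只是概念演示dnspeep 本身是用 Rust 实现的,下面的数据也是虚构示例):

```python
# 每个报文表示为 (报文 ID, 类型, 内容) 三元组,数据为虚构示例
packets = [
    (56271, "query", "A? incoming.telemetry.mozilla.org."),
    (31337, "query", "A? example.com."),
    (56271, "response", "A 35.244.247.133"),
]

pending = {}   # 尚未匹配到响应的查询id -> 查询内容
matched = []   # 已配对的 (查询, 响应)

for pkt_id, kind, payload in packets:
    if kind == "query":
        pending[pkt_id] = payload
    elif pkt_id in pending:
        # 响应的 ID 与之前某个查询的 ID 相同,即可配对
        matched.append((pending.pop(pkt_id), payload))

print(matched)
```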
### 我在编写时所遇到的问题
在撰写本文时,我遇到了一些问题:
  * 我必须给 `pcap` 包打上补丁,使其能在 Mac 操作系统上和 Tokio 配合工作([这个更改][3])。这是其中的一个 bug花了很多时间才搞清楚最后只用 1 行代码就解决了 :)
  * 不同的 Linux 发行版似乎有不同的 `libpcap.so` 版本。所以我不能轻易地分发一个动态链接 libpcap 的二进制文件(你可以 [在这里][4] 看到其他人也有同样的问题)。因此,我决定在 Linux 上将 libpcap 静态编译到这个工具中。但我仍然不太了解如何在 Rust 中正确做到这一点,不过我通过将 `libpcap.a` 文件复制到 `target/release/deps` 目录下,然后直接运行 `cargo build`,使其得以工作。
  * 我使用的 `dns_parser` crate 并不支持所有 DNS 查询类型,只支持最常见的。我可能需要换一个不同的工具包来解析 DNS 数据包,但目前为止还没有找到合适的。
* 因为 `pcap` 接口只提供原始字节(包括以太网帧),所以我需要 [编写代码来计算从开头剥离多少字节才能获得数据包的 IP 报头][5]。我很肯定我还遗漏了一些情形。
我对于给它取名也有过一段艰难的时光,因为已经有许多 DNS 工具了dnsspydnssnoopdnssniffdnswatch我基本上只是查了下有关“监听”的每个同义词然后选择了一个看起来很有趣并且还没有被其他 DNS 工具所占用的名称。
该程序没有做的一件事就是告诉你哪个进程进行了 DNS 查询,我发现有一个名为 [dnssnoop][6] 的工具可以做到这一点。它使用 eBPF看上去很酷但我还没有尝试过。
### 可能会有许多 bug
我只在 Linux 和 Mac 上简单测试了一下,并且我已知至少有一个 bug不支持足够多的 DNS 查询类型),所以请在遇到问题时告知我!
尽管这些 bug 没什么危害,因为 libpcap 接口是只读的。所以可能发生的最糟糕的事情是它得到一些它无法解析的输入,最后打印出错误或是崩溃。
### 编写小型教育工具很有趣
最近,我对编写小型教育的 DNS 工具十分感兴趣。
到目前为止我所编写的工具:
* <https://dns-lookup.jvns.ca>(一种进行 DNS 查询的简单方法)
* <https://dns-lookup.jvns.ca/trace.html>(向你显示在进行 DNS 查询时内部发生的情况)
* 本工具(`dnspeep`
以前我尽力去讲解已有的工具(如 `dig` 或 `tcpdump`),而不是编写自己的工具,但是我经常发现这些工具的输出结果让人费解,所以我非常关注以更加友好的方式来呈现这些相同的信息,以便每个人都能明白他们电脑正在进行的 DNS 查询,而不仅仅是依赖 `tcpdump`。
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2021/03/31/dnspeep-tool/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[wyxplus](https://github.com/wyxplus)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://github.com/jvns/dnspeep
[2]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs
[3]: https://github.com/ebfull/pcap/pull/168
[4]: https://github.com/google/gopacket/issues/734
[5]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs#L136
[6]: https://github.com/lilydjwg/dnssnoop

View File

@ -0,0 +1,213 @@
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13329-1.html)
[#]: subject: (9 Open Source Forum Software That You Can Deploy on Your Linux Servers)
[#]: via: (https://itsfoss.com/open-source-forum-software/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
11 个可以部署在 Linux 服务器上的开源论坛软件
======
> 是否想要建立社区论坛或客户支持门户站点?以下是一些可以在服务器上部署的最佳开源论坛软件。
就像我们的论坛一样,重要的是建立一个让志趣相投的人可以讨论、互动和寻求支持的平台。
论坛为用户(或客户)提供了一个空间,让他们可以接触到在互联网上大多数情况下不容易找到的东西。
如果你是一家企业,则可以聘请开发人员团队并按照自己的方式建立自己的论坛,但这会增加大量预算。
幸运的是,有几个令人印象深刻的开源论坛软件,你只需要将其部署在你的服务器上就万事大吉了!在此过程中,你将节省很多钱,但仍能获得所需的东西。
在这里,我列出了可以在 Linux 服务器上安装的最佳开源论坛软件列表。
### 建立社区门户的最佳开源论坛软件
![][2]
如果你尚未建立过网站,则在部署论坛之前,可能需要看一下 [某些开源网站创建工具][3]。
**注意:** 此列表没有特定的排名顺序。
#### 1、Discourse现代、流行
![][4]
[Discourse][7] 是人们用来部署可配置的讨论平台的最流行的现代论坛软件。实际上,[It's FOSS 社区][1] 论坛使用的就是 Discourse 平台。
它提供了我所知道的大多数基本功能包括电子邮件通知、审核工具、样式自定义选项Slack/WordPress 等第三方集成等等。
它的自托管版本是完全免费的,你也可以在 [GitHub][5] 上找到该项目。如果你想省去在自托管服务器上部署它的麻烦,可以选择 [Discourse 提供的托管服务][6](肯定会很昂贵)。
#### 2、Talkyard受 Discourse 和 StackOverflow 启发)
![][8]
[Talkyard][10] 是完全免费使用的,是一个开源项目。它看起来很像 Discourse但是如果你深入了解一下还是有区别的。
你可以在这里获得 StackOverflow 的大多数关键功能,以及在论坛平台上期望得到的所有基本功能。它可能不是一个流行的论坛解决方案,但是如果你想要类似于 Discourse 的功能以及一些有趣的功能,那么值得尝试一下。
你可以在他们的 [GitHub 页面][9] 中进一步了解它。
#### 3、Forem (一种独特的社区平台,正在测试中)
![](https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/12/dev-community-forem.png?w=800&ssl=1)
你可能以前没有听说过 [Forem](https://www.forem.com/),但它支持了 [dev.to](https://dev.to/)(这是一个越来越受欢迎的开发者社区网站)。
它仍然处于测试阶段,所以你或许不会选择在生产服务器上实验。但是,你可以通过在他们的官方网站上填写一个表格并与他们取得联系,让他们为你托管。
尽管没有官方的功能列表来强调所有的东西,但如果我们以 [dev.to](https://dev.to/) 为例,你会得到许多基本的特性和功能,如社区列表、商店、帖子格式化等。你可以在他们的 [公告帖子](https://dev.to/devteam/for-empowering-community-2k6h) 中阅读更多关于它提供的内容,并在 [GitHub](https://github.com/forem/forem) 上探索该项目。
#### 4、NodeBB现代化、功能齐全
![][11]
[NodeBB][14] 是一个基于 [Node.js][12] 的开源论坛软件。它的目标是简单、优雅和快速。它主要面向有托管计划的组织和企业,但你也可以选择自行托管。
你还可以获得实时本地分析功能,以及聊天和通知支持。它还提供一个 API可以将其与你的现有产品集成。它还支持审核工具和打击垃圾邮件的工具。
你可以获得一些开箱即用的第三方集成支持,例如 WordPress、Mailchimp 等。
请在他们的 [GitHub 页面][13] 或官方网站上可以进一步了解它。
#### 5、Vanilla 论坛(面向企业)
![][15]
[Vanilla 论坛][17] 主要是一款以企业为中心的论坛软件,它的基本功能是为你的平台打造品牌,为客户提供问答,还可以对帖子进行投票。
其用户体验有着现代的外观,并且已被 EA、Adobe 等一些大公司使用。
当然,如果你想尝试基于云的 Vanilla 论坛(由专业团队管理)以及对某些高级功能的访问权,可以随时申请演示。无论哪种情况,你都可以选择社区版,该社区版可以免费使用大多数最新功能,但需要自己托管和管理。
你可以在他们的官方网站和 [GitHub 页面][16] 上进一步了解它。
#### 6、bbPress (来自 WordPress
![][20]
[bbPress][22] 是一个可靠的论坛软件,由 WordPress 的创建者打造,旨在提供简单而迅速的论坛体验。
用户界面看起来很老旧,但易于使用,它提供了你通常在论坛软件中需要的基本功能。审核工具很好用,易于设置。你可以使用现有的插件扩展功能,并从几个可用的主题中进行选择以调整论坛的外观。
如果你只想要一个没有花哨功能的简单论坛平台bbPress 应该是完美的。你也可以查看他们的 [GitHub 页面][21] 了解更多信息。
#### 7、phpBB经典论坛软件
![][23]
如果你想要传统的论坛设计,只想要基本功能,则 [phpBB][25] 软件是一个不错的选择。当然,你可能无法获得最佳的用户体验或功能,但是作为按传统设计的论坛平台,它是实用的并且非常有效。
尤其是,对于习惯使用传统方式的用户而言,这将是一种简单而有效的解决方案。
不仅仅是简单,而且在一般的托管供应商那里,它的设置也是非常容易的。在任何共享主机平台上,你都能获得一键式安装功能,因此也不需要太多的技术知识来进行设置。
你可以在他们的官方网站或 [GitHub 页面][24] 上找到更多有关它的信息。
#### 8、Simple Machines 论坛(另一个经典)
![][26]
与 phpBB 类似,[Simple Machines 论坛][27] 是另一种基本(或简单)的论坛。很大程度上你可能无法自定义外观(至少不容易),但是默认外观是干净整洁的,提供了良好的用户体验。
就个人而言,相比 phpBB 我更喜欢它,但是你可以前往他们的 [官方网站][27] 进行进一步的探索。同样,你可以使用一键安装方法在任何共享托管服务上轻松安装 Simple Machines 论坛。
#### 9、FluxBB古典
![][28]
[FluxBB][30] 是另一个简单、轻量级的开源论坛。与其他的相比,它可能维护的不是非常积极,但是如果你只想部署一个只有很少几个用户的基本论坛,则可以轻松尝试一下。
你可以在他们的官方网站和 [GitHub 页面][29] 上找到更多有关它的信息。
#### 10、MyBB不太流行但值得看看
![][31]
[MyBB][33] 是一款独特的开源论坛软件,它提供多种样式,并包含你需要的基本功能。
从插件支持和审核工具开始,你将获得管理大型社区所需的一切。它还支持类似于 Discourse 和同类论坛软件面向个人用户的私人消息传递。
它可能不是一个流行的选项,但是它可以满足大多数用例,并且完全免费。你可以在 [GitHub][32] 上得到支持和探索这个项目。
#### 11、Flarum测试版
![][34]
如果你想要更简单和独特的论坛,请看一下 [Flarum][37]。它是一款轻量级的论坛软件,旨在以移动为先,同时提供快速的体验。
它支持某些第三方集成,也可以使用扩展来扩展功能。就我个人而言,它看起来很漂亮。我没有机会尝试它,你可以看一下它的 [文档][35],可以肯定它具有论坛所需的所有必要功能的特征。
值得注意的是 Flarum 是相当新的,因此仍处于测试阶段。你可能需要先将其部署在测试服务器上测试后,再应用到生产环境。请查看其 [GitHub 页面][36] 了解更多详细信息。
#### 补充Lemmy更像是 Reddit 的替代品,但也是一个不错的选择)
![](https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/12/lemmy-forum.png?w=800&ssl=1)
一个用 [Rust](https://www.rust-lang.org/) 构建的 Reddit 的联盟式论坛的替代品。它的用户界面很简单,有些人可能觉得它不够直观,无法获得有吸引力的论坛体验。
其联盟网络仍在构建中,但如果你想要一个类似 Reddit 的社区平台,你可以很容易地将它部署在你的 Linux 服务器上,并制定好管理规则、版主,然后就可以开始了。它支持跨版发帖(参见 Reddit以及其他基本功能如标签、投票、用户头像等。
你可以通过其 [官方文档](https://lemmy.ml/docs/about.html) 和 [GitHub 页面](https://github.com/LemmyNet/lemmy) 探索更多信息。
### 总结
大多数开源论坛软件都为基本用例提供了几乎相同的功能。如果你正在寻找特定的功能,则可能需要浏览其文档。
就个人而言,我推荐 Discourse。它很流行外观现代拥有大量的用户基础。
你认为最好的开源论坛软件是什么?我是否错过了你的偏爱?在下面的评论中让我知道。
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-forum-software/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.community/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/12/open-source-forum-software.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/open-source-cms/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/itsfoss-community-discourse.jpg?resize=800%2C561&ssl=1
[5]: https://github.com/discourse/discourse
[6]: https://discourse.org/buy
[7]: https://www.discourse.org/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/talkyard-forum.jpg?resize=800%2C598&ssl=1
[9]: https://github.com/debiki/talkyard
[10]: https://www.talkyard.io/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/nodebb.jpg?resize=800%2C369&ssl=1
[12]: https://nodejs.org/en/
[13]: https://github.com/NodeBB/NodeBB
[14]: https://nodebb.org/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/vanilla-forums.png?resize=800%2C433&ssl=1
[16]: https://github.com/Vanilla
[17]: https://vanillaforums.com/en/
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/open-source-eCommerce.png?fit=800%2C450&ssl=1
[19]: https://itsfoss.com/open-source-ecommerce/
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/11/bbpress.jpg?resize=800%2C552&ssl=1
[21]: https://github.com/bbpress
[22]: https://bbpress.org/
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/phpBB.png?resize=798%2C600&ssl=1
[24]: https://github.com/phpbb/phpbb
[25]: https://www.phpbb.com/
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/simplemachines.jpg?resize=800%2C343&ssl=1
[27]: https://www.simplemachines.org/
[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/FluxBB.jpg?resize=800%2C542&ssl=1
[29]: https://github.com/fluxbb/fluxbb/
[30]: https://fluxbb.org/
[31]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/11/mybb-example.png?resize=800%2C461&ssl=1
[32]: https://github.com/mybb/mybb
[33]: https://mybb.com/
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/11/flarum-screenshot.png?resize=800%2C503&ssl=1
[35]: https://docs.flarum.org/
[36]: https://github.com/flarum
[37]: https://flarum.org/
[38]: https://highoncloud.com/

View File

@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13347-1.html)
[#]: subject: (Improve your productivity with this Linux automation tool)
[#]: via: (https://opensource.com/article/21/2/linux-autokey)
[#]: author: (Matt Bargenquast https://opensource.com/users/mbargenquast)
使用 Linux 自动化工具提高生产率
======
> 用 AutoKey 配置你的键盘,纠正常见的错别字,输入常用的短语等等。
![](https://img.linux.net.cn/data/attachment/album/202104/30/111130s7ffji6cmb7rkcfx.jpg)
[AutoKey][2] 是一个开源的 Linux 桌面自动化工具,一旦它成为你工作流程的一部分,你就会想,如果没有它可怎么办。它可以成为一种提高生产率的变革性工具,或者仅仅是一种减少与打字有关的身体疲劳的方式。
本文将介绍如何安装并开始使用 AutoKey介绍一些可以立即在工作流程中使用的简单方法并探讨 AutoKey 高级用户可能会感兴趣的一些高级功能。
### 安装并设置 AutoKey
AutoKey 在许多 Linux 发行版中都是现成的软件包。该项目的 [安装指南][3] 包含许多平台的说明,也包括了从源代码进行构建的指导。本文使用 Fedora 作为操作平台。
AutoKey 有两个变体:为基于 [GTK][4] 的环境(如 GNOME设计的 autokey-gtk和基于 [Qt][5] 的 autokey-qt。
你可以从命令行安装任一变体:
```
sudo dnf install autokey-gtk
```
安装完成后,使用 `autokey-gtk`(或 `autokey-qt`)运行它。
### 探究界面
在将 AutoKey 设置为在后台运行并自动执行操作之前你首先需要对其进行配置。调出用户界面UI进行配置
```
autokey-gtk -c
```
AutoKey 提供了一些预设配置的示例。你可能希望在熟悉 UI 时将它们留作参考,不需要时也可以删除它们。
![AutoKey 用户界面][6]
左侧窗格包含一个文件夹式的短语和脚本的层次结构。“<ruby>短语<rt>Phrases</rt></ruby>” 代表要让 AutoKey 输入的文本。“<ruby>脚本<rt>Scripts</rt></ruby>” 是动态的、程序化的等效项,可以使用 Python 编写,并且获得与键盘击键发送到活动窗口基本相同的结果。
右侧窗格构建和配置短语和脚本。
对配置满意后,你可能希望在登录时自动运行 AutoKey这样就不必每次都手动启动它。你可以在 “<ruby>首选项<rt>Preferences</rt></ruby>” 菜单(“<ruby>编辑 -> 首选项<rt>Edit -> Preferences</rt></ruby>”)中勾选 “<ruby>登录时自动启动 AutoKey<rt>Automatically start AutoKey at login</rt></ruby>” 进行配置。
![登录时自动启动 AutoKey][8]
### 使用 AutoKey 纠正常见的打字排版错误
修复常见的打字排版错误对 AutoKey 来说是小菜一碟。例如,我总是把 “grep” 错打成 “gerp”。下面介绍如何配置 AutoKey 来为你解决这类问题。
创建一个新的子文件夹,可以在其中将所有“打字排版错误校正”配置分组。在左侧窗格中选择 “My Phrases” ,然后选择 “<ruby>文件 -> 新建 -> 子文件夹<rt>File -> New -> Subfolder</rt></ruby>”。将子文件夹命名为 “Typos”。
在 “<ruby>文件 -> 新建 -> 短语<rt>File -> New -> Phrase</rt></ruby>” 中创建一个新短语,并将其命名为 “grep”。
通过高亮选择短语 “grep”然后在 “<ruby>输入短语内容<rt>Enter phrase contents</rt></ruby>” 部分(替换默认的 “Enter phrase contents” 文本)中输入 “grep”配置 AutoKey 插入正确的关键词。
接下来,通过定义缩写来设置 AutoKey 如何触发此短语。点击用户界面底部紧邻 “<ruby>缩写<rt>Abbreviations</rt></ruby>” 的 “<ruby>设置<rt>Set</rt></ruby>”按钮。
在弹出的对话框中,单击 “<ruby>添加<rt>Add</rt></ruby>” 按钮,然后将 “gerp” 添加为新的缩写。勾选 “<ruby>删除键入的缩写<rt>Remove typed abbreviation</rt></ruby>”;此选项让 AutoKey 将任何键入 “gerp” 一词的替换为 “grep”。请不要勾选“<ruby>在键入单词的一部分时触发<rt>Trigger when typed as part of a word</rt></ruby>”,这样,如果你键入包含 “grep”的单词例如 “fingerprint”就不会尝试将其转换为 “fingreprint”。仅当将 “grep” 作为独立的单词键入时,此功能才有效。
![在 AutoKey 中设置缩写][9]
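“仅当作为独立单词键入时才触发”的行为,可以用一小段独立的 Python 正则来示意(与 AutoKey 的实际实现无关,仅作概念演示):

```python
import re

def expand_abbreviation(text, abbrev, phrase):
    # 仅当缩写作为独立单词出现时才替换,
    # 对应不勾选“在键入单词的一部分时触发”的效果
    return re.sub(r"\b{}\b".format(re.escape(abbrev)), phrase, text)

print(expand_abbreviation("gerp -r foo", "gerp", "grep"))  # 独立单词,被替换
print(expand_abbreviation("fingerprint", "gerp", "grep"))  # 单词的一部分,保持不变
```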
### 限制对特定应用程序的更正
你可能希望仅在某些应用程序(例如终端窗口)中打字排版错误时才应用校正。你可以通过设置 <ruby>窗口过滤器<rt>Window Filter</rt></ruby>进行配置。单击 “<ruby>设置<rt>Set</rt></ruby>” 按钮来定义。
设置<ruby>窗口过滤器<rt>Window Filter</rt></ruby>的最简单方法是让 AutoKey 为你检测窗口类型:
1. 启动一个新的终端窗口。
2. 返回 AutoKey单击 “<ruby>检测窗口属性<rt>Detect Window Properties</rt></ruby>”按钮。
3. 单击终端窗口。
这将自动填充窗口过滤器,可能的窗口类值为 `gnome-terminal-server.Gnome-terminal`。这足够了,因此单击 “OK”。
![AutoKey 窗口过滤器][10]
### 保存并测试
对新配置满意后,请确保将其保存。 单击 “<ruby>文件<rt>File</rt></ruby>” ,然后选择 “<ruby>保存<rt>Save</rt></ruby>” 以使更改生效。
现在进行重要的测试!在你的终端窗口中,键入 “gerp” 紧跟一个空格,它将自动更正为 “grep”。要验证窗口过滤器是否正在运行请尝试在浏览器 URL 栏或其他应用程序中键入单词 “gerp”。它并没有变化。
你可能会认为,使用 [shell 别名][11] 可以轻松解决此问题,我完全赞成!然而,别名只在命令行中生效,而无论你使用什么应用程序AutoKey 都可以纠正错误。
例如,“openshfit” 是我在浏览器、集成开发环境和终端中经常打错的另一个词,应为 “openshift”。别名不能在所有这些场景下解决此问题而 AutoKey 可以在任何情况下纠正它。
### 键入常用短语
你可以通过许多其他方法来调用 AutoKey 的短语来帮助你。例如,作为从事 OpenShift 的站点可靠性工程师SRE我经常在命令行上输入 Kubernetes 命名空间名称:
```
oc get pods -n openshift-managed-upgrade-operator
```
这些名称空间是静态的,因此它们是键入特定命令时 AutoKey 可以为我插入的理想短语。
为此,我创建了一个名为 “Namespaces” 的短语子文件夹,并为我经常键入的每个命名空间添加了一个短语条目。
### 分配热键
接下来,也是最关键的一点,我为这个子文件夹分配了一个 “<ruby>热键<rt>hotkey</rt></ruby>”。每当我按下该热键时,它都会打开一个菜单,我可以在其中选择(使用 “方向键”+回车键,或使用数字)要插入的短语。这样,我只需几次击键就能输入这些命令。
“My Phrases” 文件夹中 AutoKey 的预配置示例使用 `Ctrl+F7` 热键进行配置。如果你将示例保留在 AutoKey 的默认配置中,请尝试一下。你应该在此处看到所有可用短语的菜单。使用数字或箭头键选择所需的项目。
### 高级自动键入
AutoKey 的 [脚本引擎][12] 允许用户运行 Python 脚本,这些脚本可以通过同样的缩写和热键系统调用。它们可以通过所支持的 API 函数来完成诸如切换窗口、发送按键或执行鼠标点击之类的操作。
这项功能在 AutoKey 用户中很受欢迎,他们发布了自定义脚本供其他用户采用。例如,[NumpadIME 脚本][13] 将数字键盘转换为旧式手机风格的文本输入方法,[Emojis-AutoKey][14] 可以把诸如 `:smile:` 之类的短语转换为对应的表情符号,从而轻松插入表情。
这是我设置的一个小脚本,该脚本进入 Tmux 的复制模式,以将前一行中的第一个单词复制到粘贴缓冲区中:
```
from time import sleep
# 发送 Tmux 命令前缀(我已将其从 b 改为 s
keyboard.send_keys("<ctrl>+s")
# 进入复制模式
keyboard.send_key("[")
sleep(0.01)
# 光标上移一行
keyboard.send_keys("k")
sleep(0.01)
# 光标移到行首
keyboard.send_keys("0")
sleep(0.01)
# 开始标记
keyboard.send_keys(" ")
sleep(0.01)
# 光标移到单词末尾
keyboard.send_keys("e")
sleep(0.01)
# 加入复制缓冲区
keyboard.send_keys("<ctrl>+m")
```
之所以有 `sleep` 函数,是因为 Tmux 有时无法跟上 AutoKey 发送击键的速度,并且它们对整体执行时间的影响可忽略不计。
### 使用 AutoKey 自动化
我希望你喜欢这篇使用 AutoKey 进行键盘自动化的探索,它为你提供了有关如何改善工作流程的一些好主意。如果你在使用 AutoKey 时有什么有用的或新颖的方法,一定要在下面的评论中分享。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/linux-autokey
作者:[Matt Bargenquast][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbargenquast
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://github.com/autokey/autokey
[3]: https://github.com/autokey/autokey/wiki/Installing
[4]: https://www.gtk.org/
[5]: https://www.qt.io/
[6]: https://opensource.com/sites/default/files/uploads/autokey-defaults.png (AutoKey UI)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/startautokey.png (Automatically start AutoKey at login)
[9]: https://opensource.com/sites/default/files/uploads/autokey-set_abbreviation.png (Set abbreviation in AutoKey)
[10]: https://opensource.com/sites/default/files/uploads/autokey-window_filter.png (AutoKey Window Filter)
[11]: https://opensource.com/article/19/7/bash-aliases
[12]: https://autokey.github.io/index.html
[13]: https://github.com/luziferius/autokey_scripts
[14]: https://github.com/AlienKevin/Emojis-AutoKey

View File

@ -1,32 +1,34 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13337-1.html)
[#]: subject: (How to Add Fingerprint Login in Ubuntu and Other Linux Distributions)
[#]: via: (https://itsfoss.com/fingerprint-login-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
如何在 Ubuntu 中添加指纹登录
======
![](https://img.linux.net.cn/data/attachment/album/202104/26/191530msmenm3ges3kgyet.jpg)
现在很多高端笔记本都配备了指纹识别器。Windows 和 macOS 支持指纹登录已经有一段时间了。在桌面 Linux 中,对指纹登录的支持更多需要极客的调整,但 [GNOME][1] 和 [KDE][2] 已经开始通过系统设置来支持它。
这意味着在新的 Linux 发行版上,你可以轻松使用指纹识别。在这里我将在 Ubuntu 中启用指纹登录,但你也可以在其他运行 GNOME 3.38 的发行版上使用这些步骤。
> **前提条件**
>
> 当然,这是显而易见的。你的电脑必须有一个指纹识别器。
>
> 这个方法适用于任何运行 GNOME 3.38 或更高版本的 Linux 发行版。如果你不确定,你可以[检查你使用的桌面环境版本][3]。
>
> KDE 5.21 也有一个指纹管理器。当然,截图看起来会有所不同。
### 在 Ubuntu 和其他 Linux 发行版中添加指纹登录功能
进入 “设置”,然后点击左边栏的 “用户”。你应该可以看到系统中所有的用户账号。你会看到几个选项,包括 “指纹登录”
点击这里的指纹登录选项以启用它。
![Enable fingerprint login in Ubuntu][4]
@ -44,17 +46,17 @@ KDE 5.21 也有一个指纹管理器。当然,截图看起来会有所不同
![Fingerprint successfully added][7]
如果你想马上测试一下,在 Ubuntu 中按 `Super+L` 快捷键锁定屏幕,然后使用指纹进行登录。
![Login With Fingerprint in Ubuntu][8]
#### 在 Ubuntu 上使用指纹登录的经验
指纹登录顾名思义就是使你的指纹登录系统。就是这样。当要求对需要 `sudo` 访问的程序进行认证时,你不能使用手指。它不能代替你的密码。
还有一件事。指纹登录可以让你登录,但当系统要求输入 `sudo` 密码时你不能用手指。Ubuntu 中的 [钥匙环][9] 也仍然是锁定的。
另一件烦人的事情是因为 GNOME 的 GDM 登录界面。当你登录时,你必须先点击你的账户才能进入密码界面。你在这可以使用手指。如果能省去先点击用户帐户 ID 的麻烦就更好了。
我还注意到,指纹识别没有 Windows 中那么流畅和快速。不过,它可以使用。
@ -64,13 +66,13 @@ KDE 5.21 也有一个指纹管理器。当然,截图看起来会有所不同
禁用指纹登录和最初启用指纹登录差不多。
进入 “设置→用户”,然后点击指纹登录选项。它会显示一个有添加更多指纹或删除现有指纹的页面。你需要删除现有的指纹。
![Disable Fingerprint Login][10]
指纹登录确实有一些好处,特别是对于我这种懒人来说。我不用每次锁屏时输入密码,我也对这有限的使用感到满意。
用 [PAM][11] 启用指纹解锁 `sudo` 应该不是完全不可能。我记得我 [在 Ubuntu 中设置脸部解锁][12]时,也可以用于 `sudo`。看看以后的版本是否会增加这个功能吧。
你有带指纹识别器的笔记本吗?你是否经常使用它,或者它只是你不关心的东西之一?
@ -81,7 +83,7 @@ via: https://itsfoss.com/fingerprint-login-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,167 @@
[#]: subject: (My favorite open source project management tools)
[#]: via: (https://opensource.com/article/21/3/open-source-project-management)
[#]: author: (Frank Bergmann https://opensource.com/users/fraber)
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13344-1.html)
我最喜欢的开源项目管理工具
======
> 如果你要管理大型复杂的项目,请尝试用这些开源工具替换 MS-Project。
![](https://img.linux.net.cn/data/attachment/album/202104/29/145942py6qcc3lz1dyt1s6.jpg)
诸如建造卫星、开发机器人或推出新产品之类的项目都是昂贵的,涉及不同的提供商,并且包含必须跟踪的硬依赖性。
大型项目领域中的项目管理方法非常简单(至少在理论上如此)。你可以创建项目计划并将其拆分为较小的部分,直到你可以合理地将成本、持续时间、资源和依赖性分配给各种活动。一旦项目计划获得负责人的批准,你就可以使用它来跟踪项目的执行情况。在时间轴上绘制项目的所有活动将产生一个称为<ruby>[甘特图][2]<rt>Gantt chart</rt></ruby>的条形图。
甘特图一直被用于 [瀑布项目方法][3],也可以用于敏捷方法。例如,大型项目可能将甘特图用于 Scrum 冲刺,而忽略用户故事等其他细节,从而将敏捷嵌入到某个阶段中。其他大型项目可能包括多个产品版本(例如,最小可行产品 [MVP]、第二版本、第三版本等)。在这种情况下,上层结构采用敏捷方法,而每个阶段都按甘特图来计划,以处理预算和复杂的依赖关系。
### 项目管理工具
不夸张地说,有数百种现成的工具使用甘特图管理大型项目,而 MS-Project 可能是最受欢迎的一个。它是微软办公软件家族的一部分,可支持成千上万的活动,并且有大量的功能,支持几乎所有可以想象到的管理项目进度的方式。对于 MS-Project有时候你并不知道哪个更昂贵是软件许可证还是该工具的培训课程。
另一个缺点是 MS-Project 是一个独立的桌面应用程序,只有一个人可以更新进度表。如果要多个用户进行协作,则需要购买微软 Project 服务器、Web 版的 Project 或 Planner 的许可证。
幸运的是,专有工具还有开源的替代品,包括本文中提及的应用程序。所有这些都是开源的,并且都包括基于资源和依赖项的分层活动调度的甘特图。ProjectLibre、GanttProject 和 TaskJuggler 都是面向单个项目经理的桌面应用程序。ProjeQtOr 和 Redmine 是用于项目团队的 Web 应用程序,而 ]project-open[ 是用于管理整个组织的 Web 应用程序。
我根据一个单用户计划和对一个大型项目的跟踪评估了这些工具。我的评估标准包括甘特图编辑器功能、Windows/Linux/macOS 上的可用性、可扩展性、导入/导出和报告。(背景披露:我是 ]project-open[ 的创始人,我在多个开源社区中活跃了很多年。此列表包括我们的产品,因此我的观点可能有偏见,但我尝试着眼于每个产品的最佳功能。)
### Redmine 4.1.0
![Redmine][4]
[Redmine][6] 是一个基于 Web 的专注于敏捷方法论的项目管理工具。
其标准安装包括一个甘特图时间轴视图,但缺少诸如调度、拖放、缩进(缩排和凸排)以及资源分配之类的基本功能。你必须单独编辑任务属性才能更改任务树的结构。
Redmine 具有甘特图编辑器插件,但是它们要么已经过时(例如 [Plus Gantt][7]),要么是专有的(例如 [ANKO 甘特图][8])。如果你知道其他开源的甘特图编辑器插件,请在评论中分享它们。
Redmine 用 Ruby on Rails 框架编写,可用于 Windows、Linux 和 macOS。其核心部分采用 GPLv2 许可证。
* **适合于:** 使用敏捷方法的 IT 团队。
* **独特卖点:** 这是 OpenProject 和 EasyRedmine 的原始“上游”父项目。
### ]project-open[ 5.1
![\]project-open\[][9]
[\]project-open\[][10] 是一个基于 Web 的项目管理系统,从整个组织的角度看类似于<ruby>企业资源计划<rt>enterprise resource planning</rt></ruby>ERP系统。它还可以管理项目档案、预算、发票、销售、人力资源和其他功能领域。有一些不同的变体如用于管理项目公司的<ruby>专业服务自动化<rt>professional services automation</rt></ruby>PSA、用于管理企业战略项目的<ruby>项目管理办公室<rt>project management office</rt></ruby>PMO和用于管理部门项目的<ruby>企业项目管理<rt>enterprise project management</rt></ruby>EPM
]project-open[ 的甘特图编辑器包括按层级划分的任务、依赖关系,以及基于计划工作量和分配资源的调度。它不支持资源日历和非人力资源。]project-open[ 系统非常复杂,其图形界面显得有些陈旧,需要翻新。
]project-open[ 是用 TCL 和 JavaScript 编写的,可用于 Windows 和 Linux。 ]project-open[ 核心采用 GPLv2 许可证,并具有适用于大公司的专有扩展。
* **适合于:** 需要大量财务项目报告的大中型项目组织。
* **独特卖点:** ]project-open[ 是一个综合系统,可以运行整个项目公司或部门。
### ProjectLibre 1.9.3
![ProjectLibre][11]
在开源世界中,[ProjectLibre][12] 可能是最接近 MS-Project 的产品。它是一个桌面应用程序,支持所有重要的项目计划功能,包括资源日历、基线和成本管理。它还允许你使用 MS-Project 的文件格式导入和导出计划。
ProjectLibre 非常适合计划和执行中小型项目。然而,它缺少 MS-Project 中的一些高级功能,并且它的 GUI 并不是最漂亮的。
ProjectLibre 用 Java 编写,可用于 Windows、Linux 和 macOS并在开源的<ruby>通用公共署名许可证<rt>Common Public Attribution License</rt></ruby>CPAL下授权。ProjectLibre 团队目前正在开发一个名为 ProjectLibre Cloud 的 Web 产品,并采用专有许可证。
* **适合于:** 负责中小型项目的个人项目管理者,或者作为没有完整的 MS-Project 许可证的项目成员的查看器。
* **独特卖点:** 这是最接近 MS-Project 的开源软件。
### GanttProject 2.8.11
![GanttProject][13]
[GanttProject][14] 与 ProjectLibre 类似,它是一个桌面甘特图编辑器,但功能集更为有限。它不支持基线,也不支持非人力资源,并且报告功能比较有限。
GanttProject 是一个用 Java 编写的桌面应用程序,可在 GPLv3 许可下用于 Windows、Linux 和 macOS。
* **适合于:** 简单的甘特图或学习基于甘特图的项目管理技术。
* **独特卖点:** 它支持<ruby>计划评估和审查技术<rt>program evaluation and review technique</rt></ruby>([PERT][15])图表,并支持基于 WebDAV 的协作。
### TaskJuggler 3.7.1
![TaskJuggler][16]
[TaskJuggler][17] 用于在大型组织中安排多个并行项目,重点是自动解决资源分配冲突(即资源均衡)。
它不是交互式的甘特图编辑器,而是一个命令行工具,其工作方式类似于一个编译器:它从文本文件中读取任务列表,并生成一系列报告,这些报告根据分配的资源、依赖项、优先级和许多其他参数为每个任务提供最佳的开始和结束时间。它支持多个项目、基线、资源日历、班次和时区,并且被设计为可扩展到具有许多项目和资源的企业场景。
使用特定语法编写 TaskJuggler 输入文件可能超出了普通项目经理的能力。但是,你可以使用 ]project-open[ 作为 TaskJuggler 的图形前端来生成输入包括缺勤、任务进度和记录的工作时间。当以这种方式使用时TaskJuggler 就成为了功能强大的假设情景规划器。
TaskJuggler 用 Ruby 编写,并且在 GPLv2 许可证下可用于 Windows、Linux 和 macOS。
* **适合于:** 由真正的技术极客管理的中大型部门。
* **独特卖点:** 它在自动资源均衡方面表现出色。
### ProjeQtOr 9.0.4
![ProjeQtOr][18]
[ProjeQtOr][19] 是适用于 IT 项目的、基于 Web 的项目管理应用程序。除了项目、工单和活动外,它还支持风险、预算、可交付成果和财务文件,以将项目管理的许多方面集成到单个系统中。
ProjeQtOr 提供了一个甘特图编辑器,与 ProjectLibre 功能类似,包括按等级划分的任务、依赖关系,以及基于计划工作和分配资源的调度。但是,它不支持取值的就地编辑(例如,任务名称、估计时间等);用户必须在甘特图视图下方的输入表单中更改取值,然后保存。
ProjeQtOr 用 PHP 编写,并且在 Affero GPL3 许可下可用于 Windows、Linux 和 macOS。
* **适合于:** 跟踪项目列表的 IT 部门。
* **独特卖点:** 让你存储每个项目的大量信息,把所有信息保存在一个地方。
### 其他工具
对于特定的用例,以下系统可能是有效的选择,但由于各种原因,它们被排除在主列表之外。
![LIbrePlan][20]
* [LibrePlan][21] 是一个基于 Web 的项目管理应用程序,专注于甘特图。凭借其功能集,它本来会在上面的列表中占主导地位,但是没有可用于最新 Linux 版本CentOS 7 或 8的安装。作者说更新的说明将很快推出。
* [dotProject][22] 是一个用 PHP 编写的基于 Web 的项目管理系统,可在 GPLv2.x 许可证下使用。它包含一个甘特图时间轴报告,但是没有编辑它的选项,并且依赖项还不起作用(它们“仅部分起作用”)。
* [Leantime][23] 是一个用 PHP 编写的基于 Web 的项目管理系统,具有漂亮的 GUI并且可以在 GPLv2 许可证下使用。它包括一个里程碑甘特图时间线,但不支持依赖关系。
* [Orangescrum][24] 是基于 Web 的项目管理工具。甘特图可以作为付费附件或付费订阅使用。
* [Talaia/OpenPPM][25] 是一个基于 Web 的项目组合管理系统。但是,版本 4.6.1 仍显示“即将推出:交互式甘特图”。
* [Odoo][26] 和 [OpenProject][27] 都将某些重要功能限制在付费企业版中。
在这篇评论中,目的是包括所有带有甘特图编辑器和依赖调度的开源项目管理系统。如果我遗漏了某个项目或者说错了什么,请在评论中让我知道。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/open-source-project-management
作者:[Frank Bergmann][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fraber
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kanban_trello_organize_teams_520.png?itok=ObNjCpxt (Kanban-style organization action)
[2]: https://en.wikipedia.org/wiki/Gantt_chart
[3]: https://opensource.com/article/20/3/agiles-vs-waterfall
[4]: https://opensource.com/sites/default/files/uploads/redmine.png (Redmine)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://www.redmine.org/
[7]: https://redmine.org/plugins/plus_gantt
[8]: https://www.redmine.org/plugins/anko_gantt_chart
[9]: https://opensource.com/sites/default/files/uploads/project-open.png (]project-open[)
[10]: https://www.project-open.com
[11]: https://opensource.com/sites/default/files/uploads/projectlibre.png (ProjectLibre)
[12]: http://www.projectlibre.org
[13]: https://opensource.com/sites/default/files/uploads/ganttproject.png (GanttProject)
[14]: https://www.ganttproject.biz
[15]: https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique
[16]: https://opensource.com/sites/default/files/uploads/taskjuggler.png (TaskJuggler)
[17]: https://taskjuggler.org/
[18]: https://opensource.com/sites/default/files/uploads/projeqtor.png (ProjeQtOr)
[19]: https://www.projeqtor.org
[20]: https://opensource.com/sites/default/files/uploads/libreplan.png (LIbrePlan)
[21]: https://www.libreplan.dev/
[22]: https://dotproject.net/
[23]: https://leantime.io
[24]: https://orangescrum.org/
[25]: http://en.talaia-openppm.com/
[26]: https://odoo.com
[27]: http://openproject.org


@@ -4,13 +4,13 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13340-1.html)
使用 Stratis 的网络绑定磁盘加密
======
![][1]
![](https://img.linux.net.cn/data/attachment/album/202104/27/221704gyzyvyroyyrybany.jpg)
在一个有许多加密磁盘的环境中,解锁所有的磁盘是一项困难的任务。<ruby>网络绑定磁盘加密<rt>Network bound disk encryption</rt></ruby>NBDE有助于自动解锁 Stratis 卷的过程。这是在大型环境中的一个关键要求。Stratis 2.1 版本增加了对加密的支持,这在《[Stratis 加密入门][4]》一文中介绍过。Stratis 2.3 版本最近在使用加密的 Stratis 池时引入了对网络绑定磁盘加密NBDE的支持这是本文的主题。
@@ -277,7 +277,7 @@ via: https://fedoramagazine.org/network-bound-disk-encryption-with-stratis/
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/stratis-nbde-816x345.jpg
[2]: https://unsplash.com/@imattsmart?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/lock?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/getting-started-with-stratis-encryption/
[4]: https://linux.cn/article-13311-1.html
[5]: https://stratis-storage.github.io/
[6]: https://www.youtube.com/watch?v=CJu3kmY-f5o
[7]: https://github.com/latchset/tang


@@ -0,0 +1,70 @@
[#]: subject: (Something bugging you in Fedora Linux? Lets get it fixed!)
[#]: via: (https://fedoramagazine.org/something-bugging-you-in-fedora-linux-lets-get-it-fixed/)
[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/)
[#]: collector: (lujun9972)
[#]: translator: (DCOLIVERSUN)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13333-1.html)
Fedora Linux 中有 Bug 吗?一起来修复它!
======
![][1]
软件有 bug。任何复杂系统都无法保证每个部分都能按计划工作。Fedora Linux 是一个 _非常_ 复杂的系统,包含几千个包,这些包由全球无数个独立的上游项目创建。每周还有数百个更新。因此,问题是不可避免的。本文介绍了 bug 修复过程以及如何确定 bug 优先级。
### 发布开发过程
作为一个 Linux 发行项目,我们希望为用户提供完善的、一切正常的体验。我们的发布起始于 “Rawhide”。我们在 Rawhide 中集成了所有更新的自由及开源软件的新版本。我们一直在不断改进测试和<ruby>持续集成<rt>Continuous Integration</rt></ruby>过程,以便让即使是 Rawhide 也能被敢于冒险的用户安全使用。可是从本质来讲Rawhide 始终有点粗糙。
每年两次,我们把这个粗糙的操作系统先后分支到测试版本、最终版本。当我们这么做时,我们齐心协力地寻找问题。我们在<ruby>测试日<rt>Test Days</rt></ruby>检查特定的区域和功能。制作“<ruby>候选版本<rt>Candidate builds</rt></ruby>”,并根据我们的 [发布验证测试计划][2] 进行检测。然后我们进入<ruby>冻结状态<rt>freeze state</rt></ruby>,只有批准的更改可以并入候选版本。这就把候选版本从持续的开发隔离开来,持续的开发不断并入 Rawhide 中。所以,不会引入新的问题。
在发布过程中,许多大大小小的 bug 被粉碎消灭。当一切顺利时,我们就能按计划为所有用户提供崭新的 Fedora Linux 版本。(在过去几年里,我们已经可靠地重复了这一动作——感谢每一个为之努力工作的人!)如果确实有问题,我们可以将其标记为<ruby>发布阻碍<rt>release blocker</rt></ruby>。这就意味着我们要等到修复后才能发布。发布阻碍通常代表重大问题,这种标记也一定会引起对该 bug 的关注。
有时,我们遇到的一些问题是持续存在的。可能一些问题已经持续了一两个版本,或者我们还没有达成共识的解决方案。有些问题确实困扰着许多用户,但个别来看并没有达到阻碍发布的程度。我们可以将这些东西标记为<ruby>阻碍<rt>blocker</rt></ruby>。但这会像锤子一样砸下来。阻碍可能最终砸碎该 bug但也可能殃及周边。如果进度因此落后所有其它的 bug 修复、改进以及人们一直在努力的功能,都不能到达用户手中。
### 按优先顺序排列 bug 流程
所以,我们有另一种方法来解决烦人的 bug。[按优先顺序排列 bug 流程][3],与其他方式不同,可以标出导致大量用户不满意的问题。这里没有锤子,更像是聚光灯。与发布阻碍不同,按优先顺序排列 bug 流程没有一套严格定义的标准。每个 bug 都是根据影响范围和严重性来评估的。
一个由感兴趣的贡献者组成的团队帮助策划一个简短列表,上面罗列着需要注意的问题。然后,我们的工作是将问题匹配到能够解决它们的人。这有助于减轻发布过程中的压力,因为它没有给问题指定任何特定的截止时间。理想情况下,我们能在进入测试阶段之前就发现并解决问题。我们尽量保持列表简短,不会超过几个,这样才会真正有重点。这种做法有助于团队和个人解决问题,因为他们知道我们尊重他们捉襟见肘的时间与精力。
通过这个过程Fedora 解决了几十个严重而恼人的问题,包括从键盘输入故障到 SELinux 错误,再到数千兆字节大小的旧包更新会逐渐填满你的磁盘。但是我们可以做得更多——我们实际上收到的提案没有达到我们的处理能力上限。因此,如果你知道有什么事情导致了长期挫折或影响了很多人,至今没有达成解决方案,请遵循 [按优先顺序排列 bug 流程][3],提交给我们。
### 你可以帮助我们
邀请所有 Fedora 贡献者参与按优先顺序排列 bug 的流程。评估会议每两周在 IRC 上举办一次。欢迎任何人加入并帮助我们评估提名的 bug。会议时间和地点参见 [日历][4]。Fedora 项目经理在会议开始的前一天将议程发送到 [triage][5] 和 [devel][6] 邮件列表。
### 欢迎报告 bug
当你发现 bug 时,无论大小,我们很感激你能报告它。在很多情况下,解决 bug 最好的方式是交给创建该软件的项目。例如,假设渲染数码相机照片的 Darktable 摄影软件出了问题,最好把它报告给 Darktable 的开发人员。再举个例子,假设 GNOME 或 KDE 桌面环境或其组成软件出了问题,将这些问题交给这些项目通常会得到最好的结果。
然而,如果这是一个特定的 Fedora 问题,比如我们的软件构建或配置或者它的集成方式的问题,请毫不犹豫地 [向我们提交 bug][7]。当你知道有一个问题是我们还没有解决的,也要提交给我们。
我知道这很复杂……最好有一个一站式的地方来处理所有 bug。但是请记住Fedora 打包者大部分是志愿者,他们负责获取上游软件并将其配置到我们系统中。他们并不总是对他们正在使用的软件的代码有深入研究的专家。有疑问的时候,你可以随时提交一个 [Fedora bug][7]。Fedora 中负责相应软件包的人可以通过他们与上游软件项目的联系提供帮助。
请记住,当你发现一个已通过诊断但尚未得到良好修复的 bug 时,当你看到影响很多人的问题时,或者当有一个长期存在的问题没有得到关注时,请将其提名为高优先级 bug。我们会看以看能做些什么。
_附言标题中的著名图片当然是来自哈佛大学马克 2 号计算机的日志,这里曾是格蕾丝·赫柏少将工作的地方。但是与这个故事的普遍看法相背,这并不是 “bug” 一词第一次用于表示系统问题——它在工程中已经很常见了,这就是为什么发现一个字面上的 “bug” 作为问题的原因是很有趣的。 #nowyouknow #jokeexplainer_
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/something-bugging-you-in-fedora-linux-lets-get-it-fixed/
作者:[Matthew Miller][a]
选题:[lujun9972][b]
译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/bugging_you-816x345.jpg
[2]: https://fedoraproject.org/wiki/QA:Release_validation_test_plan
[3]: https://docs.fedoraproject.org/en-US/program_management/prioritized_bugs/
[4]: https://calendar.fedoraproject.org/base/
[5]: https://lists.fedoraproject.org/archives/list/triage%40lists.fedoraproject.org/
[6]: https://lists.fedoraproject.org/archives/list/devel%40lists.fedoraproject.org/
[7]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/


@@ -3,14 +3,16 @@
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13343-1.html)
Blanket拥有各种环境噪音的应用帮助保持注意力集中
======
_**简介:一个开源的环境噪音播放器,提供各种声音,帮助你集中注意力或入睡。**_
> 一个开源的环境噪音播放器,提供各种声音,帮助你集中注意力或入睡。
![](https://img.linux.net.cn/data/attachment/album/202104/29/094813oxcitipetajxjiex.jpg)
随着你周围活动的增加,要保持冷静和专注往往是很困难的。
@@ -44,13 +46,13 @@ flatpak install flathub com.rafaelmardojai.Blanket
如果你是 Flatpak 的新手,你可能想通过我们的 [Flatpak 指南][5]了解。
如果你不喜欢使用 Flatpaks,你可以使用该项目中的贡献者维护的 PPA 来安装它。对于 Arch Linux 用户,你可以在 [AUR][6] 中找到它,以方便安装。
如果你不喜欢使用 Flatpak你可以使用该项目中的贡献者维护的 PPA 来安装它。对于 Arch Linux 用户,你可以在 [AUR][6] 中找到它,以方便安装。
此外,你还可以找到 Fedora 和 openSUSE 的软件包。要探索所有可用的软件包,你可以前往其 [GitHub 页面][7]。
此外,你还可以找到 Fedora 和 openSUSE 的软件包。要探索所有现成的软件包,你可以前往其 [GitHub 页面][7]。
### 结束语
对于一个简单的环境噪音播放器来说,用户体验是相当好的。我有一副 HyperX Alpha S 耳机,我必须要说声音的质量很好。
对于一个简单的环境噪音播放器来说,用户体验是相当好的。我有一副 HyperX Alpha S 耳机,我必须要说声音的质量很好。
换句话说,它听起来很舒缓,如果你想体验环境声音来集中注意力,摆脱焦虑或只是睡着,我建议你试试。
@@ -63,7 +65,7 @@ via: https://itsfoss.com/blanket-ambient-noise-app/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -3,39 +3,36 @@
[#]: author: (Chris Patrick Carias Stas https://itsfoss.com/author/chris/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13346-1.html)
如何在 Linux 中删除分区(初学者指南)
如何在 Linux 中删除分区
======
![](https://img.linux.net.cn/data/attachment/album/202104/30/095353uhtbhm2fqx44aqfo.jpg)
管理分区是一件严肃的事情,尤其是当你不得不删除它们时。我发现自己经常这样做,特别是在使用 U 盘作为实时磁盘和 Linux 安装程序之后,因为它们创建了几个我以后不需要的分区。
在本教程中,我将告诉你如何使用命令行和 GUI 工具在 Linux 中删除分区。
* [用 GParted 等 GUI 工具删除 Linux 中的分区][1]
* [使用 Linux 命令删除分区][2]
警告!
你删除了分区,就会失去你的数据。无论何时,当你在操作分区时,一定要备份你的数据。一个轻微的打字错误或手滑都可能是昂贵的。不要说我们没有警告你!
> 警告!
>
> 删除了分区,就会失去你的数据。无论何时,当你在操作分区时,一定要备份你的数据。一个轻微的打字错误或手滑都可能是昂贵的。不要说我们没有警告你!
### 使用 GParted 删除磁盘分区 GUI 方法)
作为一个桌面 Linux 用户,你可能会对基于 GUI 的工具感到更舒服,也许更安全。
有[几个让你在 Linux 上管理分区的工具][3]。根据你的发行版,你的系统上已经安装了一个甚至多个这样的工具。
[几个让你在 Linux 上管理分区的工具][3]。根据你的发行版,你的系统上已经安装了一个甚至多个这样的工具。
在本教程中,我将使用 [GParted][4]。它是一个流行的开源工具,使用起来非常简单和直观。
第一步是[安装 GParted][5],如果它还没有在你的系统中。你应该能够在你的发行版的软件中心找到它。
第一步是 [安装 GParted][5],如果它还没有在你的系统中。你应该能够在你的发行版的软件中心找到它。
![][6]
或者,你也可以使用你的发行版的软件包管理器来安装它。在基于 Debian 和 Ubuntu 的 Linux 发行版中,你可以[使用 apt install 命令][7]
或者,你也可以使用你的发行版的软件包管理器来安装它。在基于 Debian 和 Ubuntu 的 Linux 发行版中,你可以 [使用 apt install 命令][7]
```
sudo apt install gparted
@@ -47,21 +44,21 @@ sudo apt install gparted
在右上角,你可以选择磁盘,在下面选择你想删除的分区。
接下来,从分区菜单中选择 **Delete** 选项:
接下来,从分区菜单中选择 “删除” 选项:
![][9]
这个过程是不完整的,直到你重写分区表。这是一项安全措施,它让你在确认之前可以选择审查更改。
这个过程是没有完整完成的,直到你重写分区表。这是一项安全措施,它让你在确认之前可以选择审查更改。
要完成它,只需点击位于工具栏中的 **Apply All Operations** 按钮,然后在要求确认时点击 **Apply**
要完成它,只需点击位于工具栏中的 “应用所有操作” 按钮,然后在要求确认时点击 “应用”
![][10]
点击 **Apply** 后,你会看到一个进度条和一个结果消息说所有的操作都成功了。你可以关闭该信息和主窗口,并认为你的分区已从磁盘中完全删除。
点击 “应用” 后,你会看到一个进度条和一个结果消息说所有的操作都成功了。你可以关闭该信息和主窗口,并认为你的分区已从磁盘中完全删除。
现在你已经知道了 GUI 的方法,让我们继续使用命令行。
### 使用 fdisk 命令删除分区
### 使用 fdisk 命令删除分区CLI 方法)
几乎每个 Linux 发行版都默认带有 [fdisk][11],我们今天就来使用这个工具。你需要知道的第一件事是,你想删除的分区被分配到哪个设备上了。为此,在终端输入以下内容:
@@ -69,13 +66,13 @@ sudo apt install gparted
sudo fdisk --list
```
这将打印出我们系统中所有的驱动器和分区,以及分配的设备。你[需要有 root 权限][12],以便让它发挥作用。
这将打印出我们系统中所有的驱动器和分区,以及分配的设备。你 [需要有 root 权限][12],以便让它发挥作用。
在本例中,我将使用一个包含两个分区的 USB 驱动器,如下图所示:
![][13]
系统中分配的设备是 /sdb它有两个分区sdb1 和 sdb2。现在你已经确定了哪个设备包含这些分区,你可以通过使用 `fdisk` 和设备的路径开始操作:
系统中分配的设备是 `/sdb`,它有两个分区:`sdb1` 和 `sdb2`。现在你已经确定了哪个设备包含这些分区,你可以通过使用 `fdisk` 和设备的路径开始操作:
```
sudo fdisk /dev/sdb
@@ -83,15 +80,15 @@ sudo fdisk /dev/sdb
这将在命令模式下启动 `fdisk`。你可以随时按 `m` 来查看选项列表。
接下来,输入 `p`,然后按`回车`查看分区信息,并确认你正在使用正确的设备。如果使用了错误的设备,你可以使用 `q` 命令退出 `fdisk` 并重新开始。
接下来,输入 `p`,然后按回车查看分区信息,并确认你正在使用正确的设备。如果使用了错误的设备,你可以使用 `q` 命令退出 `fdisk` 并重新开始。
现在输入 `d` 来删除一个分区,它将立即询问分区编号,这与 “Device” 列中列出的编号相对应,在这个例子中是 1 和 2在下面的截图中可以看到但也会根据当前的分区表而有所不同。
![][14]
让我们通过输入 `2` 并按下`回车`来删除第二个分区。你应该看到一条信息:**“Partition 2 has been deleted”**,但实际上,它还没有被删除。`fdisk` 还需要一个步骤来重写分区表并应用这些变化。你看,这就是完全网。
让我们通过输入 `2` 并按下回车来删除第二个分区。你应该看到一条信息:**“Partition 2 has been deleted”**,但实际上,它还没有被删除。`fdisk` 还需要一个步骤来重写分区表并应用这些变化。你看,这就是安全网。
你需要输入 `w`,然后按`回车`来使这些改变成为永久性的。没有再要求确认。
你需要输入 `w`,然后按回车来使这些改变成为永久性的。没有再要求确认。
在这之后,你应该看到下面这样的反馈:
@@ -101,7 +98,7 @@ sudo fdisk /dev/sdb
#### 总结
这样,我结束了这个关于如何使用终端和 GUI 工具在 Linux 中删除分区的教程。记住,要始终保持安全,在操作分区之前备份你的文件,并仔细检查你是否使用了正确的设备。删除一个分区将删除其中的所有内容,而几乎没有[恢复][16]的机会。
这样,这个关于如何使用终端和 GUI 工具在 Linux 中删除分区的教程就结束了。记住,要始终保持安全,在操作分区之前备份你的文件,并仔细检查你是否使用了正确的设备。删除一个分区将删除其中的所有内容,而几乎没有 [恢复][16] 的机会。
--------------------------------------------------------------------------------
@@ -110,7 +107,7 @@ via: https://itsfoss.com/delete-partition-linux/
作者:[Chris Patrick Carias Stas][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -0,0 +1,196 @@
[#]: subject: (Optimize your Python code with C)
[#]: via: (https://opensource.com/article/21/4/cython)
[#]: author: (Alan Smithee https://opensource.com/users/alansmithee)
[#]: collector: (lujun9972)
[#]: translator: (ShuyRoy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13338-1.html)
使用 C 优化你的 Python 代码
======
> Cython 创建的 C 模块可以加速 Python 代码的执行,这对使用效率不高的解释型语言编写的复杂应用是很重要的。
![](https://img.linux.net.cn/data/attachment/album/202104/26/230709qz64z4af3t9b9jab.jpg)
Cython 是 Python 编程语言的编译器,旨在优化性能并形成一个扩展的 Cython 编程语言。作为 Python 的扩展,[Cython][2] 也是 Python 语言的超集,它支持调用 C 函数,以及在变量和类属性上声明 C 类型。这使得包装外部 C 库、将 C 嵌入现有应用程序,或者用像 Python 一样简单的语法为 Python 编写 C 扩展都变得容易。
Cython 一般用于创建 C 模块来加速 Python 代码的执行。这在使用解释型语言编写的效率不高的复杂应用中非常重要。
### 安装 Cython
你可以在 Linux、BSD、Windows 或 macOS 上安装 Cython 来使用 Python
```
$ python -m pip install Cython
```
安装好后,就可以使用它了。
### 将 Python 转换成 C
使用 Cython 的一个好的方式是从一个简单的 “hello world” 开始。这虽然不是展示 Cython 优点的最好方式,但是它展示了使用 Cython 时发生的情况。
首先,创建一个简单的 Python 脚本,文件命名为 `hello.pyx``.pyx` 扩展名并不神奇,从技术上它可以是任何东西,但它是 Cython 的默认扩展名):
```
print("hello world")
```
接下来,创建一个 Python 设置脚本。一个像 Python 的 makefile 一样的 `setup.py`Cython 可以使用它来处理你的 Python 代码:
```
from setuptools import setup
from Cython.Build import cythonize
setup(
    ext_modules = cythonize("hello.pyx")
)
```
最后,使用 Cython 将你的 Python 脚本转换为 C 代码:
```
$ python setup.py build_ext --inplace
```
你可以在你的工程目录中看到结果。Cython 的 `cythonize` 模块将 `hello.pyx` 转换成一个 `hello.c` 文件和一个 `.so` 库。这些 C 代码有 2648 行,所以它比一个一行的 `hello.pyx` 源码的文本要多很多。`.so` 库也比它的源码大 2000 倍(即 54000 字节和 20 字节相比。不过这就是 Python 运行哪怕单个脚本所需要的支撑,因此才有这么多代码来支持这个只有一行的 `hello.pyx` 文件。
要使用 Python 的 “hello world” 脚本的 C 代码版本,请打开一个 Python 提示符并导入你创建的新 `hello` 模块:
```
>>> import hello
hello world
```
### 将 C 代码集成到 Python 中
测试计算能力的一个很好的通用测试是计算质数。质数是一个大于 1 的正整数,它只能被 1 和它自身整除。虽然理论很简单,但是随着数字变大,计算需求也会增加。在纯 Python 中,可以用 10 行以内的代码完成质数的计算。
```
import sys

number = int(sys.argv[1])

if not number <= 1:
    for i in range(2, number):
        if (number % i) == 0:
            print("Not prime")
            break
else:
    print("Integer must be greater than 1")
```
这个脚本在成功的时候是不会提醒的,如果这个数不是质数,则返回一条信息:
```
$ ./prime.py 3
$ ./prime.py 4
Not prime.
```
将这些转换为 Cython 需要一些工作,一部分是为了使代码适合用作库,另一部分是为了提高性能。
#### 脚本和库
许多用户将 Python 当作一种脚本语言来学习:你告诉 Python 想让它执行的步骤,然后它来做。随着你对 Python以及一般的开源编程的了解越多你可以了解到许多强大的代码都存在于其他应用程序可以利用的库中。你的代码越 _不具有针对性_,程序员(包括你)就越可能将其重用于其他的应用程序。将计算和工作流解耦可能需要更多的工作,但最终这通常是值得的。
在这个简单的质数计算的例子中,将其转换成 Cython首先是一个设置脚本
```
from setuptools import setup
from Cython.Build import cythonize
setup(
    ext_modules = cythonize("prime.py")
)
```
将你的脚本转换成 C
```
$ python setup.py build_ext --inplace
```
到目前为止,一切似乎都工作的很好,但是当你试图导入并使用新模块时,你会看到一个错误:
```
>>> import prime
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "prime.py", line 2, in init prime
number = sys.argv[1]
IndexError: list index out of range
```
这个问题是Python 脚本希望从一个终端运行,并在终端中提供参数(在这个例子中,是要测试是否为质数的整数)。你需要修改你的脚本,使它可以作为一个库来使用。
#### 写一个库
库不使用系统参数,而是接受其他代码的参数。对于用户输入,与其使用 `sys.argv`,不如将你的代码封装成一个函数来接收一个叫 `number`(或者 `num`,或者任何你喜欢的变量名)的参数:
```
def calculate(number):
    if not number <= 1:
        for i in range(2, number):
            if (number % i) == 0:
                print("Not prime")
                break
    else:
        print("Integer must be greater than 1")
```
这确实使你的脚本有些难以测试,因为当你在 Python 中运行代码时,`calculate` 函数永远不会被执行。但是Python 编程人员已经为这个问题设计了一个通用、还算直观的解决方案。当 Python 解释器执行一个 Python 脚本时,有一个叫 `__name__` 的特殊变量,这个变量被设置为 `__main__`,但是当它被作为模块导入的时候,`__name__` 被设置为模块的名字。利用这点,你可以写一个既是 Python 模块又是有效 Python 脚本的库:
```
import sys

def calculate(number):
    if not number <= 1:
        for i in range(2, number):
            if (number % i) == 0:
                print("Not prime")
                break
    else:
        print("Integer must be greater than 1")

if __name__ == "__main__":
    number = sys.argv[1]
    calculate(int(number))
```
现在你可以用一个命令来运行代码了:
```
$ python ./prime.py 4
Not prime
```
你可以将它转换为 Cython 来用作一个模块:
```
>>> import prime
>>> prime.calculate(4)
Not prime
```
### C Python
用 Cython 将纯 Python 的代码转换为 C 代码是有用的这篇文章描述了如何做到这一点。然而Cython 还有更多功能,可以帮助你在转换之前优化代码,分析代码以找出 Cython 在什么时候与 C 交互,等等。如果你正在用 Python但是你希望用 C 代码改进你的代码,或者进一步理解库是如何提供比脚本更好的扩展性的,或者你只是好奇 Python 和 C 是如何协作的,那么就开始使用 Cython 吧。
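在动手优化之前,先为纯 Python 版本建立一个性能基线会很有帮助,编译成 C 模块后可以用同样的方法对比加速效果。下面是一个示意脚本(为了便于计时,这里把 `calculate` 改成返回布尔值而不是打印;这一改动仅用于演示,并非原文代码):

```python
import timeit

def calculate(number):
    """与前文相同的质数判断逻辑,但返回结果而不是打印,方便计时。"""
    if number <= 1:
        return None          # 不大于 1 的输入无意义
    for i in range(2, number):
        if number % i == 0:
            return False     # 能被整除,不是质数
    return True              # 没有因子,是质数

# 对一个较大的质数计时;把 calculate 换成 Cython 编译出的版本即可对比
elapsed = timeit.timeit(lambda: calculate(104729), number=10)
print(f"纯 Python 版本 10 次调用耗时: {elapsed:.3f} 秒")
```

把同一段计时代码指向 `cythonize` 编译出的模块,就能直观看到提速的幅度。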
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/cython
作者:[Alan Smithee][a]
选题:[lujun9972][b]
译者:[ShuyRoy](https://github.com/ShuyRoy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alansmithee
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd (Hands on a keyboard with a Python book )
[2]: https://cython.org/


@@ -0,0 +1,94 @@
[#]: subject: (Restore an old MacBook with Linux)
[#]: via: (https://opensource.com/article/21/4/restore-macbook-linux)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13341-1.html)
用 Linux 翻新旧的 MacBook
======
> 不要把你又旧又慢的 MacBook 扔进垃圾桶。用 Linux Mint 延长它的寿命。
![](https://img.linux.net.cn/data/attachment/album/202104/27/225241mdbp59t67699r9de.jpg)
去年,我写了篇关于如何用 Linux 赋予[旧 MacBook 的新生命][2]的文章,在例子中提到了 Elementary OS。最近我用回那台 2015 年左右的 MacBook Air发现遗失了我的登录密码。我下载了最新的 Elementary OS 5.1.7 Hera但无法让实时启动识别我的 Broadcom 4360 无线芯片组。
最近,我一直在使用 [Linux Mint][3] 来翻新旧的笔记本电脑,我想在这台 MacBook Air 上试一下。我下载了 Linux Mint 20.1 ISO并在我的 Linux 台式电脑上使用 [Popsicle][4] 创建了一个 USB 启动器。
![Popsicle ISO burner][5]
接下来,我将 Thunderbolt 以太网适配器连接到 MacBook并插入 USB 启动器。我打开系统电源,按下 MacBook 上的 Option 键,指示它从 USB 驱动器启动系统。
Linux Mint 在实时启动模式下启动没问题,但操作系统没有识别出无线连接。
### 我的无线网络在哪里?
这是因为为苹果设备制造 WiFi 卡的公司 Broadcom 没有发布开源驱动程序。这与英特尔、Atheros 和许多其他芯片制造商形成鲜明对比,但它是苹果公司使用的芯片组,所以这是 MacBook 上的一个常见问题。
我通过我的 Thunderbolt 适配器有线连接到以太网,因此我 _是_ 在线的。通过之前的研究,我知道要让无线适配器在这台 MacBook 上工作,我需要在 Bash 终端执行三条独立的命令。然而,在安装过程中,我了解到 Linux Mint 有一个很好的内置驱动管理器,它提供了一个简单的图形用户界面来协助安装软件。
![Linux Mint Driver Manager][7]
该操作完成后,我重启了安装了 Linux Mint 20.1 的新近翻新的 MacBook Air。Broadcom 无线适配器工作正常,使我能够轻松地连接到我的无线网络。
### 手动安装无线
你可以从终端完成同样的任务。首先,清除 Broadcom 内核源码的残余。
```
$ sudo apt-get purge bcmwl-kernel-source
```
然后添加一个固件安装程序:
```
$ sudo apt install firmware-b43-installer
```
最后,为系统安装新固件:
```
$ sudo apt install linux-firmware
```
### 将 Linux 作为你的 Mac 使用
我安装了 [Phoronix 测试套件][8] 以获得 MacBook Air 的系统信息。
![MacBook Phoronix Test Suite output][9]
系统工作良好。对内核 5.4.0-64-generic 的最新更新显示,无线连接仍然存在,并且我与家庭网络之间的连接为 866Mbps。Broadcom 的 FaceTime 摄像头不能工作,但其他东西都能正常工作。
我非常喜欢这台 MacBook 上的 [Linux Mint Cinnamon 20.1][10] 桌面。
![Linux Mint Cinnamon][11]
如果你有一台因 macOS 更新而变得缓慢且无法使用的旧 MacBook我建议你试一下 Linux Mint。我对这个发行版印象非常深刻尤其是它在我的 MacBook Air 上的工作情况。它无疑延长了这个强大的小笔记本电脑的寿命。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/restore-macbook-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03 (Writing Hand)
[2]: https://opensource.com/article/20/2/macbook-linux-elementary
[3]: https://linuxmint.com/
[4]: https://github.com/pop-os/popsicle
[5]: https://opensource.com/sites/default/files/uploads/popsicle.png (Popsicle ISO burner)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/mint_drivermanager.png (Linux Mint Driver Manager)
[8]: https://www.phoronix-test-suite.com/
[9]: https://opensource.com/sites/default/files/uploads/macbook_specs.png (MacBook Phoronix Test Suite output)
[10]: https://www.linuxmint.com/edition.php?id=284
[11]: https://opensource.com/sites/default/files/uploads/mintcinnamon.png (Linux Mint Cinnamon)


@@ -0,0 +1,199 @@
[#]: subject: (Play a fun math game with Linux commands)
[#]: via: (https://opensource.com/article/21/4/math-game-linux-commands)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13358-1.html)
用 Linux 命令玩一个有趣的数学游戏
======
> 在家玩流行的英国游戏节目 “Countdown” 中的数字游戏。
![](https://img.linux.net.cn/data/attachment/album/202105/03/221459uchb0f8xcxfrhc86.jpg)
像许多人一样,我在大流行期间看了不少新的电视节目。我最近发现了一个英国的游戏节目,叫做 [Countdown][2],参赛者在其中玩两种游戏:一种是 _单词_ 游戏,他们试图从杂乱的字母中找出最长的单词;另一种是 _数字_ 游戏,他们从随机选择的数字中计算出一个目标数字。因为我喜欢数学,我发现自己被数字游戏所吸引。
数字游戏可以为你的下一个家庭游戏之夜增添乐趣,所以我想分享我自己的一个游戏变体。你以一组随机数字开始,分为 1 到 10 的“小”数字和 15、20、25以此类推直到 100 的“大”数字。你从大数字和小数字中挑选六个数字的任何组合。
接下来,你生成一个 200 到 999 之间的随机“目标”数字。然后用你的六个数字进行简单的算术运算,尝试用这些“小”和“大”数字凑出目标数字,但每个数字的使用不能超过一次。如果你能准确地计算出目标数字,你就能得到最高分;如果距离目标数字在 10 以内,就得到较低的分数。
例如,如果你的随机数是 75、100、2、3、4 和 1而你的目标数是 505你可以说 `2+3=5``5×100=500``4+1=5`,以及 `5+500=505`。或者更直接地:`(2+3)×100 + 4 + 1 = 505`。
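顺便可以用几行 Python 验证上面这组算式(纯演示):

```python
# 验证文中的例子:(2+3)×100 + 4 + 1 是否等于 505
result = (2 + 3) * 100 + 4 + 1
print(result)  # 505
assert result == 505
```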
### 在命令行中随机化列表
我发现在家里玩这个游戏的最好方法是从 1 到 10 的池子里抽出四个“小”数字,从 15 到 100 的 5 的倍数中抽出两个“大”数字。你可以使用 Linux 命令行来为你创建这些随机数。
让我们从“小”数字开始。我希望这些数字在 1 到 10 的范围内。你可以使用 Linux 的 `seq` 命令生成一个数字序列。你可以用几种不同的方式运行 `seq`,但最简单的形式是提供序列的起始和结束数字。要生成一个从 1 到 10 的列表,你可以运行这个命令:
```
$ seq 1 10
1
2
3
4
5
6
7
8
9
10
```
为了随机化这个列表,你可以使用 Linux 的 `shuf`“shuffle”打乱命令。`shuf` 将随机化你给它的东西的顺序,通常是一个文件。例如,如果你把 `seq` 命令的输出发送到 `shuf` 命令,你会收到一个 1 到 10 之间的随机数字列表:
```
$ seq 1 10 | shuf
3
6
8
10
7
4
5
2
1
9
```
要从 1 到 10 的列表中只选择四个随机数,你可以将输出发送到 `head` 命令,它将打印出输入的前几行。使用 `-4` 选项来指定 `head` 只打印前四行:
```
$ seq 1 10 | shuf | head -4
6
1
8
4
```
注意,这个列表与前面的例子不同,因为 `shuf` 每次都会生成一个随机顺序。
现在你可以采取下一步措施来生成“大”数字的随机列表。第一步是生成一个可能的数字列表,从 15 开始,以 5 为单位递增,直到达到 100。你可以用 Linux 的 `seq` 命令生成这个列表。为了使每个数字以 5 为单位递增,在 `seq` 命令中插入另一个选项来表示 _步进_
```
$ seq 15 5 100
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
```
就像以前一样,你可以随机化这个列表,选择两个“大”数字:
```
$ seq 15 5 100 | shuf | head -2
75
40
```
### 用 Bash 生成一个随机数
我想你可以用类似的方法从 200 到 999 的范围内选择游戏的目标数字。但是生成单个随机数的最简单的方案是直接在 Bash 中使用 `RANDOM` 变量。当你引用这个内置变量时Bash 会生成一个大的随机数。要把它放到 200 到 999 的范围内,你需要先把随机数放到 0 到 799 的范围内,然后加上 200。
要把随机数放到从 0 开始的特定范围内,你可以使用**模数**算术运算符。模数计算的是两个数字相除后的 _余数_。如果我用 801 除以 800结果是 1余数是 1模数是 1。800 除以 800 的结果是 1余数是 0模数是 0。而用 799 除以 800 的结果是 0余数是 799模数是 799
Bash 通过 `$(())` 结构支持算术展开。在双括号之间Bash 将对你提供的数值进行算术运算。要计算 801 除以 800 的模数,然后加上 200你可以输入:
```
$ echo $(( 801 % 800 + 200 ))
201
```
通过这个操作,你可以计算出一个 200 到 999 之间的随机目标数:
```
$ echo $(( RANDOM % 800 + 200 ))
673
```
你可能想知道为什么我在 Bash 语句中使用 `RANDOM` 而不是 `$RANDOM`。在算术扩展中Bash 会自动扩展双括号内的任何变量,你不需要用 `$` 前缀来引用 `RANDOM` 变量的值,因为 Bash 会帮你做这件事。
### 玩数字游戏
让我们把所有这些放在一起,玩玩数字游戏。产生两个随机的“大”数字、四个随机的“小”数字,以及目标数字:
```
$ seq 15 5 100 | shuf | head -2
75
100
$ seq 1 10 | shuf | head -4
4
3
10
2
$ echo $(( RANDOM % 800 + 200 ))
868
```
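这些抽取规则也可以用一小段 Python 脚本来模拟(示意脚本,函数名 `draw_numbers` 是为演示而取的):

```python
import random

def draw_numbers():
    """按文中规则抽取:四个 1 到 10 的“小”数字、
    两个 15 到 100 之间 5 的倍数的“大”数字,以及 200 到 999 的目标数字。"""
    small = random.sample(range(1, 11), 4)        # 不重复地抽四个小数字
    big = random.sample(range(15, 101, 5), 2)     # 不重复地抽两个大数字
    target = random.randrange(200, 1000)          # 等价于 Bash 的 RANDOM % 800 + 200
    return small, big, target

small, big, target = draw_numbers()
print("小数字:", small, "大数字:", big, "目标:", target)
```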
我的数字是 **75**、**100**、**4**、**3**、**10** 和 **2**,而我的目标数字是 **868**
如果我用每个“小”和“大”数字做这些算术运算,并不超过一次,我就能接近目标数字了:
```
10×75 = 750
750+100 = 850
然后:
4×3 = 12
850+12 = 862
862+2 = 864
```
只相差 4 了,不错!但我发现这样可以用每个随机数不超过一次来计算出准确的数字:
```
4×2 = 8
8×100 = 800
然后:
75-10+3 = 68
800+68 = 868
```
或者我可以做 _这些_ 计算来准确地得到目标数字。这只用了六个随机数中的五个:
```
4×3 = 12
75+12 = 87
然后:
87×10 = 870
870-2 = 868
```
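如果想检验某组数字能不能精确凑出目标,还可以写一个简单的穷举求解器。下面是一个示意实现(函数名 `solve` 为演示而取,只做了最基本的剪枝,数字较多时可能较慢):

```python
def solve(numbers, target):
    """穷举加、减、乘、整除的组合,返回最接近 target 的算式和差值。"""
    best_expr, best_diff = None, float("inf")

    def search(items):  # items 是 (数值, 算式字符串) 的列表
        nonlocal best_expr, best_diff
        for value, expr in items:
            if abs(value - target) < best_diff:
                best_expr, best_diff = expr, abs(value - target)
        if best_diff == 0 or len(items) < 2:
            return
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                (a, ea), (b, eb) = items[i], items[j]
                if a < b:  # 保证 a >= b减法与整除只按这个方向做
                    (a, ea), (b, eb) = (b, eb), (a, ea)
                rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                cands = [(a + b, f"({ea}+{eb})"),
                         (a * b, f"({ea}*{eb})"),
                         (a - b, f"({ea}-{eb})")]
                if b != 0 and a % b == 0:
                    cands.append((a // b, f"({ea}/{eb})"))
                for val, ex in cands:
                    search(rest + [(val, ex)])
                    if best_diff == 0:  # 找到精确解就提前结束
                        return

    search([(n, str(n)) for n in numbers])
    return best_expr, best_diff

expr, diff = solve([2, 3, 5, 100], 505)
print(expr, "差值:", diff)
```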
试一试 _Countdown_ 数字游戏,并在评论中告诉我们你做得如何。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/math-game-linux-commands
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_math_formulas.png?itok=B59mYTG3 (Math formulas in green writing)
[2]: https://en.wikipedia.org/wiki/Countdown_%28game_show%29


@@ -0,0 +1,135 @@
[#]: subject: (Application observability with Apache Kafka and SigNoz)
[#]: via: (https://opensource.com/article/21/4/observability-apache-kafka-signoz)
[#]: author: (Nitish Tiwari https://opensource.com/users/tiwarinitish86)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13352-1.html)
使用 Apache Kafka 和 SigNoz 实现应用可观测性
======
> SigNoz 帮助开发者使用最小的精力快速实现他们的可观测性目标。
![](https://img.linux.net.cn/data/attachment/album/202105/01/231703oy5ln5nnqkuhxt1t.jpg)
SigNoz 是一个开源的应用可观察性平台。SigNoz 是用 React 和 Go 编写的,它从头到尾都是为了让开发者能够以最小的精力尽快实现他们的可观察性目标。
本文将详细介绍该软件,包括架构、基于 Kubernetes 的部署以及一些常见的 SigNoz 用途。
### SigNoz 架构
SigNoz 将几个组件捆绑在一起,创建了一个可扩展的、耦合松散的系统,很容易上手使用。其中一些最重要的组件有:
* OpenTelemetry Collector
* Apache Kafka
* Apache Druid
[OpenTelemetry Collector][2] 是跟踪和度量数据的收集引擎。这使得 SigNoz 能够以行业标准格式获取数据,包括 Jaeger、Zipkin 和 OpenCensus。之后收集的数据被转发到 Apache Kafka。
SigNoz 使用 Kafka 和流处理器来实时获取大量的可观测数据。然后,这些数据被传递到 Apache Druid它擅长于存储这些数据用于短期和长期的 SQL 分析。
当数据被扁平化并存储在 Druid 中SigNoz 的查询服务可以查询并将数据传递给 SigNoz React 前端。然后,前端为用户创建漂亮的图表,使可观察性数据可视化。
![SigNoz architecture][3]
### 安装 SigNoz
SigNoz 的组件包括 Apache Kafka 和 Druid。这些组件是松散耦合的并协同工作以确保终端用户的无缝体验。鉴于这些组件最好将 SigNoz 作为微服务组合运行在 Kubernetes 或 Docker Compose用于本地测试之上。
这个例子使用基于 Kubernetes Helm Chart 的部署在 Kubernetes 上安装 SigNoz。作为先决条件你需要一个 Kubernetes 集群。如果你没有可用的 Kubernetes 集群,你可以使用 [MiniKube][5] 或 [Kind][6] 等工具,在你的本地机器上创建一个测试集群。注意,这台机器至少要有 4GB 的可用内存才能工作。
当你有了可用的集群,并配置了 kubectl 来与集群通信,运行:
```
$ git clone https://github.com/SigNoz/signoz.git && cd signoz
$ helm dependency update deploy/kubernetes/platform
$ kubectl create ns platform
$ helm -n platform install signoz deploy/kubernetes/platform
$ kubectl -n platform apply -Rf deploy/kubernetes/jobs
$ kubectl -n platform apply -f deploy/kubernetes/otel-collector
```
这将在集群上安装 SigNoz 和相关容器。要访问用户界面 UI运行 `kubectl port-forward` 命令。例如:
```
$ kubectl -n platform port-forward svc/signoz-frontend 3000:3000
```
现在你应该能够使用本地浏览器访问你的 SigNoz 仪表板,地址为 `http://localhost:3000`
现在你的可观察性平台已经建立起来了,你需要一个能产生可观察性数据的应用来进行可视化和追踪。对于这个例子,你可以使用 [HotROD][7],一个由 Jaegar 团队开发的示例应用。
要安装它,请运行:
```
$ kubectl create ns sample-application
$ kubectl -n sample-application apply -Rf sample-apps/hotrod/
```
### 探索功能
现在你应该有一个已经安装合适仪表的应用,并可在演示设置中运行。看看 SigNoz 仪表盘上的指标和跟踪数据。当你登录到仪表盘的主页时,你会看到一个所有已配置的应用列表,这些应用正在向 SigNoz 发送仪表数据。
![SigNoz dashboard][8]
#### 指标
当你点击一个特定的应用时,你会登录到该应用的主页上。指标页面显示最近 15 分钟的信息(这个数字是可配置的),如应用的延迟、平均吞吐量、错误率和应用目前访问最高的接口。这让你对应用的状态有一个大概了解。任何错误、延迟或负载的峰值都可以立即看到。
![Metrics in SigNoz][9]
#### 追踪
追踪页面按时间顺序列出了每个请求的高层细节。当你发现一个感兴趣的请求(例如,比预期时间长的东西),你可以点击追踪,查看该请求中发生的每个行为的单独时间跨度。下探模式提供了对每个请求的彻底检查。
![Tracing in SigNoz][10]
![Tracing in SigNoz][11]
#### 用量资源管理器
大多数指标和跟踪数据都非常有用,但只在一定时期内有用。随着时间的推移,数据在大多数情况下不再有用。这意味着为数据计划一个适当的保留时间是很重要的。否则,你将为存储支付更多的费用。用量资源管理器提供了每小时、每一天和每一周获取数据的概况。
![SigNoz Usage Explorer][12]
### 添加仪表
到目前为止,你一直在看 HotROD 应用的指标和追踪。理想情况下,你会希望对你的应用进行检测,以便它向 SigNoz 发送可观察数据。参考 SigNoz 网站上的[仪表概览][13]。
SigNoz 支持一个与供应商无关的仪表库OpenTelemetry作为配置仪表的主要方式。OpenTelemetry 提供了各种语言的仪表库,支持自动和手动仪表。
### 了解更多
SigNoz 帮助开发者快速开始度量和跟踪应用。要了解更多,你可以查阅 [文档][14],加入[社区][15],并访问 [GitHub][16] 上的源代码。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/observability-apache-kafka-signoz
作者:[Nitish Tiwari][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tiwarinitish86
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://github.com/open-telemetry/opentelemetry-collector
[3]: https://opensource.com/sites/default/files/uploads/signoz_architecture.png (SigNoz architecture)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://minikube.sigs.k8s.io/docs/start/
[6]: https://kind.sigs.k8s.io/docs/user/quick-start/
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
[8]: https://opensource.com/sites/default/files/uploads/signoz_dashboard.png (SigNoz dashboard)
[9]: https://opensource.com/sites/default/files/uploads/signoz_applicationmetrics.png (Metrics in SigNoz)
[10]: https://opensource.com/sites/default/files/uploads/signoz_tracing.png (Tracing in SigNoz)
[11]: https://opensource.com/sites/default/files/uploads/signoz_tracing2.png (Tracing in SigNoz)
[12]: https://opensource.com/sites/default/files/uploads/signoz_usageexplorer.png (SigNoz Usage Explorer)
[13]: https://signoz.io/docs/instrumentation/overview/
[14]: https://signoz.io/docs/
[15]: https://github.com/SigNoz/signoz#community
[16]: https://github.com/SigNoz/signoz


@@ -0,0 +1,107 @@
[#]: subject: (Whats New in Ubuntu MATE 21.04)
[#]: via: (https://news.itsfoss.com/ubuntu-mate-21-04-release/)
[#]: author: (Asesh Basu https://news.itsfoss.com/author/asesh/)
[#]: collector: (lujun9972)
[#]: translator: (Kevin3599)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13349-1.html)
Ubuntu MATE 21.04 更新,多项新功能来袭
======
> 与 Yaru 团队合作Ubuntu MATE 带来了一个主题大修、一系列有趣的功能和性能改进。
![](https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/04/ubuntu-21-04-mate-release.png?w=1200&ssl=1)
自从 18.10 发行版以来Yaru 一直都是 Ubuntu 的默认用户界面主题。今年Yaru 团队与 Canonical Design 和 Ubuntu 桌面团队携手合作,为 Ubuntu MATE 21.04 创建了新的外观界面。
### Ubuntu MATE 21.04 有什么新变化?
以下就是 Ubuntu MATE 21.04 此次发布中的关键变化:
#### MATE 桌面
此次更新的 MATE 桌面相比以往并没有较大改动,只是修复了 BUG同时更新了语言翻译。Debian 中的 MATE 软件包已经更新,用户可以获得所有的 BUG 修复和更新。
#### Ayatana 指示器
![][1]
这是一套控制面板指示器(也称为系统托盘)的动作、布局和行为的系统。现在,你可以从控制中心更改 Ayatana 指示器的设置。
添加了一个新的打印机指示器,并删除了 Redshift 以保持稳定。
#### Yaru MATE 主题
Yaru MATE 现在是 Yaru 主题的派生产品。Yaru MATE 将提供浅色和深色主题,并以浅色作为默认主题,以确保更好的应用程序兼容性。
从现在开始,用户可以使用 GTK 2.x、3.x、4.x 浅色和深色主题,也可以使用 Suru 图标以及一些新的图标。
LibreOffice 在 MATE 上会有新的默认桌面图标,字体对比度也得到了改善。你会发现阅读小字体文本或远距离阅读更加容易。
如果在系统层面选择了深色模式,网站将维持深色。要让网站和系统的其它部分一起使用深色主题,只需启用 Yaru MATE 深色主题即可。
现在Marco、Metacity 和 Compiz 的管理器主题使用了矢量图标。这意味着,如果你的屏幕较大,图标也不会看起来像像素画。又是一个贴心的小细节!
#### Yaru MATE Snap 包
尽管你现在无法安装 MATE 主题但是不要着急它很快就可以了。gtk-theme-yaru-mate 和 icon-theme-yaru-mate Snap 包是预安装的,可以在需要将主题连接到兼容的 Snap 软件包时使用。
根据官方发布的公告Snapd 很快就会自动将你的主题连接到兼容的 Snap 包:
> Snapd 很快就能自动安装与你当前活动主题相匹配的主题的 snap 包。我们创建的 snap 包已经准备好在该功能可用时与之整合。
#### Mutiny 布局的新变化
![应用了深色主题的 Mutiny 布局][2]
Mutiny 布局模仿了 Unity 的桌面布局。删除了 MATE 软件坞小应用,并且对 Mutiny 布局进行了优化以使用 Plank。Plank 会被系统自动应用主题。这是通过 Mate Tweak 切换到 Mutiny 布局完成的。Plank 的深色和浅色 Yaru 主题都包含在内。
其他调整和更新使得 Mutiny 在不改变整体风格的前提下具备了更高的可靠性。
#### 主要应用升级
* Firefox 87火狐浏览器
* LibreOffice 7.1.2.2(办公软件)
* Evolution 3.40(邮件)
* Celluloid 0.20(视频播放器)
#### 其他更改
* Linux 命令的忠实用户会喜欢在 Ubuntu MATE 中默认安装的 `neofetch`、`htop` 和 `inxi` 之类的命令。
* 树莓派的 21.04 版本很快将会发布。
* Ubuntu MATE 上没有离线升级选项。
* 针对侧边和底部软件坞引入了新的 Plank 主题,使其与 Yaru MATE 的配色方案相匹配。
* Yaru MATE 的窗口管理器为侧边平铺的窗口应用了简洁的边缘风格。
* Ubuntu MATE 欢迎窗口有多种色彩可供选择。
* Yaru MATE 主题和图标主题的快照包已在 Snap Store 中发布。
* 为 Ubuntu MATE 20.04 LTS 的用户发布了 Yaru MATE PPA。
### 下载 Ubuntu MATE 21.04
你可以从官网上下载镜像:
- [Ubuntu MATE 21.04][3]
如果你对此感兴趣,[请查看发行说明][4]。
你对尝试新的 Yaru MATE 感到兴奋吗?你觉得怎么样?请在下面的评论中告诉我们。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-mate-21-04-release/
作者:[Asesh Basu][a]
选题:[lujun9972][b]
译者:[Kevin3599](https://github.com/Kevin3599)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/asesh/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/04/yaru-mate-mutiny-dark.jpg?resize=1568%2C882&ssl=1
[2]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/04/yaru-mate-mutiny-dark.jpg?resize=1568%2C882&ssl=1
[3]: https://ubuntu-mate.org/download/
[4]: https://discourse.ubuntu.com/t/hirsute-hippo-release-notes/19221


@ -0,0 +1,74 @@
[#]: subject: (Making computers more accessible and sustainable with Linux)
[#]: via: (https://opensource.com/article/21/4/linux-free-geek)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13362-1.html)
用 Linux 使计算机更容易使用和可持续
======
> Free Geek 是一个非营利组织,通过向有需要的人和团体提供 Linux 电脑,帮助减少数字鸿沟。
![](https://img.linux.net.cn/data/attachment/album/202105/05/135048extplppp7miznpdp.jpg)
有很多理由选择 Linux 作为你的桌面操作系统。在 [为什么每个人都应该选择 Linux][2] 中Seth Kenlon 强调了许多选择 Linux 的最佳理由,并为人们提供了许多开始使用该操作系统的方法。
这也让我想到了我通常向人们介绍 Linux 的方式。这场大流行增加了人们上网购物、远程教育以及与家人和朋友 [通过视频会议][3] 联系的需求。
我和很多有固定收入的退休人员一起工作,他们并不特别精通技术。对于这些人中的大多数人来说,购买电脑是一项充满担忧的大投资。我的一些朋友和客户对在大流行期间去零售店感到不舒服,而且他们完全不熟悉如何买电脑,无论是台式机还是笔记本电脑,即使在非大流行时期。他们来找我,询问在哪里买,要注意些什么。
我总是想看到他们得到一台 Linux 电脑。他们中的许多人买不起名牌供应商出售的 Linux 设备。直到最近,我一直在为他们购买翻新的设备,然后用 Linux 改装它们。
但是,当我发现 [Free Geek][4] 时,这一切都改变了,这是一个位于俄勒冈州波特兰的非营利组织,它的使命是“可持续地重复使用技术,实现数字访问,并提供教育,以创建一个使人们能够实现其潜力的社区。”
Free Geek 有一个 eBay 商店,我在那里以可承受的价格购买了几台翻新的笔记本电脑。他们的电脑都安装了 [Linux Mint][5]。 事实上,电脑可以立即使用,这使得向 [新用户介绍 Linux][6] 很容易,并帮助他们快速体验操作系统的力量。
### 让电脑继续使用,远离垃圾填埋场
Oso Martin 在 2000 年地球日发起了 Free Geek。该组织为其志愿者提供课程和工作计划对他们进行翻新和重建捐赠电脑的培训。志愿者们在服务 24 小时后还会收到一台捐赠的电脑。
这些电脑在波特兰的 Free Geek 实体店和 [网上][7] 出售。该组织还通过其项目 [Plug Into Portland][8]、[Gift a Geekbox][9] 以及[组织][10]和[社区资助][11]向有需要的人和实体提供电脑。
该组织表示,它已经“从垃圾填埋场翻新了 200 多万件物品,向非营利组织、学校、社区变革组织和个人提供了 75000 多件技术设备,并从 Free Geek 学习者那里提供了 5000 多课时”。
### 参与其中
自成立以来Free Geek 已经从 3 名员工发展到近 50 名员工,并得到了世界各地的认可。它是波特兰市的 [数字包容网络][12] 的成员。
你可以在 [Twitter][13]、[Facebook][14]、[LinkedIn][15]、[YouTube][16] 和 [Instagram][17] 上与 Free Geek 联系。你也可以订阅它的[通讯][18]。从 Free Geek 的 [商店][19] 购买物品,可以直接支持其工作,减少数字鸿沟。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/linux-free-geek
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://opensource.com/article/21/2/try-linux
[3]: https://opensource.com/article/20/8/linux-laptop-video-conferencing
[4]: https://www.freegeek.org/
[5]: https://opensource.com/article/21/4/restore-macbook-linux
[6]: https://opensource.com/article/18/12/help-non-techies
[7]: https://www.ebay.com/str/freegeekbasicsstore
[8]: https://www.freegeek.org/our-programs/plug-portland
[9]: https://www.freegeek.org/our-programs/gift-geekbox
[10]: https://www.freegeek.org/our-programs-grants/organizational-hardware-grants
[11]: https://www.freegeek.org/our-programs-grants/community-hardware-grants
[12]: https://www.portlandoregon.gov/oct/73860
[13]: https://twitter.com/freegeekpdx
[14]: https://www.facebook.com/freegeekmothership
[15]: https://www.linkedin.com/company/free-geek/
[16]: https://www.youtube.com/user/FreeGeekMothership
[17]: https://www.instagram.com/freegeekmothership/
[18]: https://app.e2ma.net/app2/audience/signup/1766417/1738557/?v=a
[19]: https://www.freegeek.org/shop


@ -0,0 +1,87 @@
[#]: subject: (3 beloved USB drive Linux distros)
[#]: via: (https://opensource.com/article/21/4/usb-drive-linux-distro)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13355-1.html)
爱了3 个受欢迎的 U 盘 Linux 发行版
======
> 开源技术人员对此深有体会。
![](https://img.linux.net.cn/data/attachment/album/202105/03/104610np5piwaavaa5qu2u.jpg)
Linux 用户几乎都会记得他们第一次发现无需实际安装,就可以用 Linux 引导计算机并在上面运行。当然,许多用户都知道可以引导计算机进入操作系统安装程序,但是 Linux 不同:它根本就不需要安装!你的计算机甚至不需要有一个硬盘。你可以通过一个 U 盘运行 Linux 几个月甚至几 _年_
自然,有几种不同的 “<ruby>临场<rt>live</rt></ruby>” Linux 发行版可供选择。我们向我们的作者们询问了他们的最爱,他们的回答如下。
### 1、Puppy Linux
“作为一名前 **Puppy Linux** 开发者,我对此的看法自然有些偏见,但 Puppy 最初吸引我的地方是:
* 它专注于第三世界国家容易获得的低端和老旧硬件。这为买不起最新的现代系统的贫困地区开放了计算能力
* 它能够在内存中运行,可以利用该能力提供一些有趣的安全优势
* 它在一个单一的 SFS 文件中处理用户文件和会话,使得备份、恢复或移动你现有的桌面/应用/文件到另一个安装中只需一个拷贝命令”
—— [JT Pennington][2]
“对我来说,一直就是 **Puppy Linux**。它启动迅速,支持旧硬件。它的 GUI 很容易就可以说服别人第一次尝试 Linux。” —— [Sachin Patil][3]
“Puppy 是真正能在任何机器上运行的临场发行版。我有一台废弃的 microATX 塔式电脑,它的光驱坏了,也没有硬盘(为了数据安全,它已经被拆掉了),而且几乎没有多少内存。我把 Puppy 插入它的 SD 卡插槽,运行了好几年。” —— [Seth Kenlon][4]
“我在使用 U 盘上的 Linux 发行版方面没有太多经验,但我会把票投给 **Puppy Linux**。它很轻巧,非常适合旧机器。” —— [Sergey Zarubin][5]
### 2、Fedora 和 Red Hat
“我最喜欢的 USB 发行版其实是 **Fedora Live USB**。它有浏览器、磁盘工具和终端仿真器,所以我可以用它来拯救机器上的数据,或者我可以浏览网页或在需要时用 ssh 进入其他机器做一些工作。所有这些都不需要在 U 盘或在使用中的机器上存储任何数据,不会在受到入侵时被泄露。” —— [Steve Morris][6]
“我曾经用过 Puppy 和 DSL。如今我有两个 U 盘:**RHEL7** 和 **RHEL8**。 这两个都被配置为完整的工作环境,能够在 UEFI 和 BIOS 上启动。当我有问题要解决而又面对随机的硬件时,在现实生活中这就是时间的救星。” —— [Steven Ellis][7]
### 3、Porteus
“不久前,我在虚拟机里安装了 Porteus 系统的每个版本。很有趣,所以有机会我会再试试它们。每当提到微型发行版的话题时,我总会想起我使用过的第一个发行版:**tomsrtbt**。它被设计成刚好可以放进一张软盘。我不知道它现在还有多大用处,但我想我应该把它也算上。” —— [Alan Formy-Duval][8]
“作为一个 Slackware 的长期用户,我很欣赏 **Porteus** 提供的 Slack 的最新版本和灵活的环境。你可以用运行在内存中的 Porteus 进行引导,这样就不需要把 U 盘连接到你的电脑上,或者你可以从驱动器上运行,这样你就可以保留你的修改。打包应用很容易,而且 Slacker 社区有很多现有的软件包。这是我唯一需要的实时发行版。” —— [Seth Kenlon][4]
### 其它Knoppix
“我已经有一段时间没有使用过 **Knoppix** 了,但我曾一度经常使用它来拯救那些被恶意软件破坏的 Windows 电脑。它最初于 2000 年 9 月发布,此后一直在持续开发。它最初是由 Linux 顾问 Klaus Knopper 开发并以他的名字命名的,被设计为临场 CD。我们用它来拯救由于恶意软件和病毒而变得无法访问的 Windows 系统上的用户文件。” —— [Don Watkins][9]
“Knoppix 对临场 Linux 影响很大,但它也是对盲人用户使用最方便的发行版之一。它的 [ADRIANE 界面][10] 被设计成可以在没有视觉显示器的情况下使用,并且可以处理任何用户可能需要从计算机上获得的所有最常见的任务。” —— [Seth Kenlon][11]
### 选择你的临场 Linux
有很多没有提到的,比如 [Slax][12](一个基于 Debian 的临场发行版)、[Tiny Core][13]、[Slitaz][14]、[Kali][15](一个以安全为重点的实用程序发行版)、[E-live][16],等等。如果你有一个空闲的 U 盘,请把 Linux 放在上面,这样你在任何时候、任何电脑上都能使用 Linux。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/usb-drive-linux-distro
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://opensource.com/users/jtpennington
[3]: https://opensource.com/users/psachin
[4]: http://opensource.com/users/seth
[5]: https://opensource.com/users/sergey-zarubin
[6]: https://opensource.com/users/smorris12
[7]: https://opensource.com/users/steven-ellis
[8]: https://opensource.com/users/alanfdoss
[9]: https://opensource.com/users/don-watkins
[10]: https://opensource.com/life/16/7/knoppix-adriane-interface
[11]: https://opensource.com/users/seth
[12]: http://slax.org
[13]: http://www.tinycorelinux.net/
[14]: http://www.slitaz.org/en/
[15]: http://kali.org
[16]: https://www.elivecd.org/


@ -0,0 +1,106 @@
[#]: subject: (Whats new in Fedora Workstation 34)
[#]: via: (https://fedoramagazine.org/whats-new-fedora-34-workstation/)
[#]: author: (Christian Fredrik Schaller https://fedoramagazine.org/author/uraeus/)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13359-1.html)
Fedora Workstation 34 中的新变化
======
![](https://img.linux.net.cn/data/attachment/album/202105/03/233735glmkkimcz8ilmcmr.jpg)
Fedora Workstation 34 是我们领先的操作系统的最新版本,这次你将获得重大改进。最重要的是,你可以从 [官方网站][2] 下载它。我听到你在问,有什么新的东西?好吧,让我们来介绍一下。
### GNOME 40
[GNOME 40][3] 是对 GNOME 桌面的一次重大更新Fedora 社区成员在其设计和实现过程中发挥了关键作用,因此你可以确信 Fedora 用户的需求被考虑在内。
当你登录到 GNOME 40 桌面时首先注意到的就是你现在会被直接带到一个重新设计的概览屏幕。你会注意到仪表盘已经移到了屏幕的底部。GNOME 40 的另一个主要变化是虚拟工作空间现在是水平摆放的,这使 GNOME 与其他大多数桌面更加一致,因此应该使新用户更容易适应 GNOME 和 Fedora。
我们还做了一些工作来改善桌面中的手势支持,用三根手指水平滑动来切换工作空间,用三根手指垂直滑动来调出概览。
![][4]
更新后的概览设计带来了一系列其他改进,包括:
* 仪表盘现在将收藏的和未收藏的运行中的应用程序分开。这使得可以清楚了解哪些应用已经被收藏,哪些未收藏。
* 窗口缩略图得到了改进,现在每个窗口上都有一个应用程序图标,以帮助识别。
* 当工作区被设置为在所有显示器上显示时,工作区切换器现在会显示在所有显示器上,而不仅仅是主显示器。
* 应用启动器的拖放功能得到了改进,可以更轻松地自定义应用程序网格的排列方式。
GNOME 40 中的变化经历了大量的用户测试,到目前为止反应非常正面,所以我们很高兴能将它们介绍给 Fedora 社区。更多信息请见 [forty.gnome.org][3] 或 [GNOME 40 发行说明][5]。
### 应用程序的改进
GNOME “天气”为这个版本进行了重新设计,具有两个视图,一个是未来 48 小时的小时预报,另一个是未来 10 天的每日预报。
新版本现在显示了更多的信息,并且更适合移动设备,因为它支持更窄的尺寸。
![][6]
其他被改进的应用程序包括“文件”、“地图”、“软件”和“设置”。更多细节请参见 [GNOME 40 发行说明][5]。
### PipeWire
PipeWire 是新的音频和视频服务器,由 Wim Taymans 创建,他也共同创建了 GStreamer 多媒体框架。到目前为止,它只被用于视频捕获,但在 Fedora Workstation 34 中,我们也开始将其用于音频,取代 PulseAudio。
PipeWire 旨在与 PulseAudio 和 Jack 兼容,因此应用程序通常应该像以前一样可以工作。我们还与 Firefox 和 Chrome 合作,确保它们能与 PipeWire 很好地配合。OBS Studio 也即将支持 PipeWire所以如果你是一个播客我们已经帮你搞定了这些。
PipeWire 在专业音频界获得了非常积极的回应。谨慎地说,从一开始就可能有一些专业音频应用不能完全工作,但我们会源源不断收到测试报告和补丁,我们将在 Fedora Workstation 34 的生命周期内使用这些报告和补丁来延续专业音频 PipeWire 的体验。
### 改进的 Wayland 支持
我们预计将在 Fedora Workstation 34 的生命周期内解决在专有的 NVIDIA 驱动之上运行 Wayland 的支持。已经支持在 NVIDIA 驱动上运行纯 Wayland 客户端。然而,当前还缺少对许多应用程序使用的 Xwayland 兼容层的支持。这就是为什么当你安装 NVIDIA 驱动时Fedora 仍然默认为 X.Org。
我们正在 [与 NVIDIA 上游合作][7],以确保 Xwayland 能在 Fedora 中使用 NVIDIA 硬件加速。
### QtGNOME 平台和 Adwaita-Qt
Jan Grulich 继续他在 QtGNOME 平台和 Adwaita-Qt 主题上的出色工作,确保 Qt 应用程序与 Fedora 工作站的良好整合。多年来,我们在 Fedora 中使用的 Adwaita 主题已经发生了演变,但随着 QtGNOME 平台和 Adwaita-Qt 在 Fedora 34 中的更新Qt 应用程序将更接近于 Fedora Workstation 34 中当前的 GTK 风格。
作为这项工作的一部分Fedora Media Writer 的外观和风格也得到了改进。
![][8]
### Toolbox
Toolbox 是我们用于创建与主机系统隔离的开发环境的出色工具,它在 Fedora 34 上有了很多改进。例如,我们在改进 Toolbox 的 CI 系统集成方面做了大量的工作,以避免在我们的环境中出现故障时导致 Toolbox 停止工作。
我们在 Toolbox 的 RHEL 集成方面投入了大量的工作,这意味着你可以很容易地在 Fedora 系统上建立一个容器化的 RHEL 环境,从而方便地为 RHEL 服务器和云实例做开发。现在在 Fedora 上创建一个 RHEL 环境就像运行 `toolbox create --distro rhel --release 8.4` 一样简单。
这给你提供了一个最新桌面的优势:支持最新硬件,同时能够以一种完全原生的方式进行针对 RHEL 的开发。
![][9]
### Btrfs
自 Fedora 33 以来Fedora Workstation 一直使用 Btrfs 作为其默认文件系统。Btrfs 是一个现代文件系统由许多公司和项目开发。Workstation 采用 Btrfs 是通过 Facebook 和 Fedora 社区之间的奇妙合作实现的。根据到目前为止的用户反馈,人们觉得与旧的 ext4 文件系统相比Btrfs 提供了更快捷、更灵敏的体验。
在 Fedora 34 中,新安装的 Workstation 系统现在默认使用 Btrfs 透明压缩。与未压缩的 Btrfs 相比,这可以显著节省磁盘空间20% 到 40%。它还能延长 SSD 和其他闪存介质的寿命。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/whats-new-fedora-34-workstation/
作者:[Christian Fredrik Schaller][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/uraeus/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/f34-workstation-816x345.jpg
[2]: https://getfedora.org/workstation
[3]: https://forty.gnome.org/
[4]: https://lh3.googleusercontent.com/xDklMWAGBWvRGRp2kby-XKr6b0Jvan8Obmn11sfmkKnsnXizKePYV9aWdEgyxmJetcvwMifYRUm6TcPRCH9szZfZOE9pCpv2bkjQhnq2II05Yu6o_DjEBmqTlRUGvvUyMN_VRtq8zkk2J7GUmA
[5]: https://help.gnome.org/misc/release-notes/40.0/
[6]: https://lh6.googleusercontent.com/pQ3IIAvJDYrdfXoTUnrOcCQBjtpXqd_5Rmbo4xwxIj2qMCXt7ZxJEQ12OoV7yUSF8zpVR0VFXkMP0M8UK1nLbU7jhgQPJAHPayzjAscQmTtqqGsohyzth6-xFDjUXogmeFmcP-yR9GWXfXv-yw
[7]: https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/587
[8]: https://lh6.googleusercontent.com/PDXxFS7SBFGI-3jRtR-TmqupvJRxy_CbWTfjB4sc1CKyO1myXkqfpg4jGHQJRK2e1vUh1KD_jyBsy8TURwCIkgAJcETCOlSPFBabqB5yDeWj3cvygOOQVe3X0tLFjuOz3e-ZX6owNZJSqIEHOQ
[9]: https://lh6.googleusercontent.com/dVRCL14LGE9WpmdiH3nI97OW2C1TkiZqREvBlHClNKdVcYvR1nZpZgWfup_GP5SN17iQtSJf59FxX2GYqoajXbdXLRfOwAREn7gVJ1fa_bspmcTZ81zkUQC4tNUx3f7D7uD7Peeg2Zc9Kldpww


@ -0,0 +1,92 @@
[#]: subject: (Keep multiple Linux distros on a USB with this open source tool)
[#]: via: (https://opensource.com/article/21/5/linux-ventoy)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13361-1.html)
神器:在一个 U 盘上放入多个 Linux 发行版
======
> 用 Ventoy 创建多启动 U 盘,你将永远不会缺少自己喜欢的 Linux 发行版。
![](https://img.linux.net.cn/data/attachment/album/202105/05/131432p5q7hh5cm7a8ffsd.jpg)
给朋友和邻居一个可启动 U 盘,里面包含你最喜欢的 Linux 发行版,是向 Linux 新手介绍我们都喜欢的 Linux 体验的好方法。仍然有许多人从未听说过 Linux把你喜欢的发行版放在一个可启动的 U 盘上是让他们进入 Linux 世界的好办法。
几年前,我在给一群中学生教授计算机入门课。我们使用旧笔记本电脑,我向学生们介绍了 Fedora、Ubuntu 和 Pop!_OS。下课后我给每个学生一份他们喜欢的发行版的副本让他们带回家安装在自己选择的电脑上。他们渴望在家里尝试他们的新技能。
### 把多个发行版放在一个驱动器上
最近,一个朋友向我介绍了 Ventoy根据其 [GitHub 仓库][2])是 “一个开源工具,可以为 ISO/WIM/IMG/VHD(x)/EFI 文件创建可启动的 USB 驱动器”。与其为每个我想分享的 Linux 发行版创建单独的驱动器,我可以在一个 U 盘上放入我喜欢的 _所有_ Linux 发行版!
![USB 空间][3]
正如你所能想到的那样U 盘的大小决定了你能在上面容纳多少个发行版。在一个 16GB 的 U 盘上,我放置了 Elementary 5.1、Linux Mint Cinnamon 5.1 和 Linux Mint XFCE 5.1……但仍然有 9.9GB 的空间。
### 获取 Ventoy
Ventoy 是开源的,采用 [GPLv3][5] 许可证,可用于 Windows 和 Linux。有很好的文档介绍了如何在 Windows 上下载和安装 Ventoy。Linux 的安装是通过命令行进行的,所以如果你不熟悉这个过程,可能会有点混乱。然而,其实很容易。
首先,[下载 Ventoy][6]。我把存档文件下载到我的桌面上。
接下来,使用 `tar` 命令解压 `ventoy-x.y.z-linux.tar.gz` 档案(但要用你下载的版本号替换 `x.y.z`)(为了保持简单,我在命令中使用 `*` 字符作为任意通配符):
```
$ tar -xvf ventoy*z
```
这个命令将所有必要的文件提取到我桌面上一个名为 `ventoy-x.y.z` 的文件夹中。
你也可以使用你的 Linux 发行版的存档管理器来完成同样的任务。下载和提取完成后,你就可以把 Ventoy 安装到你的 U 盘上了。
### 在 U 盘上安装 Ventoy 和 Linux
把你的 U 盘插入你的电脑。改变目录进入 Ventoy 的文件夹,并寻找一个名为 `Ventoy2Disk.sh` 的 shell 脚本。你需要确定你的 U 盘的正确挂载点,以便这个脚本能够正常工作。你可以通过在命令行上发出 `mount` 命令或者使用 [GNOME 磁盘][7] 来找到它,后者提供了一个图形界面。后者显示我的 U 盘被挂载在 `/dev/sda`。在你的电脑上,这个位置可能是 `/dev/sdb``/dev/sdc` 或类似的位置。
![GNOME 磁盘中的 USB 挂载点][8]
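如果不想打开图形工具,也可以在命令行用 `lsblk` 粗筛 USB 设备(以下只是一个假设性的示意,设备名和输出以你的系统为准):

```shell
# 示意lsblk 的 TRAN 列给出设备所走的总线,值为 usb 的通常就是 U 盘。
# 这里先用一段示例输出演示筛选逻辑;实际使用时把 printf 一行换成
# `lsblk -d -n -o NAME,SIZE,TRAN` 即可。
printf 'sda 465G sata\nsdb 14.9G usb\n' |
  awk '$3 == "usb" { print "/dev/" $1 }'
# 输出:/dev/sdb
```

确认候选设备后,再用容量、型号等信息与你手里的 U 盘对照一遍。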
下一步是执行 Ventoy shell 脚本。因为它被设计成不加选择地复制数据到一个驱动器上,我使用了一个假的位置(`/dev/sdX`)来防止你复制/粘贴错误,所以用你想覆盖的实际驱动器的字母替换后面的 `X`
**让我重申**:这个 shell 脚本的目的是把数据复制到一个驱动器上, _破坏该驱动器上的所有数据。_ 如果该驱动器上有你关心的数据,在尝试这个方法之前,先把它备份! 如果你不确定你的驱动器的位置,在你继续进行之前,请验证它,直到你完全确定为止。
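为避免把设备字母填错,也可以在运行安装脚本前先做一个形式上的检查(以下只是一个假设性的小函数,不能替代你自己对设备内容的确认):

```shell
# 示意:确认参数形如整盘设备名 /dev/sdX拒绝分区如 /dev/sdb1
# 或其它路径,减少把错误参数传给安装脚本的机会。
# 注意:此模式未覆盖 /dev/nvme0n1 等其它命名方式,仅作演示。
is_whole_disk() {
  case "$1" in
    /dev/sd[a-z]) return 0 ;;
    *)            return 1 ;;
  esac
}

is_whole_disk /dev/sdb  && echo "看起来是整盘设备"
is_whole_disk /dev/sdb1 || echo "不是整盘设备,请再检查"
```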
一旦你确定了你的驱动器的位置,就运行这个脚本:
```
$ sudo sh Ventoy2Disk.sh -i /dev/sdX
```
这样就可以格式化它并将 Ventoy 安装到你的 U 盘上。现在你可以复制和粘贴所有适合放在 U 盘上的 Linux 发行版文件。如果你在电脑上用新创建的 U 盘引导,你会看到一个菜单,上面有你复制到 U 盘上的发行版。
![Ventoy 中的 Linux 发行版][9]
### 构建一个便携式的动力源
Ventoy 是你在钥匙串上携带多启动 U 盘的关键(钥匙),这样你就永远不会缺少你所依赖的发行版。你可以拥有一个全功能的桌面、一个轻量级的发行版、一个纯控制台的维护工具,以及其他你想要的东西。
我从来没有在没有 Linux 发行版的情况下离开家,你也不应该。拿上 Ventoy、一个 U 盘,和一串 ISO。你不会后悔的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/5/linux-ventoy
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/markus-winkler-usb-unsplash.jpg?itok=5ZXDp0V4 (USB drive)
[2]: https://github.com/ventoy/Ventoy
[3]: https://opensource.com/sites/default/files/uploads/ventoy1.png (USB space)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://www.ventoy.net/en/doc_license.html
[6]: https://github.com/ventoy/Ventoy/releases
[7]: https://wiki.gnome.org/Apps/Disks
[8]: https://opensource.com/sites/default/files/uploads/usb-mountpoint.png (USB mount point in GNOME Disks)
[9]: https://opensource.com/sites/default/files/uploads/ventoy_distros.jpg (Linux distros in Ventoy)


@ -0,0 +1,169 @@
[#]: subject: (KDE Announces Various App Upgrades With Cutting-Edge Features)
[#]: via: (https://news.itsfoss.com/kde-gear-app-release/)
[#]: author: (Jacob Crume https://news.itsfoss.com/author/jacob/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
KDE Announces Various App Upgrades With Cutting-Edge Features
======
Alongside their Plasma Desktop Environment, KDE develops a huge range of other apps collectively named KDE Gear. These range from content creation apps such as **Kdenlive** and **Kwave** to utilities such as Dolphin, Discover, and Index.
KDE Gear is something new. It includes heaps of improvements to almost all the KDE apps, which we will be exploring here.
### What Is KDE Gear?
![][1]
For many people, this name will sound unfamiliar. This is because [KDE Gear][2] is the new name for the [KDE Applications][3]. Previously, they were released individually. The new name aims to unify their marketing and provide greater clarity to users.
According to **KDE developer Jonathan Riddell**:
> KDE Gear is the new name for the app (and libraries and plugins) bundle of projects that want the release faff taken off their hands… It was once called just KDE, then KDE SC, then KDE Applications, then the unbranded release service, and now we're branding it again as KDE Gear.
This rebrand makes sense, especially as the KDE logo itself is pretty much a glorified gear.
### Major KDE App Upgrades
KDE Gear contains many applications, each with its purpose. Here, we will be looking at a few of the key highlights. These include:
* Kdenlive
* Dolphin
* Elisa
* Index
We have also covered the new [Kate editor release challenging Microsoft's Visual Studio Code][4] separately, if you are curious.
#### Kdenlive
![][5]
KDE's video editor has improved massively over the past few years, with heaps of new features added in this release. These include:
* Online Resources tool
* Speech-To-Text
* New AV1 support
The Online resources tool is a fairly recent addition. The main purpose of this tool is to download free stock footage for use in your videos.
The Speech-To-Text tool is a nifty little tool that will automatically create subtitles for you, with surprising accuracy. It is also effortless to use and can be launched in just three clicks.
Finally, we get to see the main new feature in the 21.04 release: AV1 codec support. This is a relatively new video format with features such as higher compression and a royalty-free license.
#### Dolphin
![][5]
Dolphin, the file manager for Plasma 5, is one of the most advanced file managers in existence. Some of its notable features include a built-in terminal emulator and file previews.
With this release, there are a multitude of new features, including the ability to:
* Decompress multiple files at once
* Open a folder in a new tab by holding the control key
* Modify the options in the context menu
While minor, these new features are sure to make using Dolphin an even smoother experience.
#### Elisa
![][6]
Elisa is one of the most exciting additions to KDE Gear. For those who don't know about it yet, Elisa is a new music player based on [Kirigami][7]. The result of this is an app capable of running on both desktop and mobile.
With this release, the list of features offered by this application has grown quite a bit longer. Some of these new features include:
* Support for AAC audio files
* Support for .m3u8 playlists
* Reduced memory usage
As always, the inclusion of support for more formats is welcome. As the KDE release announcement says:
> But [the new features] don't mean Elisa has become clunkier. Quite the contrary: the new version released with KDE Gear today actually consumes less memory when you scroll around the app, making it snappy and a joy to use.
This app gets better with each release and is becoming one of my favorite apps for Linux. At the rate it is improving, we can expect Elisa to become one of the best music players in existence.
#### Index
Index is the file manager for Plasma Mobile. Based on Kirigami technologies, it adapts to both mobile and desktop screens well.
Alongside this convergence advantage, it has almost reached feature parity with Dolphin, making it a viable alternative on the desktop as well. Because it is constantly being updated with new features and is an evolving application, there isn't a set list of new features.
If you want to check out its latest version, feel free to [download it from the project website.][8]
### Other App Updates
![][5]
In addition to the above-mentioned app upgrades, you will also find significant improvements for **Okular**, **KMail**, and other KDE applications.
To learn more about the app updates, you can check out the [official announcement page][9].
### Wrapping Up
The new KDE Gear 21.04 release includes a wide range of new features and updates all the KDE apps. These promise better performance, usability, and compatibility.
I am really excited about Elisa and Index, especially as they make use of Kirigami.
_What do you think about the latest KDE app updates? Let me know your thoughts in the comments below!_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/kde-gear-app-release/
作者:[Jacob Crume][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lujun9972
[1]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzMxOCcgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[2]: https://kde.org/announcements/gear/21.04/
[3]: https://apps.kde.org/
[4]: https://news.itsfoss.com/kate/
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzQzOScgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzczMCcgd2lkdGg9JzYwMCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[7]: https://develop.kde.org/frameworks/kirigami/
[8]: https://download.kde.org/stable/maui/index/1.2.1/index-v1.2.1-amd64.AppImage
[9]: https://kde.org/announcements/releases/2020-04-apps-update/
[10]: https://news.itsfoss.com/linux-release-roundup-2021-17/
[11]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzIwMCcgd2lkdGg9JzM1MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[12]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2020/12/Linux-release-roundups.png?fit=800%2C450&ssl=1&resize=350%2C200
[13]: https://news.itsfoss.com/kde-plasma-5-21-release/
[14]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/02/kde-plasma-5-21-feat.png?fit=1200%2C675&ssl=1&resize=350%2C200
[15]: https://news.itsfoss.com/sparkylinux-2021-03-release/
[16]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/03/sparky-linux-feat.png?fit=1200%2C675&ssl=1&resize=350%2C200


@ -0,0 +1,146 @@
[#]: subject: (Next Mainline Linux Kernel 5.12 Released with Essential Improvements)
[#]: via: (https://news.itsfoss.com/linux-kernel-5-12-release/)
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Next Mainline Linux Kernel 5.12 Released with Essential Improvements
======
[Linux Kernel 5.11][1] was an impressive release with support for new hardware that's probably out of stock till the end of 2022.
Now, after almost two months of work and a week of delay for an eighth release candidate, Linux Kernel 5.12 is here.
The improvements span across many things that include processor support, laptop support, new hardware support, storage enhancements, and a few more essential driver additions.
Here, I will highlight the key changes with this release to give you an overview.
### Linux Kernel 5.12: Essential Improvements & Additions
Linux Kernel 5.12 is a neat release with many essential additions. Also, it is worth noting that Linux [5.13 would be the first Linux Kernel to add initial support for Apple M1 devices][2] if you were expecting it here.
With the [release announcement][3], Linus Torvalds mentioned:
> Thanks to everybody who made last week very calm indeed, which just makes me feel much happier about the final 5.12 release.
>
> Both the shortlog and the diffstat are absolutely tiny, and it's mainly just a random collection of small fixes in various areas: arm64 devicetree files, some x86 perf event fixes (and a couple of tooling ones), various minor driver fixes (amd and i915 gpu fixes stand out, but honestly, that's not because they are big, but because the rest is even smaller), a couple of small reverts, and a few locking fixes (one kvm serialization fix, one memory ordering fix for rwlocks).
Let us take a look at what's new overall.
#### Official PlayStation 5 Controller Driver
Sony's open-source driver for its controllers was pushed back last cycle, but it has been included in the Linux 5.12 kernel.
This is not just a one-time open-source driver addition; Sony has committed to maintaining it as well.
So, if you were looking to use Sony's DualSense PlayStation 5 controller, now would be a good time to test it out.
#### AMD FreeSync HDMI Support
While AMD has been keeping up with good improvements to its Linux graphics drivers, there was no [FreeSync][4] support over the HDMI port.
With Linux Kernel 5.12, a patch has been merged to the driver that enables FreeSync support on HDMI ports.
#### Intel Adaptive-Sync for Xe Graphics
Intel's 12th-gen Xe Graphics is an exciting improvement for many users. Now, with Linux Kernel 5.12, adaptive sync (variable refresh rate) support is added for connections over DisplayPort.
Of course, considering that AMD has managed to add FreeSync support over HDMI, Intel is probably working on the same for the next Linux kernel release.
#### Nintendo 64 Support
Nintendo 64 is a popular but very [old home video game console][5]. For this reason, it might be totally dropped as an obsolete platform but it is good to see the added support (for those few users out there) in Linux Kernel 5.12.
#### OverDrive Overclocking for Radeon 4000 Series
Overclocking support for AMD's latest GPUs was not yet available through the command-line based OverDrive utility.
Even though OverDrive has been officially discontinued, AMD offers no GUI-based utility for Linux, so this addition should help in the meantime.
#### Open-Source Nvidia Driver Support for Ampere Cards
The open-source Nvidia [Nouveau][6] driver introduces improved support for Ampere-based cards in Linux Kernel 5.12, a step up from the Linux Kernel 5.11 improvements.
With the upcoming Linux Kernel 5.13, you should start seeing 3D acceleration support as well.
#### Improvements to exFAT Filesystem
There have been significant optimizations to the [exFAT filesystem][7] that should let you delete big files much faster.
#### Intel's Open-Source Driver to Display Laptop Hinge/Keyboard Angle
If you have a modern Intel laptop, you are in luck. Intel has contributed another open-source driver to help display the laptop hinge angle in reference to the ground.
Maybe you are someone who's writing a script to do something on your laptop when the hinge reaches a certain angle, or who knows what else? Tinkerers will benefit most from this addition by harnessing information they did not have before.
### Other Improvements
In addition to the key additions I mentioned above, there are numerous other improvements that include:
* Improved battery reporting for Logitech peripherals
* Improved Microsoft Surface laptop support
* Snapdragon 888 support
* Getting rid of obsolete ARM platforms
* Networking improvements
* Security improvements
You might want to check out the [full changelog][8] to know all the technical details.
If you think Linux 5.12 could be a useful upgrade for you, I'd suggest you wait for your Linux distribution to push an update or make it available for you to select as your Linux kernel from the repository.
It is also directly available in [The Linux Kernel Archives][9] as a tarball if you want to compile it from source.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/linux-kernel-5-12-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/linux-kernel-5-11-release/
[2]: https://news.itsfoss.com/linux-kernel-5-13-apple-m1/
[3]: https://lore.kernel.org/lkml/CAHk-=wj3ANm8QrkC7GTAxQyXyurS0_yxMR3WwjhD9r7kTiOSTw@mail.gmail.com/
[4]: https://en.wikipedia.org/wiki/FreeSync
[5]: https://en.wikipedia.org/wiki/Nintendo_64
[6]: https://nouveau.freedesktop.org
[7]: https://en.wikipedia.org/wiki/ExFAT
[8]: https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.12
[9]: https://www.kernel.org/
[10]: https://news.itsfoss.com/linux-release-roundup-2021-14/
[11]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzIwMCcgd2lkdGg9JzM1MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[12]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2020/12/Linux-release-roundups.png?fit=800%2C450&ssl=1&resize=350%2C200
[13]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/02/linux-kernel-5-11-release.png?fit=1200%2C675&ssl=1&resize=350%2C200
[14]: https://news.itsfoss.com/nitrux-1-3-8-release/
[15]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/03/nitrux-1-3-8.png?fit=1200%2C675&ssl=1&resize=350%2C200


@ -0,0 +1,90 @@
[#]: subject: (CloudLinux Announces Commercial Support for its CentOS Alternative AlmaLinux OS)
[#]: via: (https://news.itsfoss.com/almalinux-commercial-support/)
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
CloudLinux Announces Commercial Support for its CentOS Alternative AlmaLinux OS
======
CentOS alternative [AlmaLinux][1] announced the availability of their [first stable release][2] a month back.
If you are planning to replace your CentOS deployments or have already started to utilize AlmaLinux OS, you will be happy to know that you are about to get commercial support and premium support soon.
CloudLinux, the sponsor of the project, announced that it will start providing multiple support options next month.
### More About the Support Options
According to the press release, they aim to offer reasonable pricing for the support tiers:
> “Support services for AlmaLinux OS from CloudLinux provides both the highest quality support from the OS sponsor along with the benefits of an independent technology partnership,” said Jim Jackson, president and chief revenue officer, CloudLinux. “Reasonably priced and flexible support services keep systems running on AlmaLinux OS continuously updated and secure for production workloads.”
They also clarify that the support tiers will include update delivery commitments and 24/7 incident response services.
This means that you will be getting regular patches and updates for the Linux kernel and core packages, patch delivery service-level agreements (SLAs), and 24/7 incident support.
For any business or enterprise looking for a [CentOS alternative][3], this should be the perfect incentive to start replacing CentOS on their servers.
In addition to the plans for the next month, they also plan to offer a premium support option for enterprise use-cases and more:
> CloudLinux is also planning to introduce a premium support tier for enterprises that require enhanced services, as well as Product NodeOS Support for AlmaLinux OS, explicitly tailored to the needs of vendors and OEMs that are planning to use AlmaLinux as a node OS underlying their commercial products and services.
This is definitely exciting and should grab the attention of OEMs and businesses looking for a CentOS alternative with long-term support until at least 2029.
They also added what the community manager of AlmaLinux OS thinks about it going forward:
> “Since launch, we've received tremendous interest and support from both the community as well as many commercial vendors, many of whom have begun using AlmaLinux OS for some pretty amazing use cases,” said Jack Aboutboul, community manager of AlmaLinux. “Our thriving community has supported each other since day one which led to rapid adoption amongst organizations and requests for commercial support.”
The support service options should start rolling out in **May 2021** (next month). If you want to know more about it before the release, or about how you can use it for your AlmaLinux OS deployments, fill out the form on the [official support page][4].
[Commercial Support for AlmaLinux OS][4]
_So, what do you think about AlmaLinux OS as a CentOS alternative now with the imminent availability of commercial support? Do you have big hopes for it? Feel free to share what you think!_
#### _Related_
* [Much-Anticipated CentOS Alternative 'AlmaLinux' Beta Released for Testing][5]
* [AlmaLinux OS First Stable Release is Here to Replace CentOS][2]
* [After Rocky Linux, We Have Another RHEL Fork in Works to Replace CentOS][9]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/almalinux-commercial-support/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://almalinux.org/
[2]: https://news.itsfoss.com/almalinux-first-stable-release/
[3]: https://itsfoss.com/rhel-based-server-distributions/
[4]: https://almalinux.org/support/
[5]: https://news.itsfoss.com/almalinux-beta-released/
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzIwMCcgd2lkdGg9JzM1MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[7]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/02/almalinux-ft.jpg?fit=1200%2C675&ssl=1&resize=350%2C200
[8]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/03/almalinux-first-iso-ft.png?fit=1200%2C675&ssl=1&resize=350%2C200
[9]: https://news.itsfoss.com/rhel-fork-by-cloudlinux/
[10]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2020/12/Untitled-design-2.png?fit=800%2C450&ssl=1&resize=350%2C200


@ -0,0 +1,118 @@
[#]: subject: (Fedora 34 Releases with GNOME 40, Linux Kernel 5.11, and a New i3 Spin)
[#]: via: (https://news.itsfoss.com/fedora-34-release/)
[#]: author: (Arish V https://news.itsfoss.com/author/arish/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Fedora 34 Releases with GNOME 40, Linux Kernel 5.11, and a New i3 Spin
======
After the release of the [Fedora 34 beta][1] a week ago, the Fedora 34 stable release is finally here with exciting changes and improvements.
As expected, this release of Fedora arrives with the latest Linux kernel 5.11, along with significant changes such as [GNOME 40][2], [PipeWire][3], the availability of a [Fedora i3 Spin][4], and various other changes.
Let's take a look at the important changes coming to Fedora 34.
### Major Highlights of Fedora 34 Release
Here is an overview of the major changes in this release of Fedora.
#### Desktop Environment Updates
One of the biggest highlights is the arrival of the [GNOME 40][2] desktop. Fedora 34 is one of the few distributions in which you can experience the latest GNOME 40 right now, so this change is worth noting.
Taking a look at KDE Plasma, Wayland becomes the default display server for KDE Plasma in Fedora 34. Moreover, the KDE Plasma Desktop image is available for AArch64 ARM devices as well.
Coming to other desktop environments, the latest Xfce 4.16 is available with this release of Fedora, and LXQt has been updated to the latest version, LXQt 0.16.
#### PipeWire to Replace PulseAudio
A noteworthy change in this release of Fedora is the replacement of PulseAudio with PipeWire, which provides a PulseAudio-compatible server implementation and ABI-compatible libraries for JACK clients.
Besides, with this release, there's also a Fedora i3 Spin that provides the popular i3 tiling window manager and offers a complete experience with a minimalist user interface.
#### Zstd Compression by Default
Btrfs was made the default file system for Fedora's desktop variants in Fedora 33; with this release, the zstd algorithm becomes the default for transparent compression when using Btrfs. The developers hope this will increase the lifespan of flash-based media by reducing write amplification.
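For illustration only (this is not from the Fedora release notes): on Btrfs, transparent compression is controlled by the `compress` mount option, so a hypothetical `/etc/fstab` entry enabling the zstd default might look like this config fragment (the UUID and subvolume name are placeholders):

```shell
# /etc/fstab — example entry; device UUID and subvolume are placeholders.
# compress=zstd:1 is the zstd level Fedora 34 defaults to.
UUID=0000-EXAMPLE  /  btrfs  subvol=root,compress=zstd:1  0 0

# Equivalent one-off change on a running system:
#   sudo mount -o remount,compress=zstd:1 /
```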
#### Other Changes
Some of the other changes include the following package updates:
* Binutils 2.35
* Golang 1.16
* Ruby 3.0
* BIND 9.16
* MariaDB 10.5
* Ruby on Rails 6.1
* Stratis 2.3.0
Beyond package updates, the ntp package has been replaced with ntpsec. Also, the xorg-x11 collection packages are retired, and the individual utilities within them will be packaged separately.
If you want to see the entire list of changes in Fedora 34, please take a look at the [official announcement post][7] and the [changeset][8] for more technical details.
### Wrapping up
Most of the above changes in Fedora 34 were expected, and fortunately nothing went south after the beta release last week. Above all, Fedora 34 is powered by the latest Linux kernel 5.11, and you can experience the latest GNOME desktop as well.
_So, what do you think about these exciting additions to Fedora 34? Let me know in the comments below._
 
#### _Related_
* [Fedora 34 Beta Arrives With Awesome GNOME 40 (Unlike Ubuntu 21.04)][1]
* [Linux Release Roundup #21.13: GNOME 40, Manjaro 21.0, Fedora 34 and More New Releases][11]
* [Manjaro 21.0 Ornara Comes Packed With GNOME 3.38, KDE Plasma 5.21, Xfce 4.16 and Linux Kernel 5.10][13]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/fedora-34-release/
作者:[Arish V][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/arish/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/fedora-34-beta-release/
[2]: https://news.itsfoss.com/gnome-40-release/
[3]: https://pipewire.org/
[4]: https://spins.fedoraproject.org/i3/
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzQ2OCcgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzUzNicgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[7]: https://fedoramagazine.org/announcing-fedora-34/
[8]: https://fedoraproject.org/wiki/Releases/34/ChangeSet#i3_Spin
[9]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzIwMCcgd2lkdGg9JzM1MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[10]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/03/fedora-34-beta-ft.png?fit=1200%2C675&ssl=1&resize=350%2C200
[11]: https://news.itsfoss.com/linux-release-roundup-2021-13/
[12]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2020/12/Linux-release-roundups.png?fit=800%2C450&ssl=1&resize=350%2C200
[13]: https://news.itsfoss.com/manjaro-21-0-ornara-release/
[14]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/03/manjaro-21.png?fit=1200%2C675&ssl=1&resize=350%2C200


@ -0,0 +1,190 @@
[#]: subject: (Elementary OS 6 Beta Available Now! Here Are the Top New Features)
[#]: via: (https://news.itsfoss.com/elementary-os-6-beta/)
[#]: author: (Abhishek https://news.itsfoss.com/author/root/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Elementary OS 6 Beta Available Now! Here Are the Top New Features
======
The beta release of elementary OS 6 is here. It is available to download and test for early adopters and application developers.
Before I give you the details on the download and upgrade procedure, let's have a look at the changes this new release brings.
### New features in elementary OS 6 “Odin”
Every elementary OS release bases itself on an Ubuntu LTS release. The upcoming elementary OS 6, codenamed “Odin”, is based on the latest Ubuntu 20.04 LTS version.
elementary OS has an ecosystem of its own, so the similarities with Ubuntu technically end here. The Pantheon desktop environment gives it an entirely different look and feel from other distributions using GNOME or KDE.
In November last year, we took the early build of elementary OS 6 for a test ride. You may see it in action in the video below.
![][1]
Things have improved and more features have been added since then. Let's take a look at them.
#### Dark theme with customization options
Dark theme is not a luxury anymore. Its popularity has forced operating system and application developers to integrate dark mode features into their offerings.
elementary OS is also offering a dark theme but it has a few additional features to let you enjoy the dark side.
You can choose to automatically switch to the dark theme based on the time of the day. You can also choose an accent color to go with the dark theme.
Don't expect a flawless dark theme experience, though. Like every other operating system, it depends on the applications. Sandboxed Flatpak applications won't go dark automatically, unlike the elementary OS apps.
#### Refreshed look and feel
There are many subtle changes to give elementary OS a refreshed look and feel.
You'll notice more rounded bottom window corners. The typography has changed for the first time; it now uses the [Inter typeface][4] instead of the usual Open Sans. The default font rendering settings opt for grayscale anti-aliasing over RGB.
You can now give an accent color to your system. With that, the icons, media buttons, and so on will use the chosen accent color.
#### Multi-touch gestures
Multi-touch gestures are a rarity on Linux desktops. However, elementary OS has worked hard to bring some multi-touch gesture support. You should be able to use it for the multitasking view as well as for switching workspaces.
You can see it in action in this video.
Individual apps may also provide their own gesture support. You should be able to configure it from the settings.
The gestures will be used in some other places such as when navigating between panes and views, swiping away notifications and more.
#### New and improved installer
elementary OS 6 will also feature a brand-new installer, developed together with Linux system manufacturer System76. The elementary OS team worked on the front end, and the System76 team worked on the back end of the installer.
The new installer aims to improve the experience from both an OS and an OEM perspective.
The new installer is also planned to have the capability of creating a recovery partition (which is basically a fresh copy of the operating system). This will make reinstalling and factory-resetting elementary OS a lot easier.
#### Flatpak all the way
You could already use [Flatpak][11] applications in elementary OS 5, but installed applications were local to the user account (in its home directory).
elementary OS 6 supports installing Flatpak apps system-wide. This is part of the plan to ship applications in elementary OS as Flatpaks out of the box, which should be ready by the final stable release.
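As a sketch of what the user/system distinction looks like on the command line (the app ID is just an example; elementary's own AppCenter handles this for you):

```shell
# Add the Flathub remote system-wide, if it is not configured yet
flatpak remote-add --if-not-exists --system flathub \
    https://flathub.org/repo/flathub.flatpakrepo

# Install for all users on the machine (system-wide)...
flatpak install --system flathub org.gnome.Calculator

# ...or keep it local to your own account, as in elementary OS 5
flatpak install --user flathub org.gnome.Calculator
```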
#### Firmware updates from the system settings
elementary OS 6 will notify you of updatable firmware in the system settings. This is for hardware that is compatible with [fwupd][12]. You can download the firmware updates from the settings. Some firmware updates are installed on the next reboot.
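The same fwupd mechanism can also be driven from the terminal; a typical (hardware-dependent) session looks like this:

```shell
fwupdmgr refresh      # pull the latest firmware metadata from LVFS
fwupdmgr get-updates  # list devices that have updates available
fwupdmgr update       # download and stage them; some apply on reboot
```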
#### No Wayland
While the elementary OS 6 code has some improved support for Wayland in the department of screenshots, it won't be ditching the Xorg display server just yet. Ubuntu 20.04 LTS stuck with Xorg, and elementary OS 6 will do the same.
#### Easier feedback reporting mechanism
I think this is for the beta testers so that they can easily provide feedback on various system components and functionality. I am not sure if the feedback tool will make its way to the final stable release. However, it is good to see a dedicated, easy-to-use tool that will make it easier to get feedback from less technical or lazy people (like me).
#### Other changes
Here are some other changes in the new version of elementary OS:
* screen locking and sleep experience should be much more reliable and predictable
* improved accessibility features
* improved notifications with emoji support
* Epiphany browser becomes default
* New Task app
* Major rewrite of the Mail application
* Option to show num lock and caps lock in the panel
* Improved booting experience with OEM logo
* Improved performance on lower-clocked processors and slower storage media like SD cards
More details can be found on the [official blog of elementary OS][15].
### Download and install elementary OS 6 beta (for testing purposes)
Please note that the experimental [support for Raspberry Pi-like ARM devices][16] is on pause for now. You won't find a beta download for ARM devices.
There is no way to upgrade elementary OS 5 to the beta of version 6. Also note that if you install the elementary OS 6 beta, you will **not be able to upgrade to the final stable release**. You'll need to install it afresh.
Another thing is that some of the features I mentioned are not finished yet, so expect some bugs and hiccups. It is better to use it on a spare system or in a virtual machine.
The beta is available for testing for free and you can download the ISO from the link below:
[Download elementary OS 6 beta][17]
### When will elementary OS 6 finally release?
No one can tell, not even the elementary OS developers; they don't work with a fixed release date. It will be released when the planned features are stable. If I had to guess, I would say expect it in early July.
elementary OS 6 is one of the [most anticipated Linux distributions of 2021][18]. Are you liking the new features? How is the new look in comparison to [Zorin OS 16 beta][19]?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/elementary-os-6-beta/
作者:[Abhishek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/root/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/i.ytimg.com/vi/ciIeX9b5_A4/hqdefault.jpg?w=780&ssl=1
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzQzOScgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzU2Nycgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[4]: https://rsms.me/inter/
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzYxMycgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzY2Nycgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[7]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzU0MScgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[8]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzUzOScgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[9]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzUzNCcgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[10]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzQ5Nycgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[11]: https://itsfoss.com/what-is-flatpak/
[12]: https://fwupd.org/
[13]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzUyMScgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[14]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzQ3NScgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[15]: https://blog.elementary.io/elementary-os-6-odin-beta/
[16]: https://news.itsfoss.com/elementary-os-raspberry-pi-release/
[17]: https://builds.elementary.io/
[18]: https://news.itsfoss.com/linux-distros-for-2021/
[19]: https://news.itsfoss.com/zorin-os-16-beta/


@ -0,0 +1,101 @@
[#]: subject: (Googles FLoC is Based on the Right Idea, but With the Wrong Implementation)
[#]: via: (https://news.itsfoss.com/google-floc/)
[#]: author: (Jacob Crume https://news.itsfoss.com/author/jacob/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Google's FLoC is Based on the Right Idea, but With the Wrong Implementation
======
Cookies, the well-known web technology, have been a key tool in web developers' toolkits for years. They have given us the ability to store passwords, logins, and other essential data that allows us to use the modern web.
However, the technology has been used lately for more invasive purposes: serving creepily targeted ads.
Recently, Google claimed to have the solution to this privacy crisis with their new FLoC initiative.
### What is FLoC?
FLoC (Federated Learning of Cohorts) is a new technology that aims to solve the privacy concerns associated with cookies. Unlike the old way of using 3rd party cookies to build an advertising ID, FLoC uses data from your searches to place you into a predefined group (called a cohort) of people interested in similar topics as you.
Advertisers can then serve the same ads to the group of people that are most likely to purchase their product. Because FLoC is built into Chrome, it can collect much more data than third-party cookies. For the average consumer, this should be a huge concern.
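To build some intuition for how a browser could map raw history onto a shared cohort ID, here is a toy, non-authoritative Python sketch in the spirit of the SimHash-style scheme Chrome used; the feature set, hash function, and bit width are all simplifications, not the actual FLoC algorithm:

```python
import hashlib

def cohort_id(history, bits=16):
    """Toy SimHash: each visited domain votes on every bit of the ID,
    so similar histories tend to land on similar cohort IDs."""
    votes = [0] * bits
    for domain in history:
        # Hash each feature to a 64-bit integer
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    # A bit is set when the majority of feature hashes set it
    return sum(1 << i for i, v in enumerate(votes) if v > 0)

history = ["news.example", "linux.example", "kernel.example"]
print(f"cohort: {cohort_id(history):#06x}")
```

Every browser whose history hashes to the same ID would be shown the same ads, which is also why pairing the ID with a login becomes so identifying.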
In simple terms, if cookies were bad, then FLoC is downright evil.
### What's Wrong With FLoC?
Simply put, FLoC collects much more data than traditional cookies. This allows advertisers to serve more targeted ads, driving up sales.
Alongside the data concerns, there are also some more specific issues associated with it. These include:
* More predictability
* Much easier browser fingerprinting
* The ability to link a user with their browsing habits
Together, these issues create the privacy disaster that FLoC is, with heaps of negative impacts on the user.
#### More Predictability
With the rise of machine learning and AI, companies such as Google and Facebook have gained the ability to make shockingly accurate predictions. With the extra data they will have because of FLoC, these predictions could be taken to a whole new level.
The result of this would be a new wave of highly targeted ads and tracking. Because all your data feeds into your cohort ID, it will be much easier for companies to predict your interests and habits.
#### Browser Fingerprinting
Browser fingerprinting is the act of taking small and seemingly insignificant pieces of data to create an ID for a web browser. While no browser has managed to fully stop fingerprinting, some browsers (such as Tor Browser) have managed to limit how uniquely they can be fingerprinted, at the expense of some features.
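A minimal sketch of the idea (the attribute names and values below are made up): individually harmless attributes, hashed together, yield a stable identifier:

```python
import hashlib
import json

def fingerprint(attributes):
    """Hash a canonical JSON encoding of browser attributes into a
    short, stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attributes a site can read without permission prompts
browser = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/88.0",
    "screen": "1920x1080",
    "timezone": "Pacific/Auckland",
    "fonts": ["DejaVu Sans", "Liberation Mono"],
}
print(fingerprint(browser))
```

Change any single attribute and the identifier changes completely, yet the combination is often unique enough to follow one browser across sites.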
FLoC enables large corporations to take this shady practice to a whole new level through the extra data it presents.
#### Browsing Habit Linking
Your cohort ID is supposed to be anonymous, but when combined with a login, it can be tracked right back to you. This effectively eliminates the privacy benefits FLoC has (standardized tracking) and further worsens the privacy crisis caused by this technology.
This combination of your login and cohort ID is effectively a goldmine for advertisers.
### Cookies are Bad, but so is FLoC
Cookies have been on their last legs for the past decade. They have received widespread criticism for privacy issues, particularly from open-source advocates such as Mozilla and the FSF.
Instead of replacing them with an even more invasive technology, why not create an open and privacy-respecting alternative? We can be sure that none of the large advertisers (Google and Facebook) would do such a thing, as this is a crucial part of their profit-making ability.
Google's FLoC is **not a sustainable replacement for cookies**, and it must go.
### Wrapping Up
With the amount of criticism Google has received in the past for their privacy policies, you would think they would improve. Unfortunately, this seems not to be the case, with their data collection becoming more widespread by the day.
FLoC seems to be the last nail in the coffin of privacy. If we want internet privacy, FLoC needs to go.
If you want to check whether you have been FLoCed, you can use the EFF web tool [Am I FLoCed?][2], provided you are using Google Chrome version 89 or newer.
What do you think about FLoC? Let me know in the comments below!
_The views and opinions expressed are those of the authors and do not necessarily reflect the official policy or position of It's FOSS._
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/google-floc/
作者:[Jacob Crume][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lujun9972
[1]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzMyNCcgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[2]: https://amifloced.org/


@ -0,0 +1,85 @@
[#]: subject: (Ubuntu 21.10 “Impish Indri” Development Begins, Daily Builds Available Now)
[#]: via: (https://news.itsfoss.com/ubuntu-21-10-release-schedule/)
[#]: author: (Jacob Crume https://news.itsfoss.com/author/jacob/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Ubuntu 21.10 “Impish Indri” Development Begins, Daily Builds Available Now
======
I was slightly disappointed by the lack of new features in the [recent release of Ubuntu 21.04][1]. However, Canonical is set to change that with the upcoming release of Ubuntu 21.10 **Impish Indri**.
It is slated to have a variety of new features, including the [recently released GNOME 40][2]/41, [GCC 11][3], and more usage of the [Flutter toolkit][4].
### Ubuntu 21.10 Release Schedule
The final stable release date of Ubuntu 21.10 is October 14, 2021. Here are the milestones of the release schedule:
* Beta release: **23rd September**
* Release Candidate: **7th October**
* Final Release: **14th October**
Ubuntu 21.10 is codenamed Impish Indri. Impish is an adjective that means “inclined to do slightly naughty things for fun”. The indri is a lemur found in Madagascar.
If you are not aware already, all Ubuntu releases are codenamed in alphabetical order and composed of an adjective and an animal species, both starting with the same letter.
### New Features Expected in Ubuntu 21.10
Although an official feature list has not been released yet, you can expect the following features to be present:
* [GNOME 40][2]/41
* [GCC 11][3]
* More usage of [Flutter][4]
* [A new desktop installer][5]
* Linux Kernel 5.14
Together, these will provide a huge upgrade from Ubuntu 21.04. In my opinion, the biggest upgrade will be the inclusion of GNOME 40, especially with the new horizontal overview.
Moreover, it should be fascinating to see how the Ubuntu team makes use of the [new design changes in GNOME 40][2].
### Daily Builds of Ubuntu 21.10 Available (For Testing Only)
Although the development of Ubuntu 21.10 has only just started, there are already daily builds available from the official Ubuntu website.
Please bear in mind that these are daily builds (early development) and are not meant to be used as a daily driver.
[Ubuntu 21.10 Daily Builds][6]
### Wrapping Up
With the sheer number of upgrades, the Ubuntu team is rushing to implement all the new features destined for this release. This should then allow them time to fully bake these features ahead of the release of Ubuntu 22.04 LTS (I can't wait!).
Between GNOME 40, Linux 5.14, and the new desktop installer, Ubuntu 21.10 is shaping up to be one of the biggest releases in recent years. It will be really exciting to see how the Ubuntu team embraces GNOME 40's new looks, as well as what the new desktop installer will look like.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-21-10-release-schedule/
作者:[Jacob Crume][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/ubuntu-21-04-features/
[2]: https://news.itsfoss.com/gnome-40-release/
[3]: https://www.gnu.org/software/gcc/gcc-11/
[4]: https://flutter.dev/
[5]: https://news.itsfoss.com/ubuntu-new-installer/
[6]: https://cdimage.ubuntu.com/ubuntu/daily-live/current/


@ -0,0 +1,92 @@
[#]: subject: (15 unusual paths to tech)
[#]: via: (https://opensource.com/article/21/5/unusual-tech-career-paths)
[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
15 unusual paths to tech
======
Our past lives can be exciting and funny. Here are some surprising ways
folks have made their way to open source.
![Looking at a map for career journey][1]
The lives we led before we arrived where we are now sometimes feel like a distant land full of memories we can't quite recall. And sometimes we have lived experiences that we'll just never forget. Many times those experiences teach us and help us appreciate where we are today. We may even wish for those days as we recount our past lives.
What did you do before tech? Tell us in the comments.
I did **janitorial work** in the university cafeteria after it closed every day, and I got extra pay cleaning it up after live gigs held there (which happened about 4 times a year). We started to clean up for the following morning after the venue was vacated about 4 am, and had to get it cleaned and set up for opening the following morning at 7 am. That was fun. I worked summers in a livestock mart in the West of Ireland, running the office, keeping the account books, minding the cash that came through. I also had stints as a barman, lecturer, and TA at a local university while I was a post-grad, and once spent a few days stocking a ship with boxes of frozen fish in a Dutch port. —[Dave Neary][2]
I was a **musician** in the Marine Corps, but being a bassoonist in the Corps means that you're mostly playing bass drum. After burning out, I changed to data comms for my second enlistment. —Waldo
My last job before tech was as **a papermaker at a hi-speed newsprint plant** around 1990-1998. I loved this job, working with huge machines and a nice product. I did a lot of jobs from clamp lift driver to planner shipments abroad and back to production. What led me to tech was a program at the paper mill; they had a budget for everyone to get a PC. Honestly, for me, it was super vague what purpose that would serve me. But not long after I got into web design with a colleague, I became a hardcore XHTML and CSS frontend developer with the help of my PC. —[Ben Van 't Ende][3]
I worked at McDonald's through high school and college. In summers, I also worked **a few factory jobs, including a screw and bolt factory,** where I got to drive a forklift (which is heaven for an 18-year-old). I also worked at a plastics factory, eventually on the shipping deck. My first tech job was in 1982 for Westwood Family Dentistry. This was a large, mall dentistry chain, and they were paying me to write their front desk software and billing software on [MP/M-based][4] PCs from Televideo. If you ever watched the movie "War Games," these are the terminals Matthew Broderick used. This was prior to Microsoft releasing MS-DOS. The code was written in COBOL. —[Daniel Walsh][5]
I was a **sound engineer recording audiobooks** for visually impaired people. There was a global project to set up a new standard and move to digital recordings, which became the DAISY standard and system. After that, I moved to the IT department in the company I worked for. —[Jimmy Sjölund][6]
Before tech, I was working in **public relations** at an agency that specialized in high tech, scientific, and research clients. I convinced the agency to start working with online information, and my first project in that arena was creating a weekly intelligence report for the Semiconductor Industry Association, based on posts in newsgroups like comp.arch and comp.realtime. When the World Wide Web (yes, that's how everyone referred to it at the time) began becoming more well-known, one of my PR clients (a lawyer for tech startups) asked me if I knew how it worked. I did (and told him so), and he hired me to create his firm's website. The site, for Womble, Carlyle, Sandridge &amp; Rice, was the first law firm website in North Carolina. A few more requests in the same vein later, I'd shifted my focus to online-only, leading to a 20+ year career in web strategy. —[Gina Likins][7]
I graduated in humanities in 1978 and started to teach human geography at Milan University while working as a **map editor** at Touring Club Italiano, at the time the largest map publisher in Italy. I soon realized that a career in geography was good for the mind but bad for the wallet, so I moved to a Swedish company, Atlas Copco, as house organ editor. Thanks to a very open-minded manager, I learned an awful lot in terms of marketing communications, so after a couple of years I decided that it was time to challenge my skills in real marketing, and I was hired by Honeywell Information Systems, at the time second only to IBM in the information technology market. Although I was hired to manage marketing communications of PC-compatible printers, after six months I was promoted to European Marketing Director, and after a couple of years, I became Corporate VP of Peripherals Marketing. In 1987, I moved to real PR at SCR (now Weber Shandwick), then Burson Marsteller, and then Manning Selvage &amp; Lee. In 1992, I started my own PR agency, which was acquired by Fleishman-Hillard in 1998. In 2003, I left Fleishman-Hillard as Senior VP of Technology Communications to start a freelancing career. While looking at the tools for the trade, I stumbled on OpenOffice, and at age 50 I eventually entered the FOSS community as a volunteer handling marketing and PR (of course). In 2010, at age 56, I was one of the founders of the LibreOffice project, and I am still enjoying the fun here (and in several other places, such as OSI, OASIS, and LibreItalia). —[Italo Vignoli][8]
Right after college at age 23, I had a job where I went to hot zones around the US wherever there were **toxic spills or man-made chemical disasters**. So I visited some real cesspools in America full of death and misery and lived there for months at a time. I was there to support the investigators of the Agency for Toxic Substances and Disease Registry and the Center for Disease Control by editing the interviews they collected for clarity and sending them to the home office in Atlanta by modem. That job extended in technical responsibility with every new place they sent me off to, but it was awesome! I was 100% focused on being a toxicologist by that point. So after that, I got a job as a network analyst for the University of Buffalo medical school so I could get discounted tuition to attend the med school. I even taught medical computing to other 25-year-olds my age and saw my future in med technology. But after a year I realized I couldn't do eight more years of university. I didn't even like most doctors I had to work with. The scientists (PhDs) were awesome but the MDs were pretty mean to me. That's when my boss said to me that my true passion was hacking and he thought I was good at it. I told him he was crazy and that medicine was the future, not security. Then he quit, and I didn't like my new boss even more so I quit. I then got an offer to help start IBM's new Ethical Hacking service called eSecurity. And that's how I became a professional hacker. —[Pete Herzog][9]
I was always in information technology; it's just that the technology evolved. When I was still in elementary school, I delivered newspapers, which I would argue to be information produced by information technology. In high school, I continued that but eventually was fetching and storing data from the "stacks" at the local library (as well as doing lookups, working with punch cards, etc.). Our books had punch cards for return-by dates. So, when someone checked out a book, we would use a microfilm (or was it a microfiche?) camera to photograph the book description card, and a punch card which would then be inserted into the book's pocket. Upon return, the punch cards were removed and stacked to be sent through a card sorter that -- presumably -- would do something about any cards missing from the sequence. (We weren't privy to the sorter or any computer that might have been attached. The branch would pack up the punch cards and send them to the main library.) As mentioned in a previous article for OpenSource.com, I was introduced to my first computer in high school. I had a job as a graveyard-shift computer operator at a local hospital, mounting the tapes, running the backups, running batch jobs that printed reports on five-part carbon -- which left me with a deep-seated hatred for line printers -- and then delivering those reports throughout the hospital -- kind of like being a newspaper delivery boy again. Then, college, where I ended up being the operator / "sys admin" (well, that last is a bit of a stretch, but not much) of a Data General Nova 3. And finally, onto an internship as a coder that, like Zonker T. Harris, I never left. —[Kevin Cole][10]
Probably the most surprising jobs I had before working in free and open source software (FOSS) were:
* Political organizer working on state-level campaigns for marriage equality, a higher minimum wage and increased transparency in state government
* Local music promoter, booking and promoting shows with noise bands, experimental acts and heavy rock, etc
* Cocktail waitress/bouncer/spotlight operator at a drag bar, whatever they needed that night
—[Deb Nicholson][11]
I never had a job in tech but was a neurologist. After going through the extended initiation of learning Linux, installing it on various machines, I then used it in my practice. I used to have my own computer in the office, running Linux, in addition to the office's system. As far as I know, I was the only doctor to carry around a laptop while on rounds in the hospital. There I kept my patient list, their diagnoses, and which days I visited them, all in a Postgres database. I would then submit my lists and charges for the day to the office from this database. With the hospital's wifi, I had access to the electronic data and lab results also. I would do EMGs (electromyography) and for a while used TeX to generate the reports, but later found that Scribus worked better, with some basic information contained in a file. I would then type out the final report myself. I could have a patient go straight to his doctor's office after the test, carrying a final report with him. To facilitate this, once I found that we had some space set aside for us doctors on the hospital's server, I could install Scribus there for various uses. When I saw a patient who needed one or more prescriptions, I wrote a little Python script to make use of Avery labels, which I would then paste on a prescription blank and sign. Without a doubt, I had the most legible prescriptions you would ever see from a doctor. Patients would tell me later when they went to the pharmacy, the pharmacist would look at the prescription and say, "What's this?!". If I was doing this for a hospitalized patient, this meant I could make an extra copy and take it to the office to put in the patient's chart also. While we still had paper charts in the hospital (used for doctors' notes) I made a mock-up of a physicians' notes and orders page in Scribus with Python, and when I saw a patient, I would enter my notes there, then print out on a blank sheet with the necessary holes to fit in the chart. 
These pages were complete with the barcode for the page type and also the barcode for the patient's hospital number, generated with that Python script. After an experience of waiting a week or two for my office dictation to come back so I could sign it, I started typing my own office notes, starting with typing notes as I talked to the patient, then once they were gone, typing out a letter to go to the referring physician. So I did have a job in tech, so to speak, because I made it so. —[Greg Pittman][12]
I started my career as a journalist covering the European tech sector while living in London after grad school. I was still desperate to be a journalist despite the writing on that profession's wall. I didn't care which beat I covered, I just wanted to write. It ended up being the perfect way to learn about technology: I didn't need to be the expert, I just had to find the right experts and ask the right questions. The more I learned, the more curious I became. I eventually realized that I wanted to stop writing about tech companies and start joining them. Nearly nine years later, here I am. —[Lauren Maffeo][13]
Well, that degree in English Literature and Theology didn't really set me up for a career in computing, so my first job was *supposed* to be teaching (or training to be a teacher in) English for 11-18-year-olds. I suppose my first real job was working at the Claremont Vaults in Weston-super-Mare. It was a real dive, at the wrong end of the seafront, was smoke-filled at all times (I had to shower and wash my hair as soon as I got home every night), and had 3 sets of clientele:
* The underage kids. In the UK, this meant 16 and 17-year-olds pretending to be 18. They were generally little trouble, and we'd ask them to leave if it was too obvious they were too young. They'd generally shrug and go onto the next pub.
* The truckers. Bizarrely (to 18-year-old me, anyway), the nicest folks we had there. Never any trouble, paid-up, didn't get too smashed, played a lot of Country on the jukebox.
* The OAPs (Old-Age Pensioners). Thursday night was the worst, as pensions (in those days) were paid on Thursdays, so the OAPs would make their way down the hill to the nearest post office, get their pension, and then head to the pub to get absolutely ratted. They'd get drunk, abusive, and unpleasant, and Thursdays were always the shift to try to avoid. I don't miss it, but it was an education for an entitled, privately-educated boarding-school boy with little clue about the real world!
—[Mike Bursell][14]
In no particular order: **proofreader, radio station disk jockey,** bookkeeper, archaeology shovelbum, reactor operator, welder, apartment maintenance and security, rent-to-own collections, electrician's helper, sunroom construction... and I'm definitely missing a few. The question isn't so much "what led me to tech" as "what kept me from it"; the answer is insufficient personal connections and money. My entire life was leading me to tech; it was just a long, rocky, stumbling road to get there. I might never have gotten there if I hadn't gotten an injury on the construction job serious enough to warrant six months of light duty—the company decided to have me come into the office and "I don't know, make copies or something" rather than just paying me to sit at home, and I parlayed that into an opportunity to make myself absolutely indispensable and turned it into a job as the company's first Information Technology Manager. It's probably worth noting that the actual conversion to IT Manager didn't just happen because I made myself indispensable—it also happened because I literally cornered the CEO a week prior to me going back out into the field to build sunrooms, and made a passionate case for why it would be an enormous waste to do that. Lucky for me, that particular CEO appreciated aggressive ambition, and promptly gave me a raise and a job title. —[Jim Salter][15]
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/5/unusual-tech-career-paths
Author: [Jen Wike Huger][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/jen-wike
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
[2]: https://opensource.com/users/dneary
[3]: https://opensource.com/users/benvantende
[4]: https://en.wikipedia.org/wiki/MP/M
[5]: https://opensource.com/users/rhatdan
[6]: https://opensource.com/users/jimmysjolund
[7]: https://opensource.com/users/lintqueen
[8]: https://opensource.com/users/italovignoli
[9]: https://opensource.com/users/peteherzog
[10]: https://opensource.com/users/kjcole
[11]: https://opensource.com/users/eximious
[12]: https://opensource.com/users/greg-p
[13]: https://opensource.com/users/lmaffeo
[14]: https://opensource.com/users/mikecamel
[15]: https://opensource.com/users/jim-salter

[#]: collector: (lujun9972)
[#]: translator: (cooljelly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Network address translation part 1 packet tracing)
[#]: via: (https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/)
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
Network address translation part 1: packet tracing
======
![][1]
The first post in a series about network address translation (NAT). Part 1 shows how to use the iptables/nftables packet tracing feature to find the source of NAT related connectivity problems.
### Introduction
Network address translation is one way to expose containers or virtual machines to the wider internet. Incoming connection requests have their destination address rewritten to a different one. Packets are then routed to a container or virtual machine instead. The same technique can be used for load-balancing where incoming connections get distributed among a pool of machines.
Connection requests fail when network address translation is not working as expected. The wrong service is exposed, connections end up in the wrong container, requests time out, and so on. One way to debug such problems is to check that the incoming request matches the expected or configured translation.
### Connection tracking
NAT involves more than just changing the IP addresses or port numbers. For instance, when mapping address X to Y, there is no need to add a rule to do the reverse translation. A netfilter system called “conntrack” recognizes packets that are replies to an existing connection. Each connection has its own NAT state attached to it. Reverse translation is done automatically.
### Ruleset evaluation tracing
The nftables utility (and, to a lesser extent, iptables) allows you to examine how a packet is evaluated and which rules in the ruleset it matched. To use this special feature, “trace rules” are inserted at a suitable location. These rules select the packet(s) that should be traced. Let's assume that a host coming from IP address C is trying to reach the service on address S and port P. We want to know which NAT transformation is picked up, which rules get checked, and whether the packet gets dropped somewhere.
Because we are dealing with incoming connections, add a rule to the prerouting hook point. Prerouting means that the kernel has not yet made a decision on where the packet will be sent. A change to the destination address often results in packets being forwarded rather than being handled by the host itself.
### Initial setup
```
# nft 'add table inet trace_debug'
# nft 'add chain inet trace_debug trace_pre { type filter hook prerouting priority -200000; }'
# nft "insert rule inet trace_debug trace_pre ip saddr $C ip daddr $S tcp dport $P tcp flags syn limit rate 1/second meta nftrace set 1"
```
The first rule adds a new table. This allows easier removal of the trace and debug rules later. A single “nft delete table inet trace_debug” will be enough to undo all rules and chains added to the temporary table during debugging.
The second rule creates a base hook before routing decisions have been made (prerouting) and with a negative priority value to make sure it will be evaluated before connection tracking and the NAT rules.
The only important part, however, is the last fragment of the third rule: “meta nftrace set 1”. This enables tracing events for all packets that match the rule. Be as specific as possible to get a good signal-to-noise ratio. Consider adding a rate limit to keep the number of trace events at a manageable level. A limit of one packet per second or per minute is a good choice. The provided example traces all syn and syn/ack packets coming from host $C and going to destination port $P on the destination host $S. The limit clause prevents event flooding. In most cases a trace of a single packet is enough.
The procedure is similar for iptables users. An equivalent trace rule looks like this:
```
# iptables -t raw -I PREROUTING -s $C -d $S -p tcp --tcp-flags SYN SYN --dport $P -m limit --limit 1/s -j TRACE
```
### Obtaining trace events
Users of the native nft tool can just run the nft trace mode:
```
# nft monitor trace
```
This prints out the received packet and all rules that match the packet (use CTRL-C to stop it):
```
trace id f0f627 ip raw prerouting packet: iif "veth0" ether saddr ..
```
We will examine this in more detail in the next section. If you use iptables, first check the installed version via the “iptables --version” command. Example:
```
# iptables --version
iptables v1.8.5 (legacy)
```
“(legacy)” means that trace events are logged to the kernel ring buffer. You will need to check dmesg or journalctl. The debug output lacks some information but is conceptually similar to the one provided by the new tools. You will need to check the rule line numbers that are logged and correlate those to the active iptables ruleset yourself. If the output shows “(nf_tables)”, you can use the xtables-monitor tool:
```
# xtables-monitor --trace
```
If the command only shows the version, you will also need to look at dmesg/journalctl instead. xtables-monitor uses the same kernel interface as the nft monitor trace tool. Their only difference is that it will print events in iptables syntax and that, if you use a mix of both iptables-nft and nft, it will be unable to print rules that use maps/sets and other nftables-only features.
### Example
Let's assume you'd like to debug a non-working port forward to a virtual machine or container. The command “ssh -p 1222 10.1.2.3” should provide remote access to a container running on the machine with that address, but the connection attempt times out.
You have access to the host running the container image. Log in and add a trace rule. See the earlier example on how to add a temporary debug table. The trace rule looks like this:
```
nft "insert rule inet trace_debug trace_pre ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1"
```
After the rule has been added, start nft in trace mode: “nft monitor trace”, then retry the failed ssh command. This will generate a lot of output if the ruleset is large. Do not worry about the large example output below; the next section will do a line-by-line walkthrough.
```
trace id 9c01f8 inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
trace id 9c01f8 inet trace_debug trace_pre verdict continue
trace id 9c01f8 inet trace_debug trace_pre policy accept
trace id 9c01f8 inet nat prerouting packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3 tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
```
### Line-by-line trace walkthrough
The first line generated is the packet id that triggered the subsequent trace output. Even though this is in the same grammar as the nft rule syntax, it contains header fields of the packet that was just received. You will find the name of the receiving network interface (here named “enp0”), the source and destination mac addresses of the packet, the source ip address (this can be important: maybe the reporter is connecting from a wrong or unexpected host), and the tcp source and destination ports. You will also see a “trace id” at the very beginning. This identification tells which incoming packet matched a rule. The second line contains the first rule matched by the packet:
```
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
```
This is the just-added trace rule. The first rule is always the one that activates packet tracing. If there were other rules before this one, we would not see them. If there is no trace output at all, the trace rule itself was never reached or did not match. The next two lines tell that there are no further rules and that the “trace_pre” hook allows the packet to continue (verdict accept).
The next matching rule is
```
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3 tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
```
This rule sets up a mapping to a different address and port. Provided 192.168.70.10 really is the address of the desired VM, there is no problem so far. If it's not the correct VM address, the address was either mistyped or the wrong NAT rule was matched.
### IP forwarding
Next we can see that the IP routing engine told the IP stack that the packet needs to be forwarded to another host:
```
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
```
This is another dump of the packet that was received, but there are a couple of interesting changes. There is now an output interface set. This did not exist previously because the previous rules are located before the routing decision (the prerouting hook). The id is the same as before, so this is still the same packet, but the address and port have already been altered. Any rules that match “tcp dport 1222” will no longer have an effect on this packet.
If the line contains no output interface (oif), the routing decision steered the packet to the local host. Route debugging is a different topic and not covered here.
```
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
```
This tells that the packet matched a rule that jumps to a chain named “allowed_dnats”. The next line shows the source of the connection failure:
```
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
```
The rule unconditionally drops the packet, so no further log output for the packet exists. The next output line is the result of a different packet:
```
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
```
The trace id is different; the packet, however, has the same content. This is a retransmit attempt: the first packet was dropped, so TCP retries. Ignore the remaining output; it does not contain new information. Time to inspect that chain.
### Ruleset investigation
The previous section found that the packet is dropped in a chain named “allowed_dnats” in the inet filter table. Time to look at it:
```
# nft list chain inet filter allowed_dnats
table inet filter {
  chain allowed_dnats {
    meta nfproto ipv4 ip daddr . tcp dport @allow_in accept
    drop
  }
}
```
The rule that accepts packets in the @allow_in set did not show up in the trace log. Double-check that the address is in the @allow_in set by listing the element:
```
# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
Error: Could not process rule: No such file or directory
```
As expected, the address-service pair is not in the set. We add it now.
```
# nft "add element inet filter allow_in { 192.168.70.10 . 22 }"
```
Run the query command again; it will now return the newly added element:
```
# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
table inet filter {
set allow_in {
type ipv4_addr . inet_service
elements = { 192.168.70.10 . 22 }
}
}
```
The ssh command should now work and the trace output reflects the change:
```
trace id 497abf58 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 497abf58 inet filter allowed_dnats rule meta nfproto ipv4 ip daddr . tcp dport @allow_in accept (verdict accept)
trace id 497abf58 ip postrouting packet: iif "enp0" oif "veth21" ether ..
trace id 497abf58 ip postrouting policy accept
```
This shows the packet passing the last hook in the forwarding path: postrouting.
If the connection is still not working, the problem is somewhere later in the packet pipeline and outside of the nftables ruleset.
### Summary
This article gave an introduction on how to check for packet drops and other sources of connectivity problems with the nftables trace mechanism. A later post in the series shows how to inspect the connection tracking subsystem and the NAT information that may be attached to tracked flows.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/network-address-translation-part-1-packet-tracing/
Author: [Florian Westphal][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://fedoramagazine.org/author/strlen/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/12/network-address-translation-part-1-816x346.png

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cross-compiling made easy with Golang)
[#]: via: (https://opensource.com/article/21/1/go-cross-compiling)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
Cross-compiling made easy with Golang
======
I learned about Go's cross-compilation capabilities by stepping out of
my comfort zone.
![Person using a laptop][1]
I work with multiple servers with various architectures (e.g., Intel, AMD, Arm, etc.) when I'm testing software on Linux. Once I've [provisioned a Linux box][2] and the server meets my testing needs, I still have a number of steps to do:
1. Download and install prerequisite software.
2. Verify whether new test packages for the software I'm testing are available on the build server.
3. Get and set the required yum repos for the dependent software packages.
4. Download and install the new test packages (based on step #2).
5. Get and set up the required SSL certificates.
6. Set up the test environment, get the required Git repos, change configurations in files, restart daemons, etc.
7. Do anything else that needs to be done.
### Script it all away
These steps are so routine that it makes sense to automate them and save the script to a central location (like a file server) where I can download it when I need it. I did this by writing a 100-120 line Bash shell script that does all the configuration for me (including error checks). The script simplifies my workflow by:
1. Provisioning a new Linux system (of the architecture under test)
2. Logging into the system and downloading the automated shell script from a central location
3. Running it to configure the system
4. Starting the testing
### Enter Go
I've wanted to learn [Golang][3] for a while, and converting my beloved shell script into a Go program seemed like a good project to help me get started. The syntax seemed fairly simple, and after trying out some test programs, I set out to advance my knowledge and become familiar with the Go standard library.
It took me a week to write the Go program on my laptop. I tested my program often on my go-to x86 server to weed out errors and improve the program. Everything worked fine.
I continued relying on my shell script until I finished the Go program. Then I pushed the binary onto a central file server so that every time I provisioned a new server, all I had to do was wget the binary, set the executable bit on, and run the binary. I was happy with the early results:
```
$ wget http://file.example.com/<myuser>/bins/prepnode
$ chmod +x ./prepnode
$ ./prepnode
```
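The article doesn't show prepnode's source, but its command-line handling can be sketched with Go's standard `flag` package, using the option set that appears in the usage output printed later in the article. The function name and program structure here are assumptions, not the author's actual code:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// parseFlags mirrors the options shown in the article's usage output.
// The real prepnode implementation is not shown in the article; this
// is only a hypothetical skeleton.
func parseFlags(args []string) (clean, noRun, stage, verbose bool) {
	fs := flag.NewFlagSet("prepnode", flag.ExitOnError)
	c := fs.Bool("c", false, "Clean existing installation")
	n := fs.Bool("n", true, "Do not start test run")
	s := fs.Bool("s", false, "Use stage environment, default is qa")
	v := fs.Bool("v", false, "Enable verbose output")
	fs.Parse(args)
	return *c, *n, *s, *v
}

func main() {
	clean, noRun, stage, verbose := parseFlags(os.Args[1:])
	fmt.Printf("clean=%v noRun=%v stage=%v verbose=%v\n", clean, noRun, stage, verbose)
}
```

Because the `flag` package is part of the standard library, this skeleton cross-compiles to any supported platform without extra dependencies, which matters for the portability issue discussed next.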
### And then, an issue
The next week, I provisioned a fresh new server from the pool, as usual, downloaded the binary, set the executable bit, and ran the binary. It errored out with a strange error:
```
$ ./prepnode
bash: ./prepnode: cannot execute binary file: Exec format error
$
```
At first, I thought maybe the executable bit was not set. However, it was set as expected:
```
$ ls -l prepnode
-rwxr-xr-x. 1 root root 2640529 Dec 16 05:43 prepnode
```
What happened? I didn't make any changes to the source code, the compilation threw no errors or warnings, and it worked well the last time I ran it, so I looked more closely at the error message, `format error`.
I checked the binary's format, and everything looked OK:
```
$ file prepnode
prepnode: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
```
I quickly ran the following command to identify the architecture of the test server I had provisioned, where the binary was trying to run. It was Arm64 architecture, but the binary I had compiled (on my x86 laptop) was in x86-64 format:
```
$ uname -m
aarch64
```
### Compilation 101 for scripting folks
Until then, I had never accounted for this scenario (although I knew about it). I primarily work on scripting languages (usually Python) coupled with shell scripting. The Bash shell and the Python interpreter are available on most Linux servers of any architecture. Hence, everything had worked well before.
However, now I was dealing with a compiled language, Go, which produces an executable binary. The compiled binary consists of [opcodes][4] or assembly instructions that are tied to a specific architecture. That's why I got the format error. Since the Arm64 CPU (where I ran the binary) could not interpret the binary's x86-64 instructions, it errored out. Previously, the shell and Python interpreter took care of the underlying opcodes or architecture-specific instructions for me.
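The architecture a binary targets is recorded right in its ELF header; that's what `file` reads to produce the output above. As a rough sketch of what is going on (in Python, using only a small subset of the machine codes defined in the ELF specification), you could decode the `e_machine` field yourself:

```python
import struct

# A few e_machine codes from the ELF specification (subset, for illustration).
ELF_MACHINES = {0x3E: "x86-64", 0xB7: "aarch64", 0x15: "ppc64", 0x16: "s390x"}

def elf_arch(header: bytes) -> str:
    """Return the target architecture recorded in an ELF header."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    # Byte 5 (EI_DATA) gives the byte order; e_machine is a
    # 16-bit field at offset 18.
    endian = "<" if header[5] == 1 else ">"
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return ELF_MACHINES.get(machine, "unknown ({:#x})".format(machine))

# Reading a real binary would look like:
# with open("prepnode", "rb") as f:
#     print(elf_arch(f.read(20)))
```

Feeding it the first 20 bytes of my `prepnode` binary would report x86-64, which is exactly why the Arm64 server refused to run it.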
### Cross-compiling with Go
I checked the Golang docs and discovered that to produce an Arm64 binary, all I had to do was set two environment variables when compiling the Go program before running the `go build` command.
`GOOS` refers to the operating system (Linux, Windows, BSD, etc.), while `GOARCH` refers to the architecture to build for.
```
$ env GOOS=linux GOARCH=arm64 go build -o prepnode_arm64
```
After building the program, I reran the `file` command, and this time it showed Arm AArch64 instead of the x86 it showed before. Therefore, I was able to build a binary for a different architecture than the one on my laptop:
```
$ file prepnode_arm64
prepnode_arm64: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, not stripped
```
I copied the binary onto the Arm server from my laptop. Now, running the binary (after setting the executable bit on) produced no errors:
```
$ ./prepnode_arm64  -h
Usage of ./prepnode_arm64:
  -c    Clean existing installation
  -n    Do not start test run (default true)
  -s    Use stage environment, default is qa
  -v    Enable verbose output
```
### What about other architectures?
x86 and Arm are two of the five architectures I test software on. I was worried that Go might not support the other ones, but that was not the case. You can find out which architectures Go supports with:
```
$ go tool dist list
```
Go supports a variety of platforms and operating systems, including:
* AIX
* Android
* Darwin
* Dragonfly
* FreeBSD
* Illumos
* JavaScript
* Linux
* NetBSD
* OpenBSD
* Plan 9
* Solaris
* Windows
To find the specific Linux architectures it supports, run:
```
$ go tool dist list | grep linux
```
As the output below shows, Go supports all of the architectures I use. Although x86_64 does not appear on the list under that name, `amd64` is Go's name for the same architecture, so an amd64 binary runs fine on x86_64 hardware:
```
$ go tool dist list | grep linux
linux/386
linux/amd64
linux/arm
linux/arm64
linux/mips
linux/mips64
linux/mips64le
linux/mipsle
linux/ppc64
linux/ppc64le
linux/riscv64
linux/s390x
```
### Handling all architectures
Generating binaries for all of the architectures I test on is as simple as writing a tiny shell script on my x86 laptop:
```
#!/usr/bin/bash
archs=(amd64 arm64 ppc64le ppc64 s390x)
for arch in ${archs[@]}
do
        env GOOS=linux GOARCH=${arch} go build -o prepnode_${arch}
done
```

```
$ file prepnode_*
prepnode_amd64:   ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=y03MzCXoZERH-0EwAAYI/p909FDnk7xEUo2LdHIyo/V2ABa7X_rLkPNHaFqUQ6/5p_q8MZiR2WYkA5CzJiF, not stripped
prepnode_arm64:   ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, Go BuildID=q-H-CCtLv__jVOcdcOpA/CywRwDz9LN2Wk_fWeJHt/K4-3P5tU2mzlWJa0noGN/SEev9TJFyvHdKZnPaZgb, not stripped
prepnode_ppc64:   ELF 64-bit MSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), statically linked, Go BuildID=DMWfc1QwOGIq2hxEzL_u/UE-9CIvkIMeNC_ocW4ry/r-7NcMATXatoXJQz3yUO/xzfiDIBuUxbuiyaw5Goq, not stripped
prepnode_ppc64le: ELF 64-bit LSB executable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), statically linked, Go BuildID=C6qCjxwO9s63FJKDrv3f/xCJa4E6LPVpEZqmbF6B4/Mu6T_OR-dx-vLavn1Gyq/AWR1pK1cLz9YzLSFt5eU, not stripped
prepnode_s390x:   ELF 64-bit MSB executable, IBM S/390, version 1 (SYSV), statically linked, Go BuildID=faC_HDe1_iVq2XhpPD3d/7TIv0rulE4RZybgJVmPz/o_SZW_0iS0EkJJZHANxx/zuZgo79Je7zAs3v6Lxuz, not stripped
```
Now, whenever I provision a new machine, I just run this wget command to download the binary for a specific architecture, set the executable bit on, and run the binary:
```
$ wget http://file.domain.com/<myuser>/bins/prepnode_<arch>
$ chmod +x ./prepnode_<arch>
$ ./prepnode_<arch>
```
### But why?
You may be wondering why I didn't save all of this hassle by sticking to shell scripts or porting the program over to Python instead of a compiled language. All fair points. But then I wouldn't have learned about Go's cross-compilation capabilities and how programs work underneath the hood when they're executing on the CPU. In computing, there are always trade-offs to be considered, but never let them stop you from learning.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/1/go-cross-compiling
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://opensource.com/article/20/12/linux-server
[3]: https://golang.org/
[4]: https://en.wikipedia.org/wiki/Opcode


@@ -1,190 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (stevenzdg988)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Improve your productivity with this Linux automation tool)
[#]: via: (https://opensource.com/article/21/2/linux-autokey)
[#]: author: (Matt Bargenquast https://opensource.com/users/mbargenquast)
Improve your productivity with this Linux automation tool
======
Configure your keyboard to correct common typos, enter frequently used
phrases, and more with AutoKey.
![Linux keys on the keyboard for a desktop computer][1]
[AutoKey][2] is an open source Linux desktop automation tool that, once it's part of your workflow, you'll wonder how you ever managed without. It can be a transformative tool to improve your productivity or simply a way to reduce the physical stress associated with typing.
This article will look at how to install and start using AutoKey, cover some simple recipes you can immediately use in your workflow, and explore some of the advanced features that AutoKey power users may find attractive.
### Install and set up AutoKey
AutoKey is available as a software package on many Linux distributions. The project's [installation guide][3] contains directions for many platforms, including building from source. This article uses Fedora as the operating platform.
AutoKey comes in two variants: autokey-gtk, designed for [GTK][4]-based environments such as GNOME, and autokey-qt, which is [QT][5]-based.
You can install either variant from the command line:
```
sudo dnf install autokey-gtk
```
Once it's installed, run it by using `autokey-gtk` (or `autokey-qt`).
### Explore the interface
Before you set AutoKey to run in the background and automatically perform actions, you will first want to configure it. Bring up the configuration user interface (UI):
```
autokey-gtk -c
```
AutoKey comes preconfigured with some examples. You may wish to leave them while you're getting familiar with the UI, but you can delete them if you wish.
![AutoKey UI][6]
(Matt Bargenquast, [CC BY-SA 4.0][7])
The left pane contains a folder-based hierarchy of phrases and scripts. _Phrases_ are text that you want AutoKey to enter on your behalf. _Scripts_ are dynamic, programmatic equivalents that can be written using Python and achieve basically the same result of making the keyboard send keystrokes to an active window.
The right pane is where the phrases and scripts are built and configured.
Once you're happy with your configuration, you'll probably want to run AutoKey automatically when you log in so that you don't have to start it up every time. You can configure this in the **Preferences** menu (**Edit -> Preferences**) by selecting **Automatically start AutoKey at login**.
![Automatically start AutoKey at login][8]
(Matt Bargenquast, [CC BY-SA 4.0][7])
### Correct common typos with AutoKey
Fixing common typos is an easy problem for AutoKey to fix. For example, I consistently type "gerp" instead of "grep." Here's how to configure AutoKey to fix these types of problems for you.
Create a new subfolder where you can group all your "typo correction" configurations. Select **My Phrases** in the left pane, then **File -> New -> Subfolder**. Name the subfolder **Typos**.
Create a new phrase in **File -> New -> Phrase**, and call it "grep."
Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering "grep" in the **Enter phrase contents** section (replacing the default "Enter phrase contents" text).
Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the **Set** button next to **Abbreviations** at the bottom of the UI.
In the dialog box that pops up, click the **Add** button and add "gerp" as a new abbreviation. Leave **Remove typed abbreviation** checked; this is what instructs AutoKey to replace any typed occurrence of the word "gerp" with "grep." Leave **Trigger when typed as part of a word** unchecked so that if you type a word containing "gerp" (such as "fingerprint"), it _won't_ attempt to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated word.
![Set abbreviation in AutoKey][9]
(Matt Bargenquast, [CC BY-SA 4.0][7])
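AutoKey does this matching itself, but the word-boundary behavior it describes can be sketched in a few lines of Python (a toy model for illustration, not AutoKey's actual implementation):

```python
import re

def expand(text, abbreviations):
    """Replace each abbreviation only when it appears as an isolated word,
    mimicking leaving "Trigger when typed as part of a word" unchecked."""
    for typo, fix in abbreviations.items():
        # \b anchors the match to word boundaries, so "gerp" inside
        # "fingerprint" is left alone.
        text = re.sub(r"\b{}\b".format(re.escape(typo)), fix, text)
    return text

print(expand("gerp the fingerprint logs", {"gerp": "grep"}))
# prints: grep the fingerprint logs
```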
### Restrict corrections to specific applications
You may want a correction to apply only when you make the typo in certain applications (such as a terminal window). You can configure this by setting a Window Filter. Click the **Set** button to define one.
The easiest way to set a Window Filter is to let AutoKey detect the window type for you:
1. Start a new terminal window.
2. Back in AutoKey, click the **Detect Window Properties** button.
3. Click on the terminal window.
This will auto-populate the Window Filter, likely with a Window class value of `gnome-terminal-server.Gnome-terminal`. This is sufficient, so click **OK**.
![AutoKey Window Filter][10]
(Matt Bargenquast, [CC BY-SA 4.0][7])
### Save and test
Once you're satisfied with your new configuration, make sure to save it. Click **File** and choose **Save** to make the change active.
Now for the grand test! In your terminal window, type "gerp" followed by a space, and it should automatically correct to "grep." To validate the Window Filter is working, try typing the word "gerp" in a browser URL bar or some other application. It should not change.
You may be thinking that this problem could have been solved just as easily with a [shell alias][11], and I'd totally agree! Unlike aliases, which are command-line oriented, AutoKey can correct mistakes regardless of what application you're using.
For example, another common typo I make is "openshfit" instead of "openshift," which I type into browsers, integrated development environments, and terminals. Aliases can't quite help with this problem, whereas AutoKey can correct it in any occasion.
### Type frequently used phrases with AutoKey
There are numerous other ways you can invoke AutoKey's phrases to help you. For example, as a site reliability engineer (SRE) working on OpenShift, I frequently type Kubernetes namespace names on the command line:
```
oc get pods -n openshift-managed-upgrade-operator
```
These namespaces are static, so they are ideal phrases that AutoKey can insert for me when typing ad-hoc commands.
For this, I created a phrase subfolder named **Namespaces** and added a phrase entry for each namespace I type frequently.
### Assign hotkeys
Next, and most crucially, I assign the subfolder a **hotkey**. Whenever I press that hotkey, it opens a menu where I can select (either with **Arrow key**+**Enter** or using a number) the phrase I want to insert. This cuts down on the number of keystrokes I need to enter those commands to just a few keystrokes.
AutoKey's pre-configured examples in the **My Phrases** folder are configured with a **Ctrl**+**F7** hotkey. If you kept the examples in AutoKey's default configuration, try it out. You should see a menu of all the phrases available there. Select the item you want with the number or arrow keys.
### Advanced AutoKeying
AutoKey's [scripting engine][12] allows users to run Python scripts that can be invoked through the same abbreviation and hotkey system. These scripts can do things like switching windows, sending keystrokes, or performing mouse clicks through supporting API functions.
AutoKey users have embraced this feature by publishing custom scripts for others to adopt. For example, the [NumpadIME script][13] transforms a numeric keyboard into an old cellphone-style text entry method, and [Emojis-AutoKey][14] makes it easy to insert emojis by converting phrases such as `:smile:` into their emoji equivalent.
Here's a small script I set up that enters Tmux's copy mode to copy the first word from the preceding line into the paste buffer:
```
from time import sleep
# Send the tmux command prefix (changed from b to s)
keyboard.send_keys("<ctrl>+s")
# Enter copy mode
keyboard.send_key("[")
sleep(0.01)
# Move cursor up one line
keyboard.send_keys("k")
sleep(0.01)
# Move cursor to start of line
keyboard.send_keys("0")
sleep(0.01)
# Start mark
keyboard.send_keys(" ")
sleep(0.01)
# Move cursor to end of word
keyboard.send_keys("e")
sleep(0.01)
# Add to copy buffer
keyboard.send_keys("<ctrl>+m")
```
The sleeps are there because occasionally Tmux can't keep up with how fast AutoKey sends the keystrokes, and they have a negligible effect on the overall execution time.
### Automate with AutoKey
I hope you've enjoyed this excursion into keyboard automation with AutoKey and it gives you some bright ideas about how it can improve your workflow. If you're using AutoKey in a helpful or novel way, be sure to share it in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/linux-autokey
作者:[Matt Bargenquast][a]
选题:[lujun9972][b]
译者:[stevenzdg988](https://github.com/stevenzdg988)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mbargenquast
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://github.com/autokey/autokey
[3]: https://github.com/autokey/autokey/wiki/Installing
[4]: https://www.gtk.org/
[5]: https://www.qt.io/
[6]: https://opensource.com/sites/default/files/uploads/autokey-defaults.png (AutoKey UI)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/startautokey.png (Automatically start AutoKey at login)
[9]: https://opensource.com/sites/default/files/uploads/autokey-set_abbreviation.png (Set abbreviation in AutoKey)
[10]: https://opensource.com/sites/default/files/uploads/autokey-window_filter.png (AutoKey Window Filter)
[11]: https://opensource.com/article/19/7/bash-aliases
[12]: https://autokey.github.io/index.html
[13]: https://github.com/luziferius/autokey_scripts
[14]: https://github.com/AlienKevin/Emojis-AutoKey


@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (mengxinayan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@@ -1,230 +0,0 @@
[#]: subject: (Access Python package index JSON APIs with requests)
[#]: via: (https://opensource.com/article/21/3/python-package-index-json-apis-requests)
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Access Python package index JSON APIs with requests
======
PyPI's JSON API is a machine-readable source of the same kind of data
you can access while browsing the website.
![Python programming language logo with question marks][1]
PyPI, the Python package index, provides a JSON API for information about its packages. This is essentially a machine-readable source of the same kind of data you can access while browsing the website. For example, as a human, I can head to the [NumPy][2] project page in my browser, click around, and see which versions there are, what files are available, and things like release dates and which Python versions are supported:
![NumPy project page][3]
(Ben Nuttall, [CC BY-SA 4.0][4])
But if I want to write a program to access this data, I can use the JSON API instead of having to scrape and parse the HTML on these pages.
As an aside: On the old PyPI website, when it was hosted at `pypi.python.org`, the NumPy project page was at `pypi.python.org/pypi/numpy`, and accessing the JSON was a simple matter of adding `/json` on the end. Now the PyPI website is hosted at `pypi.org`, and NumPy's project page is at `pypi.org/project/numpy`. The new site doesn't link to the JSON, but the API still works as it did before, at `https://pypi.org/pypi/numpy/json`. So now, rather than adding `/json` to the project page's URL, you have to remember the separate API URL.
You can open up the JSON for NumPy in your browser by heading to its URL. Firefox renders it nicely like this:
![JSON rendered in Firefox][5]
(Ben Nuttall, [CC BY-SA 4.0][4])
You can open `info`, `releases`, and `urls` to inspect the contents within. Or you can load it into a Python shell. Here are a few lines to get started:
```
import requests
url = "https://pypi.org/pypi/numpy/json"
r = requests.get(url)
data = r.json()
```
Once you have the data (calling `.json()` provides a [dictionary][6] of the data), you can inspect it:
![Inspecting data][7]
(Ben Nuttall, [CC BY-SA 4.0][4])
Open `releases`, and inspect the keys inside it:
![Inspecting keys in releases][8]
(Ben Nuttall, [CC BY-SA 4.0][4])
This shows that `releases` is a dictionary with version numbers as keys. Pick one (say, the latest one) and inspect that:
![Inspecting version][9]
(Ben Nuttall, [CC BY-SA 4.0][4])
Each release is a list, and this one contains 24 items. But what is each item? Since it's a list, you can index the first one and take a look:
![Indexing an item][10]
(Ben Nuttall, [CC BY-SA 4.0][4])
This item is a dictionary containing details about a particular file. So each of the 24 items in the list relates to a file associated with this particular version number, i.e., the 24 files listed at <https://pypi.org/project/numpy/1.20.1/#files>.
You could write a script that looks for something within the available data. For example, the following loop looks for versions with sdist (source distribution) files that specify a `requires_python` attribute and prints them:
```
for version, files in data['releases'].items():
    for f in files:
        if f.get('packagetype') == 'sdist' and f.get('requires_python'):
            print(version, f['requires_python'])
```
![sdist files with requires_python attribute ][11]
(Ben Nuttall, [CC BY-SA 4.0][4])
### piwheels
Last year I [implemented a similar API][12] on the piwheels website. [piwheels.org][13] is a Python package index that provides wheels (precompiled binary packages) for the Raspberry Pi architecture. It's essentially a mirror of the package set on PyPI, but with Arm wheels instead of files uploaded to PyPI by package maintainers.
Since piwheels mimics the URL structure of PyPI, you can change the `pypi.org` part of a project page's URL to `piwheels.org`. It'll show you a similar kind of project page with details about which versions we have built and which files are available. Since I liked how the old site allowed you to add `/json` to the end of the URL, I made ours work that way, so NumPy's project page on PyPI is [pypi.org/project/numpy][14]. On piwheels, it is [piwheels.org/project/numpy][15], and the JSON is at [piwheels.org/project/numpy/json][16].
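Since the two structures mirror each other, the URL mapping is mechanical. A little helper (my own convenience sketch, not part of any official API) makes the point:

```python
def piwheels_json_url(pypi_project_url: str) -> str:
    # piwheels mirrors PyPI's /project/<name> structure, so swapping
    # the host and appending /json is all it takes.
    return (pypi_project_url
            .replace("pypi.org", "www.piwheels.org")
            .rstrip("/") + "/json")

print(piwheels_json_url("https://pypi.org/project/numpy"))
# https://www.piwheels.org/project/numpy/json
```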
There's no need to duplicate the contents of PyPI's API, so we provide information about what's available on piwheels and include a list of all known releases, some basic information, and a list of files we have:
![JSON files available in piwheels][17]
(Ben Nuttall, [CC BY-SA 4.0][4])
Similar to the previous PyPI example, you could create a script to analyze the API contents, for example, to show the number of files piwheels has for each version of NumPy:
```
import requests
url = "https://www.piwheels.org/project/numpy/json"
package = requests.get(url).json()
for version, info in package['releases'].items():
    if info['files']:
        print('{}: {} files'.format(version, len(info['files'])))
    else:
        print('{}: No files'.format(version))
```
Also, each file contains some metadata:
![Metadata in JSON files in piwheels][18]
(Ben Nuttall, [CC BY-SA 4.0][4])
One handy thing is the `apt_dependencies` field, which lists the Apt packages needed to use the library. In the case of this NumPy file, as well as installing NumPy with pip, you'll also need to install `libatlas3-base` and `libgfortran` using Debian's Apt package manager.
Here is an example script that shows the Apt dependencies for a package:
```
import requests
def get_install(package, abi):
    url = 'https://piwheels.org/project/{}/json'.format(package)
    r = requests.get(url)
    data = r.json()
    for version, release in sorted(data['releases'].items(), reverse=True):
        for filename, file in release['files'].items():
            if abi in filename:
                deps = ' '.join(file['apt_dependencies'])
                print("sudo apt install {}".format(deps))
                print("sudo pip3 install {}=={}".format(package, version))
                return
get_install('opencv-python', 'cp37m')
get_install('opencv-python', 'cp35m')
get_install('opencv-python-headless', 'cp37m')
get_install('opencv-python-headless', 'cp35m')
```
We also provide a general API endpoint for the list of packages, which includes download stats for each package:
```
import requests
url = "https://www.piwheels.org/packages.json"
packages = requests.get(url).json()
packages = {
    pkg: (d_month, d_all)
    for pkg, d_month, d_all, *_ in packages
}
package = 'numpy'
d_month, d_all = packages[package]
print(package, "has had", d_month, "downloads in the last month")
print(package, "has had", d_all, "downloads in total")
```
### pip search
Since `pip search` is currently disabled due to its XMLRPC interface being overloaded, people have been looking for alternatives. You can use the piwheels JSON API to search for package names instead since the set of packages is the same:
```
#!/usr/bin/python3
import sys
import requests
PIWHEELS_URL = 'https://www.piwheels.org/packages.json'
r = requests.get(PIWHEELS_URL)
packages = {p[0] for p in r.json()}
def search(term):
    for pkg in packages:
        if term in pkg:
            yield pkg
if __name__ == '__main__':
    if len(sys.argv) == 2:
        results = search(sys.argv[1].lower())
        for res in results:
            print(res)
    else:
        print("Usage: pip_search TERM")
```
For more information, see the piwheels [JSON API documentation][19].
* * *
_This article originally appeared on Ben Nuttall's [Tooling Tuesday blog][20] and is reused with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/python-package-index-json-apis-requests
作者:[Ben Nuttall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bennuttall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_programming_question.png?itok=cOeJW-8r (Python programming language logo with question marks)
[2]: https://pypi.org/project/numpy/
[3]: https://opensource.com/sites/default/files/uploads/numpy-project-page.png (NumPy project page)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/pypi-json-firefox.png (JSON rendered in Firefox)
[6]: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
[7]: https://opensource.com/sites/default/files/uploads/pypi-json-notebook.png (Inspecting data)
[8]: https://opensource.com/sites/default/files/uploads/pypi-json-releases.png (Inspecting keys in releases)
[9]: https://opensource.com/sites/default/files/uploads/pypi-json-inspect.png (Inspecting version)
[10]: https://opensource.com/sites/default/files/uploads/pypi-json-release.png (Indexing an item)
[11]: https://opensource.com/sites/default/files/uploads/pypi-json-requires-python.png (sdist files with requires_python attribute )
[12]: https://blog.piwheels.org/requires-python-support-new-project-page-layout-and-a-new-json-api/
[13]: https://www.piwheels.org/
[14]: https://pypi.org/project/numpy
[15]: https://www.piwheels.org/project/numpy
[16]: https://www.piwheels.org/project/numpy/json
[17]: https://opensource.com/sites/default/files/uploads/piwheels-json.png (JSON files available in piwheels)
[18]: https://opensource.com/sites/default/files/uploads/piwheels-json-numpy.png (Metadata in JSON files in piwheels)
[19]: https://www.piwheels.org/json.html
[20]: https://tooling.bennuttall.com/accessing-python-package-index-json-apis-with-requests/


@@ -1,160 +0,0 @@
[#]: subject: (A tool to spy on your DNS queries: dnspeep)
[#]: via: (https://jvns.ca/blog/2021/03/31/dnspeep-tool/)
[#]: author: (Julia Evans https://jvns.ca/)
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
A tool to spy on your DNS queries: dnspeep
======
Hello! Over the last few days I made a little tool called [dnspeep][1] that lets you see what DNS queries your computer is making, and what responses it's getting. It's about [250 lines of Rust right now][2].
I'll talk about how you can try it, what it's for, why I made it, and some problems I ran into while writing it.
### how to try it
I built some binaries so you can quickly try it out.
For Linux (x86):
```
wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-linux.tar.gz
tar -xf dnspeep-linux.tar.gz
sudo ./dnspeep
```
For Mac:
```
wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-macos.tar.gz
tar -xf dnspeep-macos.tar.gz
sudo ./dnspeep
```
It needs to run as root because it needs access to all the DNS packets your computer is sending. This is the same reason `tcpdump` needs to run as root: it uses `libpcap`, which is the same library that tcpdump uses.
You can also read the source and build it yourself at <https://github.com/jvns/dnspeep> if you don't want to just download binaries and run them as root :).
### what the output looks like
Here's what the output looks like. Each line is a DNS query and the response.
```
$ sudo dnspeep
query name server IP response
A firefox.com 192.168.1.1 A: 44.235.246.155, A: 44.236.72.93, A: 44.236.48.31
AAAA firefox.com 192.168.1.1 NOERROR
A bolt.dropbox.com 192.168.1.1 CNAME: bolt.v.dropbox.com, A: 162.125.19.131
```
Those queries are from me going to `neopets.com` in my browser, and the `bolt.dropbox.com` query is because I'm running a Dropbox agent and I guess it phones home behind the scenes from time to time because it needs to sync.
### why make another DNS tool?
I made this because I think DNS can seem really mysterious when you don't know a lot about it!
Your browser (and other software on your computer) is making DNS queries all the time, and I think it makes it seem a lot more “real” when you can actually see the queries and responses.
I also wrote this to be used as a debugging tool. I think the question “is this a DNS problem?” is harder to answer than it should be. I get the impression that when trying to check if a problem is caused by DNS, people often use trial and error or guess instead of just looking at the DNS responses that their computer is getting.
### you can see which software is “secretly” using the Internet
One thing I like about this tool is that it gives me a sense for what programs on my computer are using the Internet! For example, I found out that something on my computer is making requests to `ping.manjaro.org` from time to time for some reason, probably to check I'm connected to the internet.
A friend of mine actually discovered using this tool that he had some corporate monitoring software installed on his computer from an old job that he'd forgotten to uninstall, so you might even find something you want to remove.
### tcpdump is confusing if you're not used to it
My first instinct when trying to show people the DNS queries their computer is making was to say “well, use tcpdump”! And `tcpdump` does parse DNS packets!
For example, here's what a DNS query for `incoming.telemetry.mozilla.org.` looks like:
```
11:36:38.973512 wlp3s0 Out IP 192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)
11:36:38.996060 wlp3s0 In IP 192.168.1.1.53 > 192.168.1.181.42281: 56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)
```
This is definitely possible to learn to read; for example, let's break down the query:
`192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)`
* `A?` means it's a DNS **query** of type A
* `incoming.telemetry.mozilla.org.` is the name being queried
* `56271` is the DNS query's ID
* `192.168.1.181.42281` is the source IP/port
* `192.168.1.1.53` is the destination IP/port
* `(48)` is the length of the DNS packet
And the response breaks down like this:
`56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)`
* `3/0/0` is the number of records in the response: 3 answers, 0 authority, 0 additional. I think tcpdump will only ever print out the answer responses though.
* `CNAME telemetry-incoming.r53-2.services.mozilla.com`, `CNAME prod.data-ingestion.prod.dataops.mozgcp.net.`, and `A 35.244.247.133` are the three answers
* `56271` is the response's ID, which matches up with the query's ID. That's how you can tell it's a response to the request in the previous line.
I think what makes this format the most difficult to deal with (as a human who just wants to look at some DNS traffic) though is that you have to manually match up the requests and responses, and they're not always on adjacent lines. That's the kind of thing computers are good at!
So I decided to write a little program (`dnspeep`) which would do this matching up and also remove some of the information I felt was extraneous.
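The core of that matching is tiny: hold queries in a dict keyed by DNS transaction ID, and pop them when the response arrives. A toy model in Python (not dnspeep's actual Rust code) looks like:

```python
def match_dns(packets):
    """Pair queries with responses by DNS transaction ID.

    `packets` is a sequence of (txid, kind, summary) tuples in capture
    order, where kind is "query" or "response" -- a toy stand-in for
    parsed DNS packets.
    """
    pending = {}  # txid -> query summary, waiting for its response
    pairs = []
    for txid, kind, summary in packets:
        if kind == "query":
            pending[txid] = summary
        elif txid in pending:
            pairs.append((pending.pop(txid), summary))
    return pairs

print(match_dns([
    (56271, "query", "A? incoming.telemetry.mozilla.org."),
    (12345, "query", "A? firefox.com."),
    (56271, "response", "A 35.244.247.133"),
]))
# [('A? incoming.telemetry.mozilla.org.', 'A 35.244.247.133')]
```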
### problems I ran into while writing it
When writing this I ran into a few problems.
* I had to patch the `pcap` crate to make it work properly with Tokio on Mac OS ([this change][3]). This was one of those bugs which took many hours to figure out and 1 line to fix :)
* Different Linux distros seem to have different versions of `libpcap.so`, so I couldn't easily distribute a binary that dynamically links libpcap (you can see other people having the same problem [here][4]). So I decided to statically compile libpcap into the tool on Linux. I still don't really know how to do this properly in Rust, but I got it to work by copying the `libpcap.a` file into `target/release/deps` and then just running `cargo build`.
* The `dns_parser` crate I'm using doesn't support all DNS query types, only the most common ones. I probably need to switch to a different crate for parsing DNS packets, but I haven't found the right one yet.
  * Because the `pcap` interface just gives you raw bytes (including the Ethernet frame), I needed to [write code to figure out how many bytes to strip from the beginning to get the packet's IP header][5]. I'm pretty sure there are some cases I'm still missing there.
I also had a hard time naming it because there are SO MANY DNS tools already (dnsspy! dnssnoop! dnssniff! dnswatch!). I basically just looked at every synonym for “spy” and then picked one that seemed fun and did not already have a DNS tool attached to it.
One thing this program doesn't do is tell you which process made the DNS query, there's a tool called [dnssnoop][6] I found that does that. It uses eBPF and it looks cool but I haven't tried it.
### there are probably still lots of bugs
I've only tested this briefly on Linux and Mac and I already know of at least one bug (caused by not supporting enough DNS query types), so please report problems you run into!
The bugs aren't dangerous though because the libpcap interface is read-only: the worst thing that can happen is that it'll get some input it doesn't understand and print out an error or crash.
### writing small educational tools is fun
I've been having a lot of fun writing small educational DNS tools recently.
So far I've made:
* <https://dns-lookup.jvns.ca> (a simple way to make DNS queries)
* <https://dns-lookup.jvns.ca/trace.html> (shows you exactly what happens behind the scenes when you make a DNS query)
* this tool (`dnspeep`)
Historically I've mostly tried to explain existing tools (like `dig` or `tcpdump`) instead of writing my own tools, but often I find that the output of those tools is confusing, so I'm interested in making more friendly ways to see the same information so that everyone can understand what DNS queries their computer is making instead of just tcpdump wizards :).
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2021/03/31/dnspeep-tool/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://github.com/jvns/dnspeep
[2]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs
[3]: https://github.com/ebfull/pcap/pull/168
[4]: https://github.com/google/gopacket/issues/734
[5]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs#L136
[6]: https://github.com/lilydjwg/dnssnoop

View File

@ -2,7 +2,7 @@
[#]: via: (https://fedoramagazine.org/scheduling-tasks-with-cron/)
[#]: author: (Darshna Das https://fedoramagazine.org/author/climoiselle/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,209 +0,0 @@
[#]: subject: (Play a fun math game with Linux commands)
[#]: via: (https://opensource.com/article/21/4/math-game-linux-commands)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Play a fun math game with Linux commands
======
Play the numbers game from the popular British game show "Countdown" at
home.
![Math formulas in green writing][1]
Like many people, I've been exploring lots of new TV shows during the pandemic. I recently discovered a British game show called _[Countdown][2]_, where contestants play two types of games: a _words_ game, where they try to make the longest word out of a jumble of letters, and a _numbers_ game, where they calculate a target number from a random selection of numbers. Because I enjoy mathematics, I've found myself drawn to the numbers game.
The numbers game can be a fun addition to your next family game night, so I wanted to share my own variation of it. You start with a collection of random numbers, divided into "small" numbers from 1 to 10 and "large" numbers that are 15, 20, 25, and so on until 100. You pick any combination of six numbers from both large and small numbers.
Next, you generate a random "target" number between 200 and 999. Then use simple arithmetic operations with your six numbers to try to calculate the target number using each "small" and "large" number no more than once. You get the highest number of points if you calculate the target number exactly and fewer points if you can get within 10 of the target number.
For example, if your random numbers were 75, 100, 2, 3, 4, and 1, and your target number was 505, you might say _2+3=5_, _5×100=500_, _4+1=5_, and _5+500=505_. Or more directly: (**2**+**3**)×**100** \+ **4** \+ **1** = **505**.
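If you want to double-check a solution, Bash can evaluate the whole expression in one step (arithmetic expansion is covered in more detail later in this article):

```shell
echo $(( (2 + 3) * 100 + 4 + 1 ))   # prints 505
```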
### Randomize lists on the command line
I've found the best way to play this game at home is to pull four "small" numbers from a pool of 1 to 10 and two "large" numbers from multiples of five from 15 to 100. You can use the Linux command line to create these random numbers for you.
Let's start with the "small" numbers. I want these to be in the range of 1 to 10. You can generate a sequence of numbers using the Linux `seq` command. You can run `seq` a few different ways, but the simplest form is to provide the starting and ending numbers for the sequence. To generate a list from 1 to 10, you might run this command:
```
$ seq 1 10
1
2
3
4
5
6
7
8
9
10
```
To randomize this list, you can use the Linux `shuf` ("shuffle") command. `shuf` will randomize the order of whatever you give it, usually a file. For example, if you send the output of the `seq` command to the `shuf` command, you will receive a randomized list of numbers between 1 and 10:
```
$ seq 1 10 | shuf
3
6
8
10
7
4
5
2
1
9
```
To select just four random numbers from a list of 1 to 10, you can send the output to the `head` command, which prints out the first few lines of its input. Use the `-4` option to specify that `head` should print only the first four lines:
```
$ seq 1 10 | shuf | head -4
6
1
8
4
```
Note that this list is different from the earlier example because `shuf` will generate a random order every time.
Now you can take the next step to generate the random list of "large" numbers. The first step is to generate a list of possible numbers starting at 15, incrementing by five, until you reach 100. You can generate this list with the Linux `seq` command. To increment each number by five, insert another option for the `seq` command to indicate the _step_:
```
$ seq 15 5 100
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
```
And just as before, you can randomize this list and select two of the "large" numbers:
```
$ seq 15 5 100 | shuf | head -2
75
40
```
### Generate a random number with Bash
I suppose you could use a similar method to select the game's target number from the range 200 to 999. But the simplest solution to generate a single random value is to use the `RANDOM` variable directly in Bash. When you reference this built-in variable, Bash generates a large random number. To put this in the range of 200 to 999, you need to put the random number into the range 0 to 799 first, then add 200.
To put a random number into a specific range starting at 0, you can use the **modulo** arithmetic operation. Modulo calculates the _remainder_ after dividing two numbers. If I started with 801 and divided by 800, the result is 1 _with a remainder of_ 1 (the modulo is 1). Dividing 800 by 800 gives 1 _with a remainder of_ 0 (the modulo is 0). And dividing 799 by 800 results in 0 _with a remainder of_ 799 (the modulo is 799).
Bash supports arithmetic expansion with the `$(( ))` construct. Between the double parentheses, Bash will perform arithmetic operations on the values you provide. To calculate the modulo of 801 divided by 800, then add 200, you would type:
```
$ echo $(( 801 % 800 + 200 ))
201
```
With that operation, you can calculate a random target number between 200 and 999:
```
$ echo $(( RANDOM % 800 + 200 ))
673
```
You might wonder why I used `RANDOM` instead of `$RANDOM` in my Bash statement. In arithmetic expansion, Bash automatically expands any variables within the double parentheses, so you don't need the `$` prefix on `RANDOM` to reference the variable's value.
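A quick demonstration with an ordinary variable shows the same behavior:

```shell
x=7
echo $(( x * 3 ))    # Bash expands x itself inside (( )); prints 21
echo $(( $x * 3 ))   # the explicit $ also works; prints 21
```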
### Playing the numbers game
Let's put all that together to play the numbers game. Generate two random "large" numbers, four random "small" values, and the target value:
```
$ seq 15 5 100 | shuf | head -2
75
100
$ seq 1 10 | shuf | head -4
4
3
10
2
$ echo $(( RANDOM % 800 + 200 ))
868
```
My numbers are **75**, **100**, **4**, **3**, **10**, and **2**, and my target number is **868**.
I can get close to the target number if I do these arithmetic operations using each of the "small" and "large" numbers no more than once:
```
10×75 = 750
750+100 = 850
and:
4×3 = 12
850+12 = 862
862+2 = 864
```
That's only four away—not bad! But I found this way to calculate the exact number using each random number no more than once:
```
4×2 = 8
8×100 = 800
and:
75-10+3 = 68
800+68 = 868
```
Or I could perform _these_ calculations to get the target number exactly. This uses only five of the six random numbers:
```
4×3 = 12
75+12 = 87
and:
87×10 = 870
870-2 = 868
```
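Both exact solutions are easy to verify with the same arithmetic expansion used to generate the target number:

```shell
echo $(( 4 * 2 * 100 + 75 - 10 + 3 ))   # first solution: prints 868
echo $(( (75 + 4 * 3) * 10 - 2 ))       # second solution: prints 868
```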
Give the _Countdown_ numbers game a try, and let us know how well you did in the comments.
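If you play often, the three generating commands can be collected into one small Bash script. This is just a convenience sketch; adjust the pools or counts to match your house rules:

```shell
#!/usr/bin/env bash
# Deal one round of the numbers game: two "large" numbers,
# four "small" numbers, and a target between 200 and 999.
large=$(seq 15 5 100 | shuf | head -2 | tr '\n' ' ')
small=$(seq 1 10 | shuf | head -4 | tr '\n' ' ')
target=$(( RANDOM % 800 + 200 ))
printf 'Large numbers: %s\nSmall numbers: %s\nTarget: %s\n' "$large" "$small" "$target"
```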
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/math-game-linux-commands
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_math_formulas.png?itok=B59mYTG3 (Math formulas in green writing)
[2]: https://en.wikipedia.org/wiki/Countdown_%28game_show%29

View File

@ -1,11 +1,11 @@
[#]: subject: (A beginner's guide to network management)
[#]: via: (https://opensource.com/article/21/4/network-management)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: "A beginner's guide to network management"
[#]: via: "https://opensource.com/article/21/4/network-management"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "ddl-hust"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
A beginner's guide to network management
======
@ -206,13 +206,13 @@ via: https://opensource.com/article/21/4/network-management
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk "Tips and gears turning"
[2]: https://tools.ietf.org/html/rfc793
[3]: https://tools.ietf.org/html/rfc791
[4]: https://opensource.com/sites/default/files/uploads/crossover.jpg (Crossover cable)
[4]: https://opensource.com/sites/default/files/uploads/crossover.jpg "Crossover cable"
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/article/17/4/build-your-own-name-server
[7]: http://redhat.com
[8]: https://www.redhat.com/sysadmin/secure-linux-network-firewall-cmd
[9]: https://opensource.com/article/20/1/open-source-networking
[10]: https://opensource.com/downloads/cheat-sheet-networking
[10]: https://opensource.com/downloads/cheat-sheet-networking

View File

@ -1,157 +0,0 @@
[#]: subject: (Application observability with Apache Kafka and SigNoz)
[#]: via: (https://opensource.com/article/21/4/observability-apache-kafka-signoz)
[#]: author: (Nitish Tiwari https://opensource.com/users/tiwarinitish86)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Application observability with Apache Kafka and SigNoz
======
SigNoz helps developers start meeting their observability goals quickly
and with minimum effort.
![Ship captain sailing the Kubernetes seas][1]
SigNoz is an open source application observability platform. Built in React and Go, SigNoz is written from the ground up to allow developers to get started with their observability goals as soon as possible and with minimum effort.
This article looks at the software in detail, including the architecture, Kubernetes-based deployment, and some common SigNoz uses.
### SigNoz architecture
SigNoz ties several components together to create a scalable, loosely coupled system that is easy to get started with. Some of the most important components are:
* OpenTelemetry Collector
* Apache Kafka
* Apache Druid
[OpenTelemetry Collector][2] is the trace or metrics data collection engine. This enables SigNoz to ingest data in industry-standard formats, including Jaeger, Zipkin, and OpenConsensus. Then the collected data is forwarded to Apache Kafka.
SigNoz uses Kafka and stream processors for real-time ingestion of high volumes of observability data. This data is then passed on to Apache Druid, which excels at storing such data for short- and long-term SQL analysis.
Once the data is flattened and stored in Druid, SigNoz's query service can query and pass the data to the SigNoz React frontend. The front end then creates nice graphs for users to visualize the observability data.
![SigNoz architecture][3]
(Nitish Tiwari, [CC BY-SA 4.0][4])
### Install SigNoz
SigNoz's components include Apache Kafka and Druid. These components are loosely coupled and work in tandem to ensure a seamless experience for the end user. Given all the components, it is best to run SigNoz as a combination of microservices on Kubernetes or Docker Compose (for local testing).
This example uses a Kubernetes Helm chart-based deployment to install SigNoz on Kubernetes. As a prerequisite, you'll need a Kubernetes cluster. If you don't have a Kubernetes cluster available, you can use tools like [MiniKube][5] or [Kind][6] to create a test cluster on your local machine. Note that the machine should have at least 4GB available for this to work.
Once you have the cluster available and kubectl configured to communicate with the cluster, run:
```
$ git clone https://github.com/SigNoz/signoz.git && cd signoz
$ helm dependency update deploy/kubernetes/platform
$ kubectl create ns platform
$ helm -n platform install signoz deploy/kubernetes/platform
$ kubectl -n platform apply -Rf deploy/kubernetes/jobs
$ kubectl -n platform apply -f deploy/kubernetes/otel-collector
```
This installs SigNoz and related containers on the cluster. To access the user interface (UI), run the `kubectl port-forward` command; for example:
```
$ kubectl -n platform port-forward svc/signoz-frontend 3000:3000
```
You should now be able to access your SigNoz dashboard using a local browser on the address `http://localhost:3000`.
Now that your observability platform is up, you need an application that generates observability data to visualize and trace. For this example, you can use [HotROD][7], a sample application developed by the Jaeger team.
To install it, run:
```
$ kubectl create ns sample-application
$ kubectl -n sample-application apply -Rf sample-apps/hotrod/
```
### Explore the features
You should now have a sample application with proper instrumentation up and running in the demo setup. Look at the SigNoz dashboard for metrics and trace data. As you land on the dashboard's home, you will see a list of all the configured applications that are sending instrumentation data to SigNoz.
![SigNoz dashboard][8]
(Nitish Tiwari, [CC BY-SA 4.0][4])
#### Metrics
When you click on a specific application, you will land on the application's homepage. The Metrics page displays the last 15 minutes' worth (this number is configurable) of information, like application latency, average throughput, error rate, and the top endpoints the application is accessing. This gives you a bird's-eye view of the application's status. Any spikes in errors, latency, or load are immediately visible.
![Metrics in SigNoz][9]
(Nitish Tiwari, [CC BY-SA 4.0][4])
#### Tracing
The Traces page lists every request in chronological order with high-level details. As soon as you identify a single request of interest (e.g., something taking longer than expected to complete), you can click the trace and look at individual spans for every action that happened inside that request. The drill-down mode offers thorough inspection for each request.
![Tracing in SigNoz][10]
(Nitish Tiwari, [CC BY-SA 4.0][4])
![Tracing in SigNoz][11]
(Nitish Tiwari, [CC BY-SA 4.0][4])
#### Usage Explorer
Most of the metrics and tracing data are very useful, but only for a certain period. As time passes, the data ceases to be useful in most cases. This means it is important to plan a proper retention duration for data; otherwise, you will pay more for the storage. The Usage Explorer provides an overview of ingested data per hour, day, and week.
![SigNoz Usage Explorer][12]
(Nitish Tiwari, [CC BY-SA 4.0][4])
### Add instrumentation
So far, you've been looking at metrics and traces from the sample HotROD application. Ideally, you'll want to instrument your application so that it sends observability data to SigNoz. Do this by following the [Instrumentation Overview][13] on SigNoz's website.
SigNoz supports a vendor-agnostic instrumentation library, OpenTelemetry, as the primary way to configure instrumentation. OpenTelemetry offers instrumentation libraries for various languages with support for both automatic and manual instrumentation.
### Learn more
SigNoz helps developers get started quickly with metrics and tracing applications. To learn more, you can consult the [documentation][14], join the [community][15], and access the source code on [GitHub][16].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/observability-apache-kafka-signoz
作者:[Nitish Tiwari][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tiwarinitish86
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://github.com/open-telemetry/opentelemetry-collector
[3]: https://opensource.com/sites/default/files/uploads/signoz_architecture.png (SigNoz architecture)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://minikube.sigs.k8s.io/docs/start/
[6]: https://kind.sigs.k8s.io/docs/user/quick-start/
[7]: https://github.com/jaegertracing/jaeger/tree/master/examples/hotrod
[8]: https://opensource.com/sites/default/files/uploads/signoz_dashboard.png (SigNoz dashboard)
[9]: https://opensource.com/sites/default/files/uploads/signoz_applicationmetrics.png (Metrics in SigNoz)
[10]: https://opensource.com/sites/default/files/uploads/signoz_tracing.png (Tracing in SigNoz)
[11]: https://opensource.com/sites/default/files/uploads/signoz_tracing2.png (Tracing in SigNoz)
[12]: https://opensource.com/sites/default/files/uploads/signoz_usageexplorer.png (SigNoz Usage Explorer)
[13]: https://signoz.io/docs/instrumentation/overview/
[14]: https://signoz.io/docs/
[15]: https://github.com/SigNoz/signoz#community
[16]: https://github.com/SigNoz/signoz

View File

@ -1,352 +0,0 @@
[#]: subject: (Build smaller containers)
[#]: via: (https://fedoramagazine.org/build-smaller-containers/)
[#]: author: (Daniel Schier https://fedoramagazine.org/author/danielwtd/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Build smaller containers
======
![build smaller containers][1]
Otter image excerpted from photo by [Dele Oluwayomi][2] on [Unsplash][3]
Working with containers is a daily task for many users and developers. Container developers often need to (re)build container images frequently. If you develop containers, have you ever thought about reducing the image size? Smaller images have several benefits. They require less bandwidth to download and they save costs when run in cloud environments. Also, using smaller container images on Fedora [CoreOS][4], [IoT][5] and [Silverblue][6] improves overall system performance because those operating systems rely heavily on container workflows. This article will provide a few tips for reducing the size of container images.
### The tools
The host operating system in the following examples is Fedora Linux 33. The examples use [Podman][7] 3.1.0 and [Buildah][8] 1.2.0. Podman and Buildah are pre-installed in most Fedora Linux variants. If you don't have Podman or Buildah installed, run the following command to install them.
```
$ sudo dnf install -y podman buildah
```
### The task
Begin with a basic example. Build a web container meeting the following requirements.
* The container must be based on Fedora Linux
* Use the Apache httpd web server
* Include a custom website
* The container should be relatively small
The following steps will also work on more complex images.
### The setup
First, create a project directory. This directory will include your website and container file.
```
$ mkdir smallerContainer
$ cd smallerContainer
$ mkdir files
$ touch files/index.html
```
Make a simple landing page. For this demonstration, you may copy the below HTML into the _index.html_ file.
```
<!doctype html>
<html lang="en">
<head>
<title>Container Page</title>
</head>
<body>
<header>
<h1>Container Page</h1>
</header>
<main>
<h2>Fedora</h2>
<ul>
<li><a href="https://getfedora.org">Fedora Project</a></li>
<li><a href="https://docs.fedoraproject.org/">Fedora Documentation</a></li>
<li><a href="https://fedoramagazine.org">Fedora Magazine</a></li>
<li><a href="https://communityblog.fedoraproject.org/">Fedora Community Blog</a></li>
</ul>
<h2>Podman</h2>
<ul>
<li><a href="https://podman.io">Podman</a></li>
<li><a href="https://docs.podman.io/">Podman Documentation</a></li>
<li><a href="https://github.com/containers/podman">Podman Code</a></li>
<li><a href="https://podman.io/blogs/">Podman Blog</a></li>
</ul>
<h2>Buildah</h2>
<ul>
<li><a href="https://buildah.io">Buildah</a></li>
<li><a href="https://github.com/containers/buildah">Buildah Code</a></li>
<li><a href="https://buildah.io/blogs/">Buildah Blog</a></li>
</ul>
<h2>Skopeo</h2>
<ul>
<li><a href="https://github.com/containers/skopeo">skopeo Code</a></li>
</ul>
<h2>CRI-O</h2>
<ul>
<li><a href="https://cri-o.io/">CRI-O</a></li>
<li><a href="https://github.com/cri-o/cri-o">CRI-O Code</a></li>
<li><a href="https://medium.com/cri-o">CRI-O Blog</a></li>
</ul>
</main>
</body>
</html>
```
Optionally, test the above _index.html_ file in your browser.
```
$ firefox files/index.html
```
Finally, create a container file. The file can be named either _Dockerfile_ or _Containerfile_.
```
$ touch Containerfile
```
You should now have a project directory with a file system layout similar to what is shown in the below diagram.
```
smallerContainer/
|- files/
| |- index.html
|
|- Containerfile
```
### The build
Now make the image. Each of the below stages will add a layer of improvements to help reduce the size of the image. You will end up with a series of images, but only one _Containerfile_.
#### Stage 0: a baseline container image
Your new image will be very simple and it will only include the mandatory steps. Place the following text in _Containerfile_.
```
# Use Fedora 33 as base image
FROM registry.fedoraproject.org/fedora:33
# Install httpd
RUN dnf install -y httpd
# Copy the website
COPY files/* /var/www/html/
# Expose Port 80/tcp
EXPOSE 80
# Start httpd
CMD ["httpd", "-DFOREGROUND"]
```
In the above file there are some comments to indicate what is being done. More verbosely, the steps are:
1. Create a build container with the base FROM registry.fedoraproject.org/fedora:33
2. RUN the command: _dnf install -y httpd_
3. COPY files relative to the _Containerfile_ to the container
4. Set EXPOSE 80 to indicate which port is auto-publishable
5. Set a CMD to indicate what should be run if one creates a container from this image
Run the below command to create a new image from the project directory.
```
$ podman image build -f Containerfile -t localhost/web-base
```
Use the following command to examine your image's attributes. Note in particular the size of your image (467 MB).
```
$ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/web-base latest ac8c5ed73bb5 5 minutes ago 467 MB
registry.fedoraproject.org/fedora 33 9f2a56037643 3 months ago 182 MB
```
The example image shown above is currently occupying 467 MB of storage. The remaining stages should reduce the size of the image significantly. But first, verify that the image works as intended.
Enter the following command to start the container.
```
$ podman container run -d --name web-base -P localhost/web-base
```
Enter the following command to list your containers.
```
$ podman container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d24063487f9f localhost/web-base httpd -DFOREGROUN... 2 seconds ago Up 3 seconds ago 0.0.0.0:46191->80/tcp web-base
```
The container shown above is running and it is listening on port _46191_. Going to _localhost:46191_ from a web browser running on the host operating system should render your web page.
```
$ firefox localhost:46191
```
#### Stage 1: clear caches and remove other leftovers from the container
The first step one should always perform to optimize the size of their container image is “clean up”. This will ensure that leftovers from installations and packaging are removed. What exactly this process entails will vary depending on your container. For the above example you can just edit _Containerfile_ to include the following lines.
```
[...]
# Install httpd
RUN dnf install -y httpd && \
dnf clean all -y
[...]
```
Build the modified _Containerfile_ to reduce the size of the image significantly (237 MB in this example).
```
$ podman image build -f Containerfile -t localhost/web-clean
$ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/web-clean latest f0f62aece028 6 seconds ago 237 MB
```
#### Stage 2: remove documentation and unneeded package dependencies
Many packages will pull in recommendations, weak dependencies and documentation when they are installed. These are often not needed in a container and can be excluded. The _dnf_ command has options to indicate that it should not include weak dependencies or documentation.
Edit _Containerfile_ again and add the options to exclude documentation and weak dependencies on the _dnf install_ line:
```
[...]
# Install httpd
RUN dnf install -y httpd --nodocs --setopt install_weak_deps=False && \
dnf clean all -y
[...]
```
Build _Containerfile_ with the above modifications to achieve an even smaller image (231 MB).
```
$ podman image build -f Containerfile -t localhost/web-docs
$ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/web-docs latest 8a76820cec2f 8 seconds ago 231 MB
```
#### Stage 3: use a smaller container base image
The prior stages, in combination, have reduced the size of the example image by half. But there is still one more thing that can be done to reduce the size of the image. The base image _registry.fedoraproject.org/fedora:33_ is meant for general purpose use. It provides a collection of packages that many people expect to be pre-installed in their Fedora Linux containers. The collection of packages provided in the general purpose Fedora Linux base image is often more extensive than needed, however. The Fedora Project also provides a _fedora-minimal_ base image for those who wish to start with only the essential packages and then add only what they need to achieve a smaller total image size.
Use _podman image search_ to search for the _fedora-minimal_ image as shown below.
```
$ podman image search fedora-minimal
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
fedoraproject.org registry.fedoraproject.org/fedora-minimal 0
```
The _fedora-minimal_ base image excludes [DNF][9] in favor of the smaller [microDNF][10] which does not require Python. When _registry.fedoraproject.org/fedora:33_ is replaced with _registry.fedoraproject.org/fedora-minimal:33_, _dnf_ needs to be replaced with _microdnf_.
```
# Use Fedora minimal 33 as base image
FROM registry.fedoraproject.org/fedora-minimal:33
# Install httpd
RUN microdnf install -y httpd --nodocs --setopt install_weak_deps=0 && \
microdnf clean all -y
[...]
```
Rebuild the image to see how much storage space has been recovered by using _fedora-minimal_ (169 MB).
```
$ podman image build -f Containerfile -t localhost/web-docs
$ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/web-minimal latest e1603bbb1097 7 minutes ago 169 MB
```
The initial image size was **467 MB**. Combining the methods detailed in each of the above stages has resulted in a final image size of **169 MB**. The final _total_ image size is smaller than the original _base_ image size of 182 MB!
### Building containers from scratch
The previous section used a container file and Podman to build a new image. There is one last thing to demonstrate — building a container from scratch using Buildah. Podman uses the same libraries to build containers as Buildah. But Buildah is considered a pure build tool. Podman is designed to work as a replacement for Docker.
When building from scratch using Buildah, the container is empty — there is _nothing_ in it. Everything needed must be installed or copied from outside the container. Fortunately, this is quite easy with Buildah. Below, a small Bash script is provided which will build the image from scratch. Instead of running the script, you can run each of the commands from the script individually in a terminal to better understand what is being done.
```
#!/usr/bin/env bash
set -o errexit
# Create a container
CONTAINER=$(buildah from scratch)
# Mount the container filesystem
MOUNTPOINT=$(buildah mount $CONTAINER)
# Install a basic filesystem and minimal set of packages, and httpd
dnf install -y --installroot $MOUNTPOINT --releasever 33 glibc-minimal-langpack httpd --nodocs --setopt install_weak_deps=False
dnf clean all -y --installroot $MOUNTPOINT --releasever 33
# Cleanup
buildah unmount $CONTAINER
# Copy the website
buildah copy $CONTAINER 'files/*' '/var/www/html/'
# Expose Port 80/tcp
buildah config --port 80 $CONTAINER
# Start httpd
buildah config --cmd "httpd -DFOREGROUND" $CONTAINER
# Save the container to an image
buildah commit --squash $CONTAINER web-scratch
```
Alternatively, the image can be built by passing the above script to Buildah. Notice that root privileges are not required.
```
$ buildah unshare bash web-scratch.sh
$ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/web-scratch latest acca45fc9118 9 seconds ago 155 MB
```
The final image is only **155 MB**! Also, the [attack surface][11] has been reduced. Not even DNF (or microDNF) is installed in the final image.
### Conclusion
Building smaller container images has many advantages. Reducing the needed bandwidth, the disk footprint and attack surface will lead to better images overall. It is easy to reduce the footprint with just a few small changes. Many of the changes can be done without altering the functionality of the resulting image.
It is also possible to build very small images from scratch which will only hold the needed binaries and configuration files.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/build-smaller-containers/
作者:[Daniel Schier][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/danielwtd/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/podman-smaller-1-816x345.jpg
[2]: https://unsplash.com/@errbodysaycheese?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/otter?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/getting-started-with-fedora-coreos/
[5]: https://getfedora.org/en/iot/
[6]: https://fedoramagazine.org/what-is-silverblue/
[7]: https://podman.io/
[8]: https://buildah.io/
[9]: https://github.com/rpm-software-management/dnf
[10]: https://github.com/rpm-software-management/microdnf
[11]: https://en.wikipedia.org/wiki/Attack_surface

[#]: subject: (Restore an old MacBook with Linux)
[#]: via: (https://opensource.com/article/21/4/restore-macbook-linux)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Restore an old MacBook with Linux
======
Don't throw your old, slow MacBook into the recycling bin; extend its
life with Linux Mint.
![Writing Hand][1]
Last year, I wrote about how you can give [new life to an old MacBook][2] with Linux, specifically Elementary OS in that instance. Recently, I returned to that circa 2015 MacBook Air and discovered I had lost my login password. I downloaded the latest Elementary OS 5.1.7 Hera release and could not get the live boot to recognize my Broadcom 4360 wireless chipset.
Lately, I have been using [Linux Mint][3] to refurbish older laptops, and I thought I would give it a try on this MacBook Air. I downloaded the Linux Mint 20.1 ISO and created a USB boot drive using the [Popsicle][4] software on my Linux desktop computer.
![Popsicle ISO burner][5]
(Don Watkins, [CC BY-SA 4.0][6])
Next, I connected the Thunderbolt Ethernet adapter to the MacBook and inserted the USB boot drive. I powered on the system and pressed the Option key on the MacBook to instruct it to boot from the USB drive.
Linux Mint started up nicely in live-boot mode, but the operating system didn't recognize a wireless connection.
### Where's my wireless?
This is because Broadcom, the company that makes WiFi cards for Apple devices, doesn't release open source drivers. This is in contrast to Intel, Atheros, and many other chip manufacturers—but it's the chipset used by Apple, so it's a common problem on MacBooks.
I had a hard-wired Ethernet connection to the internet through my Thunderbolt adapter, so I _was_ online. From prior research, I knew that to get the wireless adapter working on this MacBook, I would need to issue three separate commands in the Bash terminal. However, during the installation process, I learned that Linux Mint has a nice built-in Driver Manager that provides an easy graphical user interface to assist with installing the software.
![Linux Mint Driver Manager][7]
(Don Watkins, [CC BY-SA 4.0][6])
Once that operation completed, I rebooted and brought up my newly refurbished MacBook Air with Linux Mint 20.1 installed. The Broadcom wireless adapter was working properly, allowing me to connect to my wireless network easily.
### Installing wireless the manual way
You can accomplish the same task from a terminal. First, purge any vestige of the Broadcom kernel source:
```
$ sudo apt-get purge bcmwl-kernel-source
```
Then add a firmware installer:
```
$ sudo apt install firmware-b43-installer
```
Finally, install the new firmware for the system:
```
$ sudo apt install linux-firmware
```
### Using Linux as your Mac
I installed [Phoronix Test Suite][8] to get a good snapshot of the MacBook Air.
![MacBook Phoronix Test Suite output][9]
(Don Watkins, [CC BY-SA 4.0][6])
The system works very well. A recent update to kernel 5.4.0-64-generic revealed that the wireless connection survived, and I have an 866Mbps connection to my home network. The Broadcom FaceTime camera does not work, but everything else works fine.
I really like the [Linux Mint Cinnamon 20.1][10] desktop on this MacBook.
![Linux Mint Cinnamon][11]
(Don Watkins, [CC BY-SA 4.0][6])
I recommend giving Linux Mint a try if you have an older MacBook that has been rendered slow and inoperable due to macOS updates. I am very impressed with the distribution, and especially how it works on my MacBook Air. It has definitely extended the life expectancy of this powerful little laptop.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/restore-macbook-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/write-hand_0.jpg?itok=Uw5RJD03 (Writing Hand)
[2]: https://opensource.com/article/20/2/macbook-linux-elementary
[3]: https://linuxmint.com/
[4]: https://github.com/pop-os/popsicle
[5]: https://opensource.com/sites/default/files/uploads/popsicle.png (Popsicle ISO burner)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/mint_drivermanager.png (Linux Mint Driver Manager)
[8]: https://www.phoronix-test-suite.com/
[9]: https://opensource.com/sites/default/files/uploads/macbook_specs.png (MacBook Phoronix Test Suite output)
[10]: https://www.linuxmint.com/edition.php?id=284
[11]: https://opensource.com/sites/default/files/uploads/mintcinnamon.png (Linux Mint Cinnamon)

[#]: subject: (Making computers more accessible and sustainable with Linux)
[#]: via: (https://opensource.com/article/21/4/linux-free-geek)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Making computers more accessible and sustainable with Linux
======
Free Geek is a nonprofit organization that helps decrease the digital
divide by providing Linux computers to people and groups in need.
![Working from home at a laptop][1]
There are many reasons to choose Linux for your desktop operating system. In [_Why everyone should choose Linux_][2], Opensource.com's Seth Kenlon highlighted many of the best reasons to select Linux and provided lots of ways for people to get started with the operating system.
This also got me thinking about how I usually introduce folks to Linux. The pandemic has increased the need for people to go online for shopping, doing remote education, and connecting with family and friends [over video conferencing][3].
I work with a lot of retirees who have fixed incomes and are not particularly tech-savvy. For most of these folks, buying a computer is a major investment fraught with concern. Some of my friends and clients are uncomfortable going to a retail store during a pandemic, and they're completely unfamiliar with what to look for in a computer, whether it's a desktop or laptop, even in non-pandemic times. They come to me with questions about where to buy one and what to look for.
I'm always eager to see them get a Linux computer. Many of them cannot afford the Linux units sold by name-brand vendors. Until recently, I've been purchasing refurbished units for them and refitting them with Linux.
But that all changed when I discovered [Free Geek][4], a nonprofit organization based in Portland, Ore., with the mission "to sustainably reuse technology, enable digital access, and provide education to create a community that empowers people to realize their potential."
Free Geek has an eBay store where I have purchased several refurbished laptops at affordable prices. Their computers come with [Linux Mint][5] installed. The fact that a computer comes ready-to-use makes it easy to introduce [new users to Linux][6] and help them quickly experience the operating system's power.
### Keeping computers in service and out of landfills
Oso Martin launched Free Geek on Earth Day 2000. The organization provides classes and work programs to its volunteers, who are trained to refurbish and rebuild donated computers. Volunteers also receive a donated computer after 24 hours of service.
The computers are sold in Free Geek's brick-and-mortar store in Portland and [online][7]. The organization also provides computers to people and entities in need through its programs [Plug Into Portland][8], [Gift a Geekbox][9], and [organizational][10] and [community grants][11].
The organization says it has "diverted over 2 million items from landfills, granted over 75,000 technology devices to nonprofits, schools, community change organizations, and individuals, and plugged over 5,000 classroom hours from Free Geek learners."
### Get involved
Since its inception, Free Geek has grown from a staff of three to almost 50 and has been recognized around the world. It is a member of the City of Portland's [Digital Inclusion Network][12].
You can connect with Free Geek on [Twitter][13], [Facebook][14], [LinkedIn][15], [YouTube][16], and [Instagram][17]. You can also subscribe to its [newsletter][18]. Purchasing items from Free Geek's [shop][19] directly supports its work and reduces the digital divide.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/linux-free-geek
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://opensource.com/article/21/2/try-linux
[3]: https://opensource.com/article/20/8/linux-laptop-video-conferencing
[4]: https://www.freegeek.org/
[5]: https://opensource.com/article/21/4/restore-macbook-linux
[6]: https://opensource.com/article/18/12/help-non-techies
[7]: https://www.ebay.com/str/freegeekbasicsstore
[8]: https://www.freegeek.org/our-programs/plug-portland
[9]: https://www.freegeek.org/our-programs/gift-geekbox
[10]: https://www.freegeek.org/our-programs-grants/organizational-hardware-grants
[11]: https://www.freegeek.org/our-programs-grants/community-hardware-grants
[12]: https://www.portlandoregon.gov/oct/73860
[13]: https://twitter.com/freegeekpdx
[14]: https://www.facebook.com/freegeekmothership
[15]: https://www.linkedin.com/company/free-geek/
[16]: https://www.youtube.com/user/FreeGeekMothership
[17]: https://www.instagram.com/freegeekmothership/
[18]: https://app.e2ma.net/app2/audience/signup/1766417/1738557/?v=a
[19]: https://www.freegeek.org/shop

[#]: subject: (Play retro video games on Linux with this open source project)
[#]: via: (https://opensource.com/article/21/4/scummvm-retro-gaming)
[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Play retro video games on Linux with this open source project
======
ScummVM is one of the most straightforward ways to play old video games
on modern hardware.
![Gaming artifacts with joystick, GameBoy, paddle][1]
Playing adventure games has always been a big part of my experience with computers. From the earliest text-based adventure games to 2D pixel art, full-motion video, and 3D games, the adventure game genre has provided me with a lot of fond memories.
Sometimes I want to revisit those old games, but many were released before Linux was even a thing, so how do I go about replaying those games? I use [ScummVM][2], which is honestly one of my favorite open source projects.
### What is ScummVM
![ScummVM][3]
(Joshua Allen Holm, [CC BY-SA 4.0][4])
ScummVM is a program designed to play old adventure games on modern hardware. Originally designed to run games developed using LucasArts' Script Creation Utility for Maniac Mansion (SCUMM), ScummVM now supports many different game engines. It can play almost all of the classic Sierra On-Line and LucasArts adventure games as well as a wide selection of adventure games from other publishers. ScummVM does not support _every_ adventure game (yet), but it can be used to play hundreds of them. ScummVM is available for multiple platforms, including Windows, macOS, Linux, Android, iOS, and several game consoles.
### Why use ScummVM
There are plenty of ways to play old games on modern hardware, but they tend to be more complicated than using ScummVM. [DOSBox][5] can be used to play DOS games, but it requires tweaking to get the settings right so that the game plays at the right speed. Windows games can be played using [WINE][6], but that requires both the game and the game's installer to be compatible with WINE.
Even if a game runs under WINE, some games still do not work well on modern hardware because the hardware is too fast. One example of this is a puzzle in King's Quest VII that involves taking a lit firecracker somewhere. On modern hardware, the firecracker explodes way too quickly, which makes it impossible to get to the right location without the character dying multiple times.
ScummVM eliminates many of the problems present in other methods for playing retro adventure games. If ScummVM supports a game, it is straightforward to configure and play. In most cases, copying the game files from the original game discs to a directory and adding that directory in ScummVM is all that is needed to play the game. For games that came on multiple discs, it might be necessary to rename some files to avoid file name conflicts. The required data files and any renaming steps are documented on the ScummVM Wiki page for [each supported game][7].
One of the wonderful things about ScummVM is how each new release adds support for more games. ScummVM 2.2.0 added support for a dozen interactive fiction interpreters, which means ScummVM can now play hundreds of text-based adventure games. The development branch of ScummVM, which should become version 2.3.0 soon, integrates [ResidualVM][8]'s support for 3D adventure games, so now ScummVM can be used to play Grim Fandango, Myst III: Exile, and The Longest Journey. The development branch also recently added support for games created using [Adventure Game Studio][9], which adds hundreds, possibly thousands, of games to ScummVM's repertoire.
### How to install ScummVM
If you want to install ScummVM from your Linux distribution's repositories, the process is very simple. You just need to run one command. However, your distribution might offer an older release of ScummVM that does not support as many games as the latest release, so do keep that in mind.
**Install ScummVM on Debian/Ubuntu:**
```
sudo apt install scummvm
```
**Install ScummVM on Fedora:**
```
sudo dnf install scummvm
```
#### Install ScummVM using Flatpak or Snap
ScummVM is also available as a Flatpak and as a Snap. If you use one of those options, you can use one of the following commands to install the relevant version, which should always be the latest release of ScummVM:
```
flatpak install flathub org.scummvm.ScummVM
```
or
```
snap install scummvm
```
#### Compile the development branch of ScummVM
If you want to try the latest and greatest features in the not-yet-stable development branch of ScummVM, you can do so by compiling ScummVM from the source code. Do note that the development branch is constantly changing, so things might not always work correctly. If you are still interested in trying out the development branch, follow the instructions below.
To start, you will need the required development tools and libraries for your distribution, which are listed on the [Compiling ScummVM/GCC page][10] on the ScummVM Wiki.
Once you have the prerequisites installed, run the following commands:
```
git clone https://github.com/scummvm/scummvm.git
cd scummvm
./configure
make
sudo make install
```
### Add games to ScummVM
Adding games to ScummVM is the last thing you need to do before playing. If you do not have any supported adventure games in your collection, you can download 11 wonderful games from the [ScummVM Games page][11]. You can also purchase many of the games supported by ScummVM from [GOG.com][12]. If you purchase a game from GOG.com and need to extract the game files from the GOG download, you can use the [innoextract][13] utility.
Most games need to be in their own directory (the only exceptions to this are games that consist of a single data file), so it is best to begin by creating a directory to store your ScummVM games. You can do this using the command line or a graphical file manager. Where you store your games does not matter (except in the case of the ScummVM Flatpak, which is a sandbox and requires the games to be stored in the `~/Documents` directory). After creating this directory, place the data files for each game in their own subdirectories.
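For instance, a layout like the following keeps each game's data files in its own subdirectory (the base directory and the two game titles here are only examples, not prescribed by ScummVM):

```
# Hypothetical layout; directory name and game titles are examples only.
mkdir -p "$HOME/scummvm-games/beneath-a-steel-sky"
mkdir -p "$HOME/scummvm-games/flight-of-the-amazon-queen"
# Copy each game's data files into its own subdirectory, for example:
#   cp /media/cdrom/*.* "$HOME/scummvm-games/beneath-a-steel-sky/"
ls "$HOME/scummvm-games"
```

Both titles used in this sketch happen to be among the freeware games available from the ScummVM Games page.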
Once the files are copied to where you want them, run ScummVM and add the game to the collection by clicking **Add Game…**, selecting the appropriate directory in the file-picker dialog box that opens, and clicking **Choose**. If ScummVM properly detects the game, it will open its settings options. You can select advanced configuration options from the various tabs if you want (which can also be changed later by using the **Edit Game…** button), or you can just click **OK** to add the game with the default options. If the game is not detected, check the [Supported Games pages][14] on the ScummVM Wiki for details about special instructions that might be needed for a particular game's data files.
The only thing left to do now is select the game in ScummVM's list of games, click on **Start**, and enjoy replaying an old favorite or experiencing a classic adventure game for the first time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/scummvm-retro-gaming
作者:[Joshua Allen Holm][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f (Gaming artifacts with joystick, GameBoy, paddle)
[2]: https://www.scummvm.org/
[3]: https://opensource.com/sites/default/files/uploads/scummvm.png (ScummVM)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://www.dosbox.com/
[6]: https://www.winehq.org/
[7]: https://wiki.scummvm.org/index.php?title=Category:Supported_Games
[8]: https://www.residualvm.org/
[9]: https://www.adventuregamestudio.co.uk/
[10]: https://wiki.scummvm.org/index.php/Compiling_ScummVM/GCC
[11]: https://www.scummvm.org/games/
[12]: https://www.gog.com/
[13]: https://constexpr.org/innoextract/
[14]: https://wiki.scummvm.org/index.php/Category:Supported_Games

[#]: subject: (Exploring the world of declarative programming)
[#]: via: (https://fedoramagazine.org/exploring-the-world-of-declarative-programming/)
[#]: author: (pampelmuse https://fedoramagazine.org/author/pampelmuse/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Exploring the world of declarative programming
======
![][1]
Photo by [Stefan Cosma][2] on [Unsplash][3]
### Introduction
Most of us use imperative programming languages like C, Python, or Java at home. But the universe of programming languages is endless, and there are languages where no imperative command has gone before. What may sound impossible at first glance is feasible with Prolog and other so-called declarative languages. This article will demonstrate how to split a programming task between Python and Prolog.
In this article I do not want to teach Prolog. There are [resources available][4] for that. We will demonstrate how simple it is to solve a puzzle solely by describing the solution. After that it is up to the reader how far this idea will take them.
To proceed, you should have a basic understanding of Python. Installation of Prolog and the Python-Prolog bridge is accomplished using this command:
```
dnf install pl python3-pyswip
```
Our exploration uses [SWI-Prolog][5], an actively developed Prolog which has the Fedora package name “pl”. The Python/SWI-Prolog bridge is [pyswip][6].
If you are a bold adventurer you are welcome to follow me exploring the world of declarative programming.
### Puzzle
The example problem for our exploration will be a puzzle similar to what you may have seen before.
**How many triangles are there?**
![][7]
### Getting started
Get started by opening a fresh text file with your favorite text editor. Copy all three text blocks in the sections below (Input, Process and Output) together into one file.
#### Input
This section sets up access to the Prolog interface and defines data for the problem. This is a simple case so it is fastest to write the data lines by hand. In larger problems you may get your input data from a file or from a database.
```
#!/usr/bin/python
from pyswip import Prolog
prolog = Prolog()
prolog.assertz("line([a, e, k])")
prolog.assertz("line([a, d, f, j])")
prolog.assertz("line([a, c, g, i])")
prolog.assertz("line([a, b, h])")
prolog.assertz("line([b, c, d, e])")
prolog.assertz("line([e, f, g, h])")
prolog.assertz("line([h, i, j, k])")
```
* The first line is the UNIX way to tell that this text file is a Python program.
Don't forget to make your file executable by using _chmod +x yourfile.py_.
* The second line imports a Python module which is doing the Python/Prolog bridge.
* The third line makes a Prolog instance available inside Python.
  * The next lines are puzzle-related. They describe the picture you see above.
Single **lowercase** letters stand for concrete points.
_[a,e,k]_ is the Prolog way to describe a list of three points.
_line()_ declares that it is true that the list inside parentheses is a line.
The idea is to let Python do the work and to feed Prolog.
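Because Python does the feeding, the seven facts above could just as well be generated from a Python list instead of being typed by hand. The sketch below is my own illustration (the variable names are mine, and the `prolog.assertz` call is left as a comment so it runs without SWI-Prolog installed):

```python
# Sketch: build the line/1 facts from plain Python data.
lines = [
    ["a", "e", "k"],
    ["a", "d", "f", "j"],
    ["a", "c", "g", "i"],
    ["a", "b", "h"],
    ["b", "c", "d", "e"],
    ["e", "f", "g", "h"],
    ["h", "i", "j", "k"],
]

# Render each point list as a Prolog fact like "line([a, e, k])".
facts = ["line([%s])" % ", ".join(points) for points in lines]
for fact in facts:
    print(fact)  # in the real program: prolog.assertz(fact)
```

This matters once the input comes from a file or a database rather than hand-written lines.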
#### “Process”
This section title is quoted because nothing is actually processed here. This is simply the description (declaration) of the solution.
There is no single variable which gets a new value. Technically the processing is done in the section titled Output below where you find the command _prolog.query()_.
```
prolog.assertz("""
triangle(A, B, C) :-
line(L1),
line(L2),
line(L3),
L1 \= L2,
member(A, L1),
member(B, L1),
member(A, L2),
member(C, L2),
member(B, L3),
member(C, L3),
A @< B,
B @< C""")
```
First of all: All capital letters and strings starting with a capital letter are Prolog variables!
The statements here are the description of what a triangle is and you can read this like:
* **If** all lines after _“:-“_ are true, **then** _triangle(A, B, C)_ is a triangle
* There must exist three lines (L1 to L3).
  * Two lines must be different. `\=` means not equal in Prolog. We do not want to count a triangle where all three points are on the same line! So we check if at least two different lines are used.
  * _member()_ is a Prolog predicate which is true if the first argument is an element of the second argument, which must be a list. Taken together, these six lines express that each pair of points must lie on a common line.
  * The last two lines are only true if the three points are in alphabetical order. (`@<` compares terms in Prolog.) This is necessary; otherwise, [a, h, k] and [a, k, h] would count as two triangles. Also, the case where a triangle contains the same point two or even three times is excluded by these final two lines.
As you can see, it is often not that obvious what defines a triangle. But for a computational approach you must be strict and rigorous.
#### Output
After the hard work in the Process section, the rest is easy. Just have Python ask Prolog to search for triangles and count them all.
```
total = 0
for result in prolog.query("triangle(A, B, C)"):
    print(result)
    total += 1
print("There are", total, "triangles.")
```
Run the program using this command in the directory containing _yourfile.py_ :
```
./yourfile.py
```
The output shows the listing of each triangle found and the final count.
```
{'A': 'a', 'B': 'e', 'C': 'f'}
{'A': 'a', 'B': 'e', 'C': 'g'}
{'A': 'a', 'B': 'e', 'C': 'h'}
{'A': 'a', 'B': 'd', 'C': 'e'}
{'A': 'a', 'B': 'j', 'C': 'k'}
{'A': 'a', 'B': 'f', 'C': 'g'}
{'A': 'a', 'B': 'f', 'C': 'h'}
{'A': 'a', 'B': 'c', 'C': 'e'}
{'A': 'a', 'B': 'i', 'C': 'k'}
{'A': 'a', 'B': 'c', 'C': 'd'}
{'A': 'a', 'B': 'i', 'C': 'j'}
{'A': 'a', 'B': 'g', 'C': 'h'}
{'A': 'a', 'B': 'b', 'C': 'e'}
{'A': 'a', 'B': 'h', 'C': 'k'}
{'A': 'a', 'B': 'b', 'C': 'd'}
{'A': 'a', 'B': 'h', 'C': 'j'}
{'A': 'a', 'B': 'b', 'C': 'c'}
{'A': 'a', 'B': 'h', 'C': 'i'}
{'A': 'd', 'B': 'e', 'C': 'f'}
{'A': 'c', 'B': 'e', 'C': 'g'}
{'A': 'b', 'B': 'e', 'C': 'h'}
{'A': 'e', 'B': 'h', 'C': 'k'}
{'A': 'f', 'B': 'h', 'C': 'j'}
{'A': 'g', 'B': 'h', 'C': 'i'}
There are 24 triangles.
```
There are certainly more elegant ways to display this output but the point is:
**Python should do the output handling for Prolog.**
If you are a star programmer you can make the output look like this:
```
***************************
* There are 24 triangles. *
***************************
```
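The declarative description can also be cross-checked in plain Python: a triangle is three points that are pairwise on a common line, but not all on one single line. The following sketch is my own cross-check, not part of the article's program:

```python
from itertools import combinations

# The seven lines from the Input section.
lines = [
    ["a", "e", "k"],
    ["a", "d", "f", "j"],
    ["a", "c", "g", "i"],
    ["a", "b", "h"],
    ["b", "c", "d", "e"],
    ["e", "f", "g", "h"],
    ["h", "i", "j", "k"],
]

def collinear(p, q):
    """True if both points lie on one of the lines."""
    return any(p in line and q in line for line in lines)

points = sorted({p for line in lines for p in line})

# A triple is a triangle if every pair is collinear, but the three
# points do not all sit on one single line (the degenerate case).
triangles = [
    (a, b, c)
    for a, b, c in combinations(points, 3)
    if collinear(a, b) and collinear(b, c) and collinear(a, c)
    and not any(a in line and b in line and c in line for line in lines)
]
print(len(triangles))  # prints 24, matching the Prolog result
```

Since two distinct points determine at most one line in this figure, this pairwise check is equivalent to the Prolog rule above.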
### Conclusion
Splitting a programming task between Python and Prolog makes it easy to keep the Prolog part pure and monotonic, which is good for logic reasoning. It is also easy to make the input and output handling with Python.
Be aware that Prolog is a bit more complicated and can do much more than what I explained here. You can find a really good and modern introduction here: [The Power of Prolog][4].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/exploring-the-world-of-declarative-programming/
作者:[pampelmuse][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/pampelmuse/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/explore_declarative-816x345.jpg
[2]: https://unsplash.com/@stefanbc?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/star-trek?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://www.metalevel.at/prolog
[5]: https://www.swi-prolog.org/
[6]: https://github.com/yuce/pyswip
[7]: https://fedoramagazine.org/wp-content/uploads/2021/04/triangle2.png

[#]: subject: (How we built an open source design system to create new community logos)
[#]: via: (https://opensource.com/article/21/4/ansible-community-logos)
[#]: author: (Fiona Lin https://opensource.com/users/fionalin)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How we built an open source design system to create new community logos
======
Learn how Ansible's new logos were developed with stakeholder input to
ensure a consistent brand across the entire project.
![UX design Mac computer with mobile and laptop][1]
As interaction designers on Red Hat's User Experience (UX) Design and Ansible product teams, we worked for about six months to build a logo family with the Ansible community. This journey started even earlier when a project manager asked us for a "quick and easy" logo for a slide deck. After gathering a few requirements, we presented a logo to the stakeholders within a few days and without much need for iteration. A few months later, another stakeholder decided they would also benefit from having imagery for their materials, so we repeated the process.
At this point, we noticed a pattern: logo resources like these no longer represented individual requests but rather a common need across the Ansible project. After completing several logo requests, we had built a makeshift series that—without conscious branding and design conventions—created the potential for visual inconsistencies across the Ansible brand. As the logo collection grew, we recognized this looming problem and the need to combat it.
Our solution was to create an Ansible design system, a brand-specific resource to guide consistent logo design well into the future.
### What is a design system?
A design system is a collection of reusable assets and guidelines that help inform the visual language of any digital product suite. Design systems create patterns to bring separate products together and elevate brands through scalability and consistency.
Especially in a large corporation with multiple products in the portfolio, scaling does not come easily without standardization as different teams contribute to each product. Design systems work as a baseline for each team to build new assets on. With a standardized look and feel, products are unified as one family across the portfolio.
### Getting started building a design system
After receiving a series of requests from stakeholders to create logos for the open source Ansible community, such as Ansible Builder, Ansible Runner, and Project Receptor, we decided to design a structure for our workflow and create a single source of truth to work from moving forward.
First, we conducted a visual audit of the existing logos to determine what we had to work with. Ansible's original logo family consists of four main images: the Angry Spud for AWX, the Ansibull for Ansible Core/Engine, and the monitor with wings for AWX. Most of the logos were tied together with a consistent shade of red and bull imagery, but the stroke width, stroke color, line quality, and typography varied widely.
![Original Ansible logos][2]
(Fiona Lin and Taufique Rahman, [CC BY-SA 4.0][3])
The Angry Spud uses a tan outline and a hand-drawn style, while the bull is a symmetrical, geometric vector. The AWX monitor was the outlier with its thin line-art wings, blue vector rectangle, and Old English typeface (not included here, but an exception from the rest of the family, which uses a modern sans serif).
### Establishing new design criteria
Taking color palette, typography, and imagery into consideration, we generated a consistent composition that features the Ansibull for all core Ansible products, along with bold lines and vibrant colors.
![Ansible design system][4]
(Fiona Lin and Taufique Rahman, [CC BY-SA 4.0][3])
The new Ansible community logo design style guide details the color palette, typography, sizing, spacing, and logo variations for Ansible product logos.
The new style guide presents a brand new, modern custom typeface based on GT America by [Grilli Type][5], an independent Swiss type foundry. We created a softer look for the typeface to match the imagery's roundedness by rounding out certain corners of each letter.
We decided to curate a more lively, saturated, and universal color palette by incorporating more colors in the spectrum and basing them on primary colors. The new palette features light blue, yellow, and pink, each with a lighter highlight and darker shadow. This broader color scope allows more flexibility within the system and introduces a 3D look and feel.
![New Ansible logos][6]
(Fiona Lin and Taufique Rahman, [CC BY-SA 4.0][3])
We also introduced new imagery, such as the hexagons in the Receptor and AWX logos for visual continuity. Finally, we made sure each logo works on both light and dark backgrounds for maximum flexibility.
### Expanding the design portfolio
Once we established the core logo family, we moved on to creating badges for Ansible services, such as Ansible Demo and Ansible Workshop. To differentiate services from products, we decided to enclose service graphics in a circle that contains the name of the service in the same custom typography. The new service badges show the baby Ansibull (from the Ansible Builder logo) completing tasks related to each service, such as pointing to a whiteboard for Ansible Demo or using building tools for Ansible Workshop.
![New Ansible services logos][7]
(Fiona Lin and Taufique Rahman, [CC BY-SA 4.0][3])
### Using open source for design decisions
The original AWX logo was influenced by rock-and-roll imagery, such as the wings and the heavy metal typeface (omitted from the image here).
![Original AWX logo][8]
(Fiona Lin and Taufique Rahman, [CC BY-SA 4.0][3])
Several members of the Ansible community, including the Red Hat Diversity and Inclusion group, brought to our attention that these elements resemble imagery used by hate groups.
Given the social implications of the original logo's imagery, we had to work quickly with the Ansible community to design a replacement. Instead of working in a silo, as we did for the initial logos, we broadened the project's scope to carefully consider a wider range of stakeholders, including the Ansible community, Red Hat Diversity and Inclusion group, and Red Hat Legal team.
We started brainstorming by reaching out to the Ansible open source community for ideas. One of the Ansible engineers, Rebeccah Hunter, contributed in the sketching phase and later became an embedded part of our design team. Part of the challenge of involving a large group of stakeholders was that we had a variety of ideas for new logo concepts, ranging from an auxiliary cable to a bowl of ramen.
We sketched five community-surfaced logos, each featuring a different branded visual: a sprout, a rocket, a monitor, a bowl of ramen, and an auxiliary cable.
![AWX logo concepts][9]
(Fiona Lin and Taufique Rahman, [CC BY-SA 4.0][3])
After completing these initial concept sketches, we set up a virtual voting mechanism that we used throughout the iteration process. This voting system allowed us to use community feedback to narrow from five initial concepts down to three: the rocket, the bowl of ramen, and the monitor. We further iterated on these three directions and presented back, via a Slack channel dedicated to this effort, until we landed on one direction, the AWX monitor, that aligned with the community's vision.
![New AWX logo][10]
(Fiona Lin and Taufique Rahman, [CC BY-SA 4.0][3])
With community voices as our guide, we pursued the monitor logo concept for AWX. We preserved the monitor element from the original logo while modernizing the look and feel to match our updated design system. We used a more vibrant color palette, a cleaner sans-serif typeface, and elements, including the hexagon motif, from the Project Receptor logo.
By engaging with our community from the beginning of the process, we were able to design and iterate in the open with a sense of inclusiveness from all stakeholders. In the end, we felt this was the best approach for replacing a controversial logo. The final version was handed off to the Red Hat Legal team, and after approval, we replaced all current assets with this new logo.
### Key takeaways
Creating a set of rules and assets for a design system keeps your digital products consistent across the board, eliminates brand confusion, and enables scalability.
As you explore building a design system with your own community, you may benefit from these key takeaways we learned along our path:
* Scaling new logos with a design system is a much easier process than without one.
* Juggling design options becomes less daunting when you use a polling system to validate results.
  * Directing a large audience's attention to sets of three eliminates decision fatigue and focuses community feedback.
We hope this article provides insight into designing a system with an open source community and helps you recognize the benefit of developing a system early in your process. If you are creating a new design system, what questions do you have? And if you have created one, what lessons have you learned? Please share your ideas in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/ansible-community-logos
作者:[Fiona Lin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fionalin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ux-design-mac-laptop.jpg?itok=9-HKgXa9 (UX design Mac computer with mobile and laptop)
[2]: https://opensource.com/sites/default/files/pictures/original_logos.png (Original Ansible logos)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/sites/default/files/pictures/design_system.png (Ansible design system)
[5]: https://www.grillitype.com/
[6]: https://opensource.com/sites/default/files/pictures/new_logos.png (New Ansible logos)
[7]: https://opensource.com/sites/default/files/pictures/new_service_badges.png (New Ansible services logos)
[8]: https://opensource.com/sites/default/files/uploads/awx_original.png (Original AWX logo)
[9]: https://opensource.com/sites/default/files/uploads/awx_concepts.png (AWX logo concepts)
[10]: https://opensource.com/sites/default/files/uploads/awx.png (New AWX logo)

[#]: subject: (An Open-Source App to Control All Your RGB Lighting Settings)
[#]: via: (https://itsfoss.com/openrgb/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
An Open-Source App to Control All Your RGB Lighting Settings
======
**_Brief_:** _OpenRGB is a useful open-source utility to manage all your RGB lighting under a single roof. Let's find out more about it._
Whether it is your keyboard, mouse, CPU fan, AIO, or other connected peripherals and components, Linux does not have official software support to control their RGB lighting.
And, OpenRGB seems to be an all-in-one RGB lighting control utility for Linux.
### OpenRGB: An All-in-One RGB Lighting Control Center
![][1]
Yes, you may find different tools to tweak the settings, like **Piper**, to specifically [configure a gaming mouse on Linux][2]. But if you have a variety of components or peripherals, setting them all to your preferred RGB colors is a cumbersome task.
OpenRGB is an impressive utility that not only focuses on Linux but is also available for Windows and macOS.
It is not just about having all the RGB lighting settings under one roof; it also aims to get rid of all the bloatware apps you would otherwise need to install to tweak lighting settings.
Even if you are using a Windows-powered machine, you probably know that software tools like Razer Synapse are resource hogs and come with their share of issues. So, OpenRGB is not just for Linux users but for every user looking to tweak RGB settings.
It supports a long list of devices, but you should not expect support for everything.
### Features of OpenRGB
![][3]
It empowers you with many useful functionalities while offering a simple user experience. Some of the features are:
* Lightweight user interface
* Cross-platform support
* Ability to extend functionality using plugins
* Set colors and effects
* Ability to save and load profiles
* View device information
* Connect multiple instances of OpenRGB to synchronize lighting across multiple PCs
![][4]
Along with all the above-mentioned features, you get good control over the lighting zones, color mode, colors, and more.
### Installing OpenRGB in Linux
You can find AppImage files and DEB packages on their official website. For Arch Linux users, you can also find it in [AUR][5].
For additional help, you can refer to our [AppImage guide][6] and [ways to install DEB files][7] to set it up.
The official website should let you download packages for other platforms as well. But, if you want to explore more about it or compile it yourself, head to its [GitLab page][8].
[OpenRGB][9]
### Closing Thoughts
Even though I do not have many RGB-enabled devices/components, I was able to tweak my Logitech G502 mouse successfully.
I would definitely recommend giving it a try if you want to get rid of multiple applications and use a lightweight interface to manage all your RGB lighting.
Have you tried it already? Feel free to share what you think about it in the comments!
--------------------------------------------------------------------------------
via: https://itsfoss.com/openrgb/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/openrgb.jpg?resize=800%2C406&ssl=1
[2]: https://itsfoss.com/piper-configure-gaming-mouse-linux/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/openrgb-supported-devices.jpg?resize=800%2C404&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/openrgb-logi.jpg?resize=800%2C398&ssl=1
[5]: https://itsfoss.com/aur-arch-linux/
[6]: https://itsfoss.com/use-appimage-linux/
[7]: https://itsfoss.com/install-deb-files-ubuntu/
[8]: https://gitlab.com/CalcProgrammer1/OpenRGB
[9]: https://openrgb.org/

[#]: subject: (Fedora Linux 34 is officially here!)
[#]: via: (https://fedoramagazine.org/announcing-fedora-34/)
[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/)
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Fedora Linux 34 is officially here!
======
![][1]
Today, Im excited to share the results of the hard work of thousands of contributors to the Fedora Project: our latest release, Fedora Linux 34, is here! I know a lot of you have been waiting… Ive seen more “is it out yet???” anticipation on social media and forums than I can remember for any previous release. So, if you want, wait no longer — [upgrade now][2] or go to [Get Fedora][3] to download an install image. Or, if youd like to learn more first, read on. 
The first thing you might notice is our beautiful new logo. Developed by the Fedora Design Team with input from the wider community, this new logo solves a lot of the technical problems with our old logo while keeping its Fedoraness. Stay tuned for new Fedora swag featuring the new design!
### A Fedora Linux for every use case
Fedora Editions are targeted outputs geared toward specific “showcase” uses on the desktop, in server & cloud environments, and the Internet of Things.
Fedora Workstation focuses on the desktop, and in particular, its geared toward software developers who want a “just works” Linux operating system experience. This release features [GNOME 40][4], the next step in focused, distraction-free computing. GNOME 40 brings improvements to navigation whether you use a trackpad, a keyboard, or a mouse. The app grid and settings have been redesigned to make interaction more intuitive. You can read more about [what changed and why in a Fedora Magazine article][5] from March.
Fedora CoreOS is an emerging Fedora Edition. Its an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It offers several update streams that can be followed for automatic updates that occur roughly every two weeks. Currently the next stream is based on Fedora Linux 34, with the testing and stable streams to follow. You can find information about released artifacts that follow the next stream from the [download page][6] and information about how to use those artifacts in the [Fedora CoreOS Documentation][7].
Fedora IoT provides a strong foundation for IoT ecosystems and edge computing use cases. With this release, we've improved support for popular ARM devices like Pine64, RockPro64, and Jetson Xavier NX. Some i.MX8 system on a chip devices like the 96boards Thor96 and Solid Run HummingBoard-M have improved hardware support. In addition, Fedora IoT 34 improves support for hardware watchdogs for automated system recovery.
Of course, we produce more than just the Editions. [Fedora Spins][8] and [Labs][9] target a variety of audiences and use cases, including [Fedora Jam][10], which allows you to unleash your inner musician, and desktop environments like the new Fedora i3 Spin, which provides a tiling window manager. And, dont forget our alternate architectures: [ARM AArch64, Power, and S390x][11].
### General improvements
No matter what variant of Fedora you use, youre getting the latest the open source world has to offer. Following our “[First][12]” foundation, weve updated key programming language and system library packages, including Ruby 3.0 and Golang 1.16. In Fedora KDE Plasma, weve switched from X11 to Wayland as the default.
Following the introduction of BTRFS as the default filesystem on desktop variants in Fedora Linux 33, weve introduced [transparent compression on BTRFS filesystems][13].
Were excited for you to try out the new release! Go to <https://getfedora.org/> and download it now. Or if youre already running Fedora Linux, follow the [easy upgrade instructions][2]. For more information on the new features in Fedora Linux 34, see the [release notes][14].
### In the unlikely event of a problem…
If you run into a problem, check out the [Fedora 34 Common Bugs page][15], and if you have questions, visit our Ask Fedora user-support platform.
### Thank you everyone
Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and its great to see how much weve supported each other. Be sure to join us on April 30 and May 1 for a [virtual release party][16]!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/announcing-fedora-34/
作者:[Matthew Miller][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/f34-final-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/quick-docs/upgrading/
[3]: https://getfedora.org
[4]: https://forty.gnome.org/
[5]: https://fedoramagazine.org/fedora-34-feature-focus-updated-activities-overview/
[6]: https://getfedora.org/en/coreos
[7]: https://docs.fedoraproject.org/en-US/fedora-coreos/
[8]: https://spins.fedoraproject.org/
[9]: https://labs.fedoraproject.org/
[10]: https://labs.fedoraproject.org/en/jam/
[11]: https://alt.fedoraproject.org/alt/
[12]: https://docs.fedoraproject.org/en-US/project/#_first
[13]: https://fedoramagazine.org/fedora-workstation-34-feature-focus-btrfs-transparent-compression/
[14]: https://docs.fedoraproject.org/en-US/fedora/f34/release-notes/
[15]: https://fedoraproject.org/wiki/Common_F34_bugs
[16]: https://hopin.com/events/fedora-linux-34-release-party

[#]: subject: (Perform Linux memory forensics with this open source tool)
[#]: via: (https://opensource.com/article/21/4/linux-memory-forensics)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Perform Linux memory forensics with this open source tool
======
Find out what's going on with applications, network connections, kernel modules, files, and much more with Volatility
![Brain on a computer screen][1]
A computer's operating system and applications use the primary memory (or RAM) to perform various tasks. This volatile memory, containing a wealth of information about running applications, network connections, kernel modules, open files, and just about everything else, is wiped out each time the computer restarts.
Memory forensics is a way to find and extract this valuable information from memory. [Volatility][2] is an open source tool that uses plugins to process this type of information. However, there's a problem: Before you can process this information, you must dump the physical memory into a file, and Volatility does not have this ability.
Therefore, this article has two parts:
* The first part deals with acquiring the physical memory and dumping it into a file.
* The second part uses Volatility to read and process information from this memory dump.
I used the following test system for this tutorial, but it will work on any Linux distribution:
```
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
$
$ uname -r
4.18.0-240.el8.x86_64
$
```
> **A note of caution:** Part 1 involves compiling and loading a kernel module. Don't worry; it isn't as difficult as it sounds. Some guidelines:
>
> * Follow the steps.
> * Do not try any of these steps on a production system or your primary machine.
> * Always use a test virtual machine (VM) to try things out until you are comfortable using the tools and understand how they work.
>
### Install the required packages
Before you get started, install the requisite tools. If you are using a Debian-based distro, use the equivalent `apt-get` commands. Most of these packages provide the required kernel information and tools to compile the code:
```
$ yum install kernel-headers kernel-devel gcc elfutils-libelf-devel make git libdwarf-tools python2-devel.x86_64 -y
```
### Part 1: Use LiME to acquire memory and dump it to a file
Before you can begin to analyze memory, you need a memory dump at your disposal. In an actual forensics event, this could come from a compromised or hacked system. Such information is often collected and stored to analyze how the intrusion happened and its impact. Since you probably do not have a memory dump available, you can take a memory dump of your test VM and use that to perform memory forensics.
Linux Memory Extractor ([LiME][3]) is a popular tool for acquiring memory on a Linux system. Get LiME with:
```
$ git clone https://github.com/504ensicsLabs/LiME.git
$
$ cd LiME/src/
$
$ ls
deflate.c  disk.c  hash.c  lime.h  main.c  Makefile  Makefile.sample  tcp.c
$
```
#### Build the LiME kernel module
Run the `make` command inside the `src` folder. This creates a kernel module with a .ko extension. Ideally, the `lime.ko` file will be renamed using the format `lime-<your-kernel-version>.ko` at the end of `make`:
```
$ make
make -C /lib/modules/4.18.0-240.el8.x86_64/build M="/root/LiME/src" modules
make[1]: Entering directory '/usr/src/kernels/4.18.0-240.el8.x86_64'
<< snip >>
make[1]: Leaving directory '/usr/src/kernels/4.18.0-240.el8.x86_64'
strip --strip-unneeded lime.ko
mv lime.ko lime-4.18.0-240.el8.x86_64.ko
$
$
$ ls -l lime-4.18.0-240.el8.x86_64.ko
-rw-r--r--. 1 root root 25696 Apr 17 14:45 lime-4.18.0-240.el8.x86_64.ko
$
$ file lime-4.18.0-240.el8.x86_64.ko
lime-4.18.0-240.el8.x86_64.ko: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), BuildID[sha1]=1d0b5cf932389000d960a7e6b57c428b8e46c9cf, not stripped
$
```
#### Load the LiME kernel module
Now it's time to load the kernel module to acquire the system memory. The `insmod` command helps load the kernel module; once loaded, the module reads the primary memory (RAM) on your system and dumps its contents to the file specified by the `path` parameter on the command line. Another important parameter is `format`; keep the format `lime`, as shown below. After inserting the kernel module, verify that it loaded using the `lsmod` command:
```
$ lsmod  | grep lime
$
$ insmod ./lime-4.18.0-240.el8.x86_64.ko "path=../RHEL8.3_64bit.mem format=lime"
$
$ lsmod  | grep lime
lime                   16384  0
$
```
You should see that the file given to the `path` parameter was created, and its size is (not surprisingly) the same as the physical memory (RAM) size on your system. Once you have the memory dump, you can remove the kernel module using the `rmmod` command:
```
$
$ ls -l ~/LiME/RHEL8.3_64bit.mem
-r--r--r--. 1 root root 4294544480 Apr 17 14:47 /root/LiME/RHEL8.3_64bit.mem
$
$ du -sh ~/LiME/RHEL8.3_64bit.mem
4.0G    /root/LiME/RHEL8.3_64bit.mem
$
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3736         220         366           8        3149        3259
Swap:          4059           8        4051
$
$ rmmod lime
$
$ lsmod  | grep lime
$
```
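If you take dumps from more than one machine, a quick scripted sanity check saves time. Here is a minimal Python sketch (the helper names and paths are illustrative, not part of LiME) that compares the dump size against `MemTotal` from `/proc/meminfo`; expect the dump to be somewhat larger, since it covers the whole physical address range while `MemTotal` excludes kernel-reserved memory:

```python
import os

def mem_total_bytes(meminfo_text):
    """Extract MemTotal (reported in kB) from /proc/meminfo contents."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) * 1024
    raise ValueError("MemTotal not found in /proc/meminfo output")

def report_dump_size(dump_path):
    """Print the dump size next to MemTotal for a quick eyeball check."""
    with open("/proc/meminfo") as f:
        ram = mem_total_bytes(f.read())
    size = os.path.getsize(dump_path)
    print("dump: %.2f GiB, MemTotal: %.2f GiB" % (size / 2**30, ram / 2**30))
```

Running `report_dump_size("/root/LiME/RHEL8.3_64bit.mem")` on the test system above would show the 4.0 GiB dump next to the roughly 3.6 GiB reported by `MemTotal`.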
#### What's in the memory dump?
This dump file is just raw data, as you can see using the `file` command below. You cannot make much sense of it manually; yes, there are some ASCII strings in there somewhere, but you can't open the file in an editor and read it out. The hexdump output shows that the initial few bytes are `EMiL`; this is the magic value of the "lime" format you requested in the command above:
```
$ file ~/LiME/RHEL8.3_64bit.mem
/root/LiME/RHEL8.3_64bit.mem: data
$
$ hexdump -C ~/LiME/RHEL8.3_64bit.mem | head
00000000  45 4d 69 4c 01 00 00 00  00 10 00 00 00 00 00 00  |EMiL............|
00000010  ff fb 09 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  b8 fe 4c cd 21 44 00 32  20 00 00 2a 2a 2a 2a 2a  |..L.!D.2 ..*****|
00000030  2a 2a 2a 2a 2a 2a 2a 2a  2a 2a 2a 2a 2a 2a 2a 2a  |****************|
00000040  2a 2a 2a 2a 2a 2a 2a 2a  2a 2a 2a 2a 2a 20 00 20  |************* . |
00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000080  00 00 00 00 00 00 00 00  00 00 00 00 70 78 65 6c  |............pxel|
00000090  69 6e 75 78 2e 30 00 00  00 00 00 00 00 00 00 00  |inux.0..........|
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
$
```
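Those `EMiL` bytes begin a per-range header that LiME writes before each block of memory. As a sketch, assuming the header layout from LiME's `lime.h` (a u32 magic, u32 version, u64 start and end addresses, and 8 reserved bytes; verify this against your LiME version), you can decode it in Python:

```python
import struct

LIME_MAGIC = 0x4C694D45  # the bytes "EMiL" read as a little-endian u32

def parse_lime_header(buf, offset=0):
    """Decode one LiME memory-range header from a dump.

    Assumed layout (from lime.h): u32 magic, u32 version,
    u64 start address, u64 end address, 8 reserved bytes.
    """
    magic, version, s_addr, e_addr = struct.unpack_from("<IIQQ", buf, offset)
    if magic != LIME_MAGIC:
        raise ValueError("not a LiME-format header at this offset")
    return {"version": version, "start": s_addr, "end": e_addr}

# The first 32 bytes shown in the hexdump above:
header = bytes.fromhex(
    "454d694c010000000010000000000000"
    "fffb0900000000000000000000000000"
)
info = parse_lime_header(header)
```

Decoding the first header of the dump above yields version 1 and the physical range 0x1000 to 0x9fbff, matching the hexdump.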
### Part 2: Get Volatility and use it to analyze your memory dump
Now that you have a sample memory dump to analyze, get the Volatility software with the command below. Volatility has been rewritten in Python 3, but this tutorial uses the original Volatility package, which uses Python 2. If you want to experiment with Volatility 3, download it from the appropriate Git repo and use Python 3 instead of Python 2 in the following commands:
```
$ git clone https://github.com/volatilityfoundation/volatility.git
$
$ cd volatility/
$
$ ls
AUTHORS.txt    contrib      LEGAL.txt    Makefile     PKG-INFO     pyinstaller.spec  resources  tools       vol.py
CHANGELOG.txt  CREDITS.txt  LICENSE.txt  MANIFEST.in  pyinstaller  README.txt        setup.py   volatility
$
```
Volatility uses two Python libraries for some functionality, so install them using the following commands. Otherwise, you might see some import errors when you run the Volatility tool; these can be ignored unless you are running a plugin that needs one of the libraries, in which case the tool will error out:
```
$ pip2 install pycrypto
$ pip2 install distorm3
```
#### List Volatility's Linux profiles
The first Volatility command you'll want to run lists what Linux profiles are available. The main entry point to running any Volatility commands is the `vol.py` script. Invoke it using the Python 2 interpreter and provide the `--info` option. To narrow down the output, look for strings that begin with Linux. As you can see, not many Linux profiles are listed:
```
$ python2 vol.py --info  | grep ^Linux
Volatility Foundation Volatility Framework 2.6.1
LinuxAMD64PagedMemory          - Linux-specific AMD 64-bit address space.
$
```
#### Build your own Linux profile
Linux distros are varied and built for various architectures. This is why profiles are essential: Volatility must know the system and architecture that the memory dump was acquired from before extracting information. There are Volatility commands to find this information; however, this method is time-consuming. To speed things up, build a custom Linux profile using the following commands.
Move to the `tools/linux` directory within the Volatility repo, and run the `make` command:
```
$ cd tools/linux/
$
$ pwd
/root/volatility/tools/linux
$
$ ls
kcore  Makefile  Makefile.enterprise  module.c
$
$ make
make -C //lib/modules/4.18.0-240.el8.x86_64/build CONFIG_DEBUG_INFO=y M="/root/volatility/tools/linux" modules
make[1]: Entering directory '/usr/src/kernels/4.18.0-240.el8.x86_64'
<< snip >>
make[1]: Leaving directory '/usr/src/kernels/4.18.0-240.el8.x86_64'
$
```
You should see a new `module.dwarf` file. You also need the `System.map` file from the `/boot` directory, as it contains all of the symbols related to the currently running kernel:
```
$ ls
kcore  Makefile  Makefile.enterprise  module.c  module.dwarf
$
$ ls -l module.dwarf
-rw-r--r--. 1 root root 3987904 Apr 17 15:17 module.dwarf
$
$ ls -l /boot/System.map-4.18.0-240.el8.x86_64
-rw-------. 1 root root 4032815 Sep 23  2020 /boot/System.map-4.18.0-240.el8.x86_64
$
$
```
To create a custom profile, move back to the Volatility directory and run the command below. The first argument is a custom .zip file with a name of your choice. I used the operating system and kernel versions in the name. The next argument is the `module.dwarf` file created above, and the final argument is the `System.map` file from the `/boot` directory:
```
$
$ cd volatility/
$
$ zip volatility/plugins/overlays/linux/Redhat8.3_4.18.0-240.zip tools/linux/module.dwarf /boot/System.map-4.18.0-240.el8.x86_64
  adding: tools/linux/module.dwarf (deflated 91%)
  adding: boot/System.map-4.18.0-240.el8.x86_64 (deflated 79%)
$
```
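If you build profiles for several kernels, the `zip` step is easy to script. Here is a small Python sketch of the same packaging step (the function name is mine, not part of Volatility); Volatility simply picks up any .zip placed under `volatility/plugins/overlays/linux/`:

```python
import zipfile

def build_profile(dwarf_path, system_map_path, out_zip):
    """Bundle module.dwarf and System.map into a Volatility profile zip.

    Mirrors the zip command above: both files are stored, compressed,
    in a single archive that Volatility loads as a profile.
    """
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(dwarf_path)
        zf.write(system_map_path)

# Usage matching the paths in this article:
# build_profile(
#     "tools/linux/module.dwarf",
#     "/boot/System.map-4.18.0-240.el8.x86_64",
#     "volatility/plugins/overlays/linux/Redhat8.3_4.18.0-240.zip",
# )
```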
Your custom profile is now ready, so verify the .zip file was created at the location given above. If you want to know if Volatility detects this custom profile, run the `--info` command again. This time, you should see the new profile listed below:
```
$
$ ls -l volatility/plugins/overlays/linux/Redhat8.3_4.18.0-240.zip
-rw-r--r--. 1 root root 1190360 Apr 17 15:20 volatility/plugins/overlays/linux/Redhat8.3_4.18.0-240.zip
$
$
$ python2 vol.py --info  | grep Redhat
Volatility Foundation Volatility Framework 2.6.1
LinuxRedhat8_3_4_18_0-240x64 - A Profile for Linux Redhat8.3_4.18.0-240 x64
$
$
```
#### Start using Volatility
Now you are all set to do some actual memory forensics. Remember, Volatility is made up of custom plugins that you can run against a memory dump to get information. The command's general format is:
```
python2 vol.py -f <memory-dump-file-taken-by-LiME> <plugin-name> --profile=<name-of-your-custom-profile>
```
Armed with this information, run the **linux_banner** plugin to see if you can identify the correct distro information from the memory dump:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_banner --profile=LinuxRedhat8_3_4_18_0-240x64
Volatility Foundation Volatility Framework 2.6.1
Linux version 4.18.0-240.el8.x86_64 ([mockbuild@vm09.test.com][4]) (gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)) #1 SMP Wed Sep 23 05:13:10 EDT 2020
$
```
#### Find Linux plugins
That worked well, so now you're probably curious about how to find the names of all the Linux plugins. There is an easy trick: run the `--info` command and `grep` for the `linux_` string. There are a variety of plugins available for different uses. Here is a partial list:
```
$ python2 vol.py --info  | grep linux_
Volatility Foundation Volatility Framework 2.6.1
linux_apihooks             - Checks for userland apihooks
linux_arp                  - Print the ARP table
linux_aslr_shift           - Automatically detect the Linux ASLR shift
<< snip >>
linux_banner               - Prints the Linux banner information
linux_vma_cache            - Gather VMAs from the vm_area_struct cache
linux_volshell             - Shell in the memory image
linux_yarascan             - A shell in the Linux memory image
$
```
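When you want to run several plugins against the same dump and keep the results, a small wrapper script helps. This Python sketch (the helper names and the `vol-reports` directory are illustrative) builds the same `vol.py` command line shown above for each plugin and saves each plugin's output to its own file:

```python
import subprocess
from pathlib import Path

DUMP = str(Path.home() / "LiME/RHEL8.3_64bit.mem")
PROFILE = "LinuxRedhat8_3_4_18_0-240x64"
PLUGINS = ["linux_banner", "linux_psaux", "linux_netstat",
           "linux_mount", "linux_lsmod", "linux_bash"]

def vol_cmd(plugin, dump=DUMP, profile=PROFILE):
    """Build the argument list for one Volatility plugin invocation."""
    return ["python2", "vol.py", "-f", dump, plugin, "--profile=" + profile]

def run_all(outdir="vol-reports"):
    """Run each plugin and save its output to <outdir>/<plugin>.txt."""
    Path(outdir).mkdir(exist_ok=True)
    for plugin in PLUGINS:
        result = subprocess.run(vol_cmd(plugin), capture_output=True, text=True)
        (Path(outdir) / (plugin + ".txt")).write_text(result.stdout)
```

Calling `run_all()` from the Volatility directory leaves one report per plugin under `vol-reports/`.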
Check which processes were running on the system when you took the memory dump using the **linux_psaux** plugin. Notice the last command in the list: it's the `insmod` command you ran before the dump:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_psaux --profile=LinuxRedhat8_3_4_18_0-240x64
Volatility Foundation Volatility Framework 2.6.1
Pid    Uid    Gid    Arguments                                                      
1      0      0      /usr/lib/systemd/systemd --switched-root --system --deserialize 18
2      0      0      [kthreadd]                                                      
3      0      0      [rcu_gp]                                                        
4      0      0      [rcu_par_gp]                                                    
861    0      0      /usr/libexec/platform-python -Es /usr/sbin/tuned -l -P          
869    0      0      /usr/bin/rhsmcertd                                              
875    0      0      /usr/libexec/sssd/sssd_be --domain implicit_files --uid 0 --gid 0 --logger=files
878    0      0      /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --logger=files      
<<< snip >>>
11064  89     89     qmgr -l -t unix -u                                              
227148 0      0      [kworker/0:0]                                                  
227298 0      0      -bash                                                          
227374 0      0      [kworker/u2:1]                                                  
227375 0      0      [kworker/0:2]                                                  
227884 0      0      [kworker/0:3]                                                  
228573 0      0      insmod ./lime-4.18.0-240.el8.x86_64.ko path=../RHEL8.3_64bit.mem format=lime
228576 0      0                                                                      
$
```
Want to know about the system's network stats? Run the **linux_netstat** plugin to find the state of the network connections during the memory dump:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_netstat --profile=LinuxRedhat8_3_4_18_0-240x64
Volatility Foundation Volatility Framework 2.6.1
UNIX 18113              systemd/1     /run/systemd/private
UNIX 11411              systemd/1     /run/systemd/notify
UNIX 11413              systemd/1     /run/systemd/cgroups-agent
UNIX 11415              systemd/1    
UNIX 11416              systemd/1    
<< snip >>
$
```
Next, use the **linux_mount** plugin to see which filesystems were mounted during the memory dump:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_mount --profile=LinuxRedhat8_3_4_18_0-240x64
Volatility Foundation Volatility Framework 2.6.1
tmpfs                     /sys/fs/cgroup                      tmpfs        ro,nosuid,nodev,noexec                  
cgroup                    /sys/fs/cgroup/pids                 cgroup       rw,relatime,nosuid,nodev,noexec        
systemd-1                 /proc/sys/fs/binfmt_misc            autofs       rw,relatime                            
sunrpc                    /var/lib/nfs/rpc_pipefs             rpc_pipefs   rw,relatime                            
/dev/mapper/rhel_kvm--03--guest11-root /                                   xfs          rw,relatime                
tmpfs                     /dev/shm                            tmpfs        rw,nosuid,nodev                        
selinuxfs                 /sys/fs/selinux                     selinuxfs    rw,relatime                                                      
<< snip >>
cgroup                    /sys/fs/cgroup/net_cls,net_prio     cgroup       rw,relatime,nosuid,nodev,noexec        
cgroup                    /sys/fs/cgroup/cpu,cpuacct          cgroup       rw,relatime,nosuid,nodev,noexec        
bpf                       /sys/fs/bpf                         bpf          rw,relatime,nosuid,nodev,noexec        
cgroup                    /sys/fs/cgroup/memory               cgroup       ro,relatime,nosuid,nodev,noexec        
cgroup                    /sys/fs/cgroup/cpuset               cgroup       rw,relatime,nosuid,nodev,noexec        
mqueue                    /dev/mqueue                         mqueue       rw,relatime                            
$
```
Curious what kernel modules were loaded? Volatility has a plugin for that too, aptly named **linux_lsmod**:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_lsmod --profile=LinuxRedhat8_3_4_18_0-240x64
Volatility Foundation Volatility Framework 2.6.1
ffffffffc0535040 lime 20480
ffffffffc0530540 binfmt_misc 20480
ffffffffc05e8040 sunrpc 479232
<< snip >>
ffffffffc04f9540 nfit 65536
ffffffffc0266280 dm_mirror 28672
ffffffffc025e040 dm_region_hash 20480
ffffffffc0258180 dm_log 20480
ffffffffc024bbc0 dm_mod 151552
$
```
Want to find all the commands the user ran that were stored in the Bash history? Run the **linux_bash** plugin:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_bash --profile=LinuxRedhat8_3_4_18_0-240x64 -v
Volatility Foundation Volatility Framework 2.6.1
Pid      Name                 Command Time                   Command
-------- -------------------- ------------------------------ -------
  227221 bash                 2021-04-17 18:38:24 UTC+0000   lsmod
  227221 bash                 2021-04-17 18:38:24 UTC+0000   rm -f .log
  227221 bash                 2021-04-17 18:38:24 UTC+0000   ls -l /etc/zzz
  227221 bash                 2021-04-17 18:38:24 UTC+0000   cat ~/.vimrc
  227221 bash                 2021-04-17 18:38:24 UTC+0000   ls
  227221 bash                 2021-04-17 18:38:24 UTC+0000   cat /proc/817/cwd
  227221 bash                 2021-04-17 18:38:24 UTC+0000   ls -l /proc/817/cwd
  227221 bash                 2021-04-17 18:38:24 UTC+0000   ls /proc/817/
<< snip >>
  227298 bash                 2021-04-17 18:40:30 UTC+0000   gcc prt.c
  227298 bash                 2021-04-17 18:40:30 UTC+0000   ls
  227298 bash                 2021-04-17 18:40:30 UTC+0000   ./a.out
  227298 bash                 2021-04-17 18:40:30 UTC+0000   vim prt.c
  227298 bash                 2021-04-17 18:40:30 UTC+0000   gcc prt.c
  227298 bash                 2021-04-17 18:40:30 UTC+0000   ./a.out
  227298 bash                 2021-04-17 18:40:30 UTC+0000   ls
$
```
Want to know what files were opened by which processes? Use the **linux_lsof** plugin to list that information:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_lsof --profile=LinuxRedhat8_3_4_18_0-240x64
Volatility Foundation Volatility Framework 2.6.1
Offset             Name                           Pid      FD       Path
------------------ ------------------------------ -------- -------- ----
0xffff9c83fb1e9f40 rsyslogd                          71194        0 /dev/null
0xffff9c83fb1e9f40 rsyslogd                          71194        1 /dev/null
0xffff9c83fb1e9f40 rsyslogd                          71194        2 /dev/null
0xffff9c83fb1e9f40 rsyslogd                          71194        3 /dev/urandom
0xffff9c83fb1e9f40 rsyslogd                          71194        4 socket:[83565]
0xffff9c83fb1e9f40 rsyslogd                          71194        5 /var/log/messages
0xffff9c83fb1e9f40 rsyslogd                          71194        6 anon_inode:[9063]
0xffff9c83fb1e9f40 rsyslogd                          71194        7 /var/log/secure
<< snip >>
0xffff9c8365761f40 insmod                           228573        0 /dev/pts/0
0xffff9c8365761f40 insmod                           228573        1 /dev/pts/0
0xffff9c8365761f40 insmod                           228573        2 /dev/pts/0
0xffff9c8365761f40 insmod                           228573        3 /root/LiME/src/lime-4.18.0-240.el8.x86_64.ko
$
```
#### Access the Linux plugins scripts location
You can get a lot more information by reading the memory dump and processing the information. If you know Python and are curious how this information was processed, go to the directory where all the plugins are stored, pick one that interests you, and see how Volatility gets this information:
```
$ ls volatility/plugins/linux/
apihooks.py              common.py            kernel_opened_files.py   malfind.py          psaux.py
apihooks.pyc             common.pyc           kernel_opened_files.pyc  malfind.pyc         psaux.pyc
arp.py                   cpuinfo.py           keyboard_notifiers.py    mount_cache.py      psenv.py
arp.pyc                  cpuinfo.pyc          keyboard_notifiers.pyc   mount_cache.pyc     psenv.pyc
aslr_shift.py            dentry_cache.py      ld_env.py                mount.py            pslist_cache.py
aslr_shift.pyc           dentry_cache.pyc     ld_env.pyc               mount.pyc           pslist_cache.pyc
<< snip >>
check_syscall_arm.py     __init__.py          lsmod.py                 proc_maps.py        tty_check.py
check_syscall_arm.pyc    __init__.pyc         lsmod.pyc                proc_maps.pyc       tty_check.pyc
check_syscall.py         iomem.py             lsof.py                  proc_maps_rb.py     vma_cache.py
check_syscall.pyc        iomem.pyc            lsof.pyc                 proc_maps_rb.pyc    vma_cache.pyc
$
$
```
One reason I like Volatility is that it provides a lot of security plugins. This information would be difficult to acquire manually:
```
linux_hidden_modules       - Carves memory to find hidden kernel modules
linux_malfind              - Looks for suspicious process mappings
linux_truecrypt_passphrase - Recovers cached Truecrypt passphrases
```
Volatility also allows you to open a shell within the memory dump, so instead of running all the commands above, you can run shell commands instead and get the same information:
```
$ python2 vol.py -f ~/LiME/RHEL8.3_64bit.mem linux_volshell --profile=LinuxRedhat8_3_4_18_0-240x64 -v
Volatility Foundation Volatility Framework 2.6.1
Current context: process systemd, pid=1 DTB=0x1042dc000
Welcome to volshell! Current memory image is:
file:///root/LiME/RHEL8.3_64bit.mem
To get help, type 'hh()'
>>>
>>> sc()
Current context: process systemd, pid=1 DTB=0x1042dc000
>>>
```
### Next steps
Memory forensics is a good way to learn more about Linux internals. Try all of Volatility's plugins and study their output in detail. Then think about ways this information can help you identify an intrusion or a security issue. Dive into how the plugins work, and maybe even try to improve them. And if you didn't find a plugin for what you want to do, write one and submit it to Volatility so others can use it, too.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/linux-memory-forensics
Author: [Gaurav Kamathe][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_computer_solve_fix_tool.png?itok=okq8joti (Brain on a computer screen)
[2]: https://github.com/volatilityfoundation/volatility
[3]: https://github.com/504ensicsLabs/LiME
[4]: mailto:mockbuild@vm09.test.com

[#]: subject: (Upgrade your Linux PC hardware using open source tools)
[#]: via: (https://opensource.com/article/21/4/upgrade-linux-hardware)
[#]: author: (Howard Fosdick https://opensource.com/users/howtech)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Upgrade your Linux PC hardware using open source tools
======
Get more performance from your PC with the hardware upgrades that will
give you the biggest payback.
![Business woman on laptop sitting in front of window][1]
In my article on [identifying Linux performance bottlenecks using open source tools][2], I explained some simple ways to monitor Linux performance using open source graphical user interface (GUI) tools. I focused on identifying _performance bottlenecks_, situations where a hardware resource reaches its limits and holds back your PC's performance.
How can you address a performance bottleneck? You could tune the applications or system software. Or you could run more efficient apps. You could even change how you use your computer, for example, by scheduling background programs for off-hours.
You can also improve your PC's performance through a hardware upgrade. This article focuses on the upgrades that give you the biggest payback.
Open source tools are the key. GUI tools help you monitor your system to predict which hardware improvements will be effective. Otherwise, you might buy hardware and find that it doesn't improve performance. After an upgrade, these tools also help verify that the upgrade produced the benefits you expected.
This article outlines a simple approach to PC hardware upgrades. The "secret sauce" is open source GUI tools.
### How to upgrade memory
Years ago, memory upgrades were a no-brainer. Adding memory nearly always improved performance.
Today, that's no longer the case. PCs come with much more memory, and Linux uses it very efficiently. If you buy memory your system doesn't need, you've wasted money.
So you'll want to spend some time monitoring your computer to see if a memory upgrade will help its performance. For example, watch memory use while you go about your typical day. And be sure to check what happens during memory-intensive workloads.
A wide variety of open source tools can help with this monitoring, but I'll use the [GNOME System Monitor][3]. It's available in most Linux repositories.
When you start up the System Monitor, its **Resources** panel displays this output:
![Monitoring memory with GNOME System Monitor][4]
Fig. 1. Monitoring memory with GNOME System Monitor (Howard Fosdick, [CC BY-SA 4.0][5])
The middle of the screen shows memory use. [Swap][6] is disk space that Linux uses when it runs low on memory. Linux effectively increases memory by using swap as a slower extension to memory.
Since swap is slower than memory, if swap activity becomes significant, adding memory will improve your computer's performance. How much improvement you'll get depends on the amount of swap activity and the speed of your swap device.
If a lot of swap space is used, you'll get a bigger performance improvement by adding memory than if only a small amount of swap is used.
And if swap resides on a slow mechanical hard drive, you'll see a greater improvement by adding memory than you will if swap resides on the fastest available solid-state disk.
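System Monitor ultimately reads these numbers from `/proc/meminfo`. As a rough sketch (the sample text and figures below are invented for illustration), you can derive the same memory and swap utilization yourself:

```python
# Sketch: compute memory and swap utilization from /proc/meminfo-style text.
# Field names (MemTotal, MemAvailable, SwapTotal, SwapFree) follow the Linux
# /proc/meminfo format; values are in kB. The sample values are made up.

SAMPLE = """\
MemTotal:        8039616 kB
MemAvailable:    1532104 kB
SwapTotal:       2097148 kB
SwapFree:         524288 kB
"""

def parse_meminfo(text):
    """Return a dict of field name -> value in kB."""
    fields = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        fields[name] = int(rest.split()[0])
    return fields

def utilization(fields):
    """Return (memory used %, swap used %), each rounded to one decimal."""
    mem_used_pct = 100 * (1 - fields["MemAvailable"] / fields["MemTotal"])
    swap_used_pct = 100 * (1 - fields["SwapFree"] / fields["SwapTotal"])
    return round(mem_used_pct, 1), round(swap_used_pct, 1)

mem, swap = utilization(parse_meminfo(SAMPLE))
print(f"memory used: {mem}%  swap used: {swap}%")
```

On a real system, you would read `/proc/meminfo` directly instead of the sample string.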
Here's an example of when to add memory. This computer shows increased swap activity after memory utilization hits 80%. It becomes unresponsive as memory use surpasses 90%:
![System Monitor - Out Of Memory Condition][7]
Fig. 2. A memory upgrade will help (Howard Fosdick, [CC BY-SA 4.0][5])
#### How to perform a memory upgrade
Before you upgrade, you need to determine how many memory slots you have, how many are open, the kinds of memory sticks they require, and your motherboard's maximum allowable memory.
You can read your computer's documentation to get those answers. Or, you can just enter these Linux line commands:
Question | Command
---|---
_What are the characteristics of the installed memory sticks?_ | `sudo lshw -short -C memory`
_What is the maximum allowable memory for this computer?_ | `sudo dmidecode -t memory \| grep -i max`
_How many memory slots are open?_ (A null response means none are available) | `sudo lshw -short -C memory \| grep -i empty`
As with all hardware upgrades, unplug the computer beforehand. Ground yourself before you touch your hardware—even the tiniest shock can damage circuitry. Fully seat the memory sticks into the motherboard slots.
After the upgrade, start System Monitor. Run the same programs that overloaded your memory before.
System Monitor should show your expanded memory, and you should see better performance.
### How to upgrade storage
We're in an era of rapid storage improvements. Even computers that are only a few years old can benefit from disk upgrades. But first, you'll want to make sure an upgrade makes sense for your computer and workload.
Start by finding out what disk you have. Many open source tools will tell you. [Hardinfo][8] or [GNOME Disks][9] are good options because both are widely available, and their output is easy to understand. These apps will tell you your disk's make, model, and other details.
Next, determine your disk's performance by benchmarking it. GNOME Disks makes this easy. Just start the tool and click on its **Benchmark Disk** option. This gives you disk read and write rates and the average disk access time:
![GNOME Disks benchmark][10]
Fig. 3. GNOME Disks benchmark output (Howard Fosdick, [CC BY-SA 4.0][5])
With this information, you can compare your disk to others at benchmarking websites like [PassMark Software][11] and [UserBenchmark][12]. Those provide performance statistics, speed rankings, and even price and performance numbers. You can get an idea of how your disk compares to possible replacements.
Here's an example of some of the detailed disk info you'll find at UserBenchmark:
![Disk comparisons at UserBenchmark][13]
Fig. 4. Disk comparisons at [UserBenchmark][14]
#### Monitor disk utilization
Just as you did with memory, monitor your disk in real time to see if a replacement would improve performance. The [`atop` line command][15] tells you how busy a disk is.
In its output below, you can see that device `sdb` is `busy 101%`. And one of the processors is waiting on that disk to do its work 85% of the time (`cpu001 w 85%`):
![atop command shows disk utilization][16]
Fig. 5. atop command shows disk utilization (Howard Fosdick, [CC BY-SA 4.0][5])
Clearly, you could improve performance with a faster disk.
You'll also want to know which program(s) are causing all that disk usage. Just start up the System Monitor and click on its **Processes** tab.
Now you know how busy your disk is and what program(s) are using it, so you can make an educated judgment whether a faster disk would be worth the expense.
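To get a feel for what `atop` is computing, here is a minimal sketch that derives a busy percentage from two `/proc/diskstats` readings taken one second apart; the sample lines and counter values are invented for illustration:

```python
# Sketch: estimate how busy a disk is from two samples of /proc/diskstats.
# After major, minor, and the device name, the 10th stat field is cumulative
# milliseconds spent doing I/O; its growth over the interval is the busy time.

SAMPLE_T0 = "8 16 sdb 521 0 4168 360 94213 0 753704 91420 0 61200 91780"
SAMPLE_T1 = "8 16 sdb 521 0 4168 360 99871 0 798968 96890 0 62070 97250"

def io_ticks_ms(diskstats_line):
    """Return cumulative milliseconds spent doing I/O for one device line."""
    fields = diskstats_line.split()
    return int(fields[12])  # major, minor, name, then the stat fields

def busy_percent(line_t0, line_t1, interval_ms=1000):
    """Percentage of the sampling interval the disk was doing I/O."""
    delta = io_ticks_ms(line_t1) - io_ticks_ms(line_t0)
    return min(100.0, 100.0 * delta / interval_ms)

print(f"sdb busy: {busy_percent(SAMPLE_T0, SAMPLE_T1):.0f}%")
```

On a live system you would read `/proc/diskstats` twice, a fixed interval apart, rather than using canned samples.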
#### Buying the disk
You'll encounter three major technologies when buying a new internal disk:
* Mechanical hard drives (HDDs)
* SATA-connected solid-state disks (SSDs)
* PCIe-connected NVMe solid-state disks (NVMe SSDs)
What are their speed differences? You'll see varying numbers all over the web. Here's a typical example:
![Relative disk speeds][17]
Fig. 6. Relative speeds of internal disk technologies ([Unihost][18])
* **Red bar:** Mechanical hard disks offer the cheapest bulk storage. But in terms of performance, they're slowest by far.
* **Green bar:** SSDs are faster than mechanical hard drives. But if an SSD uses a SATA interface, that limits its performance. This is because the SATA interface was designed over a decade ago for mechanical hard drives.
* **Blue bar:** The fastest technology for internal disks is the new [PCIe-connected NVMe solid-state disk][19]. These can be roughly five times faster than SATA-connected SSDs and 20 times faster than mechanical hard disks.
For external SSDs, you'll find that the [latest Thunderbolt and USB interfaces][20] are the fastest.
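To put those ratios in concrete terms, here is a quick back-of-the-envelope calculation; the throughput figures are assumed ballpark numbers for illustration, not benchmark results:

```python
# Sketch: how long a 10 GB sequential copy takes at typical throughputs.
# The MB/s figures below are rough illustrative assumptions.

throughput_mb_s = {
    "HDD": 150,          # mechanical hard drive
    "SATA SSD": 550,     # limited by the SATA interface
    "NVMe SSD": 3000,    # PCIe-connected
}

size_mb = 10 * 1024  # 10 GB

for disk, rate in throughput_mb_s.items():
    print(f"{disk:9s} ~{size_mb / rate:6.1f} s")
```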
#### How to install an internal disk
Before purchasing any disk, verify that your computer can support the necessary physical interface.
For example, many NVMe SSDs use the popular new M.2 (2280) form factor. That requires either a tailor-made motherboard slot, a PCIe adapter card, or an external USB adapter. Your choice could affect your new disk's performance.
Always back up your data and operating system before installing a new disk. Then copy them to the new disk. Open source [tools][21] like Clonezilla, Mondo Rescue, or GParted can do the job. Or you could use Linux line commands like `dd` or `cp`.
Be sure to use your fast new disk in situations where it will have the most impact. Employ it as a boot drive, for storing your operating system and apps, for swap space, and for your most frequently processed data.
After the upgrade, run GNOME Disks to benchmark your new disk. This helps you verify that you got the performance boost you expected. You can verify real-time operation with the `atop` command.
### How to upgrade USB ports
Like disk storage, USB performance has shown great strides in the past several years. Many computers only a few years old could get a big performance boost simply by adding a cheap USB port card.
Whether the upgrade is worthwhile depends on how frequently you use your ports. Use them rarely, and it doesn't matter if they're slow. Use them frequently, and an upgrade might really impact your work.
Here's how dramatically maximum USB data rates vary across port standards:
![USB speeds][22]
Fig. 7. USB speeds vary greatly (Howard Fosdick, [CC BY-SA 4.0][5], based on data from [Tripplite][23] and [Wikipedia][24])
To see the actual USB speeds you're getting, start GNOME Disks. GNOME Disks can benchmark a USB-connected device just like it can an internal disk. Select its **Benchmark Disk** option.
The device you plug in and the USB port together determine the speed you'll get. If the port and device are mismatched, you'll experience the slower speed of the two.
For example, connect a device that supports USB 3.1 speeds to a 2.0 port, and you'll get the 2.0 data rate. (And your system won't tell you this unless you investigate with a tool like GNOME Disks.) Conversely, connect a 2.0 device to a 3.1 port, and you'll also get the 2.0 speed. So for best results, always match your port and device speeds.
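This pairing rule is easy to express as a sketch; the per-standard rates below are nominal maximums, not speeds you will see in practice:

```python
# Sketch: the effective USB transfer rate is the slower of the port's and the
# device's supported standards. Rates are nominal maxima in Gbit/s.

USB_GBPS = {"2.0": 0.48, "3.0": 5.0, "3.1": 10.0, "3.2": 20.0}

def effective_rate(port, device):
    """Return the nominal rate (Gbit/s) for a port/device pairing."""
    return min(USB_GBPS[port], USB_GBPS[device])

# A 3.1 device on a 2.0 port falls back to the 2.0 rate, and vice versa:
print(effective_rate("2.0", "3.1"))
print(effective_rate("3.1", "2.0"))
```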
To monitor a USB-connected device in real time, use the `atop` command and System Monitor together, the same way you did to monitor an internal disk. This helps you see if you're bumping into your current setup's limit and could benefit by upgrading.
Upgrading your ports is easy. Just buy a USB card that fits into an open PCIe slot.
USB 3.0 cards are only about $25. Newer, more expensive cards offer USB 3.1 and 3.2 ports. Nearly all USB cards are plug-and-play, so Linux automatically recognizes them. (But always verify before you buy.)
Be sure to run GNOME Disks after the upgrade to verify the new speeds.
### How to upgrade your internet connection
Upgrading your internet bandwidth is easy. Just write a check to your ISP.
The question is: should you?
System Monitor shows your bandwidth use (see Figure 1). If you consistently bump against the limit you pay your ISP for, you'll benefit from buying a higher limit.
But first, verify that you don't have a problem you could fix yourself. I've seen many cases where someone thinks they need to buy more bandwidth from their ISP when they actually just have a connection problem they could fix themselves.
Start by testing your maximum internet speed at websites like [Speedtest][25] or [Fast.com][26]. For accurate results, close all programs and run _only_ the speed test; turn off your VPN; run tests at different times of day; and compare the results from several testing sites. If you use WiFi, test with it and without it (by directly cabling your laptop to the modem).
If you have a separate router, test with and without it. That will tell you if your router is a bottleneck. Sometimes just repositioning the router in your home or updating its firmware will improve connection speed.
These tests will verify that you're getting the speeds you're paying your ISP for. They'll also expose any local WiFi or router problem you could fix yourself.
Only after you've done these tests should you conclude that you need to purchase more internet bandwidth.
### Should you upgrade your CPU or GPU?
What about upgrading your CPU (central processing unit) or GPU (graphics processing unit)?
Laptop owners typically can't upgrade either because they're soldered to the motherboard.
Most desktop motherboards support a range of CPUs and are upgradeable—assuming you're not already using the topmost processor in the series.
Use System Monitor to watch your CPU and determine if an upgrade would help. Its **Resources** panel will show your CPU load. If all your logical processors consistently stay above 80% or 90%, you could benefit from more CPU power.
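The per-processor load that System Monitor charts can be derived from `/proc/stat`. Here is a minimal sketch of that calculation; the sample counters are invented for illustration:

```python
# Sketch: per-CPU utilization from two /proc/stat samples. Fields after the
# CPU name are cumulative jiffies: user nice system idle iowait irq softirq.
# The sample lines below are made up.

T0 = ["cpu0 1000 0 500 8000 200 0 50", "cpu1 4000 0 1500 3000 100 0 100"]
T1 = ["cpu0 1100 0 550 8800 220 0 55", "cpu1 4900 0 1800 3050 120 0 120"]

def busy_fraction(line0, line1):
    """Fraction of the interval a CPU spent non-idle between two samples."""
    f0 = [int(x) for x in line0.split()[1:]]
    f1 = [int(x) for x in line1.split()[1:]]
    delta = [b - a for a, b in zip(f0, f1)]
    idle = delta[3] + delta[4]  # idle + iowait
    return 1 - idle / sum(delta)

for l0, l1 in zip(T0, T1):
    print(f"{l0.split()[0]}: {100 * busy_fraction(l0, l1):.0f}%")
```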
It's a fun project to upgrade your CPU. Anyone can do it if they're careful.
Unfortunately, it's rarely cost-effective. Most sellers charge a premium for an individual CPU chip versus the deal they'll give you on a new system unit. So for many people, a CPU upgrade doesn't make economic sense.
If you plug your display monitor directly into your desktop's motherboard, you might benefit by upgrading your graphics processing. Just add a video card.
The trick is to achieve a balanced workload between the new video card and your CPU. This [online tool][27] identifies exactly which video cards will best work with your CPU. [This article][28] provides a detailed explanation of how to go about upgrading your graphics processing.
### Gather data before you upgrade
Personal computer users sometimes upgrade their Linux hardware based on gut feel. A better way is to monitor performance and gather some data first. Open source GUI tools make this easy. They help predict whether a hardware upgrade will be worth your time and money. Then, after your upgrade, you can use them to verify that your changes had the intended effect.
These are the most popular hardware upgrades. With a little effort and the right open source tools, any Linux user can cost-effectively upgrade a PC.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/upgrade-linux-hardware
Author: [Howard Fosdick][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/howtech
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://opensource.com/article/21/3/linux-performance-bottlenecks
[3]: https://vitux.com/how-to-install-and-use-task-manager-system-monitor-in-ubuntu/
[4]: https://opensource.com/sites/default/files/uploads/system_monitor_-_resources_panel_0.jpg (Monitoring memory with GNOME System Monitor)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/article/18/9/swap-space-linux-systems
[7]: https://opensource.com/sites/default/files/uploads/system_monitor_-_out_of_memory_0.jpg (System Monitor - Out Of Memory Condition)
[8]: https://itsfoss.com/hardinfo/
[9]: https://en.wikipedia.org/wiki/GNOME_Disks
[10]: https://opensource.com/sites/default/files/uploads/gnome_disks_-_benchmark_0.jpg (GNOME Disks benchmark)
[11]: https://www.harddrivebenchmark.net/
[12]: https://www.userbenchmark.com/
[13]: https://opensource.com/sites/default/files/uploads/userbenchmark_disk_comparisons_0.jpg (Disk comparisons at UserBenchmark)
[14]: https://ssd.userbenchmark.com/
[15]: https://opensource.com/life/16/2/open-source-tools-system-monitoring
[16]: https://opensource.com/sites/default/files/uploads/atop_-_storage_bottleneck_0.jpg (atop command shows disk utilization)
[17]: https://opensource.com/sites/default/files/uploads/hdd_vs_ssd_vs_nvme_speeds_0.jpg (Relative disk speeds)
[18]: https://unihost.com/help/nvme-vs-ssd-vs-hdd-overview-and-comparison/
[19]: https://www.trentonsystems.com/blog/pcie-gen4-vs-gen3-slots-speeds
[20]: https://www.howtogeek.com/449991/thunderbolt-3-vs.-usb-c-whats-the-difference/
[21]: https://www.linuxlinks.com/diskcloning/
[22]: https://opensource.com/sites/default/files/uploads/usb_standards_-_speeds_0.jpg (USB speeds)
[23]: https://www.tripplite.com/products/usb-connectivity-types-standards
[24]: https://en.wikipedia.org/wiki/USB
[25]: https://www.speedtest.net/
[26]: https://fast.com/
[27]: https://www.gpucheck.com/gpu-benchmark-comparison
[28]: https://helpdeskgeek.com/how-to/see-how-much-your-cpu-bottlenecks-your-gpu-before-you-buy-it/

[#]: subject: (5 ways to process JSON data in Ansible)
[#]: via: (https://opensource.com/article/21/4/process-json-data-ansible)
[#]: author: (Nicolas Leiva https://opensource.com/users/nicolas-leiva)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
5 ways to process JSON data in Ansible
======
Structured data is friendly for automation, and you can take full
advantage of it with Ansible.
![Net catching 1s and 0s or data in the clouds][1]
Exploring and validating data from an environment is a common practice for preventing service disruptions. You can choose to run the process periodically or on-demand, and the data you're checking can come from different sources: telemetry, command outputs, etc.
If the data is _unstructured_, you must do some custom regex magic to retrieve key performance indicators (KPIs) relevant for specific scenarios. If the data is _structured_, you can leverage a wide array of options to make parsing it simpler and more consistent. Structured data conforms to a data model, which allows access to each data field separately. The data for these models is exchanged as key/value pairs and encoded using different formats. JSON, which is widely used in Ansible, is one of them.
There are many resources available in Ansible to work with JSON data, and this article presents five of them. While all these resources are used together in sequence in the examples, it is probably sufficient to use just one or two in most real-life scenarios.
![Magnifying glass looking at 0's and 1's][2]
([Geralt][3], Pixabay License)
The following code snippet is a short JSON document used as input for the examples in this article. If you just want to see the code, it's available in my [GitHub repository][4].
This is sample [pyATS][5] output from a `show ip ospf neighbors` command on a Cisco IOS-XE device:
```
{
   "parsed": {
      "interfaces": {
          "Tunnel0": {
              "neighbors": {
                  "203.0.113.2": {
                      "address": "198.51.100.2",
                      "dead_time": "00:00:39",
                      "priority": 0,
                      "state": "FULL/  -"
                  }
              }
          },
          "Tunnel1": {
              "neighbors": {
                  "203.0.113.2": {
                      "address": "192.0.2.2",
                      "dead_time": "00:00:36",
                      "priority": 0,
                      "state": "INIT/  -"
                  }
              }
          }
      }
   }
}
```
This document lists various interfaces from a networking device describing the Open Shortest Path First ([OSPF][6]) state of any OSPF neighbor present per interface. The goal is to validate that the state of all these OSPF sessions is good (i.e., **FULL**).
With only a couple of entries, this is easy to check visually, but with many entries it wouldn't be. Fortunately, as the following examples demonstrate, you can do this at scale with Ansible.
### 1\. Access a subset of the data
If you are only interested in a specific branch of the data tree, a reference to its path will take you down the JSON structure hierarchy and allow you to select only that portion of the JSON object. The path is made of dot-separated key names.
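In plain Python terms, that dot-separated path walk can be sketched like this (illustrative only; `get_path` and the trimmed `data` document are not part of the playbook):

```python
# Sketch: resolve a dot-separated key path, one key at a time, down a
# parsed JSON document, as Ansible does with input.parsed.interfaces.
from functools import reduce

data = {"parsed": {"interfaces": {"Tunnel0": {}, "Tunnel1": {}}}}

def get_path(obj, path):
    """Resolve a dot-separated key path like 'parsed.interfaces'."""
    return reduce(lambda node, key: node[key], path.split("."), obj)

print(get_path(data, "parsed.interfaces"))
```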
To begin, create a variable (`input`) in Ansible that reads the JSON-formatted message from a file.
To go two levels down, for example, you need to follow the hierarchy of the key names to that point, which translates to `input.parsed.interfaces` in this case. `input` is the variable that stores the JSON data, `parsed` is the top-level key, and `interfaces` is the subsequent one. In a playbook, this looks like:
```
- name: Go down the JSON file 2 levels
  hosts: localhost
  vars:
    input: "{{ lookup('file','output.json') | from_json }}"
  tasks:
   - name: Create interfaces Dictionary
     set_fact:
       interfaces: "{{ input.parsed.interfaces }}"
   - name: Print out interfaces
     debug:
       var: interfaces
```
It gives the following output:
```
TASK [Print out interfaces] *************************************************************************************************************************************
ok: [localhost] => {
    "msg": {
        "Tunnel0": {
            "neighbors": {
                "203.0.113.2": {
                    "address": "198.51.100.2",
                    "dead_time": "00:00:39",
                    "priority": 0,
                    "state": "FULL/  -"
                }
            }
        },
        "Tunnel1": {
            "neighbors": {
                "203.0.113.2": {
                    "address": "192.0.2.2",
                    "dead_time": "00:00:36",
                    "priority": 0,
                    "state": "INIT/  -"
                }
            }
        }
    }
}
```
The view hasn't changed much; you only trimmed the edges. Baby steps!
### 2\. Flatten out the content
If the previous output doesn't help or you want a better understanding of the data hierarchy, you can produce a more compact output with the `to_paths` filter:
```
- name: Print out flatten interfaces input
  debug:
    msg: "{{ lookup('ansible.utils.to_paths', interfaces) }}"
```
This will print out as:
```
TASK [Print out flatten interfaces input] ***********************************************************************************************************************
ok: [localhost] => {
    "msg": {
        "Tunnel0.neighbors['203.0.113.2'].address": "198.51.100.2",
        "Tunnel0.neighbors['203.0.113.2'].dead_time": "00:00:39",
        "Tunnel0.neighbors['203.0.113.2'].priority": 0,
        "Tunnel0.neighbors['203.0.113.2'].state": "FULL/  -",
        "Tunnel1.neighbors['203.0.113.2'].address": "192.0.2.2",
        "Tunnel1.neighbors['203.0.113.2'].dead_time": "00:00:36",
        "Tunnel1.neighbors['203.0.113.2'].priority": 0,
        "Tunnel1.neighbors['203.0.113.2'].state": "INIT/  -"
    }
}
```
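For intuition, here is a rough Python equivalent of that flattening. Unlike the real filter, this simplified sketch joins every key with a dot instead of bracketing keys such as IP addresses:

```python
# Sketch: a simplified stand-in for the ansible.utils.to_paths filter.
# Nested dictionary keys are joined into dotted path strings.

def to_paths(obj, prefix=""):
    """Flatten a nested dict into {dotted.path: leaf value}."""
    paths = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else str(key)
        if isinstance(value, dict):
            paths.update(to_paths(value, path))
        else:
            paths[path] = value
    return paths

nested = {"Tunnel0": {"neighbors": {"203.0.113.2": {"state": "FULL/  -"}}}}
print(to_paths(nested))
```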
### 3\. Use json_query filter (JMESPath)
If you are familiar with a JSON query language such as [JMESPath][7], then Ansible's json_query filter is your friend because it is built upon JMESPath, and you can use the same syntax. If it is new to you, there are plenty of examples to learn from in the [JMESPath examples][8] collection. It is a good resource to have in your toolbox.
Here's how to use it to create a list of the neighbors for all interfaces. The query executed here is `*.neighbors`:
```
- name: Create neighbors dictionary (this is now per interface)
  set_fact:
    neighbors: "{{ interfaces | json_query('*.neighbors') }}"
- name: Print out neighbors
  debug:
    msg: "{{ neighbors }}"
```
Which returns a list you can iterate over:
```
TASK [Print out neighbors] **************************************************************************************************************************************
ok: [localhost] => {
    "msg": [
        {
            "203.0.113.2": {
                "address": "198.51.100.2",
                "dead_time": "00:00:39",
                "priority": 0,
                "state": "FULL/  -"
            }
        },
        {
            "203.0.113.2": {
                "address": "192.0.2.2",
                "dead_time": "00:00:36",
                "priority": 0,
                "state": "INIT/  -"
            }
        }
    ]
}
```
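Outside Ansible, the effect of the `*.neighbors` wildcard projection can be approximated in plain Python. This is only a stdlib sketch of what the query does conceptually; a real JMESPath evaluation would use the `jmespath` library, and the sample data here is assumed:

```python
# Sample parsed data (hypothetical values matching the article's output)
interfaces = {
    "Tunnel0": {"neighbors": {"203.0.113.2": {"address": "198.51.100.2", "state": "FULL/  -"}}},
    "Tunnel1": {"neighbors": {"203.0.113.2": {"address": "192.0.2.2", "state": "INIT/  -"}}},
}

# '*.neighbors' projects the 'neighbors' value of every top-level key
neighbors = [value["neighbors"] for value in interfaces.values() if "neighbors" in value]

print(neighbors)
```

The resulting list has one entry per interface, which is exactly the shape you iterate over in the next step.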
Other options to query JSON are [jq][9] or [Dq][10] (for pyATS).
### 4\. Access specific data fields
Now you can go through the list of neighbors in a loop to access individual data. This example is interested in the `state` of each one. Based on the field's value, you can trigger an action.
This will generate a message to alert the user if the state of a session isn't **FULL**. Typically, you would notify users through mechanisms like email or a chat message rather than just a log entry, as in this example.
As you loop over the `neighbors` list generated in the previous step, it executes the tasks described in `tasks.yml` to instruct Ansible to print out a **WARNING** message only if the state of the neighbor isn't **FULL** (i.e., `info.value.state is not match("FULL.*")`):
```
- name: Loop over neighbors
  include_tasks: tasks.yml
  with_items: "{{ neighbors }}"
```
The `tasks.yml` file considers `info` as the dictionary item produced for each neighbor in the list you iterate over:
```
- name: Print out a WARNING if OSPF state is not FULL
  debug:
    msg: "WARNING: Neighbor {{ info.key }}, with address {{ info.value.address }} is in state {{ info.value.state[0:4] }}"
  vars:
    info: "{{ lookup('dict', item) }}"
  when: info.value.state is not match("FULL.*")
```
This produces a custom-generated message with different data fields for each neighbor that isn't operational:
```
TASK [Print out a WARNING if OSPF state is not FULL] ************************************************************************************************************
ok: [localhost] => {
    "msg": "WARNING: Neighbor 203.0.113.2, with address 192.0.2.2 is in state INIT"
}
```
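The `match("FULL.*")` test in the `when:` clause and the `state[0:4]` slice in the message behave like this small Python sketch (the sample `state` value is assumed):

```python
import re

state = "INIT/  -"  # sample value from the parsed OSPF output

# Jinja2's match() anchors at the start of the string, like Python's re.match
if not re.match(r"FULL.*", state):
    # state[0:4] trims "INIT/  -" down to its leading keyword
    print(f"WARNING: neighbor is in state {state[0:4]}")
```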
> Note: Filter JSON data in Ansible using [json_query][11].
### 5\. Use a JSON schema to validate your data
A more sophisticated way to validate the data from a JSON message is by using a JSON schema. This gives you more flexibility and a wider array of options to validate different types of data. A schema for this example would need to specify `state` is a `string` that starts with **FULL** if that's the only state you want to be valid (you can access this code in my [GitHub repository][12]):
```
{
 "$schema": "http://json-schema.org/draft-07/schema#",
 "definitions": {
     "neighbor" : {
         "type" : "object",
         "properties" : {
             "address" : {"type" : "string"},
             "dead_time" : {"type" : "string"},
             "priority" : {"type" : "number"},
             "state" : {
                 "type" : "string",
                 "pattern" : "^FULL"
                 }
             },
         "required" : [ "address","state" ]
     }
 },
 "type": "object",
 "patternProperties": {
     ".*" : { "$ref" : "#/definitions/neighbor" }
 }
}
```
As you loop over the neighbors, Ansible reads this schema (`schema.json`) and uses it to validate each neighbor item with the `validate` module and the `jsonschema` engine:
```
- name: Validate state of the neighbor is FULL
  ansible.utils.validate:
    data: "{{ item }}"
    criteria:
      - "{{ lookup('file', 'schema.json') | from_json }}"
    engine: ansible.utils.jsonschema
  ignore_errors: true
  register: result
- name: Print the neighbor that does not satisfy the desired state
  ansible.builtin.debug:
    msg:
      - "WARNING: Neighbor {{ info.key }}, with address {{ info.value.address }} is in state {{ info.value.state[0:4] }}"
      - "{{ error.data_path }}, found: {{ error.found }}, expected: {{ error.expected }}"
  when: "'errors' in result"
  vars:
    info: "{{ lookup('dict', item) }}"
    error: "{{ result['errors'][0] }}"
```
The output of the ones that fail validation is registered so that you can alert the user with a message:
```
TASK [Validate state of the neighbor is FULL] *******************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "errors": [{"data_path": "203.0.113.2.state", "expected": "^FULL", "found": "INIT/  -", "json_path": "$.203.0.113.2.state", "message": "'INIT/  -' does not match '^FULL'", "relative_schema": {"pattern": "^FULL", "type": "string"}, "schema_path": "patternProperties..*.properties.state.pattern", "validator": "pattern"}], "msg": "Validation errors were found.\nAt 'patternProperties..*.properties.state.pattern' 'INIT/  -' does not match '^FULL'. "}
...ignoring
TASK [Print the neighbor that does not satisfy the desired state] ***********************************************************************************************
ok: [localhost] => {
    "msg": [
        "WARNING: Neighbor 203.0.113.2, with address 192.0.2.2 is in state INIT",
        "203.0.113.2.state, found: INIT/  -, expected: ^FULL"
    ]
}
```
If you'd like a deeper dive:
* You can find a more elaborate example and references in [Using new Ansible utilities for operational state management and remediation][13].
* A good resource to practice JSON schema generation is the [JSON Schema Validator and Generator][14].
* A similar approach is the [Schema Enforcer][15], which lets you create the schema in YAML (helpful if you prefer that syntax).
### Conclusion
Structured data is friendly for automation, and you can take full advantage of it with Ansible. As you determine your KPIs, you can automate checks on them to give you peace of mind in situations such as before and after a maintenance window.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/process-json-data-ansible
作者:[Nicolas Leiva][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/nicolas-leiva
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
[2]: https://opensource.com/sites/default/files/uploads/data_pixabay.jpg (Magnifying glass looking at 0's and 1's)
[3]: https://pixabay.com/illustrations/window-hand-magnifying-glass-binary-4354467/
[4]: https://github.com/nleiva/ansible-networking/blob/master/test-json.md#parsing-json-outputs
[5]: https://pypi.org/project/pyats/
[6]: https://en.wikipedia.org/wiki/Open_Shortest_Path_First
[7]: https://jmespath.org/
[8]: https://jmespath.org/examples.html
[9]: https://stedolan.github.io/jq/
[10]: https://pubhub.devnetcloud.com/media/genie-docs/docs/userguide/utils/index.html
[11]: https://blog.networktocode.com/post/ansible-filtering-json-query/
[12]: https://github.com/nleiva/ansible-networking/blob/master/files/schema.json
[13]: https://www.ansible.com/blog/using-new-ansible-utilities-for-operational-state-management-and-remediation
[14]: https://extendsclass.com/json-schema-validator.html
[15]: https://blog.networktocode.com/post/introducing_schema_enforcer/

[#]: subject: (How to create your first Quarkus application)
[#]: via: (https://opensource.com/article/21/4/quarkus-tutorial)
[#]: author: (Saumya Singh https://opensource.com/users/saumyasingh)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to create your first Quarkus application
======
The Quarkus framework is considered the rising star for
Kubernetes-native Java.
![woman on laptop sitting at the window][1]
Programming languages and frameworks continuously evolve to help developers who want to develop and deploy applications with even faster speeds, better performance, and lower footprint. Engineers push themselves to develop the "next big thing" to satisfy developers' demands for faster deployments.
[Quarkus][2] is the latest addition to the Java world and considered the rising star for Kubernetes-native Java. It came into the picture in 2019 to optimize Java and commonly used open source frameworks for cloud-native environments. With the Quarkus framework, you can easily go serverless with Java. This article explains why this open source framework is grabbing lots of attention these days and how to create your first Quarkus app.
## What is Quarkus?
Quarkus reimagines the Java stack to give the performance characteristics and developer experience needed to create efficient, high-speed applications. It is a container-first and cloud-native framework for writing Java apps.
You can use your existing skills to code in new ways with Quarkus. It also helps reduce the technical burden in moving to a Kubernetes-centric environment. High-density deployment platforms like Kubernetes need apps with a faster boot time and lower memory usage. Java is still a popular language for developing software but suffers from its focus on productivity at the cost of RAM and CPU.
In the world of virtualization, serverless, and cloud, many developers find Java is not the best fit for developing cloud-native apps. However, the introduction of Quarkus (also known as "Supersonic and Subatomic Java") helps to resolve these issues.
## What are the benefits of Quarkus?
![Quarkus benefits][3]
(Saumya Singh, [CC BY-SA 4.0][4])
Quarkus improves start-up times, execution costs, and productivity. Its main objective is to reduce applications' startup time and memory footprint while providing "developer joy." It fulfills these objectives with native compilation and hot reload features.
### Runtime benefits
![How Quarkus uses memory][5]
(Saumya Singh, [CC BY-SA 4.0][4])
* Lowers memory footprint
* Reduces RSS memory, using 10% of the memory needed for a traditional cloud-native stack
* Offers very fast startup
* Provides a container-first framework, as it is designed to run in a container + Kubernetes environment.
* Focuses heavily on making things work in Kubernetes
### Development benefits
![Developers love Quarkus][6]
(Saumya Singh, [CC BY-SA 4.0][4])
* Provides very fast, live reload during development and coding
* Uses "best of breed" libraries and standards
* Brings specifications and great support
* Unifies and supports imperative and reactive (non-blocking) styles
## Create a Quarkus application in 10 minutes
Now that you have an idea about why you may want to try Quarkus, I'll show you how to use it.
First, ensure you have the prerequisites for creating a Quarkus application:
* An IDE like Eclipse, IntelliJ IDEA, VS Code, or Vim
* JDK 8 or 11+ installed with JAVA_HOME configured correctly
* Apache Maven 3.6.2+
You can create a project with either a Maven command or by using code.quarkus.io.
### Use a Maven command:
One of the easiest ways to create a new Quarkus project is to open a terminal and run the following commands, as outlined in the [getting started guide][7]. 
**Linux and macOS users:**
```
mvn io.quarkus:quarkus-maven-plugin:1.13.2.Final:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=getting-started \
    -DclassName="org.acme.getting.started.GreetingResource" \
    -Dpath="/hello"
cd getting-started
```
**Windows users:**
* If you are using `cmd`, don't use the backslash (`\`):

```
mvn io.quarkus:quarkus-maven-plugin:1.13.2.Final:create -DprojectGroupId=org.acme -DprojectArtifactId=getting-started -DclassName="org.acme.getting.started.GreetingResource" -Dpath="/hello"
```

* If you are using PowerShell, wrap `-D` parameters in double-quotes:

```
mvn io.quarkus:quarkus-maven-plugin:1.13.2.Final:create "
```

[#]: subject: (Share files between Linux and Windows computers)
[#]: via: (https://opensource.com/article/21/4/share-files-linux-windows)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: (wyxplus)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Share files between Linux and Windows computers
======
Set up cross-platform file sharing with Samba.
![Blue folders flying in the clouds above a city skyline][1]
If you work with different operating systems, it's handy to be able to share files between them. This article explains how to set up file access between Linux ([Fedora 33][2]) and Windows 10 using [Samba][3] and [mount.cifs][4].
Samba is the Linux implementation of the [SMB/CIFS][5] protocol, allowing direct access to shared folders and printers over a network. Mount.cifs is part of the Samba suite and allows you to mount the [CIFS][5] filesystem under Linux.
> **Caution**: These instructions are for sharing files within your private local network or in a virtualized host-only network between a Linux host machine and a virtualized Windows guest. Don't consider this article a guideline for your corporate network, as it doesn't implement the necessary cybersecurity considerations.
### Access Linux from Windows
This section explains how to access a user's Linux home directory from Windows File Explorer.
#### 1\. Install and configure Samba
Start on your Linux system by installing Samba:
```
dnf install samba
```
Samba is a system daemon, and its configuration file is located in `/etc/samba/smb.conf`. Its default configuration should work. If not, this minimal configuration should do the job:
```
[global]
        workgroup = SAMBA
        server string = %h server (Samba %v)
        invalid users = root
        security = user
[homes]
        comment = Home Directories
        browseable = no
        valid users = %S
        writable = yes
```
You can find a detailed description of the parameters in the [smb.conf][6] section of the project's website.
#### 2\. Modify SELinux
If your Linux distribution is protected by [SELinux][7] (as Fedora is), you have to enable Samba to be able to access the user's home directory:
```
setsebool -P samba_enable_home_dirs on
```
Check that the value is set by typing:
```
getsebool samba_enable_home_dirs
```
Your output should look like this:
![Sebool][8]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
#### 3\. Enable your user
Samba uses a set of users and passwords that have permission to connect. Add your Linux user to the set by typing:
```
smbpasswd -a <your-user>
```
You will be prompted for a password. This is a _completely new_ password; it is not the current password for your account. Enter the password you want to use to log in to Samba.
To get a list of allowed user types:
```
pdbedit -L -v
```
Remove a user by typing:
```
smbpasswd -x <user-name>
```
#### 4\. Start Samba
Because Samba is a system daemon, you can start it on Fedora with:
```
systemctl start smb
```
This starts Samba for the current session. If you want Samba to start automatically on system startup, enter:
```
systemctl enable smb
```
On some systems, the Samba daemon is registered as `smbd`.
#### 5\. Configure the firewall
By default, Samba is blocked by your firewall. Allow Samba to access the network permanently by configuring the firewall.
You can do it on the command line with:
```
firewall-cmd --add-service=samba --permanent
```
Or you can do it graphically with the firewall-config tool:
![firewall-config][10]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
#### 6\. Access Samba from Windows
In Windows, open File Explorer. On the address line, type in two backslashes followed by your Linux machine's address (IP address or hostname):
![Accessing Linux machine from Windows][11]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
You will be prompted for your login information. Type in the username and password combination from step 3. You should now be able to access your home directory on your Linux machine:
![Accessing Linux machine from Windows][12]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
### Access Windows from Linux
The following steps explain how to access a shared Windows folder from Linux. To implement them, you need Administrator rights on your Windows user account.
#### 1\. Enable file sharing
Open the **Network and Sharing Center** either by clicking on the
**Windows Button > Settings > Network & Internet**
or by right-clicking the little monitor icon on the bottom-right of your taskbar:
![Open network and sharing center][13]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
In the window that opens, find the connection you want to use and note its profile. I used **Ethernet 3**, which is tagged as a **Public network**.
> **Caution**: Consider changing your local machine's connection profile to **Private** if your PC is frequently connected to public networks.
Remember your network profile and click on **Change advanced sharing settings**:
![Change advanced sharing settings][14]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
Select the profile that corresponds to your connection and turn on **network discovery** and **file and printer sharing**:
![Network sharing settings][15]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
#### 2\. Define a shared folder
Open the context menu by right-clicking on the folder you want to share, navigate to **Give access to**, and select **Specific people...** :
![Give access][16]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
Check whether your current username is on the list. Click on **Share** to tag this folder as shared:
![Tag as shared][17]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
You can display a list of all shared folders by entering `\\localhost` in File Explorer's address line:
![Shared folders][18]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
![Shared folders][19]
(Stephan Avenwedde, [CC BY-SA 4.0][9])
#### 3\. Mount the shared folder under Linux
Go back to your Linux system, open a command shell, and create a new folder where you want to mount the Windows share:
```
mkdir ~/WindowsShare
```
Mounting Windows shares is done with mount.cifs, which should be installed by default. To mount your shared folder temporarily, use:
```
sudo mount.cifs //<address-of-windows-pc>/MySharedFolder ~/WindowsShare/ -o user=<Windows-user>,uid=$UID
```
In this command:
* `<address-of-windows-pc>` is the Windows PC's address info (IP or hostname)
  * `<Windows-user>` is the user that is allowed to access the shared folder (from step 2)
You will be prompted for your Windows password. Enter it, and you will be able to access the shared folder on Windows with your normal Linux user.
To unmount the shared folder:
```
sudo umount ~/WindowsShare/
```
You can also mount a Windows shared folder on system startup. Follow [these steps][20] to configure your system accordingly.
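For a permanent mount at boot, the share typically becomes a single line in `/etc/fstab`. This is only a sketch: the host name `winpc`, the mount point, the `uid`/`gid`, and the credentials file path are assumptions you must adapt, and the credentials file (containing `username=` and `password=` lines) should be readable by root only:

```
//winpc/MySharedFolder  /home/<your-user>/WindowsShare  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000  0  0
```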
### Summary
This shows how to establish temporary shared folder access that must be renewed after each boot. It is relatively easy to modify this configuration for permanent access. I often switch back and forth between different systems, so I consider it incredibly practical to set up direct file access.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/share-files-linux-windows
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO (Blue folders flying in the clouds above a city skyline)
[2]: https://getfedora.org/en/workstation/download/
[3]: https://www.samba.org/
[4]: https://linux.die.net/man/8/mount.cifs
[5]: https://en.wikipedia.org/wiki/Server_Message_Block
[6]: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html
[7]: https://www.redhat.com/en/topics/linux/what-is-selinux
[8]: https://opensource.com/sites/default/files/uploads/sebool.png (Enabling Samba to enable user directory access)
[9]: https://creativecommons.org/licenses/by-sa/4.0/
[10]: https://opensource.com/sites/default/files/uploads/firewall_configuration.png (firewall-config tool)
[11]: https://opensource.com/sites/default/files/uploads/windows_access_shared_1.png (Accessing Linux machine from Windows)
[12]: https://opensource.com/sites/default/files/uploads/windows_acess_shared_2.png (Accessing Linux machine from Windows)
[13]: https://opensource.com/sites/default/files/uploads/open_network_and_sharing_center.png (Open network and sharing center)
[14]: https://opensource.com/sites/default/files/uploads/network_and_sharing_center_2.png (Change advanced sharing settings)
[15]: https://opensource.com/sites/default/files/uploads/network_sharing.png (Network sharing settings)
[16]: https://opensource.com/sites/default/files/pictures/give_access_to.png (Give access)
[17]: https://opensource.com/sites/default/files/pictures/tag_as_shared.png (Tag as shared)
[18]: https://opensource.com/sites/default/files/uploads/show_shared_folder_1.png (Shared folders)
[19]: https://opensource.com/sites/default/files/uploads/show_shared_folder_2.png (Shared folders)
[20]: https://timlehr.com/auto-mount-samba-cifs-shares-via-fstab-on-linux/

[#]: subject: (Experiencing the /e/ OS: The Open Source De-Googled Android Version)
[#]: via: (https://itsfoss.com/e-os-review/)
[#]: author: (Dimitrios https://itsfoss.com/author/dimitrios/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Experiencing the /e/ OS: The Open Source De-Googled Android Version
======
The /e/ Android operating system is a privacy-oriented, Google-free mobile operating system. It is a fork of Lineage OS and was founded in mid-2018 by [Gaël Duval][1], creator of Mandrake Linux (now [Mandriva Linux][2]).
Despite making Android an open source project in 2007, Google replaced some OS elements with proprietary software when Android gained popularity. The /e/ Foundation has replaced these proprietary apps and services with [MicroG][3], an open source alternative framework that minimizes tracking and device activity.
It's FOSS received a [Fairphone 3][4] with /e/ OS preinstalled, an [ethically created smartphone][5] from the /e/ Foundation. I used the device for a month before returning it to them, and I am going to share my experience with this privacy-focused device. I forgot to take screenshots, so I'll be sharing generic images from the official website.
### Experiencing the /e/ mobile operating system on the ethical Fairphone device
Before I go any further, let me clear that Fairphone 3 is not the only option to get /e/ in your hands. The /e/ foundation gives you [a few smartphone options to choose][6] if you are buying a device from them.
You don't have to buy a device to use /e/ OS. According to the /e/ Foundation, you can [use it on over 100 supported devices][7].
Although I enjoyed using the Fairphone 3, and my personal beliefs are in line with the Fairphone manifesto, I won't focus my attention on the device but on the /e/ operating system only.
#### Apps with rated privacy
![][8]
I used the Fairphone 3 as my daily driver for a couple of days to compare real-world usage with my "ordinary" Android phone.
First and foremost, I wanted to see if all the apps I use are available in the "[App Store][9]" the /e/ Foundation has created. The /e/ App Store contains apps with privacy ratings.
![/e/ OS app store has privacy ratings of the apps][10]
I could find many applications, including apps from Google. This means that if someone really wants to use a Google service, it is still available as an option to download. Unlike on other Android devices, though, Google services are not forced down your throat.
Though there are a lot of apps available, I could not find the mobile banking app I use in the UK. I have to admit that a mobile banking app contributes a level of convenience. As an alternative, I had to use a computer to access the online banking platform when needed.
From a usability point of view, /e/ OS could replace my "standard" Android OS with minor hiccups like the banking app.
#### If not Google, then what?
Wondering what essential apps /e/ OS uses instead of the ones from Google? Here's a quick list:
* [Magic Earth][11]: turn-by-turn navigation
* Web browser: an ungoogled fork of Chromium
* Mail: a fork of [K9-mail][12]
* SMS: a fork of QKSMS
* Camera: a fork of OpenCamera
* Weather: a fork of GoodWeather
* OpenTasks: task organizer
* Calendar: a fork of [Etar calendar][13]
#### Bliss Launcher and overall design
![][14]
The default launcher application of /e/ OS is called "Bliss Launcher", which aims for an attractive look and feel. To me, the design felt similar to iOS.
By swiping to the left panel, you can access a few useful widgets /e/ has selected.
![][15]
* Search: Quick search of pre-installed apps or search the web
* APP Suggestions: The top 4 most used apps will appear on this widget
  * Weather: The weather widget shows the local weather. It doesn't automatically detect the location and needs to be configured.
* Edit: If you want more widgets on the screen, you can add them by clicking the edit button
All in all, the user interface is clean and neat. Being simple and straightforward enhances a pleasant user experience.
#### DeGoogled and privacy oriented OS
As mentioned earlier, /e/ OS is a Google-free operating system based on the open source core of [Lineage OS][16]. All the Google apps have been removed, and the Google services have been replaced with the MicroG framework. The /e/ OS is still compatible with all Android apps.
##### Key privacy features:
* Google search engine has been replaced with alternatives such as DuckDuckGo
* Google Services have been replaced by microG framework
* Alternative default apps are used instead of Google Apps
* Connectivity check against Google servers is removed
  * NTP servers have been replaced with the standard NTP service: pool.ntp.org
  * Default DNS servers are replaced by 9.9.9.9 and can be edited to the user's choice
* Geolocation is using Mozilla Location Services on top of GPS
Privacy notice
Please be mindful that using a smartphone provided by the /e/ Foundation doesn't automatically mean that your privacy is guaranteed no matter what you do. Social media apps that share your personal information should be used with awareness.
#### Conclusion
I have been an Android user for more than a decade. /e/ OS surprised me positively. A privacy-concerned user can find this solution very appealing and, depending on the selected apps and settings, can feel secure using a smartphone again.
I could recommend it to you if you are privacy-aware and tech-savvy and can find your way around things on your own. The /e/ ecosystem is likely to be overwhelming for people who are used to mainstream Google services.
Have you used /e/ OS? How was your experience with it? What do you think of projects like these that focus on privacy?
--------------------------------------------------------------------------------
via: https://itsfoss.com/e-os-review/
作者:[Dimitrios][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Ga%C3%ABl_Duval
[2]: https://en.wikipedia.org/wiki/Mandriva_Linux
[3]: https://en.wikipedia.org/wiki/MicroG
[4]: https://esolutions.shop/shop/e-os-fairphone-3-fr/
[5]: https://www.fairphone.com/en/story/?ref=header
[6]: https://esolutions.shop/shop/
[7]: https://doc.e.foundation/devices/
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/e-ecosystem.png?resize=768%2C510&ssl=1
[9]: https://e.foundation/e-os-available-applications/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/04/e-os-apps-privacy-ratings.png?resize=300%2C539&ssl=1
[11]: https://www.magicearth.com/
[12]: https://k9mail.app/
[13]: https://github.com/Etar-Group/Etar-Calendar
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/fairphone.jpg?resize=600%2C367&ssl=1
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/e-bliss-launcher.jpg?resize=300%2C533&ssl=1
[16]: https://lineageos.org/

[#]: subject: (Linux tips for using GNU Screen)
[#]: via: (https://opensource.com/article/21/4/gnu-screen-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Linux tips for using GNU Screen
======
Learn the basics of terminal multiplexing with GNU Screen, then download
our cheat sheet so you always have the essential shortcuts at hand.
![Terminal command prompt on orange background][1]
To the average user, a terminal window can be baffling and cryptic. But as you learn more about the Linux terminal, it doesn't take long before you realize how efficient and powerful it is. It also doesn't take long for you to want it to be even _more_ efficient, though, and what better way to make your terminal better than to put more terminals into your terminal?
### Terminal multiplexing
One of the many advantages to the terminal is that it's a centralized interface with centralized controls. It's one window that affords you access to hundreds of applications, and all you need to interact with each one of them is a keyboard. But modern computers almost always have processing power to spare, and modern computerists love to multitask, so one window for hundreds of applications can be pretty limiting.
A common answer for this flaw is terminal multiplexing: the ability to layer virtual terminal windows on top of one another and then move between them all. With a multiplexer, you retain your centralized control, but you gain the ability to swap out the interface as you multitask. Better yet, you can split your virtual screens within your terminal so you can have multiple screens up at the same time.
### Choose the right multiplexer
Some terminals offer similar features, with tabbed interfaces and split views, but there are subtle differences. First of all, these terminals' features depend on a graphical desktop environment. Second, many graphical terminal features require mouse interaction or use inconvenient keyboard shortcuts. A terminal multiplexer's features work just as well in a text console as on a graphical desktop, and the keybindings are conveniently designed around common terminal sequences.
There are two popular multiplexers: [tmux][2] and [GNU Screen][3]. They do the same thing and mostly have the same features, although the way you interact with each is slightly different. This article is a getting-started guide for GNU Screen. For information about tmux, read Kevin Sonney's [introduction to tmux][4].
### Using GNU Screen
GNU Screen's basic usage is simple. Launch it with the `screen` command, and you're placed into the zeroth window in a Screen session. You may hardly notice anything's changed until you decide you need a new prompt.
When one terminal window is occupied with an activity (for instance, you've launched a text editor like [Vim][5] or [Jove][6], or you're processing video or audio, or running a batch job), you can just open a new one. To open a new window, press **Ctrl+A**, release, and then press **c**. This creates a new window on top of your existing window.
You'll know you're in a new window because your terminal appears to be clear of anything aside from its default prompt. Your other terminal still exists, of course; it's just hiding behind the new one. To traverse through your open windows, press **Ctrl+A**, release, and then **n** for _next_ or **p** for _previous_. With just two windows open, **n** and **p** functionally do the same thing, but you can always open more windows (**Ctrl+A** then **c**) and walk through them.
### Split screen
GNU Screen's default behavior is more like a mobile device screen than a desktop: you can only see one window at a time. If you're using GNU Screen because you love to multitask, being able to focus on only one window may seem like a step backward. Luckily, GNU Screen lets you split your terminal into windows within windows.
To create a horizontal split, press **Ctrl+A** and then **s**. This places one window above another, just like window panes. The split space is, however, left unpurposed until you tell it what to display. So after creating a split, you can move into the split pane with **Ctrl+A** and then **Tab**. Once there, use **Ctrl+A** then **n** to navigate through all your available windows until the content you want to be displayed is in the split pane.
You can also create vertical splits with **Ctrl+A** then **|** (that's a pipe character, or the **Shift** option of the **\** key on most keyboards).
### Make GNU Screen your own
GNU Screen uses shortcuts based around **Ctrl+A**. Depending on your habits, this can either feel very natural or be supremely inconvenient because you use **Ctrl+A** to move to the beginning of a line anyway. Either way, GNU Screen permits all manner of customization through the `.screenrc` configuration file. You can change the trigger keybinding (called the "escape" keybinding) with this:
```
escape ^jJ
```
You can also add a status line to help you keep yourself oriented during a Screen session:
```
# status bar, with current window highlighted
hardstatus alwayslastline
hardstatus string '%{= kG}[%{G}%H%? %1`%?%{g}][%= %{= kw}%-w%{+b yk} %n*%t%?(%u)%? %{-}%+w %=%{g}][%{B}%m/%d %{W}%C%A%{g}]'
 
# enable 256 colors
attrcolor b ".I"
termcapinfo xterm 'Co#256:AB=\E[48;5;%dm:AF=\E[38;5;%dm'
defbce on
```
Having an always-on reminder of which window has focus and which windows have background activity is especially useful during a session with multiple windows open. It's a sort of task manager for your terminal.
### Download the cheat sheet
When you're learning GNU Screen, you'll have a lot of new keyboard commands to remember. Some you'll remember right away, but the ones you use less often might be difficult to keep track of. You can always access a Help screen within GNU Screen with **Ctrl+A** then **?**, but if you prefer something you can print out and keep by your keyboard, **[download our GNU Screen cheat sheet][7]**.
Learning GNU Screen is a great way to increase your efficiency and alacrity with your favorite [terminal emulator][8]. Give it a try!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/gnu-screen-cheat-sheet
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE (Terminal command prompt on orange background)
[2]: https://github.com/tmux/tmux/wiki
[3]: https://www.gnu.org/software/screen/
[4]: https://opensource.com/article/20/1/tmux-console
[5]: https://opensource.com/tags/vim
[6]: https://opensource.com/article/17/1/jove-lightweight-alternative-vim
[7]: https://opensource.com/downloads/gnu-screen-cheat-sheet
[8]: https://opensource.com/article/21/2/linux-terminals

[#]: subject: (Access an alternate internet with OpenNIC)
[#]: via: (https://opensource.com/article/21/4/opennic-internet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Access an alternate internet with OpenNIC
======
Take a detour on the super information highway.
![An intersection of pipes.][1]
In the words of Dan Kaminsky, the legendary DNS hacker, "the Internet's proven to be a pretty big deal for global society." For the Internet to work, computers must be able to find one another on the most complex network of all: the World Wide Web. This was the problem posed to government workers and academic IT staff a few decades ago, and it's their solutions that we use today. They weren't, however, actually seeking to build _the Internet_; they were defining specifications for _internets_ (actually for _catenets_, or "concatenated networks," a term that eventually fell out of vogue), a generic term for _interconnected networks_.
According to these specifications, a network uses a combination of numbers that serve as a sort of home address for each online computer and assigns a human-friendly but highly structured "hostname" (such as `example.com`) to each website. Because users primarily interact with the internet through website _names_, it can be said that the internet works only because we've all agreed to a standardized naming scheme. The Internet _could_ work differently, should enough people decide to use a different naming scheme. A group of users could form a parallel internet, one that exists using the same physical infrastructure (the cables and satellites and other modes of transport that get data from one place to another) but uses a different means of correlating hostnames with numbered addresses.
In fact, this already exists, and this article shows how you can access it.
### Understand name servers
The term "internet" is actually a portmanteau of the terms _interconnected_ and _networks_ because that's exactly what it is. Like neighborhoods in a city, or cities in a country, or countries on a continent, or continents on a planet, the internet spans the globe by transmitting data from one home or office network to data centers and server rooms or other home or office networks. It's a gargantuan task—but it's not without precedent. After all, phone companies long ago connected the world, and before that, telegraph and postal services did the same.
In a phone or mail system, there's a list, whether formal or informal, that relates human names to physical addresses. This used to be delivered to houses in the form of telephone books, a directory of every phone owner in that phone book's community. Post offices operate differently: they usually rely on the person sending the letter to know the name and address of the intended recipient, but postcodes and city names are used to route the letter to the correct post office. Either way, a standard organizational scheme is necessary.
For computers, the [IP protocol][2] describes how addresses on the internet must be formatted. The domain name server [(DNS) protocol][3] describes how human-friendly names may be assigned to and resolved from IP addresses. Whether you're using IPv4 or IPv6, the idea is the same: When a node (which could be a computer or a gateway leading to another network) joins a network, it is assigned an IP address.
If you wish, you may register a domain name with [ICANN][4] (a non-profit organization that helps coordinate website names on the internet) and register the name as a pointer to an IP address. There is no requirement that you "own" the IP address. Anyone can point any domain name to any IP address. The only restrictions are that only one person can own a specific domain name at a time, and the domain name must follow the recognized DNS naming scheme.
Records of a domain name and its associated IP address are entered into a DNS. When you navigate to a website in your browser, it quickly consults the DNS network to find what IP address is associated with whatever URL you've entered (or clicked on from a search engine).
### A different DNS
To avoid arguments over who owns which domain name, most domain name registrars charge a fee for domain registration. The fee is usually nominal, and sometimes it's even $0 (for instance, `freenom.com` offers gratis `.tk`, `.ml`, `.gq`, and `.cf` domains on a first-come, first-served basis).
For a very long time, there were only a few "top-level" domains, including `.org`, `.edu`, and `.com`. Now there are a lot more, including `.club`, `.biz`, `.name`, `.international`, and so on. Letter combinations being what they are, however, there are lots of potential top-level domains that aren't valid, such as `.null`. If you try to navigate to a website ending in `.null`, then you won't get very far. It's not available for registration, it's not a valid entry for a domain name server, and it just doesn't exist.
The [OpenNIC Project][5] has established an alternate DNS network to resolve domain names to IP addresses, but it includes names not currently used by the internet. Available top-level domains include:
* .geek
* .indy
* .bbs
* .gopher
* .o
* .libre
* .oss
* .dyn
* .null
You can register a domain within these (and more) top-level domains and register them on the OpenNIC DNS system so that they map to an IP address of your choice.
In other words, a website may exist in the OpenNIC network but remain inaccessible to anyone not using OpenNIC name servers. This isn't by any means a security measure or even a means of obfuscation; it's just a conscious choice to take a detour on the _super information highway_.
### How to use an OpenNIC DNS server
To access OpenNIC sites, you must configure your computer to use OpenNIC DNS servers. Luckily, this isn't a binary choice. By using an OpenNIC DNS server, you get access to both OpenNIC and the standard web.
To configure your Linux computer to use an OpenNIC DNS server, you can use the [nmcli][6] command, a terminal interface to Network Manager. Before starting the configuration, visit [opennic.org][5] and look for your nearest OpenNIC DNS server. As with standard DNS and [edge computing][7], the closer the server is to you geographically, the less delay you'll experience when your browser queries it.
Here's how to use OpenNIC:
1. First, get a list of connections:
```
$ sudo nmcli connection
NAME                TYPE             DEVICE
Wired connection 1  802-3-ethernet   eth0
MyPersonalWifi      802-11-wireless  wlan0
ovpn-phx2-tcp       vpn              --
```
Your connections are sure to differ from this example, but focus on the first column. This provides the human-readable name of your connections. In this example, I'll configure my Ethernet connection, but the process is the same for a wireless connection.
2. Now that you know the name of the connection you need to modify, use `nmcli` to update its `ipv4.dns` property:
```
$ sudo nmcli con modify \
"Wired connection 1" \
ipv4.dns "134.195.4.2"
```
In this example, `134.195.4.2` is my closest server.
3. Prevent Network Manager from auto-updating `/etc/resolv.conf` with what your router is set to use:
```
$ sudo nmcli con modify \
"Wired connection 1" \
ipv4.ignore-auto-dns yes
```
4. Bring your network connection down and then up again to instantiate the new settings:
```
$ sudo nmcli con down \
"Wired connection 1"
$ sudo nmcli con up \
"Wired connection 1"
```
That's it. You're now using the OpenNIC DNS servers.
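Once the connection is back up, the change typically shows in the system resolver file. A sketch of what `/etc/resolv.conf` might contain, assuming the `134.195.4.2` server chosen above:

```
# /etc/resolv.conf (managed by NetworkManager)
nameserver 134.195.4.2
```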
#### DNS at your router
You can set your entire network to use OpenNIC by making this change to your router. You won't have to configure your computer's connection because the router will provide the correct DNS server automatically. I can't demonstrate this because router interfaces differ depending on the manufacturer. Furthermore, some internet service providers (ISP) don't allow you to modify your name server settings, so this isn't always an option.
### Test OpenNIC
To explore the "other" internet you've unlocked, try navigating to `grep.geek` in your browser. If you enter `http://grep.geek`, then your browser takes you to a search engine for OpenNIC. If you enter just `grep.geek`, then your browser interferes, taking you to your default search engine (such as [Searx][8] or [YaCy][9]), with an offer at the top of the window to navigate to the page you requested in the first place.
![OpenNIC][10]
(Klaatu, [CC BY-SA 4.0][11])
Either way, you end up at `grep.geek` and can now search the OpenNIC version of the web.
### Great wide open
The internet is meant to be a place of exploration, discovery, and equal access. OpenNIC helps ensure all of these things using existing infrastructure and technology. It's an opt-in internet alternative. If these ideas appeal to you, give it a try!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/4/opennic-internet
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://tools.ietf.org/html/rfc791
[3]: https://tools.ietf.org/html/rfc1035
[4]: https://www.icann.org/resources/pages/register-domain-name-2017-06-20-en
[5]: http://opennic.org
[6]: https://opensource.com/article/20/7/nmcli
[7]: https://opensource.com/article/17/9/what-edge-computing
[8]: http://searx.me
[9]: https://opensource.com/article/20/2/open-source-search-engine
[10]: https://opensource.com/sites/default/files/uploads/did-you-mean.jpg (OpenNIC)
[11]: https://creativecommons.org/licenses/by-sa/4.0/

[#]: subject: (Access freenode using Matrix clients)
[#]: via: (https://fedoramagazine.org/access-freenode-using-matrix-clients/)
[#]: author: (TheEvilSkeleton https://fedoramagazine.org/author/theevilskeleton/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Access freenode using Matrix clients
======
![][1]
Fedora Linux 34 Background with freenode and Matrix logos
Matrix (also written [matrix]) is [an open source project][2] and [a communication protocol][3]. The protocol standard is open and it is free to use or implement. Matrix is being recognized as a modern successor to the older [Internet Relay Chat (IRC)][4] protocol. [Mozilla][5], [KDE][6], [FOSDEM][7] and [GNOME][8] are among several large projects that have started using chat clients and servers that operate over the Matrix protocol. Members of the Fedora project have [discussed][9] whether or not the community should switch to using the Matrix protocol.
The Matrix project has implemented an IRC bridge to enable communication between IRC networks (for example, [freenode][10]) and [Matrix homeservers][11]. This article is a guide on how to register, identify and join freenode channels from a Matrix client via the [Matrix IRC bridge][12].
Check out _[Beginners guide to IRC][13]_ for more information about IRC.
### Preparation
You need to set everything up before you register a nick. A nick is a username.
#### Install a client
Before you use the IRC bridge, you need to install a Matrix client. This guide will use Element. Other [Matrix clients][14] are available.
First, install the Matrix client _Element_ from [Flathub][15] on your PC. Alternatively, browse to [element.io][16] to run the Element client directly in your browser.
Next, click _Create Account_ to register a new account on matrix.org (a homeserver hosted by the Matrix project).
#### Create rooms
For the IRC bridge, you need to create rooms with the required users.
First, click the (plus) button next to _People_ on the left side in Element and type _@appservice-irc:matrix.org_ in the field to create a new room with the user.
Second, create another new room with _@freenode_NickServ:matrix.org_.
### Register a nick at freenode
If you have already registered a nick at freenode, skip the remainder of this section.
Registering a nickname is optional, but strongly recommended. Many freenode channels require a registered nickname to join.
First, open the room with _appservice-irc_ and enter the following:
```
!nick <your_nick>
```
Substitute _<your_nick>_ with the username you want to use. If the nick is already taken, _NickServ_ will send you the following message:
```
This nickname is registered. Please choose a different nickname, or identify via /msg NickServ identify <password>.
```
If you receive the above message, use another nick.
Second, open the room with _NickServ_ and enter the following:
```
REGISTER <your_password> <your_email@example.com>
```
You will receive a verification email from freenode. The email will contain a verification command similar to the following:
```
/msg NickServ VERIFY REGISTER <your_nick> <verification_code>
```
Ignore _/msg NickServ_ at the start of the command. Enter the remainder of the command in the room with _NickServ_. Be quick! You will have 24 hours to verify before the code expires.
### Identify your nick at freenode
If you just registered a new nick using the procedure in the previous section, then you should already be identified. If you are already identified, skip the remainder of this section.
First, open the room with _@appservice-irc:matrix.org_ and enter the following:
```
!nick <your_nick>
```
Next, open the room with _@freenode_NickServ:matrix.org_ and enter the following:
```
IDENTIFY <your_nick> <your_password>
```
### Join a freenode channel
To join a freenode channel, press the (plus) button next to _Rooms_ on the left side in Element and type _#freenode_#<your_channel>:matrix.org_. Substitute _<your_channel>_ with the freenode channel you want to join. For example, to join the _#fedora_ channel, use _#freenode_#fedora:matrix.org_. For a list of Fedora Project IRC channels, see _[Communicating_and_getting_help — IRC_for_interactive_community_support][17]_.
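The pattern generalizes: the bridge exposes each IRC channel as a Matrix room alias of the form `#freenode_#<channel>:matrix.org`. A few illustrative mappings (the channel names here are only examples):

```
#fedora on freenode  ->  #freenode_#fedora:matrix.org
#python on freenode  ->  #freenode_#python:matrix.org
```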
### Further reading
* [Matrix IRC wiki][18]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/access-freenode-using-matrix-clients/
作者:[TheEvilSkeleton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/theevilskeleton/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/04/freenode-matrix-816x345.jpeg
[2]: https://matrix.org/
[3]: https://matrix.org/docs/spec/
[4]: https://en.wikipedia.org/wiki/Internet_Relay_Chat
[5]: https://matrix.org/blog/2019/12/19/welcoming-mozilla-to-matrix/
[6]: https://matrix.org/blog/2019/02/20/welcome-to-matrix-kde/
[7]: https://matrix.org/blog/2021/01/04/taking-fosdem-online-via-matrix
[8]: https://wiki.gnome.org/Initiatives/Matrix
[9]: https://discussion.fedoraproject.org/t/the-future-of-real-time-chat-discussion-for-the-fedora-council/24628
[10]: https://en.wikipedia.org/wiki/Freenode
[11]: https://en.wikipedia.org/wiki/Matrix_(protocol)#Servers
[12]: https://github.com/matrix-org/matrix-appservice-irc
[13]: https://fedoramagazine.org/beginners-guide-irc/
[14]: https://matrix.org/clients/
[15]: https://flathub.org/apps/details/im.riot.Riot
[16]: https://app.element.io/
[17]: https://fedoraproject.org/wiki/Communicating_and_getting_help#IRC_for_interactive_community_support
[18]: https://github.com/matrix-org/matrix-appservice-irc/wiki

[#]: subject: (Chrome Browser Keeps Detecting Network Change in Linux? Here's How to Fix it)
[#]: via: (https://itsfoss.com/network-change-detected/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
[#]: collector: (lujun9972)
[#]: translator: (HuengchI)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Chrome Browser Keeps Detecting Network Change in Linux? Here's How to Fix it
======
For the past several days, I faced a strange issue in my system running Ubuntu Linux. I use Firefox and [Brave browsers][1]. Everything was normal in Firefox but Brave keeps on detecting a network change on almost every refresh.
![][2]
This went on to the extent that the browser became impossible to use. I could not use [Feedly][3] to browse feeds from my favorite websites; every search ended in multiple refreshes, and websites needed to be refreshed multiple times as well.
As an alternative, I tried [installing Chrome on Ubuntu][4]. The problem remained the same. I [installed Microsoft Edge on Linux][5], and the problem persisted there as well. Basically, any Chromium-based browser kept encountering the ERR_NETWORK_CHANGED error.
Luckily, I found a way to fix the issue. I am going to share the steps with you so that it helps you if you are also facing the same problem.
### Fixing frequent network change detection issues in Chromium based browsers
The trick that worked for me was to disable IPv6 in the network settings. Now, I am not sure why this happens but I know that IPv6 is known to create network problems in many systems. If your system, router and other devices use IPv6 instead of the good old IPv4, you may encounter network connection issues like the one I encountered.
Thankfully, it is not that difficult to [disable IPv6 in Ubuntu][6]. There are several ways to do that and I am going to share the easiest method perhaps. This method uses GRUB to disable IPv6.
Attention Beginners!
If you are not too comfortable with the command line and terminal, please pay extra attention to the steps. Read the instructions carefully.
#### Step 1: Open GRUB config file for editing
Open the terminal. Now use the following command to edit the GRUB config file in the Nano editor. You'll have to enter your account's password.
```
sudo nano /etc/default/grub
```
I hope you know a little bit about [using Nano editor][7]. Use the arrow keys to go to the line starting with GRUB_CMDLINE_LINUX. Make its value look like this:
```
GRUB_CMDLINE_LINUX="ipv6.disable=1"
```
Be careful of the inverted commas and spaces. Don't touch other lines.
![][8]
Save your changes by using the Ctrl+X keys. It will ask you to confirm the changes. Press Y or Enter when asked.
#### Step 2: Update grub
You have made changes to the GRUB bootloader configuration. These changes won't be taken into account until you update grub. Use the command below for that:
```
sudo update-grub
```
![][9]
Now when you restart your system, IPv6 will be disabled for your networks. You should not encounter the network interruption issue anymore.
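To confirm the change took effect after the reboot, you can check the kernel's boot parameters. A small read-only sketch, safe to run at any time:

```shell
# The running kernel's command line should now contain the flag
grep -o "ipv6.disable=1" /proc/cmdline || echo "flag not present"

# 1 here means IPv6 is disabled (the file may be absent if the
# IPv6 stack was never loaded at all)
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null
```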
You may wonder why I didn't mention disabling IPv6 from the network settings. It's because Ubuntu uses [Netplan][10] to manage network configuration these days, and it seems that changes in Network Manager are not fully taken into account by Netplan. I tried it, but despite IPv6 being disabled in the Network Manager, the problem didn't go away until I used the command line method.
Even after so many years, IPv6 support has not matured, and it keeps causing trouble. Disabling IPv6 sometimes [improves WiFi speed in Linux][11]. Weird, I know.
Anyway, I hope this trick helps you with the network change detection issue in your system as well.
--------------------------------------------------------------------------------
via: https://itsfoss.com/network-change-detected/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/brave-web-browser/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/network-change-detected.png?resize=800%2C418&ssl=1
[3]: https://feedly.com/
[4]: https://itsfoss.com/install-chrome-ubuntu/
[5]: https://itsfoss.com/microsoft-edge-linux/
[6]: https://itsfoss.com/disable-ipv6-ubuntu-linux/
[7]: https://itsfoss.com/nano-editor-guide/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/04/disabling-ipv6-via-grub.png?resize=800%2C453&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/04/updating-grub-ubuntu.png?resize=800%2C434&ssl=1
[10]: https://netplan.io/
[11]: https://itsfoss.com/speed-up-slow-wifi-connection-ubuntu/

[#]: subject: (Flipping burgers to flipping switches: A tech guy's journey)
[#]: via: (https://opensource.com/article/21/5/open-source-story-burgers)
[#]: author: (Clint Byrum https://opensource.com/users/spamaps)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Flipping burgers to flipping switches: A tech guy's journey
======
You never know how your first job might influence your career path.
![Multi-colored and directional network computer cables][1]
In my last week of high school in 1996, I quit my job at Carl's Jr. because I thought maybe without school, I'd have time to learn enough skills to get hired at a PC shop or something. I didn't know that I actually had incredibly marketable skills as a Linux sysadmin and C programmer, because I was the only tech person I'd ever known (except the people I chatted with on Undernet's #LinuxHelp channel).
I applied at a local company that had maybe the weirdest tech mission I've experienced: Its entire reason for existing was the general lack of industrial-sized QIC-80 tape-formatting machines. Those 80MB backup tapes (gargantuan at a time when 200MB hard disks were huge) were usually formatted at the factory as they came off the line, or you could buy them already formatted at a significantly higher price.
One of the people who developed that line at 3M noticed that formatting them took an hour—over 90% of their time in manufacturing. The machine developed to speed up formatting was, of course, buggy and years too late.
Being a shrewd businessman, instead of fixing the problem for 3M, he quit his job, bought a bunch of cheap PCs and a giant pile of unformatted tapes, and began paying minimum wage to workers in my hometown of San Marcos, Calif., to stuff them into the PCs and pull them out all day long. Then he sold the formatted tapes at a big markup—but less than what 3M charged for them. It was a success.
By the time I got there in 1996, they'd streamlined things a bit. They had a big degaussing machine, about four hundred 486 PCs stuffed with specialized floppy controllers so that you could address eight tape drives in one machine, custom software (including hardware multiplexers for data collection), and contracts with all the major tape makers (Exabyte, 3M, etc.). I thought I was coming in to be a PC repair tech, as I had passed the test, which asked me to identify all the parts of a PC.
A few weeks in, the lead engineer noticed I had an electronics book (I was studying electronics at [ITT Tech][2], of all places) and pulled me in to help him debug and build the next feature they had cooked up—a custom printed circuit board (PCB) that lit up LEDs to signal a tape's status: formatting (yellow), error (red), or done (green). I didn't write any code or do anything useful, but he still told me I was wasting my time there and should go out and get a real tech job.
That "real tech job" I got was as a junior sysadmin for a local medical device manufacturer. I helped bridge their HP-UX ERP system to their new parent company's Windows NT printers using Linux and Samba—and I was hooked forever on the power of free and open source software (FOSS). I also did some fun debugging while I was there, which you can [read about][3] on my blog.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/5/open-source-story-burgers
作者:[Clint Byrum][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/spamaps
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connections_wires_sysadmin_cable.png?itok=d5WqHmnJ (Multi-colored and directional network computer cables)
[2]: https://en.wikipedia.org/wiki/ITT_Technical_Institute
[3]: https://fewbar.com/2020/04/a-bit-of-analysis/
