Donkey-Hao 2022-10-24 08:52:08 +08:00
commit f0cb98d3b4
58 changed files with 6379 additions and 3329 deletions


@ -0,0 +1,184 @@
[#]: subject: (Write your first web component)
[#]: via: (https://opensource.com/article/21/7/web-components)
[#]: author: (Ramakrishna Pattnaik https://opensource.com/users/rkpattnaik780)
[#]: collector: (lujun9972)
[#]: translator: (cool-summer-021)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-15148-1.html)
开发你的第一个 Web 组件
======
> 不要做重复的工作;基于浏览器开发 Web App 时,需要制作一些可重用的模块。
![](https://img.linux.net.cn/data/attachment/album/202210/17/101134uzsiis8xsu9wqibi.jpg)
Web 组件是一系列开源技术(例如 JavaScript 和 HTML的集合你可以用它们创建一些 Web App 中可重用的自定义元素。你创建的组件是独立于其他代码的,所以这些组件可以方便地在多个项目中重用。
更棒的是,它是一个平台标准,所有主流的浏览器都支持它。
### Web 组件中包含什么?
* **定制元素**JavaScript API 支持定义 HTML 元素的新类别。
* **影子 DOM**JavaScript API 提供了一种将一个隐藏的、独立的 [文档对象模型][2]DOM附加到一个元素的方法。它通过保留从页面的其他代码分离出来的样式、标记结构和行为特征对 Web 组件进行了封装。它会确保 Web 组件内样式不会被外部样式覆盖反之亦然Web 组件内样式也不会“泄露”到页面的其他部分。
* **HTML 模板**:该元素支持定义可重用的 DOM 元素。可重用 DOM 元素和它的内容不会呈现在 DOM 内,但仍然可以通过 JavaScript 被引用。
### 开发你的第一个 Web 组件
你可以借助你最喜欢的文本编辑器和 JavaScript 写一个简单的 Web 组件。本指南使用 Bootstrap 生成简单的样式,并创建一个简易的卡片式的 Web 组件,给定了位置信息,该组件就能显示该位置的温度。该组件使用了 [Open Weather API][3],你需要先注册,然后创建 APPID/APIKey才能正常使用。
调用该组件,需要给出位置的经度和纬度:
```
<weather-card longitude='85.8245' latitude='20.296' />
```
创建一个名为 `weather-card.js` 的文件,这个文件包含 Web 组件的所有代码。首先,需要定义你的组件,创建一个模板元素,并在其中加入一些简单的 HTML 标签:
```
const template = document.createElement('template');
template.innerHTML = `
  <div class="card">
    <div class="card-body"></div>
  </div>
`
```
定义 Web 组件的类及其构造函数:
```
class WeatherCard extends HTMLElement {
  constructor() {
    super();
    this._shadowRoot = this.attachShadow({ 'mode': 'open' });
    this._shadowRoot.appendChild(template.content.cloneNode(true));
  }
  ......
}
```
构造函数中,附加了 `shadowRoot` 属性,并将它设置为开启模式。然后,把模板的内容克隆并附加到 `shadowRoot` 上。
接着,编写获取属性的函数。对于经度和纬度,你需要向 Open Weather API 发送 GET 请求。这些功能需要在 `connectedCallback` 函数中完成。你可以使用 `getAttribute` 方法访问相应的属性,或定义读取属性的方法,把它们绑定到本对象中。
```
get longitude() {
  return this.getAttribute('longitude');
}
get latitude() {
  return this.getAttribute('latitude');
}
```
现在定义 `connectedCallback` 方法,它的功能是在需要时获取天气数据:
```
connectedCallback() {
  var xmlHttp = new XMLHttpRequest();
  const url = `http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}&lon=${this.longitude}&appid=API_KEY`
  xmlHttp.open("GET", url, false);
  xmlHttp.send(null);
  this.$card = this._shadowRoot.querySelector('.card-body');
  let responseObj = JSON.parse(xmlHttp.responseText);
  let $townName = document.createElement('p');
  $townName.innerHTML = `Town: ${responseObj.name}`;
  this._shadowRoot.appendChild($townName);
  let $temperature = document.createElement('p');
  $temperature.innerHTML = `${parseInt(responseObj.main.temp - 273)} &deg;C`
  this._shadowRoot.appendChild($temperature);
}
```
一旦获取到天气数据,附加的 HTML 元素就添加进了模板。至此,完成了类的定义。
最后,使用 `window.customElements.define` 方法定义并注册一个新的自定义元素:
```
window.customElements.define('weather-card', WeatherCard);
```
其中,第一个参数是自定义元素的名称,第二个参数是所定义的类。这里是 [整个组件代码的链接][5]。
你的第一个 Web 组件的代码已完成!现在应该把它放入 DOM。为了把它放入 DOM你需要在 HTML 文件(`index.html`)中载入指向 Web 组件的 JavaScript 脚本。
```
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
</head>
<body>
<weather-card longitude='85.8245' latitude='20.296'/>
  <script src='./weather-card.js'></script>
</body>
</html>
```
这就是显示在浏览器中的 Web 组件:
![Web component displayed in a browser][6]
由于 Web 组件中只包含 HTML、CSS 和 JavaScript它们本来就是浏览器所支持的并且可以无缝地跟前端框架例如 React 和 Vue一同使用。下面这段简单的代码展示了它如何与一个由 [Create React App][8] 引导创建的简单 React 应用整合。如果你需要,可以引入前面定义的 `weather-card.js`,把它作为一个组件使用:
```
import './App.css';
import './weather-card';
function App() {
  return (
  <weather-card longitude='85.8245' latitude='20.296'></weather-card>
  );
}
export default App;
```
### Web 组件的生命周期
一切组件都遵循从初始化到移除的生命周期法则。每个生命周期事件都有相应的方法你可以借助这些方法令组件更好地工作列表之后给出了一个完整的骨架示例。Web 组件的生命周期事件包括:
* `Constructor`Web 组件的构造函数在它被挂载前调用,意味着在元素附加到文档对象前被创建。它用于初始化本地状态、绑定事件处理器以及创建影子 DOM。在构造函数中必须调用 `super()`,执行父类的构造函数。
* `ConnectedCallBack`:当一个元素被挂载(即,插入 DOM 树)时调用。该函数处理创建 DOM 节点的初始化过程中的相关事宜大多数情况下用于类似于网络请求的操作。React 开发者可以将它与 `componentDidMount` 相关联。
* `attributeChangedCallback`:这个方法接收三个参数:`name`、`oldValue` 和 `newValue`。组件的任一被监听的属性发生变化,就会执行这个方法。属性由静态 `observedAttributes` 方法声明:
```
static get observedAttributes() {
  return ['name', '_id'];
}
```
一旦 `name` 或 `_id` 属性改变,就会调用 `attributeChangedCallback` 方法。
* `DisconnectedCallBack`:当一个元素从 DOM 树移除,会执行这个方法。它相当于 React 中的 `componentWillUnmount`。它可以用于释放不能由垃圾回收机制自动清除的资源,比如 DOM 事件的取消订阅、停用计时器或取消所有已注册的回调方法。
* `AdoptedCallback`:每次自定义元素移动到一个新文档时调用。只有在处理 IFrame 时会发生这种情况。
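综合上面列出的生命周期方法,下面是一个组件的骨架示例(仅为示意:`my-element` 这个组件名和其中的处理逻辑都是假设的,并非本文天气组件的实现):

```
class MyElement extends HTMLElement {
  // 声明需要监听变化的属性
  static get observedAttributes() {
    return ['name'];
  }

  constructor() {
    super(); // 必须首先调用父类构造函数
    this.attachShadow({ mode: 'open' });
    this._onClick = () => console.log('clicked');
  }

  connectedCallback() {
    // 元素插入 DOM 后进行初始化,例如绑定事件、发起网络请求
    this.addEventListener('click', this._onClick);
  }

  disconnectedCallback() {
    // 元素从 DOM 移除时释放资源,例如取消事件订阅、停用计时器
    this.removeEventListener('click', this._onClick);
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // 任一被监听的属性变化时触发
    console.log(`${name}: ${oldValue} -> ${newValue}`);
  }
}

window.customElements.define('my-element', MyElement);
```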
### 模块化开源
Web 组件对于开发 Web App 很有用。无论你是熟练使用 JavaScript 的老手,还是初学者,无论你的目标客户使用哪种浏览器,借助这种开源标准创建可重用的代码都是一件可以轻松完成的事。
*插图Ramakrishna Pattnaik, [CC BY-SA 4.0][7]*
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/web-components
作者:[Ramakrishna Pattnaik][a]
选题:[lujun9972][b]
译者:[cool-summer-021](https://github.com/cool-summer-021)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rkpattnaik780
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://en.wikipedia.org/wiki/Document_Object_Model
[3]: https://openweathermap.org/api
[4]: http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}\&lon=${this.longitude}\&appid=API\_KEY\`
[5]: https://gist.github.com/rkpattnaik780/acc683d3796102c26c1abb03369e31f8
[6]: https://opensource.com/sites/default/files/uploads/webcomponent.png (Web component displayed in a browser)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://create-react-app.dev/docs/getting-started/


@ -0,0 +1,160 @@
[#]: subject: "Troubleshooting “Bash: Command Not Found” Error in Linux"
[#]: via: "https://itsfoss.com/bash-command-not-found/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "chai001125"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15164-1.html"
解决 Linux 中的 “Bash: Command Not Found” 报错
======
> 本新手教程展示了在 Debian、Ubuntu 和其他的 Linux 发行版上如何解决 “Bash: command not found” 这一报错。
当你在 Linux 中使用命令时,你希望得到终端输出的结果。但有时候,你会遇到终端显示“<ruby>命令未找到<rt>command not found</rt></ruby>”这一报错。
![][1]
对于这个问题,并没有直截了当且单一的解决方案。你必须自己做一些故障排除来解决这个报错。
老实说,要解决它并不难。该报错信息已经给出了一些提示:“命令未找到”,这说明你的 shell或者 Linux 系统)找不到你输入的那条命令。
shell或 Linux 系统)找不到命令,有三个可能的原因:
* 你将命令的名称拼错了
* 该命令还没有安装
* 该命令是一个可执行脚本,但其位置未知
接下来,我们会详细介绍“命令未找到”这一报错的每一个原因。
### 解决“命令未找到”报错
![][2]
#### 方法 1再次检查命令名称有没有写错
每个人都会犯错误,尤其是在打字的时候。你输入的命令可能存在错别字(也就是你写错啦)。
你应该特别注意:
* 是否拼对了正确的命令名称
* 是否在命令与其选项之间加上了空格
* 是否在拼写中混淆了 1数字 1、I大写的 i和 l小写的 L
* 是否正确使用了大写字母或者小写字母
看看下面的示例,因为我写错了 `ls` 命令所以会导致“command not found”报错。
![][3]
所以,请再次仔细确认你输入得对不对。
#### 方法 2确保命令已安装在你的系统上
这是“命令未找到”错误的另一个常见原因。如果命令尚未安装,则无法运行该命令。
虽然在默认情况下,你的 Linux 发行版自带安装了大量命令,但是不会在系统中预装 _所有的_ 命令行工具。如果你尝试运行的命令不是一个流行的常用命令,那么你需要先安装它。
你可以使用发行版的软件包管理器来安装命令。
![You may have to install the missing command][4]
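例如(下面的命令仅为示意,`tree` 只是一个示例软件包,实际的包名和包管理器取决于你的发行版):

```
# Debian/Ubuntu 及其衍生版
sudo apt install tree

# Fedora/RHEL 及其衍生版
sudo dnf install tree

# Arch Linux
sudo pacman -S tree
```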
有时候,某一常用命令可能也不再能使用了,甚至你也不能够安装这个命令了。这种情况下,你需要找到一个替代的命令,来得到结果。
以现已弃用的 `ifconfig` 命令为例。网络上的旧教程依旧会让你使用 `ifconfig` 命令,来 [获取本机的 IP 地址][5] 和网络接口信息,但是,在较新的 Linux 版本中,你已经无法使用 `ifconfig` 了。`ifconfig` 命令已被 `ip` 命令所取代。
![Some popular commands get discontinued over the time][1]
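例如,在较新的系统上可以用 `ip` 命令获得类似的信息(以下命令仅为示例,接口名 `eth0` 是假设的):

```
# 查看所有网络接口及其 IP 地址
ip address show

# 只查看某一个接口
ip address show eth0
```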
有时候,你的系统可能甚至找不到一些非常常见的命令。当你在 Docker 容器中运行 Linux 发行版时就通常如此。Docker 容器为了缩小操作系统镜像的大小,容器中通常不包含那些常见的 Linux 命令。
这就是为什么使用 Docker 的用户会碰到 [ping 命令未找到][6] 等报错的原因。
![Docker containers often have only a few commands installed][7]
因此,这种情况下的解决方案是安装缺失的命令,或者是找到一个与缺失命令有同等功能的工具。
#### 方法 3确保命令是真实的而不是一个别名
我希望你知道 Linux 中的别名概念。你可以配置你自己的较短的命令来代替一个较长命令的输入。
一些发行版,如 Ubuntu会自动提供 `ll``ls -l` 的别名)、`la``ls -a` 的别名)等命令。
![][13]
想象一下,你习惯于在你的个人系统上输入 `ll``la`,而你登录到另一个 Linux 系统,发现 `ll` 命令并不存在。你甚至不能安装 `ll` 命令,因为它不是一个真正的命令。
所以,如果你找不到一个命令,甚至不能安装,你应该尝试在互联网上搜索该命令是否存在。如果不存在,可能是其他系统上的一个别名。
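你可以用 `type` 命令检查某个名称究竟是真实的命令还是别名,也可以在当前 shell 中自己定义别名(以下命令仅为示意):

```
# 查看 ll 是别名、内建命令还是可执行文件
type ll

# 在当前 shell 中临时定义一个别名
alias ll='ls -l'
```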
#### 方法 4检查命令是否是一个路径正确的可执行脚本
这是 Linux 新手在 [运行 shell 脚本][8] 时常犯的错误。
即使你在同一目录下,仅用可执行脚本的名称,来运行可执行脚本,也会显示错误。
```
$ sample
-bash: sample: command not found
```
因为你需要显式指定 shell 解释器或可执行脚本的路径!
![][9]
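例如(脚本名 `sample` 仅为示例),可以用下面任一方式运行同一目录下的脚本:

```
# 显式指定 shell 解释器
bash sample

# 或者加上可执行权限后,显式给出路径
chmod +x sample
./sample
```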
如果你在其他目录下,在未提供文件正确路径的情况下,运行 shell 脚本,则会有“<ruby>找不到文件<rt>no such file or directory</rt></ruby>”的报错。
![][10]
> **把可执行文件的路径加到 PATH 变量中**
>
> 有时候你下载了一个软件的压缩文件tar 格式),解压这个 tar 文件,然后找到一个可执行文件和其他程序文件。你需要运行可执行文件,来运行那个软件。
>
> 但是,你需要在可执行文件的同一目录下或指定可执行文件的整个路径,才能运行那个可执行文件。这很令人烦扰。
>
> 你可以使用 `PATH` 变量来解决这个问题。`PATH` 变量包含了有各种 Linux 命令的二进制(可执行)文件的目录集合。当你运行一个命令时,你的 Linux 系统会检查 `PATH` 变量中的上述目录,以查找该命令的可执行文件。
>
> 你可以使用 `which` 命令,来检查某一命令的二进制文件的位置:
>
> ![][11]
>
> 如果你想从系统上的任何地方都能运行可执行文件或脚本,你需要将可执行文件的位置添加到 `PATH` 变量中。
>
> ![][12]
>
> 然后,`PATH` 变量需要添加到 shell 的 rc 文件中,如此对 `PATH` 变量的更改就是永久性的。
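>
> 例如(目录 `~/scripts` 仅为示意):
>
> ```
> # 查看当前的 PATH 变量
> echo $PATH
>
> # 将脚本所在目录追加到 PATH只对当前会话有效
> export PATH=$PATH:~/scripts
>
> # 写入 bash 的 rc 文件,使更改永久生效
> echo 'export PATH=$PATH:~/scripts' >> ~/.bashrc
> ```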
>
> 这里的要点是:你的 Linux 系统必须了解可执行脚本的位置。要么在运行时给出可执行文件的整个路径,要么将其位置添加到 `PATH` 变量中。
### 以上的内容有帮到你吗?
我懂得,当你是 Linux 新手时,很多事情可能会让你不知所措。但是,当你了解问题的根本原因时,你的知识会逐渐增加。
对于“未找到命令”报错来说,没有简单的解决方案。我提供给你了一些提示和要点,我希望这对你的故障排除有帮助。
如果你仍然有疑问或需要帮助,请在评论区告诉我吧。
--------------------------------------------------------------------------------
via: https://itsfoss.com/bash-command-not-found/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[chai001125](https://github.com/chai001125)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error.png?resize=741%2C291&ssl=1
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error-1.png?resize=800%2C450&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-error.png?resize=723%2C234&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-debian.png?resize=741%2C348&ssl=1
[5]: https://itsfoss.com/check-ip-address-ubuntu/
[6]: https://linuxhandbook.com/ping-command-ubuntu/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ping-command-not-found-ubuntu.png?resize=786%2C367&ssl=1
[8]: https://itsfoss.com/run-shell-script-linux/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-script-command-not-found-error-800x331.png?resize=800%2C331&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/script-file-not-found-error-800x259.png?resize=800%2C259&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/path-location.png?resize=800%2C241&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/adding-executable-to-PATH-variable-linux.png?resize=800%2C313&ssl=1
[13]: https://itsfoss.com/wp-content/uploads/2022/01/alias-in-ubuntu.png


@ -3,52 +3,51 @@
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: "chai001125"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15142-1.html"
服务器支持 IPv6 的原因
======
![](https://img.linux.net.cn/data/attachment/album/202210/15/155046v94vbmo5imykfkxz.jpg)
我一直在努力学习关于 IPv6 的相关知识。一方面IPv6 的基础概念是很简单的(没有足够的 IPv4 地址可以满足互联网上的所有设备,所以人们发明了 IPv6每个人都能有足够的 IPv6 地址!)
但是当我试图进一步理解它时,我遇到了很多问题。其中一个问题是:为什么 twitter.com 不支持 IPv6。假设网站不支持 IPv6 并不会造成很多困难,那么为什么网站需要支持 IPv6 呢?
我在 Twitter 上询问了很多人 [为什么他们的服务器支持 IPv6][1],我得到了很多很好的答案,我将在这里总结一下。事先说明一下,因为我对 IPv6 基本上毫无经验,所以下面所总结的理由中可能会有写得不准确的地方,请大家多多包涵。
首先,我想解释一下为什么 twitter.com 可以不支持 IPv6因为这是最先让我困惑的地方。
### 怎么知道 twitter.com 不支持 IPv6 呢?
你可以使用 `dig` 命令以 `AAAA` 的选项查询某一个域名的 IPv6 地址记录,如果没有记录,则表明该域名不支持 IPv6。除了 twitter.com还有一些大型网站如 github.com 和 stripe.com 也不支持 IPv6。
```
$ dig AAAA twitter.com
(empty response)
$ dig AAAA github.com
(empty response)
$ dig AAAA stripe.com
(empty response)
```
### 为什么 twitter.com 仍然适用于 IPv6 用户?
我发现这真的很令人困惑。我一直听说因为 IPv4 地址已经用完了,从而很多互联网用户被迫要使用 IPv6 地址。但如果这是真的twitter.com 怎么能继续为那些没有 IPv6 支持的人提供服务呢?以下内容是我昨天从 Twitter 会话中学习到的。
互联网服务提供商ISP有两种:
1. 能为所有用户拥有足够 IPv4 地址的 ISP
2. 不能为所有用户拥有足够 IPv4 地址的 ISP
我的互联网服务提供商属于第 1 类,因此我的计算机有自己的 IPv4 地址,实际上我的互联网服务提供商甚至根本不支持 IPv6。
但是很多互联网服务提供商(尤其是北美以外的)都属于第 2 类:他们没有足够的 IPv4 地址供所有用户使用。这些互联网服务提供商通过以下方式处理问题:
* 为所有用户提供唯一的 IPv6 地址,以便他们可以直接访问 IPv6 网站
* 让用户 _共享_ IPv4 地址,这可以使用 CGNAT<ruby>[运营商级 NAT][2]<rt>carrier-grade NAT</rt></ruby>或者“464XLAT”或其他方式。
所有互联网服务提供商都需要 _一些_ IPv4 地址,否则他们的用户将无法访问 twitter.com 等只能使用 IPv4 的网站。
@ -56,9 +55,9 @@
现在,我们已经解释了为什么可以 _不支持_ IPv6。那为什么要支持 IPv6 呢?有下面这些原因。
#### 原因一CGNAT 是一个性能瓶颈
对我而言,支持 IPv6 最有说服力的论点是CGNAT 是一个瓶颈,它会导致性能问题,并且随着对 IPv4 地址的访问变得越来越受限,它的性能会变得更糟。
有人也提到:因为 CGNAT 是一个性能瓶颈因此它成为了一个有吸引力的拒绝服务攻击DDoS的目标因为你可以通过攻击一台服务器影响其他用户对该服务器的网站的可用性。
@ -70,7 +69,7 @@
不过,使用 IPv6 还有很多更自私的论点,所以让我们继续探讨吧。
#### 原因二:只能使用 IPv6 的服务器也能够访问你的网站
我之前说过,大多数 IPv6 用户仍然可以通过 NAT 方式访问 IPv4 的网站。但是有些 IPv6 用户是不能访问 IPv4 网站的,因为他们发现他们运行的服务器只有 IPv6 地址,并且不能使用 NAT。因此这些服务器完全无法访问只能使用 IPv4 的网站。
@ -78,7 +77,7 @@
但对我来说,即使没有 IPv4 地址,一台主机也应该能够访问我的站点。
#### 原因三:更好的性能
对于同时使用 IPv4 和 IPv6即具有专用 IPv6 地址和共享 IPv4 地址的用户IPv6 通常更快,因为它不需要经过额外的 NAT 地址转换。
@ -88,61 +87,59 @@
以下是网站支持 IPv6 的一些其他性能优势:
* 使用 IPv6 可以提高搜索引擎优化SEO因为 IPv6 具有更好的性能。
* 使用 IPv6 可能会使你的数据包通过更好(更快)的网络硬件,因为相较于 IPv4IPv6 是一个更新的协议。
#### 原因四:能够恢复 IPv4 互联网中断
有人说他碰到过由于意外的 BGP 中毒,而导致仅影响 IPv4 流量的互联网中断问题。
因此,支持 IPv6 的网站意味着在中断期间,网站仍然可以保持部分在线。
#### 原因五:避免家庭服务器的 NAT 问题
将 IPv6 与家庭服务器一起使用,会变得简单很多,因为数据包不必通过路由器进行端口转发,因此只需为每台服务器分配一个唯一的 IPv6 地址,然后直接访问服务器的 IPv6 地址即可。
当然,要实现这一点,客户端需要支持 IPv6但如今越来越多的客户端也能支持 IPv6 了。
#### 原因六:为了拥有自己的 IP 地址
你也可以自己购买 IPv6 地址,并将它们用于家庭网络的服务器上。如果你更换了互联网服务提供商,可以继续使用相同的 IP 地址。
我不太明白这是如何工作的,是如何让互联网上的计算机将这些 IP 地址路由转发给你的我猜测你需要运行自己的自治系统AS或其他东西。
#### 原因七:为了学习 IPv6
有人说他们在安全领域中工作,为保证信息安全,了解互联网协议的工作原理非常重要(攻击者正在使用互联网协议进行攻击!)。因此,运行 IPv6 服务器有助于他们了解其工作原理。
#### 原因八:为了推进 IPv6
有人说因为 IPv6 是当前的标准,因此他们希望通过支持 IPv6 来为 IPv6 的成功做出贡献。
很多人还说他们的服务器支持 IPv6是因为他们认为只能使用 IPv4 的网站已经太“落后”了。
#### 原因九IPv6 很简单
我还得到了一堆“使用 IPv6 很容易,为什么不用呢”的答案。在所有情况下添加 IPv6 支持并不容易,但在某些情况下添加 IPv6 支持会是很容易的,有以下的几个原因:
* 你可以从托管公司自动地获得 IPv6 地址,因此你只需要做的就是添加指向该地址的 `AAAA` 记录
* 你的网站是基于支持 IPv6 的内容分发网络CDN因此你无需做任何额外的事情
#### 原因十:为了实施更安全的网络实验
因为 IPv6 的地址空间很大,所以如果你想在网络中尝试某些东西的时候,你可以使用 IPv6 子网进行实验,基本上你之后不会再用到这个子网了。
#### 原因十一为了运行自己的自治系统AS
也有人说他们为了运行自己的自治系统(我在这篇 [BGP 帖子][3] 中谈到了什么是 AS因此在服务器中提供 IPv6。IPv4 地址太贵了,所以他们为运行自治系统而购买了 IPv6 地址。
#### 原因十二IPv6 更加安全
如果你的服务器 _只_ 有公共的 IPv6 地址,那么攻击者扫描整个网络,也不能轻易地找出你的服务器地址,这是因为 IPv6 地址空间太大了以至于不能扫描出来!
这显然不能是你仅有的安全策略,但是这是安全上的一个大大的福利。每次我运行 IPv4 服务器时,我都会惊讶于 IPv4 地址一直能够被扫描出来的脆弱性,就像是老版本的 WordPress 博客系统那样。
#### 一个很傻的理由:你可以在你的 IPv6 地址中放个小彩蛋
IPv6 地址中有很多额外的位你可以用它们做一些不重要的事情。例如Facebook 的 IPv6 地址之一是“2a03:2880:f10e:83:face:b00c:0:25de”其中包含 `face:b00c`)。
@ -152,7 +149,7 @@ IPv6 地址中有很多额外的位,你可以用它们做一些不重要的事
在我理解这些原因后,相较于以前,我在我的(非常小的)服务器上支持 IPv6 更有动力了。但那是因为我觉得支持 IPv6对我来说只需要很少的努力。现在我使用的是支持 IPv6 的 CDN所以我基本上不用做什么额外的事情
我仍然对 IPv6 知之甚少,但是在我的印象中,支持 IPv6 并不是不需要花费精力的,实际上可能需要大量工作。例如,我不知道 Twitter 在其边缘服务器上添加 IPv6 支持需要做多少繁杂的工作。
### 其它关于 IPv6 的问题
@ -160,10 +157,9 @@ IPv6 地址中有很多额外的位,你可以用它们做一些不重要的事
* 支持 IPv6 的缺点是什么?什么会出错呢?
* 对于拥有了足够 IPv4 地址的 ISP 来说,有什么让他们提供 IPv6 的激励措施?(另一种问法是:我的 ISP 是否有可能在未来几年内转为支持 IPv6或者他们可能不会支持 IPv6
* [Digital Ocean][4] LCTT 译注一家建立于美国的云基础架构提供商面向软件开发人员提供虚拟专用服务器VPS只提供 IPv4 的浮动地址,不提供 IPv6 的浮动地址。为什么不提供呢?有更多 IPv6 地址,那提供 IPv6 的浮动地址不是变得更 _便捷_ 吗?
* 当我尝试 ping IPv6 地址时(例如 example.com 的 IP 地址`2606:2800:220:1:248:1893:25c8:1946`),我得到一个报错信息 `ping: connect: Network is unreachable`。这是为什么呢?(回答:因为我的 ISP 不支持 IPv6所以我的电脑没有公共 IPv6 地址)
这篇 [来自 Tailscale 的 IPv4 与 IPv6 文章][5] 非常有意思,并回答了上述的一些问题。
--------------------------------------------------------------------------------
@ -173,7 +169,7 @@ via: https://jvns.ca/blog/2022/01/29/reasons-for-servers-to-support-ipv6/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[chai001125](https://github.com/chai001125)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,197 @@
[#]: subject: "7 summer book recommendations from open source enthusiasts"
[#]: via: "https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list"
[#]: author: "Joshua Allen Holm https://opensource.com/users/holmja"
[#]: collector: "lkxed"
[#]: translator: "chai001125"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15157-1.html"
来自开源爱好者的 7 本读物推荐
======
> 社区的成员们推荐这些书籍,涵盖了从有趣的悬疑小说到发人深省的非小说作品的各种类型,你一定能从中找到一本你想看的书!
![](https://img.linux.net.cn/data/attachment/album/202210/20/115515jsppwzz8s1ssle7p.jpg)
很高兴能为大家介绍 Opensource.com 的 2022 年暑期阅读清单。今年的榜单包含来自 Opensource.com 社区成员的 7 本精彩的读物推荐。你可以发现各种各样的书籍,涵盖从有趣舒适的谜团到探索发人深省主题的非小说类作品。我希望你能在这个榜单中找到感兴趣的书本。
希望你喜欢!
### 《每个 Java 程序员都应该知道的 97 件事:专家的集体智慧》
![Book title 97 Things Every Java Programmer Should Know][4]
> **<ruby>[每个 Java 程序员都应该知道的 97 件事:专家的集体智慧][5]<rt>97 Things Every Java Programmer Should Know: Collective Wisdom from the Experts</rt></ruby>》**
编辑Kevlin Henney 和 Trisha Gee
*[由 Seth Kenlon 推荐][6]*
这本书是由 73 位在软件行业工作的不同作者共同撰写。它的优秀之处在于它不仅仅适用于 Java 编程。当然,有些章节会涉及 Java但是也还有一些其他话题例如了解你的容器环境、如何更快更好地交付软件、以及不要隐藏你的开发工具这些适用于任何语言的开发。
更好的是,有些章节同样适用于生活中的问题。将问题和任务分成小的部分,是解决任何问题的好建议;建立多样化的团队,对所有合作者都很重要;而从一堆散乱的拼图块到拼好的成品这种拼图玩家式的思路,也同样适用于不同的工作角色。
每章只有几页,总共有 97 个章节,你可以轻松跳过不适用于你自己的章节。无论你是一直在写 Java 代码、或者只是学过一点 Java亦或是尚未开始学习 Java对于对代码和软件开发过程感兴趣的极客来说这都会是一本好书。
### 《城市不是计算机:其他的城市智能》
![Book title A City is Not a Computer][7]
> **<ruby>[城市不是计算机:其他的城市智能][8]<rt>A City is Not a Computer: Other Urban Intelligences</rt></ruby>》**
作者Shannon Mattern
*[由 Scott Nesbitt 推荐][9]*
如今,让一切变得智能已经成为一种 *时尚*:我们的手机、家用电器、手表、汽车,甚至是城市都变得智能化了。
对于城市的智能化,这意味着传感器变得无处不在,在我们开展业务时收集数据,并根据这些数据向我们推送信息(无论数据有用与否)。
这就引出了一个问题,将所有高科技技术嵌入到城市中是否会使得城市智能化呢?在《城市不是计算机》这本书中,作者 Shannon Mattern 认为并不是这样的。
城市智能化的目标之一是为市民提供服务和更好的城市参与感。Mattern 指出,但是实际上,智慧城市“希望将技术专家的管理想法与公共服务相融合,从而将公民重新设置为‘消费者’和‘用户’”,然而,这并不是在鼓励公民积极参与城市的生活和治理。
第二个问题是关于智慧城市收集的数据。我们不知道收集了什么数据,以及收集了多少数据。我们也不知道这些数据使用在什么地方,以及是谁使用的。收集的数据太多了,以至于处理数据的市政工作人员会不堪重负。他们无法处理所有数据,因此他们专注于短期容易实现的任务,而忽略了更深层次和更紧迫的问题。这绝对达不到在推广智慧城市时所承诺的目标:智慧城市将成为解决城市困境的良药。
《城市不是计算机》是一本短小精悍、经过深入研究的、反对拥抱智慧城市的论证。这本书让我们思考智慧城市的真正目的:要让百姓真正受益于城市智能化,并引发我们的思考:发展智慧城市是否必要呢。
### 《git sync 谋杀案》
![Book title git sync murder][10]
> **<ruby>[git sync 谋杀案][11]<rt>git sync murder</rt></ruby>》**
作者Michael Warren Lucas
*[由 Joshua Allen Holm 推荐][12]*
Dale Whitehead 宁愿呆在家里通过他的电脑终端与世界连接尤其是在他参加的最后一次会议上发生的事情之后。在那次会议上Dale 扮演了一个业余侦探的角色,解决了一桩谋杀案。你可以在该系列的第一本书《<ruby>git commit 谋杀案<rt>git commit murder</rt></ruby>》中读到那个案件。
现在Dale 回到家,参加另一个会议,他再次发现自己成为了侦探。在《<ruby>git sync 谋杀案<rt>git sync murder</rt></ruby>》中Dale 参加了一个当地科技会议/科幻大会会议上发现一具尸体。这是谋杀还是只是一场意外现在Dale 是这些问题的“专家”他发现自己被卷入了这件事并要亲自去弄清楚到底发生了什么。再多说的话就剧透了所以我能说《git sync 谋杀案》这本书十分引人入胜而且读起来很有趣。不必先阅读《git commit 谋杀案》才能阅读《git sync 谋杀案》,但我强烈推荐一起阅读该系列中的这两本书。
作者 Michael Warren Lucas 的《git 谋杀案》系列非常适合喜欢悬疑小说的科技迷。Lucas 写过很多复杂的技术题材的书这本书也延续了他的技术题材《git sync 谋杀案》这本书中的人物在会议活动上谈论技术话题。如果你因为新冠疫情最近没有参加过会议怀念参会体验的话Lucas 将带你参加一个技术会议其中还有一个谋杀之谜以待解决。Dale Whitehead 是一个有趣的业余侦探,我相信大多数读者会喜欢和 Dale 一起参加技术会议,并充当侦探破解谜案的。
### 《像女孩一样踢球》
![Book title Kick Like a Girl][13]
> **<ruby>[像女孩一样踢球][14]<rt>Kick Like a Girl</rt></ruby>》**
作者Melissa Di Donato Roos
*[由 Joshua Allen Holm 推荐][15]*
没有人喜欢被孤立,当女孩 Francesca 想在公园里踢足球时,她也是这样。男孩们不会和她一起玩,因为她是女孩,所以她不高兴地回家了。她的母亲安慰她,讲述了有重要影响力的著名女性的故事。《像女孩一样踢球》中详述的历史人物包括历史中来自许多不同领域的女性。读者将了解 Frida Kahlo、Madeleine Albright、<ruby>阿达·洛芙莱斯<rt>Ada Lovelace</rt></ruby>、Rosa Parks、Amelia Earhart、<ruby>玛丽·居里<rt>Marie Curie</rt></ruby>居里夫人、Valentina Tereshkova、<ruby>弗洛伦斯·南丁格尔<rt>Florence Nightingale</rt></ruby> 和 Malala Yousafzai 的故事。听完这些鼓舞人心的人物故事后Francesca 回到公园,向男孩们发起了一场足球挑战。
《像女孩一样踢球》这本书的特色是作者 Melissa Di Donato RoosSUSE 的 CEOLCTT 译注SUSE 是一家总部位于德国的软件公司,创立于 1992 年,以提供企业级 Linux 为主要业务)引人入胜的写作和 Ange Allen 的出色插图。这本书非常适合年轻读者他们会喜欢押韵的文字和书中的彩色插图。Melissa Di Donato Roos 还写了另外两本童书,《<ruby>美人鱼如何便便<rt>How Do Mermaids Poo?</rt></ruby>》和《<ruby>魔盒<rt>The Magic Box</rt></ruby>》,这两本书也都值得一读。
### 《这是我的!:所有权的潜规则如何控制着我们的生活》
![Book title Mine!][16]
> **<ruby>[这是我的!:所有权的潜规则如何控制着我们的生活][17]<rt>Mine!: How the Hidden Rules of Ownership Control Our Lives</rt></ruby>》**
作者Michael Heller 和 James Salzman
*[由 Bryan Behrenshausen 推荐][18]*
作者 Michael Heller 和 James Salzman 在文章《这是我的》中写道“你对所有权的很多了解都是错误的”。这是一种被吸引到开源领域的人不得不接受所有权规则的对抗性邀请。这本书肯定是为开源爱好者而写的他们对代码、思想、各种知识产权的所有权的看法往往与主流观点和普遍接受的认知不同。在本书中Heller 和 Salzman 列出了“所有权的隐藏规则”,这些规则管理着谁能控制对什么事物的访问。这些所有权规则是微妙的、强大的、有着深刻的历史惯例。这些所有权规则已经变得如此普遍,以至于看起来无可争议,这是因为“先到先得”或“种瓜得瓜,种豆得豆”的规则已经成为陈词滥调。然而,我们看到它们无处不在:在飞机上,为宝贵的腿部空间而战;在街道上,邻居们为铲好雪的停车位发生争执;在法庭上,陪审团决定谁能控制你的遗产和你的 DNA。在当下的数字时代所有权的替代理论能否为重新思考基本权利创造空间作者们认为这是可以的。如果这是正确的我们可能会回应在未来开源软件能否成为所有权运作的模型呢
### 《并非所有童话故事都有幸福的结局:雪乐山公司的兴衰》
![Book Title Not All Fairy Tales Have Happy Endings][19]
> **<ruby>[并非所有童话故事都有幸福的结局:雪乐山公司的兴衰][20]<rt>Not All Fairy Tales Have Happy Endings: The Rise and Fall of Sierra On-Line</rt></ruby>》**
作者Ken Williams
*[由 Joshua Allen Holm 推荐][21]*
在 1980 年代和 1990 年代,<ruby>雪乐山公司<rt>Sierra On-Line</rt></ruby>是计算机软件行业的巨头。这家由 Ken 和 Roberta Williams 夫妻创立的公司,出身并不起眼,但却发布了许多标志性的电脑游戏。《<ruby>国王密使<rt>King's Quest</rt></ruby>》、《<ruby>宇宙传奇<rt>Space Quest</rt></ruby>》、《<ruby>荣耀任务<rt>Quest for Glory</rt></ruby>》、《Leisure Suit Larry》 和 《<ruby>狩魔猎人<rt>Gabriel Knight</rt></ruby>》 只是该公司几个最大的专属系列中的很小一部分。
《并非所有童话故事都有幸福的结局》这本书,涵盖了从雪乐山公司发布第一款游戏 《<ruby>[神秘屋][22]<rt>Mystery House</rt></ruby>》,到该公司不幸地被 CUC 国际公司收购以及后续的所有内容。雪乐山品牌在被收购后仍存活了一段时间,但 Williams 创立的雪乐山已不复存在。Ken Williams 以一种只有他才能做到的方式,讲述了雪乐山公司的整个历史。雪乐山的历史叙述穿插了一些 Williams 提出的管理和计算机编程建议的章节。虽然 Ken Williams 在写这本书时,已经离开这个行业很多年了,但他的建议仍然非常重要。
虽然雪乐山公司已不复存在,但该公司对计算机游戏行业产生了持久的影响。对于任何对计算机软件历史感兴趣的人来说,《并非所有童话故事都有幸福的结局》都是值得一读的。雪乐山公司在其鼎盛时期处于游戏开发的最前沿,从带领公司走过那个激动人心的岁月的 Ken Williams 身上,我们可以学到许多宝贵的经验。
### 《新机器的灵魂》
![Book title The Soul of a New Machine][23]
> **<ruby>[新机器的灵魂][24]<rt>The Soul of a New Machine</rt></ruby>》**
作者Tracy Kidder
*[由 Guarav Kamathe 推荐][25]*
我是计算机历史的狂热读者。知道这些人们如此依赖(并且经常被认为是理所当然)的计算机是如何形成的,真是令人着迷!我是在 [Bryan Cantrill][27] 的博客文章中,第一次听说 《[新机器的灵魂][26]》这本书的。这是一本由 [Tracy Kidder][29] 编著的非虚构书籍,于 1981 年出版,作者 Tracy Kidder也因此获得了 [普利策奖][30]。故事发生在 1970 年代,想象一下你是负责设计 [下一代计算机][31] 工程团队中的一员。故事的背景是在<ruby>通用数据公司<rt>Data General Corporation</rt></ruby>,该公司当时是一家小型计算机供应商,正在与美国<ruby>数字设备公司<rt>Digital Equipment Corporation</rt></ruby>DEC的 32 位 VAX 计算机相竞争。该书概述了通用数据公司内部两个相互竞争的团队,都想在设计新机器上一展身手,结果导致了一场争斗。接下来,细致地描绘了随之展开的事件。这本书深入地讲述了相关工程师的思想、他们的工作环境、他们在此过程中面临的技术挑战、他们是如何克服这些困难的、以及压力如何影响到了他们的个人生活等等。任何想知道计算机是怎么制造出来的人都应该阅读这本书。
以上就是 2022 年的推荐阅读书目。它提供了很多非常棒的选择,我相信读者们能得到数小时发人深省的阅读时光。想获取更多书籍推荐,请查看我们历年的阅读书目。
* [2021 年 Opensource.com 推荐阅读书目][32]
* [2020 年 Opensource.com 推荐阅读书目][33]
* [2019 年 Opensource.com 推荐阅读书目][34]
* [2018 年 Open Organization 推荐阅读书目][35]
* [2016 年 Opensource.com 推荐阅读书目][36]
* [2015 年 Opensource.com 推荐阅读书目][37]
* [2014 年 Opensource.com 推荐阅读书目][38]
* [2013 年 Opensource.com 推荐阅读书目][39]
* [2012 年 Opensource.com 推荐阅读书目][40]
* [2011 年 Opensource.com 推荐阅读书目][41]
* [2010 年 Opensource.com 推荐阅读书目][42]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list
作者:[Joshua Allen Holm][a]
选题:[lkxed][b]
译者:[chai001125](https://github.com/chai001125)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/tea-cup-mug-flowers-book-window.jpg
[2]: https://unsplash.com/@sixteenmilesout?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/tea?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://opensource.com/sites/default/files/2022-06/97_Things_Every_Java_Programmer_Should_Know_1.jpg
[5]: https://www.oreilly.com/library/view/97-things-every/9781491952689/
[6]: https://opensource.com/users/seth
[7]: https://opensource.com/sites/default/files/2022-06/A_City_is_Not_a_Computer_0.jpg
[8]: https://press.princeton.edu/books/paperback/9780691208053/a-city-is-not-a-computer
[9]: https://opensource.com/users/scottnesbitt
[10]: https://opensource.com/sites/default/files/2022-06/git_sync_murder_0.jpg
[11]: https://mwl.io/fiction/crime#gsm
[12]: https://opensource.com/users/holmja
[13]: https://opensource.com/sites/default/files/2022-06/Kick_Like_a_Girl.jpg
[14]: https://innerwings.org/books/kick-like-a-girl
[15]: https://opensource.com/users/holmja
[16]: https://opensource.com/sites/default/files/2022-06/Mine.jpg
[17]: https://www.minethebook.com/
[18]: https://opensource.com/users/bbehrens
[19]: https://opensource.com/sites/default/files/2022-06/Not_All_Fairy_Tales.jpg
[20]: https://kensbook.com/
[21]: https://opensource.com/users/holmja
[22]: https://en.wikipedia.org/wiki/Mystery_House
[23]: https://opensource.com/sites/default/files/2022-06/The_Soul_of_a_New_Machine.jpg
[24]: https://www.hachettebookgroup.com/titles/tracy-kidder/the-soul-of-a-new-machine/9780316204552/
[25]: https://opensource.com/users/gkamathe
[26]: https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine
[27]: https://en.wikipedia.org/wiki/Bryan_Cantrill
[28]: http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/
[29]: https://en.wikipedia.org/wiki/Tracy_Kidder
[30]: https://www.pulitzer.org/winners/tracy-kidder
[31]: https://en.wikipedia.org/wiki/Data_General_Eclipse_MV/8000
[32]: https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list
[33]: https://opensource.com/article/20/6/summer-reading-list
[34]: https://opensource.com/article/19/6/summer-reading-list
[35]: https://opensource.com/open-organization/18/6/summer-reading-2018
[36]: https://opensource.com/life/16/6/2016-summer-reading-list
[37]: https://opensource.com/life/15/6/2015-summer-reading-list
[38]: https://opensource.com/life/14/6/annual-reading-list-2014
[39]: https://opensource.com/life/13/6/summer-reading-list-2013
[40]: https://opensource.com/life/12/7/your-2012-open-source-summer-reading
[41]: https://opensource.com/life/11/7/summer-reading-list
[42]: https://opensource.com/life/10/8/open-books-opensourcecom-summer-reading-list


@ -0,0 +1,527 @@
[#]: subject: "Python Microservices Using Flask on Kubernetes"
[#]: via: "https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15154-1.html"
在 Kubernetes 上使用 Flask 搭建 Python 微服务
======
![](https://img.linux.net.cn/data/attachment/album/202210/19/124429nmw0xmfz3x3mrrf2.jpg)
*微服务遵循领域驱动设计DDD与开发平台无关。Python 微服务也不例外。Python3 的面向对象特性使得按照 DDD 对服务进行建模变得更加容易。本系列的第 10 部分演示了如何将用户管理系统的查找服务作为 Python 微服务部署在 Kubernetes 上。*
微服务架构的强大之处在于它的多语言性。企业将其功能分解为一组微服务,每个团队自由选择一个平台。
我们的用户管理系统已经分解为四个微服务,分别是添加、查找、搜索和日志服务。添加服务在 Java 平台上开发并部署在 Kubernetes 集群上,以实现弹性和可扩展性。这并不意味着其余的服务也要使用 Java 开发,我们可以自由选择适合个人服务的平台。
让我们选择 Python 作为开发查找服务的平台。查找服务的模型已经设计好了(参考 2022 年 3 月份的文章),我们只需要将这个模型转换为代码和配置。
### Pythonic 方法
Python 是一种通用编程语言,已经存在了大约 30 年。早期,它是自动化脚本的首选。然而,随着 Django 和 Flask 等框架的出现它的受欢迎程度越来越高现在各种领域中都在应用它如企业应用程序开发。数据科学和机器学习进一步推动了它的发展Python 现在是三大编程语言之一。
许多人将 Python 的成功归功于它容易编码。这只是一部分原因。只要你的目标是开发小型脚本Python 就像一个玩具,你会非常喜欢它。然而,当你进入严肃的大规模应用程序开发领域时,你将不得不处理大量的 `if` 和 `else`,这时 Python 就变得与任何其他平台一样好或一样坏。比如,就拿面向对象的方法来说吧!许多 Python 开发人员甚至可能没意识到 Python 支持类、继承等功能。Python 确实支持成熟的面向对象开发,但是有它自己的方式 -- Pythonic让我们探索一下
### 领域模型
`AddService` 通过将数据保存到一个 MySQL 数据库中来将用户添加到系统中。`FindService` 的目标是提供一个 REST API 按用户名查找用户。域模型如图 1 所示。它主要由一些值对象组成,如 `User` 实体的`Name`、`PhoneNumber` 以及 `UserRepository`
![图 1: 查找服务的域模型][1]
让我们从 `Name` 开始。由于它是一个值对象,因此必须在创建时进行验证,并且必须保持不可变。基本结构如下所示:
```
class Name:
    value: str

    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
如你所见,`Name` 包含一个字符串类型的值。作为后期初始化的一部分,我们会验证它。
Python 3.7 提供了 `@dataclass` 装饰器,它提供了许多开箱即用的数据承载类的功能,如构造函数、比较运算符等。如下是装饰后的 `Name` 类:
```
from dataclasses import dataclass

@dataclass
class Name:
    value: str

    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
以下代码可以创建一个 `Name` 对象:
```
name = Name("Krishna")
```
`value` 属性可以按照如下方式读取或写入:
```
name.value = "Mohan"
print(name.value)
```
可以很容易地与另一个 `Name` 对象比较,如下所示:
```
other = Name("Mohan")
if name == other:
print("same")
```
如你所见,对象比较的是值而不是引用。这一切都是开箱即用的。我们还可以通过冻结对象使对象不可变。这是 `Name` 值对象的最终版本:
```
from dataclasses import dataclass

@dataclass(frozen=True)
class Name:
    value: str

    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
`PhoneNumber` 也遵循类似的方法,因为它也是一个值对象:
```
@dataclass(frozen=True)
class PhoneNumber:
    value: int

    def __post_init__(self):
        if self.value < 9000000000:
            raise ValueError("Invalid Phone Number")
```
`User` 类是一个实体,不是一个值对象。换句话说,`User` 是可变的。以下是结构:
```
from dataclasses import dataclass
import datetime

@dataclass
class User:
    _name: Name
    _phone: PhoneNumber
    _since: datetime.datetime

    def __post_init__(self):
        if self._name is None or self._phone is None:
            raise ValueError("Invalid user")
        if self._since is None:
            self._since = datetime.datetime.now()
```
你能观察到 `User` 并没有冻结,因为我们希望它是可变的。但是,我们不希望所有属性都是可变的。标识字段如 `_name``_since` 是希望不会修改的。那么,这如何做到呢?
Python3 提供了所谓的描述符协议,它会帮助我们正确定义 getter 和 setter。让我们使用 `@property` 装饰器将 getter 添加到 `User` 的所有三个字段中。
```
@property
def name(self) -> Name:
    return self._name

@property
def phone(self) -> PhoneNumber:
    return self._phone

@property
def since(self) -> datetime.datetime:
    return self._since
```
`phone` 字段的 setter 可以使用 `@<字段>.setter` 来装饰:
```
@phone.setter
def phone(self, phone: PhoneNumber) -> None:
    if phone is None:
        raise ValueError("Invalid phone")
    self._phone = phone
```
通过重写 `__str__()` 函数,也可以为 `User` 提供一个简单的打印方法:
```
def __str__(self):
    return self.name.value + " [" + str(self.phone.value) + "] since " + str(self.since)
```
这样,域模型的实体和值对象就准备好了。创建异常类如下所示:
```
class UserNotFoundException(Exception):
    pass
```
域模型现在只剩下 `UserRepository` 了。Python 提供了一个名为 `abc` 的有用模块来创建抽象方法和抽象类。因为 `UserRepository` 只是一个接口,所以我们可以使用 `abc` 模块。
任何继承自 `abc.ABC` 的类都将变为抽象类,任何带有 `@abc.abstractmethod` 装饰器的函数都会变为一个抽象函数。下面是 `UserRepository` 的结构:
```
from abc import ABC, abstractmethod

class UserRepository(ABC):
    @abstractmethod
    def fetch(self, name: Name) -> User:
        pass
```
`UserRepository` 遵循仓储模式。换句话说,它在 `User` 实体上提供适当的 CRUD 操作,而不会暴露底层数据存储语义。在本例中,我们只需要 `fetch()` 操作,因为 `FindService` 只查找用户。
因为 `UserRepository` 是一个抽象类,我们不能从抽象类创建实例对象。创建对象必须依赖于一个具体类实现这个抽象类。数据层 `UserRepositoryImpl` 提供了 `UserRepository` 的具体实现:
```
class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User:
        pass
```
由于 `AddService` 将用户数据存储在一个 MySQL 数据库中,因此 `UserRepositoryImpl` 也必须连接到相同的数据库去检索数据。下面是连接到数据库的代码。注意,我们正在使用 MySQL 的连接库。
```
from mysql.connector import connect, Error

class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User:
        try:
            with connect(
                host="mysqldb",
                user="root",
                password="admin",
                database="glarimy",
            ) as connection:
                with connection.cursor() as cursor:
                    cursor.execute("SELECT * FROM ums_users where name=%s", (name.value,))
                    row = cursor.fetchone()
                    if cursor.rowcount == -1:
                        raise UserNotFoundException()
                    else:
                        return User(Name(row[0]), PhoneNumber(row[1]), row[2])
        except Error as e:
            raise e
```
在上面的片段中,我们使用用户 `root` / 密码 `admin` 连接到一个名为 `mysqldb` 的数据库服务器,使用名为 `glarimy` 的数据库(模式)。在演示代码中是可以包含这些信息的,但在生产中不建议这么做,因为这会暴露敏感信息。
`fetch()` 操作的逻辑非常直观,它对 `ums_users` 表执行 SELECT 查询。回想一下,`AddService` 正在将用户数据写入同一个表中。如果 SELECT 查询没有返回记录,`fetch()` 函数将抛出 `UserNotFoundException` 异常。否则,它会从记录中构造 `User` 实体并将其返回给调用者。这没有什么特殊的。
### 应用层
最终,我们需要创建应用层。此模型如图 2 所示。它只包含两个类:控制器和一个 DTO。
![图 2: 添加服务的应用层][2]
众所周知,一个 DTO 只是一个没有任何业务逻辑的数据容器。它主要用于在 `FindService` 和外部之间传输数据。我们只为它提供了一个方法,用于在 REST 层将 `UserRecord` 转换为字典,以便进行 JSON 传输:
```
class UserRecord:
    def toJSON(self):
        return {
            "name": self.name,
            "phone": self.phone,
            "since": self.since
        }
```
控制器的工作是将 DTO 转换为用于域服务的域对象,反之亦然。可以从 `find()` 操作中观察到这一点。
```
class UserController:
    def __init__(self):
        self._repo = UserRepositoryImpl()

    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException as e:
            return None
```
`find()` 操作接收一个字符串作为用户名,然后将其转换为 `Name` 对象,并调用 `UserRepository` 获取相应的 `User` 对象。如果找到了,则使用检索到的 `User` 对象创建 `UserRecord`。回想一下,将域对象转换为 DTO 是很有必要的,这样可以对外部服务隐藏域模型。
`UserController` 不需要有多个实例,它也可以是单例的。通过重写 `__new__`,可以将其建模为一个单例。
```
class UserController:
    def __new__(self):
        # 单例:只有在实例尚不存在时才创建
        if not hasattr(self, 'instance'):
            self.instance = super().__new__(self)
        return self.instance

    def __init__(self):
        self._repo = UserRepositoryImpl()

    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException as e:
            return None
```
我们已经完全实现了 `FindService` 的模型,剩下的唯一任务是将其作为 REST 服务公开。
### REST API
`FindService` 只提供一个 API那就是通过用户名查找用户。显然 URI 如下所示:
```
GET /user/{name}
```
此 API 希望根据提供的用户名查找用户,并以 JSON 格式返回用户的电话号码等详细信息。如果没有找到用户API 将返回一个 404 状态码。
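作为参考,下面是这个 API 的一个请求/响应示例(字段名来自前文 `UserRecord.toJSON()` 的实现,具体取值只是示意,并非真实数据):

```
GET /user/KrishnaMohan

HTTP/1.1 200 OK
Content-Type: application/json

{
  "name": "KrishnaMohan",
  "phone": 9000000001,
  "since": "2022-01-01 10:00:00"
}
```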
我们可以使用 Flask 框架来构建 REST API它最初的目的是使用 Python 开发 Web 应用程序。除了 HTML 视图,它还进一步扩展到支持 REST 视图。我们选择这个框架是因为它足够简单。
创建一个 Flask 应用程序:
```
from flask import Flask
app = Flask(__name__)
```
然后为 Flask 应用程序定义路由,就像函数一样简单:
```
@app.route('/user/<name>')
def get(name):
    pass
```
注意 `@app.route` 映射到 API `/user/<name>`,与之对应的函数是 `get()`。
如你所见,每次用户访问 API 如 `http://server:port/user/Krishna` 时,都将调用这个 `get()` 函数。Flask 足够智能,可以从 URL 中提取 `Krishna` 作为用户名,并将其传递给 `get()` 函数。
`get()` 函数很简单。它要求控制器找到该用户,并将其与通常的 HTTP 头一起打包为 JSON 格式后返回。如果控制器返回 `None`,则 `get()` 函数返回合适的 HTTP 状态码。
```
from flask import jsonify, abort

controller = UserController()
record = controller.find(name)
if record is None:
    abort(404)
else:
    resp = jsonify(record.toJSON())
    resp.status_code = 200
    return resp
```
最后,我们需要运行 Flask 应用程序来提供服务,这可以使用 `waitress` 服务器:
```
from waitress import serve
serve(app, host="0.0.0.0", port=8080)
```
在上面的片段中,应用程序在本地主机的 8080 端口上提供服务。最终代码如下所示:
```
from flask import Flask, jsonify, abort
from waitress import serve

app = Flask(__name__)

@app.route('/user/<name>')
def get(name):
    controller = UserController()
    record = controller.find(name)
    if record is None:
        abort(404)
    else:
        resp = jsonify(record.toJSON())
        resp.status_code = 200
        return resp

serve(app, host="0.0.0.0", port=8080)
```
### 部署
`FindService` 的代码已经准备完毕。除了 REST API 之外,它还有域模型、数据层和应用程序层。下一步是构建此服务,将其容器化,然后部署到 Kubernetes 上。此过程与部署其他服务没有任何区别,但有一些 Python 特有的步骤。
在继续前进之前,让我们来看下文件夹和文件结构:
```
+ ums-find-service
  + ums
    - domain.py
    - data.py
    - app.py
  - Dockerfile
  - requirements.txt
  - kube-find-deployment.yml
```
如你所见,整个工作文件夹都位于 `ums-find-service` 下,它包含了 `ums` 文件夹中的代码和一些配置文件,例如 `Dockerfile`、`requirements.txt` 和 `kube-find-deployment.yml`
`domain.py` 包含域模型,`data.py` 包含 `UserRepositoryImpl``app.py` 包含剩余代码。我们已经阅读过代码了,现在我们来看看配置文件。
第一个是 `requirements.txt`,它声明了 Python 系统需要下载和安装的外部依赖项。我们需要用查找服务中用到的每个外部 Python 模块来填充它。如你所见,我们使用了 MySQL 连接器、Flask 和 Waitress 模块。因此,下面是 `requirements.txt` 的内容。
```
Flask==2.1.1
Flask_RESTful
mysql-connector-python
waitress
```
第二步是在 `Dockerfile` 中声明 Docker 相关的清单,如下:
```
FROM python:3.8-slim-buster
WORKDIR /ums
ADD ums /ums
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 8080
ENTRYPOINT ["python"]
CMD ["/ums/app.py"]
```
总的来说,我们使用 Python 3.8 作为基线,除了移动 `requirements.txt` 之外,我们还将代码从 `ums` 文件夹移动到 Docker 容器中对应的文件夹中。然后,我们指示容器运行 `pip3 install` 命令安装对应模块。最后,我们向外暴露 8080 端口(因为 waitress 运行在此端口上)。
为了运行此服务,我们指示容器使用以下命令:
```
python /ums/app.py
```
一旦 `Dockerfile` 准备完成,在 `ums-find-service` 文件夹中运行以下命令,创建 Docker 镜像:
```
docker build -t glarimy/ums-find-service .
```
它会创建 Docker 镜像,可以使用以下命令查找镜像:
```
docker images
```
接着,可以将镜像推送到 Docker Hub在此之前你可能需要先登录 Docker
```
docker login
docker push glarimy/ums-find-service
```
最后一步是为 Kubernetes 部署构建清单。
在之前的文章中,我们已经介绍了如何建立 Kubernetes 集群、部署和使用服务的方法。我假设仍然使用之前文章中的清单文件来部署添加服务、MySQL、Kafka 和 Zookeeper。我们只需要将以下内容添加到 `kube-find-deployment.yml` 文件中:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ums-find-service
  labels:
    app: ums-find-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ums-find-service
  template:
    metadata:
      labels:
        app: ums-find-service
    spec:
      containers:
        - name: ums-find-service
          image: glarimy/ums-find-service
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ums-find-service
  labels:
    name: ums-find-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: ums-find-service
```
上面清单文件的第一部分声明了 `glarimy/ums-find-service` 镜像的 `FindService`,它包含三个副本。它还暴露 8080 端口。清单的后半部分声明了一个 Kubernetes 服务作为 `FindService` 部署的前端。请记住在之前文章中mysqldb 服务已经是上述清单的一部分了。
运行以下命令在 Kubernetes 集群上部署清单文件:
```
kubectl create -f kube-find-deployment.yml
```
部署完成后,可以使用以下命令验证容器组和服务:
```
kubectl get services
```
输出如图 3 所示:
![图 3: Kubernetes 服务][3]
它会列出集群上运行的所有服务。注意查找服务的外部 IP使用 `curl` 调用此服务:
```
curl http://10.98.45.187:8080/user/KrishnaMohan
```
注意10.98.45.187 对应查找服务,如图 3 所示。
如果我们使用 `AddService` 创建一个名为 `KrishnaMohan` 的用户,那么上面的 `curl` 命令看起来如图 4 所示:
![图 4: 查找服务][4]
用户管理系统UMS的体系结构包含 `AddService``FindService`,以及存储和消息传递所需的后端服务,如图 5 所示。可以看到终端用户使用 `ums-add-service` 的 IP 地址添加新用户,使用 `ums-find-service` 的 IP 地址查找已有用户。每个 Kubernetes 服务都由三个对应容器的节点支持。还要注意:同样的 mysqldb 服务用于存储和检索用户数据。
![图 5: UMS 的添加服务和查找服务][5]
### 其他服务
UMS 系统还包含两个服务:`SearchService` 和 `JournalService`。在本系列的下一部分中,我们将在 Node 平台上设计这些服务,并将它们部署到同一个 Kubernetes 集群,以演示多语言微服务架构的真正魅力。最后,我们将观察一些与微服务相关的设计模式。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-The-domain-model-of-FindService-1.png
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-2-The-application-layer-of-FindService.png
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-3-Kubernetes-services-1.png
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-4-FindService.png
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-5-UMS-with-AddService-and-FindService.png
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Python-Microservices-1-696x477.jpg


@ -0,0 +1,323 @@
[#]: subject: "How To Find Default Gateway IP Address In Linux And Unix From Commandline"
[#]: via: "https://ostechnix.com/find-default-gateway-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15158-1.html"
在 Linux 中如何从命令行查找默认网关的 IP 地址
======
![](https://img.linux.net.cn/data/attachment/album/202210/20/161605f5ispl5jslbpllss.jpg)
> Linux 下查找网关或路由器 IP 地址的 5 种方法。
**网关** 是一个节点或一个路由器,当连接到同一路由器时,它允许两个或多个 IP 地址不同的主机相互通信。如果没有网关,它们将无法相互通信。换句话说,网关充当接入点,将网络数据从本地网络传输到远程网络。在本指南中,我们将看到在 Linux 和 Unix 中从命令行找到默认网关的所有可能方法。
### 在 Linux 中查找默认网关
Linux 中有各种各样的命令行工具可用于查看网关 IP 地址。最常用的工具是:`ip`、`route` 和 `netstat`。我们将通过示例了解如何使用每种工具查看默认网关。
#### 1、使用 ip 命令查找默认网关
`ip` 命令用于显示和操作 Linux 中的路由、网络设备、接口和隧道。
要查找默认网关或路由器 IP 地址,只需运行:
```
$ ip route
```
或者:
```
$ ip r
```
或者:
```
$ ip route show
```
示例输出:
```
default via 192.168.1.101 dev eth0 proto static metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.20 metric 100
```
你从输出中看到了 `default via 192.168.1.101` 这一行吗?它就是默认网关。我的默认网关是 `192.168.1.101`
你可以使用 `-4` 参数只**显示 IPv4 网关**
```
$ ip -4 route
```
或者,使用 `-6` 参数只**显示 IPv6 网关**
```
$ ip -6 route
```
如你所见IP 地址和子网详细信息也一并显示了。如果你想只显示默认网关,排除所有其他细节,可以使用 `ip route` 搭配 `awk` 命令,如下所示。
使用 `ip route``awk` 命令打印网关地址,执行命令:
```
$ ip route | awk '/^default/{print $3}'
```
LCTT 译注wsl1 上无输出结果,正常 Linux 发行版无问题)
或者:
```
$ ip route show default | awk '{print $3}'
```
这将只列出网关 IP
示例输出:
```
192.168.1.101
```
![使用 ip 命令列出默认网关][1]
你也可以使用 [grep][2] 命令配合 `ip route` 对默认网关进行过滤。
使用 `ip route``grep` 查找默认网关 IP 地址,执行命令:
```
$ ip route | grep default
default via 192.168.1.101 dev eth0 proto static metric 100
```
在最新的 Linux 发行版中,`ip route` 是查找默认网关 IP 地址的推荐命令。然而,你们中的一些人可能仍然在使用传统的工具,如 `route``netstat`。旧习难改,对吧?下面的部分将介绍如何在 Linux 中使用 `route``netstat` 命令确定网关。
#### 2、使用 route 命令显示默认网关 IP 地址
`route` 命令用于在较老的 Linux 发行版中显示和操作路由表,如 RHEL 6、CentOS 6 等。
如果你正在使用较老的 Linux 发行版,你可以使用 `route` 命令来显示默认网关。
请注意,在最新的 Linux 发行版中,`route` 工具已被弃用,`ip route` 命令取而代之。如果你因为某些原因仍然想使用 `route`,你需要安装它。
首先,我们需要检查哪个包提供了 `route` 命令。为此,在基于 RHEL 的系统上运行以下命令:
```
$ dnf provides route
```
示例输出:
```
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : @System
Matched from:
Filename : /usr/sbin/route
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : baseos
Matched from:
Filename : /usr/sbin/route
```
如你所见,`net-tools` 包提供了 `route` 命令。所以,让我们使用以下命令来安装它:
```
$ sudo dnf install net-tools
```
现在,运行带有 `-n` 参数的 `route` 命令来显示 Linux 系统中的网关或路由器 IP 地址:
```
$ route -n
```
示例输出:
```
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
```
![使用 route 命令显示默认网关 IP 地址][3]
如你所见,网关 IP 地址是 192.168.1.101。你还将在 Flags 下面看到两个字母 `UG`。字母 `U` 代表接口是 “Up”在运行`G` 表示 “Gateway”网关
#### 3、使用 netstat 命令查看网关 IP 地址
`netstat` 会输出 Linux 网络子系统的信息。使用 `netstat` 工具,我们可以在 Linux 和 Unix 系统中打印网络连接、路由表、接口统计信息、伪装连接和组播成员关系。
`netstat``net-tools` 包的一部分,所以确保你已经在 Linux 系统中安装了它。使用以下命令在基于 RHEL 的系统中安装它:
```
$ sudo dnf install net-tools
```
使用 netstat 命令打印默认网关 IP 地址:
```
$ netstat -rn
```
示例输出:
```
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
![使用 netstat 命令查看网关 IP 地址][4]
`netstat` 命令与 `route` 命令的输出信息相同。如上输出可知,网关的 IP 地址为 `192.168.1.101`。`U` 表示接口在运行(有效),`G` 表示网关。
请注意 `netstat` 也已弃用,建议使用 `ss` 命令代替 `netstat`
#### 4、使用 routel 命令打印默认网关或路由器 IP 地址
`routel` 是一个脚本,它以一种漂亮的格式输出路由信息。一些人认为 `routel` 的输出比 `ip route` 的列表更直观。
`routel` 脚本也是 `net-tools` 包的一部分。
打印默认网关或路由器 IP 地址,不带任何参数运行 `routel` 脚本,如下所示:
```
$ routel
```
示例输出:
```
target gateway source proto scope dev tbl
default 192.168.1.101 static eth0
172.17.0.0/ 16 172.17.0.1 kernel linkdocker0
192.168.1.0/ 24 192.168.1.20 kernel link eth0
127.0.0.0/ 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.1 local 172.17.0.1 kernel hostdocker0 local
172.17.255.255 broadcast 172.17.0.1 kernel linkdocker0 local
192.168.1.20 local 192.168.1.20 kernel host eth0 local
192.168.1.255 broadcast 192.168.1.20 kernel link eth0 local
::1 kernel lo
::/ 96 unreachable lo
::ffff:0.0.0.0/ 96 unreachable lo
2002:a00::/ 24 unreachable lo
2002:7f00::/ 24 unreachable lo
2002:a9fe::/ 32 unreachable lo
2002:ac10::/ 28 unreachable lo
2002:c0a8::/ 32 unreachable lo
2002:e000::/ 19 unreachable lo
3ffe:ffff::/ 32 unreachable lo
fe80::/ 64 kernel eth0
::1 local kernel lo local
fe80::d085:cff:fec7:c1c3 local kernel eth0 local
```
![使用 routel 命令打印默认网关或路由器 IP 地址][5]
只打印默认网关,和 `grep` 命令配合,如下所示:
```
$ routel | grep default
default 192.168.1.101 static eth0
```
#### 5、从以太网配置文件中查找网关
如果你在 [Linux 或 Unix 中配置了静态 IP 地址][6],你可以通过查看网络配置文件查看默认网关或路由器 IP 地址。
在基于 RPM 的系统上,如 Fedora、RHEL、CentOS、AlmaLinux 和 Rocky Linux 等,网络接口卡配置存储在 `/etc/sysconfig/network-scripts/` 目录下。
查找网卡的名称:
```
# ip link show
```
示例输出:
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d2:85:0c:c7:c1:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
```
网卡名为 `eth0`。所以,让我们查看这个网卡的配置文件:
```
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
```
示例输出:
```
DEVICE=eth0
ONBOOT=yes
UUID=eb6b6a7c-37f5-11ed-a59a-a0e70bdf3dfb
BOOTPROTO=none
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.101
DNS1=8.8.8.8
```
如你所见,网关 IP 为 `192.168.1.101`
在 Debian、Ubuntu 及其衍生版中,所有的网络配置文件都存储在 `/etc/network` 目录下。
```
$ cat /etc/network/interfaces
```
示例输出:
```
auto ens18
iface ens18 inet static
address 192.168.1.150
netmask 255.255.255.0
gateway 192.168.1.101
dns-nameservers 8.8.8.8
```
请注意,此方法仅在手动配置 IP 地址时有效。对于启用 DHCP 的网络,需要按照前面的 4 种方法操作。
### 总结
在本指南中,我们列出了在 Linux 和 Unix 系统中找到默认网关的 5 种不同方法,我们还在每种方法中包含了显示网关/路由器 IP 地址的示例命令。希望它对你有所帮助。
--------------------------------------------------------------------------------
via: https://ostechnix.com/find-default-gateway-linux/
作者:[sk][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/wp-content/uploads/2022/09/Find-Default-Gateway-Using-ip-Command.png
[2]: https://ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
[3]: https://ostechnix.com/wp-content/uploads/2022/09/Display-Default-Gateway-IP-Address-Using-route-Command.png
[4]: https://ostechnix.com/wp-content/uploads/2022/09/View-Gateway-IP-Address-Using-netstat-Command.png
[5]: https://ostechnix.com/wp-content/uploads/2022/09/Print-Default-Gateway-IP-Address-Or-Router-IP-Address-Using-routel-Command.png
[6]: https://ostechnix.com/configure-static-ip-address-linux-unix/


@ -0,0 +1,186 @@
[#]: subject: "PyLint: The good, the bad, and the ugly"
[#]: via: "https://opensource.com/article/22/9/pylint-good-bad-ugly"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15144-1.html"
PyLint 的优点、缺点和危险
======
![](https://img.linux.net.cn/data/attachment/album/202210/16/093840z9pnzfv9ykfccoq9.jpg)
> 充分利用 PyLint。
敲黑板PyLint 实际上很好!
“PyLint 可以拯救你的生命”这是一句夸张的描述但没有你想象的那么夸张。PyLint 可以让你远离非常难找到的和复杂的缺陷。最差的情况下,它也能为你省下一次测试运行的时间。最好的情况下,它可以帮你避免生产环境中复杂的错误。
### 优点
我不好意思说这种情况是多么普遍。测试的命名总是*那么奇怪*:没有人关心这个名称,而且通常也找不到一个自然的名称。例如以下代码:
```
def test_add_small():
    # Math, am I right?
    assert 1 + 1 == 3
   
def test_add_large():
    assert 5 + 6 == 11
   
def test_add_small():
    assert 1 + 10 == 11
```
测试生效:
```
collected 2 items                                                                        
test.py ..
2 passed
```
但问题是:如果你覆盖了一个测试的名称,测试框架将愉快地跳过这个测试!
实际上,这些文件可能有数百行,而添加新测试的人可能并不知道所有的名称。除非有人仔细查看测试输出,否则一切看起来都很好。
最糟糕的是,*被覆盖测试的添加*、*被覆盖测试造成的破坏*,以及*连锁反应的问题*可能要几天、几月甚至几年才能发现。
### PyLint 会找到它
就像一个好朋友一样PyLint 可以帮助你。
```
test.py:8:0: E0102: function already defined line 1
     (function-redefined)
```
### 缺点
就像 90 年代的情景喜剧一样,你对 PyLint 了解的越多,问题就越多。以下是一个库存建模程序的常规代码:
```
"""Inventory abstractions"""
import attrs
@attrs.define
class Laptop:
    """A laptop"""
    ident: str
    cpu: str
```
但 PyLint 似乎有自己的观点(可能形成于 90 年代),并且不怕把它们作为事实陈述出来:
```
$ pylint laptop.py | sed -n '/^laptop/s/[^ ]*: //p'
R0903: Too few public methods (0/2) (too-few-public-methods)
```
### 危险
有没有想过在一个数百万人使用的工具中加入自己未证实的观点PyLint 每月有 1200 万次下载。
> “如果太挑剔,人们会取消检查” — 这是 PyLint GitHub 的 6987 号议题,于 2022 年 7 月 3 号提出
对于添加一个可能有许多误报的测试,它的态度是 ... “*嗯*”。
### 让它为你工作
PyLint 很好,但你需要小心地与它配合。为了让 PyLint 为你工作,以下是我推荐的三件事:
#### 1、固定版本
固定你使用的 PyLint 版本,避免任何惊喜!
在你的 `.toml` 文件中定义:
```
[project.optional-dependencies]
pylint = ["pylint"]
```
在代码中定义:
```
from unittest import mock
```
这与以下代码对应:
```
# noxfile.py
...
@nox.session(python=VERSIONS[-1])
def refresh_deps(session):
    """Refresh the requirements-*.txt files"""
    session.install("pip-tools")
    for deps in [..., "pylint"]:
        session.run(
            "pip-compile",
            "--extra",
            deps,
            "pyproject.toml",
            "--output-file",
            f"requirements-{deps}.txt",
        )
```
#### 2、默认禁止
禁用所有检查,然后启用那些你认为误报比率高的。(不仅仅是漏报/误报的比率!)
```
# noxfile.py
...
@nox.session(python="3.10")
def lint(session):
    files = ["src/", "noxfile.py"]
    session.install("-r", "requirements-pylint.txt")
    session.install("-e", ".")
    session.run(
        "pylint",
        "--disable=all",
        *(f"--enable={checker}" for checker in checkers)
        "src",
    )
```
#### 3、检查器
以下是我喜欢的检查器,它们可以加强项目的一致性,避免一些明显的错误。
```
checkers = [
    "missing-class-docstring",
    "missing-function-docstring",
    "missing-module-docstring",
    "function-redefined",
]
```
### 使用 PyLint
你可以只使用 PyLint 好的部分。在 CI 中运行它以保持一致性,并使用常用检查器。
放弃不好的部分:默认禁止检查器。
避免危险的部分:固定版本以避免意外。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/pylint-good-bad-ugly
作者:[Moshe Zadka][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/python_programming_question.png


@ -0,0 +1,141 @@
[#]: subject: "The story behind Joplin, the open source note-taking app"
[#]: via: "https://opensource.com/article/22/9/joplin-interview"
[#]: author: "Richard Chambers https://opensource.com/users/20i"
[#]: collector: "lkxed"
[#]: translator: "MareDevi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15161-1.html"
开源笔记软件 Joplin 背后的故事
======
![](https://img.linux.net.cn/data/attachment/album/202210/21/112935tfapsvpac06h2sth.jpg)
> Laurent Cozic 与我坐下来,讨论了 Joplin 是如何开始的,以及这个开源笔记软件的下一步计划。
在这次采访中,我见到了笔记软件 Joplin 的创建者 Laurent Cozic。[Joplin][2] 是 [20i][3] 奖励的赢家,所以我想了解是什么让它如此成功,以及他如何实现的。
### 你能概述一下什么是 Joplin 吗?
[Joplin][4] 是一个开源的笔记软件。它可以让你捕获你的想法并从任何设备安全地访问它们。
### 显然,还有很多其他的笔记应用,那么除了免费使用之外,它还有什么不同呢?
对我们的许多用户来说,它是开源的这一事实是一个非常重要的方面,因为这意味着没有供应商对数据的封锁,而且数据可以很容易地被导出并以各种方式访问。
我们还关注用户的安全和数据隐私,特别是端到端加密同步功能,以及通过对应用的任何连接保持透明。我们还与安全研究人员合作,以保证软件更加安全。
最后Joplin 可以通过几种不同的方式进行定制 —— 通过插件(可以添加新的功能)和主题来定制应用程序的外观。我们还公开了一个数据 API它允许第三方应用程序访问 Joplin 的数据。
> **[相关阅读5 款 Linux 上的笔记应用][5]**
### 这是一个竞争非常激烈的市场,那么是什么激发了你创建它的想法?
这是有原因的。我从 2016 年开始研究它,因为我不喜欢现有的商业记事应用程序:笔记、附件或标签不能轻易被其他工具导出或操作。
这主要是由于供应商的封锁,另外供应商也缺乏动力,因为帮助用户将数据转移到其他应用程序对他们并没有好处。还有一个问题是,这些公司通常会以纯文本形式保存笔记,而这有可能造成数据隐私和安全方面的问题。
因此,我决定开始创建一个简单且具有同步功能的移动和终端应用程序,使我的笔记能够轻松地在我的设备上访问。之后又创建了桌面应用程序,项目从此开始发展。
![Chrome OS 上 Joplin 的图片][6]
### 编写 Joplin 花了多长时间呢?
自 2016 年以来,我一直在断断续续地开发,但并不是专门去维护。不过在过去的两年里,我更加专注于它。
### 对于准备创建自己的开源应用的人,你有什么建议?
挑选一个你自己使用的项目和你喜欢的技术来工作。
管理一个开源项目有时是很困难的,所以必须要有足够的兴趣去让它变得更有价值。那么我想 “早发布,多发布” 原则在这里也适用,这样你就可以衡量用户的兴趣,以及是否有必要花时间进一步开发这个项目。
### 有多少人参与了 Joplin 的开发?
有 3、4 人参与开发。目前,我们还有 6 名学生在 <ruby>谷歌编程之夏<rt>Google Summer of Code</rt></ruby> 中为这个项目工作。
### 许多人都在创建开源项目,但 Joplin 对你来说是一个巨大的成功。关于如何获得关注,你能否给开发者提供一些建议?
没有简单的公式,说实话,我不认为我可以在另一个项目中复制这种成功!你必须对你所做的事情充满热情,但同时也要严谨、有组织、稳步前进,确保代码质量保持高水平,并拥有大量的测试单元以防止回归。
同时,对于你收到的用户反馈保持开放的态度,并在此基础上改进项目。
一旦你掌握了这些,剩下的可能就全靠运气了 —— 如果你做的项目让很多人都感兴趣,事情可能会顺利进行!
### 一旦你得到关注,但如果你没有传统的营销预算,你如何保持这种势头?
我认为这在于倾听项目周围的社区。举个例子来说,我从未计划过建立一个论坛,但有人在 GitHub 上提出了这个建议,所以我创建了一个论坛,它成为了一个分享想法、讨论功能、提供支持等很好的方式。社区也普遍欢迎新人,这形成了一种良性循环。
除此以外,定期就项目进行沟通也很重要。
我们没有一个公开的路线图,因为大多数功能的 ETA 通常是 “我不知道”,但我会试图就即将到来的功能、新版本等进行沟通。我们也会就重要的事件进行沟通,特别是谷歌编程之夏,或者当我们有机会赢得像 20i FOSS 奖的时候。
最后,我们很快将在伦敦举行一次面对面的聚会,这是与社区和合作者保持联系的另一种方式。
### 用户的反馈是如何影响路线图的?
很明显,贡献者们经常仅仅因为他们需要某个特性而从事某些工作。但除此之外,我们还根据论坛和 GitHub 问题追踪器上的信息,追踪对用户来说似乎最重要的功能。
例如,移动应用程序现在具有很高的优先级,因为我们经常从用户那里听到,它的限制和缺陷是有效使用 Joplin 的一个问题。
![桌面使用Joplin的图片][8]
### 你是如何跟上最新的开发和编码的发展的?
主要是通过阅读 Hacker News
### 你有个人最喜欢的自由/开源软件可以推荐吗?
在不太知名的项目中,[SpeedCrunch][9] 作为一个计算器非常好。它有很多功能,而且很好的是它能保留以前所有计算的历史。
我还使用 [KeepassXC][10] 作为密码管理器。在过去的几年里,它一直在稳步改进。
最后,[Visual Studio Code][11] 作为一个跨平台的文本编辑器非常棒。
### 我原以为 Joplin 是以 Janis 的名字命名的,但维基百科告诉我它来自 Scott Joplin。你为什么选择这个名字
我起初想把它命名为 “jot-it”但我想这个名字已经被人占了。
由于我那时经常听 Scott Joplin 的 <ruby>拉格泰姆<rt>ragtime</rt></ruby>音乐(我相当痴迷于此),我决定使用他的名字。
我认为产品名称的含义并不太重要,只要名称本身易于书写、发音、记忆,并与一些积极的东西(或至少没有消极的东西)有关。
我觉得 “Joplin” 符合所有条件。
### 关于 Joplin 的计划,你还有什么可以说的吗?也许是对一个新功能的独家预告?
如前所述,我们非常希望在用户体验设计和新功能方面对移动应用进行改进。
我们也在考虑创建一个 “插件商店”,以便更容易地浏览和安装插件。
感谢 Laurent — 祝 Joplin 的未来好运。
*图片来自: (Opensource.com, CC BY-SA 4.0)*
*[这篇访谈最初发表在 20i 博客上,已获得许可进行转载。][12]*
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/joplin-interview
作者:[Richard Chambers][a]
选题:[lkxed][b]
译者:[MareDevi](https://github.com/MareDevi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/20i
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/wfh_work_home_laptop_work.png
[2]: https://joplinapp.org/
[3]: https://www.20i.com/foss-awards/winners
[4]: https://opensource.com/article/19/1/productivity-tool-joplin
[5]: https://opensource.com/article/22/8/note-taking-apps-linux
[6]: https://opensource.com/sites/default/files/2022-09/joplin-chrome-os.png
[7]: https://opensource.com/article/21/10/google-summer-code
[8]: https://opensource.com/sites/default/files/2022-09/joplin-desktop.png
[9]: https://heldercorreia.bitbucket.io/speedcrunch/
[10]: https://opensource.com/article/18/12/keepassx-security-best-practices
[11]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[12]: https://www.20i.com/blog/joplin-creator-laurent-cozic/


@ -0,0 +1,206 @@
[#]: subject: "GUI Apps for Package Management in Arch Linux"
[#]: via: "https://itsfoss.com/arch-linux-gui-package-managers/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15149-1.html"
Arch Linux 中用于包管理的图形化应用
======
![](https://img.linux.net.cn/data/attachment/album/202210/17/110440isl629s0uqnl8b29.jpg)
[安装 Arch Linux][1] 有一些挑战性。这就是为什么 [有几个基于 Arch 的发行版][2] 通过提供图形化的安装程序使事情变得简单。
即使你设法安装了 Arch Linux你也会注意到它严重依赖命令行。如果你需要安装应用或更新系统那么必须打开终端。
是的Arch Linux 没有软件中心。我知道,这让很多人感到震惊。
如果你对使用命令行管理应用感到不舒服,你可以安装一个 GUI 工具。这有助于在舒适的图形化界面中搜索包以及安装和删除它们。
想知道你应该使用 [pacman 命令][3] 的哪个图形前端?我有一些建议可以帮助你。
**请注意,某些软件管理器是特定于桌面环境的。**
### 1、Apper
![使用 Apper 安装 Firefox][4]
Apper 是一个精简的 Qt5 应用,它使用 PackageKit 进行包管理,它还支持 AppStream 和自动更新。但是,**没有 AUR 支持**。
要从官方仓库安装它,请使用以下命令:
```
sudo pacman -Syu apper
```
> **[GitLab 上的 Apper][5]**
### 2、深度应用商店
![使用深度应用商店安装 Firefox][6]
深度应用商店是深度桌面环境的应用商店,使用 DTK基于 Qt5构建。它使用 PackageKit 进行包管理,支持 AppStream同时提供系统更新通知。**没有 AUR 支持**。
要安装它,请使用以下命令:
```
sudo pacman -Syu deepin-store
```
> **[Github 上的深度商店][7]**
### 3、KDE 发现应用
![使用 Discover 安装 Firefox][8]
<ruby>发现<rt>Discover</rt></ruby> 应用不需要为 KDE Plasma 用户介绍。它是一个使用 PackageKit 的基于 Qt 的应用管理器,支持 AppStream、Flatpak 和固件更新。
要在发现应用中安装 Flatpak 和固件更新,需要分别安装 `flatpak``fwupd` 包。**它没有 AUR 支持。**
```
sudo pacman -Syu discover packagekit-qt5
```
> **[GitLab 上的 Discover][9]**
### 4、GNOME PackageKit
![Installing Firefox using GNOME PackageKit][10]
GNOME PackageKit 是一个使用 PackageKit 技术的 GTK3 包管理器,支持 AppStream。不幸的是**没有 AUR 支持**。
要从官方仓库安装它,请使用以下命令:
```
sudo pacman -Syu gnome-packagekit
```
> **[freedesktop 上的 PackageKit][11]**
### 5、GNOME 软件应用
![Installing Firefox using GNOME Software][12]
对 GNOME 桌面用户来说GNOME <ruby>软件<rt>Software</rt></ruby> 应用无需介绍。它是使用 PackageKit 技术的 GTK4 应用管理器,支持 AppStream、Flatpak 和固件更新。
**它没有 AUR 支持。** 要安装来自 GNOME 软件应用的 Flatpak 和固件更新,需要分别安装 `flatpak``fwupd` 包。
要安装它,请使用以下命令:
```
sudo pacman -Syu gnome-software-packagekit-plugin gnome-software
```
> **[GitLab 上的 GNOME 软件][13]**
### 6、tkPacman
![使用 tkPacman 安装 Firefox][14]
它是用 Tcl 编写的 Tk pacman 封装。界面类似于 [Synaptic 包管理器][15]。
由于没有 GTK/Qt 依赖,它非常轻量级,因为它使用 Tcl/Tk GUI 工具包。
**它不支持 AUR**,这很讽刺,因为你需要从 [AUR][16] 安装它。你需要事先安装一个 [AUR 助手][17],如 yay。
```
yay -Syu tkpacman
```
> **[Sourceforge 上的 tkPacman][18]**
### 7、Octopi
![使用 Octopi 安装 Firefox][19]
可以认为它是 tkPacman 的更好看的表亲。它使用 Qt5 和 Alpm还支持 Appstream 和 **AUR通过 yay**
你还可以获得桌面通知、仓库编辑器和缓存清理器。它的界面类似于 Synaptic 包管理器。
要从 AUR 安装它,请使用以下命令。
```
yay -Syu octopi
```
> **[GitHub 上的 Octopi][20]**
### 8、Pamac
![使用 Pamac 安装 Firefox][21]
Pamac 是 Manjaro Linux 的图形包管理器。它基于 GTK3 和 Alpm**支持 AUR、Appstream、Flatpak 和 Snap**。
Pamac 还支持自动下载更新和降级软件包。
它是 Arch Linux 衍生版中使用最广泛的应用,但也曾因 [对 AUR 网页造成 DDoS][22] 而臭名昭著。
[在 Arch Linux 上安装 Pamac][23] 有几种方法。最简单的方法是使用 AUR 助手。
```
yay -Syu pamac-aur
```
> **[GitLab 上的 Pamac][24]**
### 总结
要删除上面任何一个图形化包管理器及其依赖项和配置文件,请使用以下命令,并将 `packagename` 替换为要删除的包的名称。
```
sudo pacman -Rns packagename
```
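例如,要移除前面安装的 Octopi 及其不再需要的依赖和配置文件,可以这样做(请以你实际安装的包名为准):

```
sudo pacman -Rns octopi
```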
这样看来Arch Linux 也可以在不接触终端的情况下使用合适的工具。
还有一些其他应用程序也使用终端用户界面TUI。一些例子是 [pcurses][25]、[cylon][26]、[pacseek][27] 和 [yup][28]。但是,这篇文章只讨论那些有适当的 GUI 的软件。
**注意:** PackageKit 默认打开系统权限,因而 [不推荐][29] 用于一般用途。因为如果用户属于 `wheel` 组,更新或安装任何软件都不需要密码。
**你看到了在 Arch Linux 上使用图形化软件中心的几种选择。现在是时候决定使用其中一个了。你会选择哪一个Pamac 或 OctoPi 还是其他?现在就在下面留言吧**。
---
via: https://itsfoss.com/arch-linux-gui-package-managers/
作者:[Anuj Sharma][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/anuj/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-arch-linux/
[2]: https://itsfoss.com/arch-based-linux-distros/
[3]: https://itsfoss.com/pacman-command/
[4]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[5]: https://invent.kde.org/system/apper
[6]: https://itsfoss.com/wp-content/uploads/2022/09/dde-arch-install-firefox.png
[7]: https://github.com/dekzi/dde-store
[8]: https://itsfoss.com/wp-content/uploads/2022/09/discover-arch-install-firefox.png
[9]: https://invent.kde.org/plasma/discover
[10]: https://itsfoss.com/wp-content/uploads/2022/09/gnome-packagekit-arch-install-firefox.png
[11]: https://freedesktop.org/software/PackageKit/index.html
[12]: https://itsfoss.com/wp-content/uploads/2022/09/gnome-software-arch-install-firefox.png
[13]: https://gitlab.gnome.org/GNOME/gnome-software
[14]: https://itsfoss.com/wp-content/uploads/2022/09/tkpacman-arch-install-firefox.png
[15]: https://itsfoss.com/synaptic-package-manager/
[16]: https://itsfoss.com/aur-arch-linux/
[17]: https://itsfoss.com/best-aur-helpers/
[18]: https://sourceforge.net/projects/tkpacman
[19]: https://itsfoss.com/wp-content/uploads/2022/09/octopi-arch-install-firefox.png
[20]: https://github.com/aarnt/octopi
[21]: https://itsfoss.com/wp-content/uploads/2022/09/pamac-arch-install-firefox.png
[22]: https://gitlab.manjaro.org/applications/pamac/-/issues/1017
[23]: https://itsfoss.com/install-pamac-arch-linux/
[24]: https://gitlab.manjaro.org/applications/pamac
[25]: https://github.com/schuay/pcurses
[26]: https://github.com/gavinlyonsrepo/cylon
[27]: https://github.com/moson-mo/pacseek
[28]: https://github.com/ericm/yup
[29]: https://bugs.archlinux.org/task/50459

View File

@ -3,29 +3,30 @@
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15145-1.html"
如何在 Fedora、CentOS、RHEL 中启用 RPM Fusion 仓库
======
本指南解释了在 Fedora Linux 发行版中启用第三方软件仓库 RPM Fusion 的步骤。
[RPM Fusion][1] 软件仓库是一个社区维护的软件仓库,它为 Fedora Linux 提供额外的软件包,这些软件包不是由 Fedora 官方团队分发,例如 DVD 播放、媒体播放、来自 GNOME 和 KDE 的软件等。这是因为许可、其他法律原因和特定国家/地区的软件规范
> 本指南解释了在 Fedora Linux 发行版中启用第三方软件仓库 RPM Fusion 的步骤
RPM Fusion 为 Red Hat Enterprise Linux 以及 Fedora 提供了 .rpm 包。
[RPM Fusion][1] 软件仓库是一个社区维护的软件仓库,它为 Fedora Linux 提供额外的软件包,这些软件包不是由 Fedora 官方团队分发,例如 DVD 播放、媒体播放、来自 GNOME 和 KDE 的软件等。这是因为许可证、其他法律原因和特定国家/地区的软件规范而导致的。
RPM Fusion 为 Red Hat Enterprise LinuxRHEL以及 Fedora 提供了 .rpm 包。
本指南介绍了在 Fedora Linux 中启用 RPM Fusion 仓库所需的步骤。本指南适用于所有 Fedora 发行版本。
这在所有当前支持的 Fedora 版本35、36 和 37中进行了测试。
![RPM Fusion][2]
![](https://img.linux.net.cn/data/attachment/album/202210/16/111338jjr0eh5cjgq017n5.jpg)
### 如何在 Fedora Linux、RHEL、CentOS 中启用 RPM Fusion 仓库
RPM Fusion 有两种版本的仓库:自由和非自由。
顾名思义,自由版包含软件包的自由版本,非自由版包含封闭源代码和“非商业”开源软件的编译软件包
顾名思义,自由版包含软件包的自由版本,非自由版包含封闭源代码的编译软件包和“非商业”开源软件。
在继续之前,首先检查你是否安装了 RPM fusion。打开终端并运行以下命令。
@ -33,7 +34,6 @@ RPM Fusion 有两种版本的仓库:自由和非自由。
dnf repolist | grep rpmfusion
```
如果已经安装了 RPM Fusion你应该会看到如下所示的消息就不需要执行下面的步骤。如果未安装你可以继续执行以下步骤。
![RPM Fusion 已安装][3]
@ -42,69 +42,93 @@ dnf repolist | grep rpmfusion
#### Fedora
自由版:
```
sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
非自由版:
```
sudo dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
#### 带 rpm-ostree 的 Silverblue
#### 在 Silverblue 上使用 rpm-ostree
自由版:
```
sudo rpm-ostree install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
```
非自由版:
```
sudo rpm-ostree install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
#### RHEL 8
先安装 EPEL
```
sudo dnf install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
```
非自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
```
开发相关软件包:
```
sudo subscription-manager repos --enable "codeready-builder-for-rhel-8-$(uname -m)-rpms"
```
#### CentOS 8
先安装 EPEL
```
sudo dnf install --nogpgcheck https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
```
自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/free/el/rpmfusion-free-release-8.noarch.rpm
```
非自由版:
```
sudo dnf install --nogpgcheck https://download1.rpmfusion.org/nonfree/el/rpmfusion-nonfree-release-8.noarch.rpm
```
启用 PowerTools
```
sudo dnf config-manager --enable PowerTools
```
### 附加说明
* RPM Fusion 还提供帮助用户安装来自 GNOME 软件或 KDE Discover 的软件包。要在 Fedora 中启用它,请运行以下命令
RPM Fusion 还可以帮助用户安装来自 GNOME 软件或 KDE Discover 的软件包。要在 Fedora 中启用它,请运行以下命令
```
sudo dnf groupupdate core
```
* 你还可以通过以下命令启用 RPM Fusion 来使用 gstreamer 和其他多媒体播放包来播放媒体文件。
你还可以通过以下命令启用 RPM Fusion 来使用 gstreamer 和其他多媒体播放包来播放媒体文件。
```
sudo dnf groupupdate multimedia --setop="install_weak_deps=False" --exclude=PackageKit-gstreamer-plugin
@ -114,13 +138,13 @@ sudo dnf groupupdate multimedia --setop="install_weak_deps=False" --exclude=Pack
sudo dnf groupupdate sound-and-video
```
* 启用 RPM Fusion 以使用 libdvdcss 播放 DVD。
启用 RPM Fusion 以使用 libdvdcss 播放 DVD。
```
sudo dnf install rpmfusion-free-release-tainted
sudo dnf install libdvdcss
```
* 通过以下命令启用 RPM Fusion 以启用非 FLOSS 硬件包。
通过以下命令启用 RPM Fusion 以启用非 FLOSS 硬件包。
```
sudo dnf install rpmfusion-nonfree-release-tainted
sudo dnf install *-firmware
@ -169,7 +193,7 @@ via: https://www.debugpoint.com/enable-rpm-fusion-fedora-rhel-centos/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,29 +3,32 @@
[#]: author: "James Kiarie https://www.linuxtechi.com/author/james/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15152-1.html"
# 如何在 Linux 中逐步创建 LVM 分区
在 Linux 中创建 LVM 分区的分步指南
======
在本指南中,我们将逐步介绍如何在 Linux 中创建 lvm 分区。
![](https://img.linux.net.cn/data/attachment/album/202210/18/113615swffwazya3nyfve2.jpg)
LVM 代表逻辑卷管理,它是专门为服务器管理 Linux 系统上的磁盘或存储的推荐方式。 LVM 分区的主要优点之一是我们可以实时扩展其大小而无需停机。 LVM 分区也可以减少,但不推荐。
> 在本指南中,我们将逐步介绍如何在 Linux 中创建 LVM 分区。
LVM 代表 “<ruby>逻辑卷管理<rt>Logical Volume Management</rt></ruby>”,它是专门为服务器管理 Linux 系统上的磁盘或存储的推荐方式。 LVM 分区的主要优点之一是我们可以实时扩展其大小而无需停机。 LVM 分区也可以缩小,但不推荐。
为了演示,我在我的 Ubuntu 22.04 系统上连接了 15GB 磁盘,我们将从命令行在该磁盘上创建 LVM 分区。
##### 先决条件
##### 准备
- 连接到 Linux 系统的原始磁盘
- 具有 Sudo 权限的本地用户
- 具有 sudo 权限的本地用户
- 预装 lvm2 包
事不宜迟,让我们深入了解这些步骤。
### 步骤 1) 识别新连接的原始磁盘
### 步骤 1识别新连接的原始磁盘
登录到你的系统,打开终端并运行以下 dmesg 命令:
登录到你的系统,打开终端并运行以下 `dmesg` 命令:
```
$ sudo dmesg | grep -i sd
@ -35,7 +38,7 @@ $ sudo dmesg | grep -i sd
![dmesg-command-new-attached-disk-linux][1]
识别新连接的原始磁盘的另一种方法是通过 fdisk 命令:
识别新连接的原始磁盘的另一种方法是通过 `fdisk` 命令:
```
$ sudo fdisk -l | grep -i /dev/sd
@ -45,18 +48,18 @@ $ sudo fdisk -l | grep -i /dev/sd
![fdisk-command-output-new-disk][2]
从上面的输出,可以确认新连接的磁盘是 “/dev/sdb”
从上面的输出,可以确认新连接的磁盘是 `/dev/sdb`
### 步骤 2创建 PV物理卷
### 步骤 2创建 PV物理卷
在开始在磁盘 /dev/sdb 上创建 pv 之前,请确保已安装 lvm2 包。如果未安装,请运行以下命令:
在开始在磁盘 `/dev/sdb` 上创建<ruby>物理卷<rt>Physical Volume</rt></ruby>PV之前请确保已安装 `lvm2` 包。如果未安装,请运行以下命令:
```
$ sudo apt install lvm2 // On Ubuntu / Debian
$ sudo dnf install lvm2 // on RHEL / CentOS
```
运行以下 pvcreate 命令在磁盘 /dev/sdb 上创建 pv
运行以下 `pvcreate` 命令在磁盘 `/dev/sdb` 上创建 PV
```
$ sudo pvcreate /dev/sdb
@ -64,7 +67,7 @@ $ sudo pvcreate /dev/sdb
$
```
要验证 pv 状态,运行:
要验证 PV 状态,运行:
```
$ sudo pvs /dev/sdb
@ -74,9 +77,9 @@ $ sudo pvdisplay /dev/sdb
![pvdisplay-command-output-linux][3]
### 步骤 3) 创建 VG卷组
### 步骤 3创建 VG卷组
要创建卷组,我们将使用 vgcreate 命令。创建 VG 意味着将 pv 添加到卷组
要创建<ruby>卷组<rt>Volume Group</rt></ruby>VG我们将使用 `vgcreate` 命令。创建 VG 意味着将 PV 添加到其中
语法:
@ -92,7 +95,7 @@ $ sudo vgcreate volgrp01 /dev/sdb
$
```
运行以下命令以验证 vg (volgrp01) 的状态:
运行以下命令以验证 VG`volgrp01`的状态:
```
$ sudo vgs volgrp01
@ -104,17 +107,17 @@ $ sudo vgdisplay volgrp01
![vgs-command-output-linux][4]
以上输出确认大小为 15 GiB 的卷组 (volgrp01) 已成功创建,一个物理扩展 (PE) 的大小为 4 MB。创建 vg 时可以更改 PE 大小。
以上输出确认大小为 15 GiB 的卷组 `volgrp01` 已成功创建,一个<ruby>物理扩展<rt>Physical Extent</rt></ruby>PE的大小为 4 MB。创建 VG 时可以更改 PE 大小。
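例如,如果想在创建 VG 时把 PE 大小改为 16 MB可以使用 `vgcreate` 的 `-s` 选项(示例中的卷组名和磁盘沿用本文场景,仅作示意):

```
$ sudo vgcreate -s 16M volgrp01 /dev/sdb
```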
### 步骤 4创建 LV逻辑卷
### 步骤 4创建 LV逻辑卷
Lvcreate 命令用于从 VG 创建 LV。 lvcreate 命令的语法如下所示:
`lvcreate` 命令用于从 VG 中创建<ruby>逻辑卷<rt>Logical Volume</rt></ruby> LV。 `lvcreate` 命令的语法如下所示:
```
$ sudo lvcreate -L <Size-of-LV> -n <LV-Name> <VG-Name>
```
在我们的例子中,以下命令将用于创建大小为 14 GB 的 lv
在我们的例子中,以下命令将用于创建大小为 14 GB 的 LV
```
$ sudo lvcreate -L 14G -n lv01 volgrp01
@ -122,7 +125,7 @@ $ sudo lvcreate -L 14G -n lv01 volgrp01
$
```
验证 lv 的状态,运行:
验证 LV 的状态,运行:
```
$ sudo lvs /dev/volgrp01/lv01
@ -134,11 +137,11 @@ $ sudo lvdisplay /dev/volgrp01/lv01
![lvs-command-output-linux][5]
上面的输出显示 LV (lv01) 已成功创建,大小为 14 GiB。
上面的输出显示 LV`lv01`已成功创建,大小为 14 GiB。
### 步骤 5) 格式化 LVM 分区
### 步骤 5格式化 LVM 分区
使用 mkfs 命令格式化 lvm 分区。在我们的例子中lvm 分区是 /dev/volgrp01/lv01
使用 `mkfs` 命令格式化 LVM 分区。在我们的例子中LVM 分区是 `/dev/volgrp01/lv01`
注意:我们可以将分区格式化为 ext4 或 xfs因此请根据你的设置和要求选择文件系统类型。
@ -150,19 +153,19 @@ $ sudo mkfs.ext4 /dev/volgrp01/lv01
![mkfs-ext4-filesystem-lvm][6]
执行下面的命令,用 xfs 文件系统格式化 lvm 分区:
执行下面的命令,用 xfs 文件系统格式化 LVM 分区:
```
$ sudo mkfs.xfs /dev/volgrp01/lv01
```
要使用上述格式化分区,我们必须将其挂载到某个文件夹中。所以,让我们创建一个文件夹 /mnt/data
要使用上述格式化分区,我们必须将其挂载到某个文件夹中。所以,让我们创建一个文件夹 `/mnt/data`
```
$ sudo mkdir /mnt/data
```
现在运行 mount 命令将其挂载到 /mnt/data 文件夹:
现在运行 `mount` 命令将其挂载到 `/mnt/data` 文件夹:
```
$ sudo mount /dev/volgrp01/lv01 /mnt/data/
@ -172,20 +175,20 @@ Filesystem Type Size Used Avail Use% Mounted on
$
```
尝试创建一些虚拟文件,运行以下命令:
尝试创建一些没用的文件,运行以下命令:
```
$ cd /mnt/data/
$ echo "testing lvm partition" | sudo tee dummy.txt
$ echo "testing lvm partition" | sudo tee dummy.txt
$ cat dummy.txt
testing lvm partition
$
$ sudo rm -f dummy.txt
```
完美,以上命令输出确认我们可以访问 lvm 分区。
完美,以上命令输出确认我们可以访问 LVM 分区。
要永久挂载到 lvm 分区之上,请使用以下 echo 命令将其条目添加到 fstab 文件中:
要永久挂载上述 LVM 分区,请使用以下 `echo` 命令将其条目添加到 `fstab` 文件中:
```
$ echo '/dev/volgrp01/lv01 /mnt/data ext4 defaults 0 0' | sudo tee -a /etc/fstab
@ -201,7 +204,7 @@ via: https://www.linuxtechi.com/how-to-create-lvm-partition-in-linux/
作者:[James Kiarie][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者 ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,96 @@
[#]: subject: "How to Update Google Chrome on Ubuntu Linux"
[#]: via: "https://itsfoss.com/update-google-chrome-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15163-1.html"
如何在 Ubuntu Linux 上更新谷歌 Chrome
======
![](https://img.linux.net.cn/data/attachment/album/202210/22/085013gihsi4rtmpkmj4yb.png)
> 你设法在你的 Ubuntu 系统上安装了谷歌 Chrome 浏览器。现在你想知道如何让浏览器保持更新。
在 Windows 和 macOS 上,当 Chrome 上有可用更新时,你会在浏览器中收到通知,你可以从浏览器中点击更新选项。
Linux 中的情况有所不同。你不会从浏览器更新 Chrome。你要使用系统更新对其进行更新。
是的。当 Chrome 上有可用的新更新时Ubuntu 会通过系统更新工具通知你。
![当有新版本的 Chrome 可用时Ubuntu 会发送通知][1]
你只需单击“<ruby>立即安装<rt>Install Now</rt></ruby>”按钮,在被提示时输入你的帐户密码并将 Chrome 更新到新版本。
让我告诉你为什么会在系统级别看到更新,以及如何在命令行中更新谷歌 Chrome。
### 方法 1使用系统更新更新谷歌浏览器
你最初是如何安装 Chrome 的?你从 [Chrome 网站][2] 获得了 deb 安装程序文件,并使用它来 [在 Ubuntu 上安装 Chrome][3]。
当你这样做时,谷歌会在你系统的源列表中添加一个仓库条目。这样,你的系统就会信任来自谷歌仓库的包。
![谷歌 Chrome 存储库添加到 Ubuntu 系统][4]
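作为参考,这个仓库条目通常位于 `/etc/apt/sources.list.d/google-chrome.list` 中,内容大致如下(具体文件名和内容请以你的系统为准):

```
# /etc/apt/sources.list.d/google-chrome.list 的典型内容(仅供参考)
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
```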
对于添加到系统中的所有此类条目,包更新通过 Ubuntu 更新程序集中进行。
这就是为什么当 Google Chrome和其他已安装的应用有可用更新时你的 Ubuntu 系统会向你发送通知。
![Chrome 更新可通过系统更新与其他应用一起使用][5]
**单击“<ruby>立即安装<rt>Install Now</rt></ruby>”按钮并在要求时输入你的密码**。很快,系统将安装所有可升级的软件包。
根据更新偏好,通知可能不是立即的。如果需要,你可以手动运行更新程序工具并查看适用于你的 Ubuntu 系统的更新。
![运行软件更新程序以查看你的系统有哪些可用更新][6]
### 方法 2在 Ubuntu 命令行中更新 Chrome
如果你更喜欢终端而不是图形界面,你也可以使用命令更新 Chrome。
打开终端,并依次运行以下命令:
```
sudo apt update
sudo apt --only-upgrade install google-chrome-stable
```
第一条命令更新包缓存,以便你的系统知道可以升级哪些包。
第二条命令 [仅更新单个包][7],即谷歌 Chrome安装为 `google-chrome-stable`)。
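如果想确认升级结果,可以查看当前安装的 Chrome 版本和软件源中的可用版本(以下命令仅供参考):

```
google-chrome --version
apt policy google-chrome-stable
```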
### 总结
如你所见,在 Ubuntu 上这件事比在 Windows 上更省事Chrome 会随其他系统更新一起更新。
顺便一提,如果你对它不满意,你可以了解 [从 Ubuntu 中删除 Google Chrome][8]。
Chrome 是一款不错的浏览器。你可以通过 [使用 Chrome 中的快捷方式][9] 来试验它,因为它使浏览体验更加流畅。
在 Ubuntu 上享受 Chrome
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-google-chrome-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2021/06/chrome-edge-update-ubuntu.png
[2]: https://www.google.com/chrome/
[3]: https://itsfoss.com/install-chrome-ubuntu/
[4]: https://itsfoss.com/wp-content/uploads/2021/06/google-chrome-repo-ubuntu.png
[5]: https://itsfoss.com/wp-content/uploads/2021/06/chrome-edge-update-ubuntu.png
[6]: https://itsfoss.com/wp-content/uploads/2022/04/software-updater-ubuntu-22-04.jpg
[7]: https://itsfoss.com/apt-upgrade-single-package/
[8]: https://itsfoss.com/uninstall-chrome-from-ubuntu/
[9]: https://itsfoss.com/google-chrome-shortcuts/

View File

@ -0,0 +1,123 @@
[#]: subject: "Xubuntu 22.10: Top New Features"
[#]: via: "https://www.debugpoint.com/xubuntu-22-10-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15155-1.html"
Xubuntu 22.10:热门新功能
======
> 这是 Xubuntu 22.10 Kinetic Kudu 及其新功能的快速总结。
![Xubuntu 22.10 桌面][1]
质量需要时间来建立。它适用于生活的各个方面,包括软件。
自 Xfce 4.16 发布以来Xfce 4.17(开发版)已经加入了许多新功能。这包括核心 Xfce、原生应用GNOME 43、MATE 1.26 和 libadwaita。由于 Xubuntu 同时也集成了 GNOME 和 MATE 的组件,正确地合并和测试这些更改需要时间。
在 Xubuntu 22.10 Kinetic Kudu 版本中,你将体验到自 2020 年 12 月以来所做的所有改进:将近两年的错误修复和增强。
让我们快速查看一下时间表。目前Xubuntu 22.10 beta 已经发布,并正在测试中(本文末尾提供了 ISO 链接)。最终版本预计于 2022 年 10 月 20 日发布。
### Xubuntu 22.10 新功能
#### 核心更新和 GNOME 框架
在其核心Xubuntu 22.10 同其基于的 Ubuntu 22.10 一样,采用 Linux 内核 5.19。另外Xfce 桌面版本是 Xfce 4.17。
4.17 版本是一个开发版,因为它是下一个大版本 Xfce 4.18 的基础,该版本 [计划在今年圣诞节发布][2]。
让我们谈谈 GNOME 和相关应用。 Xubuntu 22.10 中的 Xfce 4.17 首次获得了带有 GNOME 43 更新的 libadwaita。这意味着默认的 GNOME 应用程序可以在 Xfce 桌面下正确呈现。
这就是说GNOME <ruby>软件应用<rt>Software</rt></ruby> 43 在 Xubuntu 22.10 的 Xfce 桌面下看起来很棒。如果你将其与 Xfce 原生外观和带有 CSD/SSD例如 “<ruby>磁盘应用<rt>Disk</rt></ruby>”)的 GNOME 应用进行比较,它们看起来都很顺眼。
我对 GNOME 软件应用 43 在 Xfce 桌面下的 libadwaita/GTK4 渲染效果如此之好感到惊讶。
![在 Xubuntu 22.10 中一起使用三种不同的窗口][3]
#### Xfce 应用
Xfce 桌面带来了自己的原生应用集。在此版本中,所有应用都从 4.16 升级到 4.17 版本。
值得注意的变化包括Xfce 面板获得了对任务列表插件的中键单击支持和托盘时钟中的二进制时间模式。PulseAudio 插件引入了一个新的录音指示器,可以过滤掉多个按钮按下事件。
Thunar 文件管理器获得了大量的底层功能和错误修复。如果你将 Thunar 4.16 与 Thunar 4.17 进行比较,它是变化巨大的。更改包括更新的上下文菜单、路径栏、搜索、导航等。你可以在 [此处][4] 阅读 Thunar 的所有更改日志。
此外,截屏应用 ScreenShooter 默认支持 WebP。蓝牙管理器 Blueman 在系统托盘新增配置文件切换器,并更新了 Catfish 文件搜索工具。
这是 Xfce 应用版本的更新列表和指向其更改日志的链接(如果你想进一步挖掘)。
* Appfinder [4.17.0][5]
* Catfish [4.16.4][6]
* Mousepad [0.5.10][7]
* Panel [4.17.3][8]
* PulseAudio 插件 [0.4.4][9]
* Ristretto [0.12.3][10]
* Screenshooter [1.9.11][11]
* Task Manager [1.5.4][12]
* Terminal [1.0.4][13]
* Thunar [4.17.9][14]
#### 外观和感觉
默认的 elementary-xfce 图标集(浅色和深色)得到了更新,带有额外的精美图标,让你的 Xfce 桌面焕然一新。默认的 Greybird GTK 主题对窗口装饰进行了必要的改进,并添加了 Openbox 支持。
你可能会注意到的重要且可见的变化之一是 `ALT+TAB` 外观。图标更大一些,眼睛更舒适,可以在深色背景下更快地切换窗口。
![在 Xubuntu 22.10 的 elementary-xfce 中更新的图标集示例][15]
![ALT TAB 有更大的图标][16]
上述更改使默认应用与其所基于的 [Ubuntu 22.10 版本][17] 保持一致。以下是 Xubuntu 22.10 中的更改概括。
### 概括
* Linux 内核 5.19,基于 Ubuntu 22.10
* Xfce 桌面版 4.17
* 原生应用全部更新到 4.17
* 核心与 GNOME 43、libadwaita、GTK4 保持一致
* MATE 应用程序升级到 1.26
* Mozilla Firefox 网页浏览器 105.0
* Thunderbird 邮件客户端 102.3
* LibreOffice 7.4.4.2
### 总结
Xfce 桌面最关键的整体变化将在 4.18 版本中到来。例如,最初的 Wayland 支持、更新的 glib 和 GTK 包。如果一切顺利,你可以在明年 4 月发布的 Xubuntu 中期待这些最好的变化。
最后,如果你想试用,可以从 [这个页面][18] 下载 Beta 镜像。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/xubuntu-22-10-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/Xubuntu-22.10-Desktop-1024x563.jpg
[2]: https://debugpointnews.com/xfce-4-18-announcement/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/10/Three-different-window-decorations-together-in-Xubuntu-22.10.jpg
[4]: https://gitlab.xfce.org/xfce/thunar/-/blob/master/NEWS
[5]: https://gitlab.xfce.org/xfce/xfce4-appfinder/-/blob/master/NEWS
[6]: https://gitlab.xfce.org/apps/catfish/-/blob/master/NEWS
[7]: https://gitlab.xfce.org/apps/mousepad/-/blob/master/NEWS
[8]: https://gitlab.xfce.org/xfce/xfce4-panel/-/blob/master/NEWS
[9]: https://gitlab.xfce.org/panel-plugins/xfce4-pulseaudio-plugin/-/blob/master/NEWS
[10]: https://gitlab.xfce.org/apps/ristretto/-/blob/master/NEWS
[11]: https://gitlab.xfce.org/apps/xfce4-screenshooter/-/blob/master/NEWS
[12]: https://gitlab.xfce.org/apps/xfce4-taskmanager/-/blob/master/NEWS
[13]: https://gitlab.xfce.org/apps/xfce4-terminal/-/blob/master/NEWS
[14]: https://gitlab.xfce.org/xfce/thunar/-/blob/master/NEWS
[15]: https://www.debugpoint.com/wp-content/uploads/2022/10/Refreshed-icon-set-sample-in-elementary-xfce-with-Xubuntu-22.10.jpg
[16]: https://www.debugpoint.com/wp-content/uploads/2022/10/ALT-TAB-is-refreshed-with-larger-icons.jpg
[17]: https://www.debugpoint.com/ubuntu-22-10/
[18]: https://cdimage.ubuntu.com/xubuntu/releases/kinetic/beta/

View File

@ -0,0 +1,149 @@
[#]: subject: "How to Enable Snap Support in Arch Linux"
[#]: via: "https://itsfoss.com/install-snap-arch-linux/"
[#]: author: "Pranav Krishna https://itsfoss.com/author/pranav/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15160-1.html"
如何在 Arch Linux 中启用 Snap 支持
======
![](https://img.linux.net.cn/data/attachment/album/202210/21/100128gzzqkf3fcg3f6q3n.jpg)
Snap 是由 Ubuntu 的母公司 Canonical 设计的通用包格式。有些人不喜欢 Snap但它有一些优势。
通常,某些应用仅以 Snap 格式提供。这为你提供了在 Arch Linux 中启用 Snap 的充分理由。
我知道 AUR 拥有大量应用,但 Snap 应用通常直接来自开发人员。
如果你希望能够在 Arch Linux 中安装 Snap 应用,你需要先启用 Snap 支持。
有两种方法可以做到:
* 使用 AUR 助手启用 Snap 支持(更简单)
* 通过从 AUR 获取包,手动启用 Snap 支持
让我们看看怎么做。
### 方法 1、使用 AUR 助手启用 Snap
Snap 支持在 Arch 用户仓库中以 `snapd` 包的形式提供。你可以使用 AUR 助手轻松安装它。
有 [许多 AUR 助手][1],但 `yay` 是我更喜欢的,因为它的语法类似于 [pacman 命令][2]。
如果你还没有安装 AUR 助手,请使用以下命令安装 `yay`(需要事先安装 `git`
```
git clone https://aur.archlinux.org/yay
cd yay
makepkg -si
```
![安装 yay][3]
现在 `yay` 已安装,你可以通过以下方式安装 `snapd`
```
yay -Sy snapd
```
![使用 yay 从 AUR 安装 snapd][4]
每当你 [更新 Arch Linux][5] 系统时,`yay` 都会启用 `snapd` 的自动更新。
#### 验证 Snap 支持是否有效
要测试 Snap 支持是否正常工作,请安装并运行 `hello-world` Snap 包。
```
sudo snap install hello-world
hello-world
(或者)
sudo snap run hello-world
```
![hello-world Snap 包执行][6]
如果它运行良好,那么你可以轻松安装其他 Snap 包。
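例如,你可以先用 `snap find` 搜索,再安装想要的软件包(这里以 VLC 为例,仅作示意):

```
snap find vlc
sudo snap install vlc
```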
### 方法 2、从 AUR 手动构建 snapd 包
如果你不想使用 AUR 助手,你仍然可以从 AUR 获取 `snapd`。让我展示详细的过程。
你需要先安装一些构建工具。
```
sudo pacman -Sy git go go-tools python-docutils
```
![为 Snap 安装依赖项][7]
完成依赖项安装后,现在可以克隆 `snapd` 的 AUR 目录,如下所示:
```
git clone https://aur.archlinux.org/snapd
cd snapd
```
![克隆仓库][8]
然后构建 `snapd` 包:
```
makepkg -si
```
当它要求安装其他依赖包时输入 `yes`
![手动构建 snapd][9]
你已安装 `snapd` 守护程序。但是,需要启用它以在启动时自动启动。
```
sudo systemctl enable snapd --now
sudo systemctl enable snapd.apparmor --now #start snap applications
sudo ln -s /var/lib/snapd/snap /snap #optional: classic snap support
```
![启动时启用 Snap][10]
手动构建包的主要缺点是:每当有新版本发布时,你都必须重新手动构建。使用 AUR 助手则为我们解决了这个问题。
### 总结
我更喜欢 Arch Linux 中的 pacman 和 AUR。很少能看到不在 AUR 中但以其他格式提供的应用。尽管如此,在某些你希望直接从源获取它的情况下,使用 Snap 可能是有利的,例如 [在 Arch 上安装 Spotify][11]。
希望本教程对你有所帮助。如果你有任何问题,请告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-snap-arch-linux/
作者:[Pranav Krishna][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pranav/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/best-aur-helpers/
[2]: https://itsfoss.com/pacman-command/
[3]: https://itsfoss.com/wp-content/uploads/2022/10/yay-makepkg.png
[4]: https://itsfoss.com/wp-content/uploads/2022/10/yay-install-snapd.png
[5]: https://itsfoss.com/update-arch-linux/
[6]: https://itsfoss.com/wp-content/uploads/2022/10/snap-hello-world-1.png
[7]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-dependencies.png
[8]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-clone.png
[9]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-makepkg-800x460.png
[10]: https://itsfoss.com/wp-content/uploads/2022/10/enable-snapd-startup-2.png
[11]: https://itsfoss.com/install-spotify-arch/

View File

@ -0,0 +1,106 @@
[#]: subject: "VirtualBox 7.0 Releases With Secure Boot and Full VM Encryption Support"
[#]: via: "https://news.itsfoss.com/virtualbox-7-0-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15141-1.html"
VirtualBox 7.0 发布,支持安全启动和全加密虚拟机
======
> VirtualBox 7.0 是自其上次大版本更新以来的一次重大升级。有一些不错的进步!
![伴随着 VirtualBox 7.0 的发布,支持安全启动和全加密虚拟机][1]
对 VirtualBox 来说,这是一次大的升级。这个版本值得关注,因为我们在最近几年没有看到过它的大版本更新。
对于那些不熟悉 VirtualBox 的人来说,它是一个由 [甲骨文公司][2] 开发的虚拟化软件。
随着 VirtualBox 7.0 的推出,增加了许多新功能。
让我们来看看其中最关键的一些。
### VirtualBox 7.0 的新内容
![virtualbox 7.0][3]
VirtualBox 7.0 是一次有益的升级。有图标更新、主题改进,以及一些关键的亮点,包括:
* 一个显示运行中的<ruby>客体<rt>Guest</rt></ruby>的性能统计的新工具。
* 支持安全启动。
* 支持<ruby>全加密虚拟机<rt>Full VM Encryption</rt></ruby>(通过 CLI。
* 重新设计的新建虚拟机向导。
#### 通过命令行管理的全加密虚拟机
虚拟机VM现在可以完全加密了但只能通过命令行界面。
这也包括加密的配置日志和暂存状态。
截至目前,用户只能通过命令行界面对机器进行加密,未来将增加不同的方式。
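下面是一个通过 `VBoxManage encryptvm` 子命令加密虚拟机的示意(其中虚拟机名 `MyVM`、口令文件路径以及具体参数均为假设,实际选项请以 VirtualBox 7.0 官方手册为准):

```
# 将名为 MyVM 的虚拟机整体加密(示意,参数请核对官方手册)
VBoxManage encryptvm "MyVM" setencryption \
    --cipher AES-256 \
    --new-password /path/to/passwd.txt \
    --new-password-id "MyVM-key"
```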
#### 新的资源监控工具
![VirtualBox 7.0 的资源监控][4]
新的实用程序可以让你监控性能统计,如 CPU、内存使用、磁盘 I/O 等。它将列出所有正在运行的客体的性能统计。
这不是最吸引人的补充,但很有用。
#### 改进的主题支持
对主题的支持在所有平台上都得到了改进。在 Linux 和 macOS 上使用原生引擎,而在 Windows 上,有一个单独的实现。
#### 对安全启动的支持
VirtualBox 现在支持安全启动,增强了对恶意软件、病毒和间谍软件的安全性。
它还将防止虚拟机使用损坏的驱动程序启动,这对企业应用非常重要。
使用那些需要安全启动才能运行的操作系统的用户现在应该能够轻松创建虚拟机了。
#### 其他变化
VirtualBox 7.0 是一次重大的升级。因此,有几个功能的增加和全面的完善。
例如,新建虚拟机向导现在已经重新设计,以整合无人值守的客体操作系统安装。
![virtualbox 7.0 无人值守的发行版安装][5]
其他改进包括:
* 云端虚拟机现在可以被添加到 VirtualBox并作为本地虚拟机进行控制。
* VirtualBox 的图标在此版本中得到了更新。
* 引入了一个新的 3D 栈,支持 DirectX 11。它使用 [DXVK][6] 来为非 Windows 主机提供同样的支持。
* 支持虚拟 TPM 1.2/2.0。
* 改进了多显示器设置中的鼠标处理。
* Vorbis 是音频录制的默认音频编解码器。
你可以查看 [发行说明][7] 以了解更多信息。
如果你正在寻找增强的功能如更好的主题支持、加密功能、安全启动支持和类似的功能添加VirtualBox 7.0 是一个不错的升级。
💬 *你对这次升级有什么看法?你会使用较新的版本还是暂时坚持使用旧版本的虚拟机?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/virtualbox-7-0-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/virtualbox-7-0-release.jpg
[2]: https://www.oracle.com/in/
[3]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0.png
[4]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0_Resource_Monitor.png
[5]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0_Unattended_Guest_Install.png
[6]: https://github.com/doitsujin/dxvk
[7]: https://www.virtualbox.org/wiki/Changelog-7.0

View File

@ -0,0 +1,132 @@
[#]: subject: "First Look at LURE! Bringing AUR to All Linux Distros"
[#]: via: "https://news.itsfoss.com/lure-aur/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15151-1.html"
LURE 初窥!将 AUR 带入所有 Linux 发行版
======
> LURE 是一个新的开源项目,它希望成为所有发行版的 AUR。
![LURE 是一个新的开源项目,它希望成为所有发行版的 AUR][1]
AUR<ruby>Arch 用户仓库<rt>Arch User Repository</rt></ruby>)是一个由社区驱动的基于 Arch 的 Linux 的发行版仓库。
**简而言之:** 它可以帮助你安装官方仓库中没有的软件包,并让你获得最新的版本。
我发现它对我在 [Manjaro Linux][2] 上的体验很有帮助。
从技术上讲AUR 从源头构建一个软件包,然后利用软件包管理器(`pacman`)来安装它。
你也可以在我们的详细指南中探索更多关于它的信息。
> **[什么是 AUR 如何在 Arch 和 Manjaro Linux 中使用 AUR][3]**
📢 现在你对 AUR 有了一个基本的了解,有一个 **新的开源项目** 旨在将 AUR 的功能带到所有的发行版中。
这个项目被称为 “<ruby>Linux 用户仓库<rt>Linux User REpository</rt></ruby>LURE
> 💡 LURE 项目正处于 alpha 阶段,由创建者在几周前宣布。所以,它完全是一个正在进行的工作。
### 已经有这样的项目了?
![lure 添加仓库][5]
**没有。**
开发者们已经尝试做一个 AUR 的替代品,但是是针对特定的发行版。就像 [makedeb 软件包仓库][6] 是针对 Debian 的。
LURE 是一个雄心勃勃的想法,可以在你选择的任何发行版上工作。
它试图成为一个帮助你使用类似于 `PKGBUILD` 的脚本为你的发行版创建原生软件包的工具。
> **[创建 PKGBUILD 为 Arch Linux 制作软件包][7]**
开发者在 [Reddit 公告帖子][9] 中提到了一些技术细节:
> 我的项目叫 LURE是 “Linux 用户仓库”的简称。它构建原生软件包,然后使用系统软件包管理器安装它们,就像 AUR 一样。它使用一个类似于 AUR 的 `PKGBUILD` 的构建脚本来构建软件包。
>
> 它是用纯 Go 语言编写的,这意味着它在构建后没有任何依赖性,除了一些特权提升命令(`sudo``doas` 等等)和任何一个支持的软件包管理器,目前支持 `pacman`、`apt`、`apk`Alpine Linux 上,不是安卓)、`dnf`、`yum` 和 `zypper`
**听起来很棒!**
> **[LURE 项目Repo][10]**
你也可以在它的 [GitHub 镜像][11] 上探索更多信息。
### 使用 LURE
你不必安装一个额外的软件包管理器来使它工作,它可以自动与你系统的软件包管理器一起工作。
因此,如果它在其仓库(或任何其添加的仓库)中没有找到一个包,它就会转到系统的默认仓库,并从那里安装它。就像我用 `lure` 命令在我的系统上安装/移除 `neofetch` 一样。
![lure neofetch remove][12]
虽然该项目处于早期开发阶段,但它为各种发行版提供了 [二进制包][13],以让你安装和测试它们。
![][14]
目前,它的仓库包括一个来自创建者自己的项目。但你可以尝试添加一个仓库并构建/安装东西。
为了方便起见,我试着在它的仓库中安装软件包。
![][15]
命令看起来像这样:
```
lure in itd-bin
```
在它的 [官方文档页面][16],你可以读到更多关于它在构建/安装/添加存储库方面的用法。
未来版本的一些计划中的功能包括:
* 自动安装脚本
* 基于 Docker 的自动测试工具
* 仓库的网页接口
### 让它变得更好
嗯,首先,这是一个优秀的项目。如果你是过去使用过 Arch 的人,或者想离开 Arch Linux这将是一个很好的工具。
然而,对于大多数终端用户和非 Arch Linux 新手来说,像 [Pamac GUI 软件包管理器][17] 这样的软件包管理器支持 LURE 应该是锦上添花的。
当然,在目前的阶段,它需要开源贡献者的支持。所以,如果你喜欢这个想法,请随时为该项目贡献改进意见。
*💭 你对 LURE 有什么看法?请在下面的评论中分享你的想法!*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/lure-aur/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/LURE-aur-for-all-linux-distros.jpg
[2]: https://news.itsfoss.com/manjaro-linux-experience/
[3]: https://itsfoss.com/aur-arch-linux/
[4]: https://itsfoss.com/aur-arch-linux/
[5]: https://news.itsfoss.com/content/images/2022/10/lure-repos.png
[6]: https://mpr.makedeb.org
[7]: https://itsfoss.com/create-pkgbuild/
[8]: https://itsfoss.com/create-pkgbuild/
[9]: https://www.reddit.com/r/linux/comments/xq09nf/lure_aur_on_nonarch_distros/
[10]: https://gitea.arsenm.dev/Arsen6331/lure
[11]: https://github.com/Arsen6331/lure
[12]: https://news.itsfoss.com/content/images/2022/10/lure-neofetch-rm.png
[13]: https://gitea.arsenm.dev/Arsen6331/lure/releases/tag/v0.0.2
[14]: https://news.itsfoss.com/content/images/2022/10/lure-binaries.jpg
[15]: https://news.itsfoss.com/content/images/2022/10/lure-test.png
[16]: https://github.com/Arsen6331/lure/blob/master/docs/usage.md
[17]: https://itsfoss.com/install-pamac-arch-linux/

View File

@ -0,0 +1,109 @@
[#]: subject: "Notion-like Markdown Note-Taking App 'Obsidian' is Out of Beta"
[#]: via: "https://news.itsfoss.com/obsidian-1-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15147-1.html"
类似 Notion 的 Markdown 笔记应用黑曜石结束测试,正式发布
======
> 黑曜石 1.0 做了重新设计,带来了有价值的新功能。
![类似于 Notion 的 Markdown 笔记软件 Obsidian 正式发布][1]
<ruby>黑曜石<rt>Obsidian</rt></ruby> 是一款强大的笔记应用,可以用来制作知识图谱,同时还提供 [Notion][2] 类似的功能。
在 1.0 更新之前,我们已经有一篇关于它的 [详细文章][3]。
黑曜石 1.0 的发布标志着该应用在桌面和移动体验方面的发展迈出了关键一步。
> 💡 黑曜石不是一个开源的应用程序,但可以在 Linux 上使用。
让我们来看看它的桌面版提供的新功能。
### 🆕 黑曜石 1.0 的新功能
![黑曜石 1.0][5]
1.0 版本增加了大量的新功能、主要的视觉变化和错误修复,其中一些亮点包括:
* 改良的用户界面
* 新的外观设置
* 带有标签堆叠的标签功能
* 大修的主题画廊
* 各种错误的修复
#### 🎨 用户界面的改造
![黑曜石 1.0 用户界面][8]
用户界面已经得到了全面改造,这使得用户体验更加直观和强大。
![黑曜石用户界面][9]
除此之外,黑曜石现在还有一个专门的外观设置部分。它包含了切换显示内联标题、标签标题栏、改变重点颜色等选项的设置。
![黑曜石 1.0 外观设置][10]
#### 带有标签堆叠的标签功能
![黑曜石 1.0 的标签][11]
现在你可以在黑曜石中打开多个标签,并使用热键来帮助你在忙碌的一天中完成多个任务。
一个额外的好处是,即使你退出黑曜石,它也会记住你曾经打开的标签和你在这些标签中的活动状态。
![黑曜石 1.0 的标签堆叠][12]
标签也可以分组形成堆叠,并在它们之间进行切换,从而使工作空间变得更加整齐。
#### 🛠️ 其他变化
其他值得一提的变化包括:
* 改变缩放级别的能力
* 改进了黑曜石的同步
* 内存泄漏修复
* 用于折叠行的折叠命令
你可以通过 [发布说明][13],以及 [发布公告][14] 来了解更多的细节。
### 📥 下载黑曜石 1.0
你可以到 [官方网站][17] 下载黑曜石 1.0。在手机上,也可以在谷歌应用商店和苹果的应用商店上找到它。
为 Linux 用户提供了三个软件包:**AppImage、Flatpak 和 Snap**。
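以 Flatpak 为例,如果你已经配置好 Flathub 仓库,可以这样安装(应用 ID 请以 Flathub 页面为准):

```
flatpak install flathub md.obsidian.Obsidian
```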
开发者还澄清说,他们不会向个人用户收取黑曜石的使用费。
*💬 你对黑曜石 1.0 有什么看法?一个值得替代其他记事本的应用程序吗?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/obsidian-1-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/obsidian-1.png
[2]: https://notion.grsm.io/itsfoss
[3]: https://linux.cn/article-14230-1.html
[5]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0.png
[8]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_User_Interface.png
[9]: https://news.itsfoss.com/content/images/2022/10/obisidian-1-ui.png
[10]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_Appearance_Settings.png
[11]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_Tabs.png
[12]: https://news.itsfoss.com/content/images/2022/10/Obsidian_1.0_Tab_Stacks.gif
[13]: https://forum.obsidian.md/t/obsidian-release-v1-0-0/44873
[14]: https://obsidian.md/1.0
[17]: https://obsidian.md/download
[18]: https://www.youtube-nocookie.com/embed/Ia2CaItxTEk

View File

@ -1,38 +0,0 @@
[#]: subject: "Worlds First Open Source Wi-Fi 7 Access Points Are Now Available"
[#]: via: "https://www.opensourceforu.com/2022/10/worlds-first-open-source-wi-fi-7-access-points-are-now-available/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Worlds First Open Source Wi-Fi 7 Access Points Are Now Available
======
*The first Open source Wi-Fi 7 products in the world will be released by HFCL in conjunction with Qualcomm under its IO product line.*
The worlds first Open source Wi-Fi 7 Access Points will be introduced by HFCL Limited, the top high-tech enterprise and integrated next-gen communication product and solution provider, in collaboration with Qualcomm Technologies, Inc. on October 1, 2022 at the India Mobile Congress in Pragati Maidan, New Delhi.
Based on IEEE 802.11be, a ground-breaking Wi-Fi technology that is intended to give Extremely High Throughput (EHT), increased spectrum efficiency, better interference mitigation, and support for Real Time Applications (RTA), HFCL becomes the first OEM to release Open source Wi-Fi 7 Access Points. In order to provide a better user experience while yet using less power, Wi-Fi 7 uses faster connections with 320MHz and 4kQAM, numerous connections with Multi Link operation, and Adaptive Connections for adaptive interference puncturing.
Wi-Fi 7 promises a significant technological advance above all prior Wi-Fi standards updates, providing a more immersive user experience and paving the way for a more robust digital future. The peak data speeds supported by HFCLs Wi-Fi 7 APs will exceed 10 Gbps, and they will have latency under 2 ms as opposed to the 5 Gbps and 10 ms of existing Wi-Fi 6 products.
Technology providers like telecom operators, Internet service providers, system integrators, and network administrators will be able to offer mission-critical and real-time application services and provide a better user experience than ever before thanks to HFCLs Wi-Fi 7 product line, which is supported by a strong R&D focus. A wide variety of dual band and tri-band indoor and outdoor variations may be found in the new Wi-Fi 7 product portfolio.
Being the first Wi-Fi 7 Access points in the market to embrace Open standards, all Wi-Fi 7 variations will come pre-installed with open source software. With the goal of providing improved global connectivity and maintaining interoperability in multi-vendor scenarios, open standards support disaggregated Wi-Fi systems delivered as free open source software.
The introduction of Wi-Fi 7 will primarily support the countrys upcoming 5G rollout, particularly for enhancing inside coverage. Additionally, the technology will make it easier to construct a variety of apps because to its increased throughput, dependable network performance, and lower latency. The Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications including surveillance, remote industrial automation, AV/VR/XR, and other video-based applications will benefit from Wi-Fi 7 technology for businesses. With numerous developments in Cloud/Edge Computing and Cloud gaming, it will also increase the number of remote offices, real-time collaborations, and online video conferencing.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/10/worlds-first-open-source-wi-fi-7-access-points-are-now-available/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed

View File

@ -1,105 +0,0 @@
[#]: subject: "VirtualBox 7.0 Releases With Secure Boot and Full VM Encryption Support"
[#]: via: "https://news.itsfoss.com/virtualbox-7-0-release/"
[#]: author: "Sourav Rudra https://news.itsfoss.com/author/sourav/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
VirtualBox 7.0 Releases With Secure Boot and Full VM Encryption Support
======
VirtualBox 7.0 is a big upgrade since its last major update. Some nice advancements!
![VirtualBox 7.0 Releases With Secure Boot and Full VM Encryption Support][1]
A big upgrade for VirtualBox. This release is pretty interesting because we haven't seen a major update in recent years.
For those unfamiliar with VirtualBox, it is a virtualization software developed by [Oracle][2].
With the launch of VirtualBox 7.0, many new features have been added.
Let's take a look at some of the most crucial ones.
### VirtualBox 7.0: What's New?
![virtualbox 7.0][3]
VirtualBox 7.0 is a helpful upgrade. There are icon updates, theme improvements, and some key highlights, including:
* A new utility to show performance statistics for running guests.
* Secure boot support.
* Full VM encryption support (via CLI).
* Reworked new virtual machine wizard.
#### Full VM Encryption via CLI
Virtual Machines (VM) can now be fully encrypted, but only through the command-line interface.
This also includes the config logs and saved states.
As of now, users can encrypt their machines only through the command-line interface, and different methods are to be added in the future.
#### New Resource Monitor Utility
![virtualbox 7.0 resource monitor][4]
The new utility lets you monitor performance statistics like CPU, RAM usage, disk I/O, and more. It would list the performance stats for all the running guests.
Not the most attractive addition, but useful.
#### Improved Theme Support
The support for themes has been improved on all platforms. The native engine is used on Linux and macOS, and on Windows, a separate implementation is in place.
#### Support for Secure Boot
VirtualBox now supports Secure Boot, enhancing security against malware, viruses, and spyware.
It will also prevent a VM from booting up with broken drivers, which is very important for enterprise applications.
Users who use operating systems that require a secure boot to run should be able to create VMs easily.
#### Other Changes
VirtualBox 7.0 is a significant upgrade. So, there are several feature additions and refinements across the board.
For instance, the new VM wizard has now been reworked to integrate unattended guest OS installations.
![virtualbox 7.0 unattended distro installs][5]
Other improvements include:
* Cloud virtual machines can now be added to VirtualBox and controlled as local VMs.
* The VirtualBox icon has been updated with this release.
* A new 3D stack has been introduced that supports DirectX 11. It uses [DXVK][6] to provide the same support for non-Windows hosts.
* Support for virtual TPM 1.2/2.0.
* Improved mouse handling in multi-monitor setups.
* Vorbis is the default audio codec for audio recording.
You can review the [release notes][7] for additional information.
If you were looking for enhanced features, such as better theme support, encryption features, secure boot support, and similar feature additions, VirtualBox 7.0 is a nice upgrade.
💬 *What do you think about the upgrade? Would you use the newer version or stick to the older version for your VMs for now?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/virtualbox-7-0-release/
作者:[Sourav Rudra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/sourav/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/10/virtualbox-7-0-release.jpg
[2]: https://www.oracle.com/in/
[3]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0.png
[4]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0_Resource_Monitor.png
[5]: https://news.itsfoss.com/content/images/2022/10/VirtualBox_7.0_Unattended_Guest_Install.png
[6]: https://github.com/doitsujin/dxvk
[7]: https://www.virtualbox.org/wiki/Changelog-7.0

View File

@ -1,142 +0,0 @@
[#]: subject: "Using habits to practice open organization principles"
[#]: via: "https://opensource.com/open-organization/22/6/using-habits-practice-open-organization-principles"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Using habits to practice open organization principles
======
Follow these steps to implement habits that support open culture and get rid of those that don't.
![Selfcare, drinking tea on the porch][1]
Image by: opensource.com
Habits are a long-term interest of mine. Several years ago, I gave a presentation on habits, both good and bad, and how to expand on good habits and change bad ones. Just recently, I read the habits-focused book Smart Thinking by Art Markman. You might ask what this has to do with [open organization principles.][2] There is a connection, and I'll explain it in this two-part article on managing habits.
In this first article, I talk about habits, how they work, and—most important—how you can start to change them. In the second article, I review Markman's thoughts as presented in his book.
### The intersection of principles and habits
Suppose you learned about open organization principles and although you found them interesting and valuable, you just weren't in the habit of using them. Here's how that might look in practice.
Community: If you're faced with a significant challenge but think you can't address it alone, you're likely in the habit of just giving up. Wouldn't it be better to have the habit of building a community of like-minded people that collectively can solve the problem?
Collaboration: Suppose you don't think you're a good collaborator. You like to do things alone. You know that there are cases when collaboration is required, but you don't have a habit of engaging in it. To counteract that, you must build a habit of collaborating more.
Transparency: Say you like to keep most of what you do and know a secret. However, you know that if you don't share information, you're not likely to get good information from others. Therefore, you must create the habit of being more transparent.
Inclusivity: Imagine you are uncomfortable working with people you don't know and who are different from you, whether in personality, culture, or language. You know that if you want to be successful, you must work with a wide variety of people. How do you create a habit of being more inclusive?
Adaptability: Suppose you tend to resist change long after what you're doing is no longer achieving what you had hoped it would. You know you must adapt and redirect your efforts, but how can you create a habit of being adaptive?
### What is a habit?
Before I give examples regarding the above principles, I'll explain some of the relevant characteristics of a habit.
* A habit is a behavior performed repeatedly—so much so that it's now performed without thinking.
* A habit is automatic and feels right at the time. The person is so used to it, that it feels good when doing it, and to do something else would require effort and make them feel uncomfortable. They might have second thoughts afterward though.
* Some habits are good and extremely helpful by saving you a lot of energy. The brain is 2% of the body's weight but consumes 20% of your daily energy. Because thinking and concentration require a lot of energy, your mind is built to save it through developing unconscious habits.
* Some habits are bad for you, so you desire to change them.
* All habits offer some reward, even if it is only temporary.
* Habits are formed around what you are familiar with and what you know, even habits you dont necessarily like.
### The three steps of a habit
1. Cue (trigger): First, a cue or trigger tells the brain to go into automatic mode, using previously learned habitual behavior. Cues can be things like seeing a candy bar or a television commercial, being in a certain place at a certain time of day, or just seeing a particular person. Time pressure can trigger a routine. An overwhelming atmosphere can trigger a routine. Simply put, something reminds you to behave a certain way.
2. Routine: The routine follows the trigger. A routine is a set of physical, mental, and/or emotional behaviors that can be incredibly complex or extremely simple. Some habits, such as those related to emotions, are measured in milliseconds.
3. Reward: The final step is the reward, which helps your brain figure out whether a particular activity is worth remembering for the future. Rewards can range from food or drugs that cause physical sensations to joy, pride, praise, or personal self-esteem.
### Bad habits in a business environment
Habits aren't just for individuals. All organizations have good and bad institutional habits. However, some organizations deliberately design their habits, while others just let them evolve without forethought, possibly through rivalries or fear. These are some organizational habit examples:
* Always being late with reports
* Working alone or working in groups when the opposite is appropriate
* Being triggered by excess pressure from the boss
* Not caring about declining sales
* Not cooperating among a sales team because of excess competition
* Allowing one talkative person to dominate a meeting
### A step-by-step plan to change a habit
Habits don't have to last forever. You can change your own behavior. First, remember that many habits can not be changed concurrently. Instead, find a keystone habit and work on it first. This produces small, quick rewards. Remember that one keystone habit can create a chain reaction.
Here is a four-step framework you can apply to changing any habit, including habits related to open organization principles.
##### Step one: identify the routine
Identify the habit loop and the routine in it (for example, when an important challenge comes up that you can't address alone). The routine (the behaviors you do) is the easiest to identify, so start there. For example: "In my organization, no one discusses problems with anyone. They just give up before starting." Determine the routine that you want to modify, change, or just study. For example: "Every time an important challenge comes up, I should discuss it with people and try to develop a community of like-minded people who have the skills to address it."
##### Step two: experiment with the rewards
Rewards are powerful because they satisfy cravings. But, we're often not conscious of the cravings that drive our behavior. They are only evident afterward. For example, there may be times in meetings when you want nothing more than to get out of the room and avoid a subject of conversation, even though down deep you know you should figure out how to address the problem.
To learn what a craving is, you must experiment. That might take a few days, weeks, or longer. You must feel the triggering pressure when it occurs to identify it fully. For example, ask yourself how you feel when you try to escape responsibility.
Consider yourself a scientist, just doing experiments and gathering data. The steps in your investigation are:
1. After the first routine, start adjusting the routines that follow to see whether there's a reward change. For example, if you give up every time you see a challenge you can't address by yourself, the reward is the relief of not taking responsibility. A better response might be to discuss the issue with at least one other person who is equally concerned about the issue. The point is to test different hypotheses to determine which craving drives your routine. Are you craving the avoidance of responsibility?
2. After four or five different routines and rewards, write down the first three or four things that come to mind right after each reward is received. Instead of just giving up in the face of a challenge, for instance, you discuss the issue with one person. Then, you decide what can be done.
3. After writing about your feeling or craving, set a timer for 15 minutes. When it rings, ask yourself whether you still have the craving. Before giving in to a craving, rest and think about the issue one or two more times. This forces you to be aware of the moment and helps you later recall what you were thinking about at that moment.
4. Try to remember what you were thinking and feeling at that precise instant, and then 15 minutes after the routine. If the craving is gone, you have identified the reward.
##### Step three: isolate the cue or trigger
The cue is often hard to identify because there's usually too much information bombarding you as your behaviors unfold. To identify a cue amid other distractions, you can observe four factors the moment the urge hits you:
Location: Where did it occur? ("My biggest challenges come out in meetings.")
Time: When did it occur? ("Meetings in the afternoon, when I'm tired, are the worst time, because I'm not interested in putting forth any effort.")
Feelings: What was your emotional state? ("I feel overwhelmed and depressed when I hear the problem.")
People: Who or what type of people were around you at the time, or were you alone? ("In the meetings, most other people don't seem interested in the problem either. Others dominate the discussion.")
##### Step four: have a plan
Once you have confirmed the reward driving your behavior, the cues that trigger it, and the behavior itself, you can begin to shift your actions. Follow these three easy steps:
1. First, plan for the cue. ("In meetings, I'm going to look for and focus my attention on important problems that come up.")
2. Second, choose a behavior that delivers the same reward but without the penalties you suffer now. ("I'm going to explore a plan to address that problem and consider what resources and skills I need to succeed. I'm going to feel great when I create a community that's able to address the problem successfully.")
3. Third, make the behavior a deliberate choice each and every time, until you no longer need to think about it. ("I'm going to consciously pay attention to major issues until I can do it without thinking. I might look at agendas of future meetings, so I know what to expect in advance. Before and during every meeting, I will ask why should I be here, to make sure I'm focused on what is important."
##### Plan to avoid forgetting something that must be done
To successfully start doing something you often forget, follow this process:
1. Plan what you want to do.
2. Determine when you want to complete it.
3. Break the project into small tasks as needed.
4. With a timer or daily planner, set up cues to start each task.
5. Complete each task on schedule.
6. Reward yourself for staying on schedule.
### Habit change
Change takes a long time. Sometimes a support group is required to help change a habit. Sometimes, a lot of practice and role play of a new and better routine in a low-stress environment is required. To find an effective reward, you need repeated experimentation.
Sometimes habits are only symptoms of a more significant, deeper problem. In these cases, professional help may be required. But if you have the desire to change and accept that there will be minor failures along the way, you can gain power over any habit.
In this article, I've used examples of community development using the cue-routine-reward process. It can equally be applied to the other open organization principles. I hope this article got you thinking about how to manage habits through knowing how habits work, taking steps to change habits, and making plans to avoid forgetting things you want done. Whether it's an open organization principle or anything else, you can now diagnose the cue, the routine, and the reward. That will lead you to a plan to change a habit when the cue presents itself.
In my next article, I'll look at habits through the lens of Art Markman's thoughts on Smart Thinking.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/22/6/using-habits-practice-open-organization-principles
作者:[Ron McFarland][a]
选题:[lkxed][b]
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/coffee_tea_selfcare_wfh_porch_520.png
[2]: https://theopenorganization.org/definition/open-organization-definition/

View File

@ -1,184 +0,0 @@
[#]: subject: "7 summer book recommendations from open source enthusiasts"
[#]: via: "https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list"
[#]: author: "Joshua Allen Holm https://opensource.com/users/holmja"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 summer book recommendations from open source enthusiasts
======
Members of the Opensource.com community recommend this mix of books covering everything from a fun cozy mystery to non-fiction works that explore thought-provoking topics.
![Ceramic mug of tea or coffee with flowers and a book in front of a window][1]
Image by: Photo by [Carolyn V][2] on [Unsplash][3]
It is my great pleasure to introduce Opensource.com's 2022 summer reading list. This year's list contains seven wonderful reading recommendations from members of the Opensource.com community. You will find a nice mix of books covering everything from a fun cozy mystery to non-fiction works that explore thought-provoking topics. I hope you find something on this list that interests you.
Enjoy!
![Book title 97 Things Every Java Programmer Should Know][4]
Image by: O'Reilly Press
**[97 Things Every Java Programmer Should Know: Collective Wisdom from the Experts, edited by Kevlin Henney and Trisha Gee][5]**
*[Recommendation written by Seth Kenlon][6]*
Written by 73 different authors working in all aspects of the software industry, the secret to this book's greatness is that it actually applies to much more than just Java programming. Of course, some chapters lean into Java, but there are topics like Be aware of your container surroundings, Deliver better software, faster, and Don't hIDE your tools that apply to development regardless of language.
Better still, some chapters apply to life in general. Break problems and tasks into small chunks is good advice on how to tackle any problem, Build diverse teams is important for every group of collaborators, and From puzzles to products is a fascinating look at how the mind of a puzzle-solver can apply to many different job roles.
Each chapter is just a few pages, and with 97 to choose from, it's easy to skip over the ones that don't apply to you. Whether you write Java code all day, just dabble, or if you haven't yet started, this is a great book for geeks interested in code and the process of software development.
![Book title A City is Not a Computer][7]
Image by: Princeton University Press
**[A City is Not a Computer: Other Urban Intelligences, by Shannon Mattern][8]**
*[Recommendation written by Scott Nesbitt][9]*
These days, it's become fashionable (if not inevitable) to make everything *smart*: Our phones, our household appliances, our watches, our cars, and, especially, our cities.
With the latter, that means putting sensors everywhere, collecting data as we go about our business, and pushing information (whether useful or not) to us based on that data.
This begs the question, does embedding all that technology in a city make it smart? In *A City Is Not a Computer*, Shannon Mattern argues that it doesn't.
A goal of making cities smart is to provide better engagement with and services to citizens. Mattern points out that smart cities often "aim to merge the ideologies of technocratic managerialism and public service, to reprogram citizens as 'consumers' and 'users'." That, instead of encouraging citizens to be active participants in their cities' wider life and governance.
Then there's the data that smart systems collect. We don't know what and how much is being gathered. We don't know how it's being used and by whom. There's *so much* data being collected that it overwhelms the municipal workers who deal with it. They can't process it all, so they focus on low-hanging fruit while ignoring deeper and more pressing problems. That definitely wasn't what cities were promised when they were sold smart systems as a balm for their urban woes.
*A City Is Not a Computer* is a short, dense, well-researched polemic against embracing smart cities because technologists believe we should. The book makes us think about the purpose of a smart city, who really benefits from making a city smart, and makes us question whether we need to or even should do that.
![Book title git sync murder][10]
Image by: Tilted Windmill Press
**[git sync murder, by Michael Warren Lucas][11]**
*[Recommendation written by Joshua Allen Holm][12]*
Dale Whitehead would rather stay at home and connect to the world through his computer's terminal, especially after what happened at the last conference he attended. During that conference, Dale found himself in the role of an amateur detective solving a murder. You can read about that case in the first book in this series, *git commit murder*.
Now, back home and attending another conference, Dale again finds himself in the role of detective. *git sync murder* finds Dale attending a local tech conference/sci-fi convention where a dead body is found. Was it murder or just an accident? Dale, now the "expert" on these matters, finds himself dragged into the situation and takes it upon himself to figure out what happened. To say much more than that would spoil things, so I will just say *git sync murder* is engaging and enjoyable to read. Reading *git commit murder* first is not necessary to enjoy *git sync murder*, but I highly recommend both books in the series.
Michael Warren Lucas's *git murder* series is perfect for techies who also love cozy mysteries. Lucas has literally written the book on many complex technical topics, and it carries over to his fiction writing. The characters in *git sync murder* talk tech at conference booths and conference social events. If you have not been to a conference recently because of COVID and miss the experience, Lucas will transport you to a tech conference with the added twist of a murder mystery to solve. Dale Whitehead is an interesting, if somewhat unorthodox, cozy mystery protagonist, and I think most Opensource.com readers would enjoy attending a tech conference with him as he finds himself thrust into the role of amateur sleuth.
![Book title Kick Like a Girl][13]
Image by: Inner Wings Foundation
**[Kick Like a Girl, by Melissa Di Donato Roos][14]**
*[Recommendation written by Joshua Allen Holm][15]*
Nobody likes to be excluded, but that is what happens to Francesca when she wants to play football at the local park. The boys won't play with her because she's a girl, so she goes home upset. Her mother consoles her by relating stories about various famous women who have made an impact in some significant way. The historical figures detailed in *Kick Like a Girl* include women from throughout history and from many different fields. Readers will learn about Frida Kahlo, Madeleine Albright, Ada Lovelace, Rosa Parks, Amelia Earhart, Marie Curie, Valentina Tereshkova, Florence Nightingale, and Malala Yousafzai. After hearing the stories of these inspiring figures, Francesca goes back to the park and challenges the boys to a football match.
*Kick Like a Girl* features engaging writing by Melissa Di Donato Roos (SUSE's CEO) and excellent illustrations by Ange Allen. This book is perfect for young readers, who will enjoy the rhyming text and colorful illustrations. Di Donato Roos has also written two other books for children, *How Do Mermaids Poo?* and *The Magic Box*, both of which are also worth checking out.
![Book title Mine!][16]
Image by: Doubleday
**[Mine!: How the Hidden Rules of Ownership Control Our Lives, by Michael Heller and James Salzman][17]**
*[Recommendation written by Bryan Behrenshausen][18]*
"A lot of what you know about ownership is wrong," authors Michael Heller and James Salzman write in *Mine!* It's the kind of confrontational invitation people drawn to open source can't help but accept. And this book is certainly one for open source aficionados, whose views on ownership—of code, of ideas, of intellectual property of all kinds—tend to differ from mainstream opinions and received wisdom. In this book, Heller and Salzman lay out the "hidden rules of ownership" that govern who controls access to what. These rules are subtle, powerful, deeply historical conventions that have become so commonplace they just seem incontrovertible. We know this because they've become platitudes: "First come, first served" or "You reap what you sow." Yet we see them play out everywhere: On airplanes in fights over precious legroom, in the streets as neighbors scuffle over freshly shoveled parking spaces, and in courts as juries decide who controls your inheritance and your DNA. Could alternate theories of ownership create space for rethinking some essential rights in the digital age? The authors certainly think so. And if they're correct, we might respond: Can open source software serve as a model for how ownership works—or doesn't—in the future?
![Book Title Not All Fairy Tales Have Happy Endings][19]
Image by: Lulu.com
**[Not All Fairy Tales Have Happy Endings: The Rise and Fall of Sierra On-Line, by Ken Williams][20]**
*[Recommendation written by Joshua Allen Holm][21]*
During the 1980s and 1990s, Sierra On-Line was a juggernaut in the computer software industry. From humble beginnings, this company, founded by Ken and Roberta Williams, published many iconic computer games. King's Quest, Space Quest, Quest for Glory, Leisure Suit Larry, and Gabriel Knight are just a few of the company's biggest franchises.
*Not All Fairy Tales Have Happy Endings* covers everything from the creation of Sierra's first game, [Mystery House][22], to the company's unfortunate and disastrous acquisition by CUC International and the aftermath. The Sierra brand would live on for a while after the acquisition, but the Sierra founded by the Williamses was no more. Ken Williams recounts the entire history of Sierra in a way that only he could. His chronological narrative is interspersed with chapters providing advice about management and computer programming. Ken Williams had been out of the industry for many years by the time he wrote this book, but his advice is still extremely relevant.
Sierra On-Line is no more, but the company made a lasting impact on the computer gaming industry. *Not All Fairy Tales Have Happy Endings* is a worthwhile read for anyone interested in the history of computer software. Sierra On-Line was at the forefront of game development during its heyday, and there are many valuable lessons to learn from the man who led the company during those exciting times.
![Book title The Soul of a New Machine][23]
Image by: Back Bay Books
**[The Soul of a New Machine, by Tracy Kidder][24]**
*[Recommendation written by Gaurav Kamathe][25]*
I am an avid reader of the history of computing. It's fascinating to know how these intelligent machines that we have become so dependent on (and often take for granted) came into being. I first heard of [The Soul of a New Machine][26] via [Bryan Cantrill][27]'s [blog post][28]. This is a non-fiction book written by [Tracy Kidder][29] and published in 1981, for which he [won a Pulitzer prize][30]. Imagine it's the 1970s, and you are part of the engineering team tasked with designing the [next generation computer][31]. The story is set at Data General Corporation, then a minicomputer vendor racing against time to compete with the 32-bit VAX computers from Digital Equipment Corporation (DEC). The book outlines how a feud develops between two competing teams within Data General, both wanting to take a shot at designing the new machine. What follows is a fascinating look at the events that unfold. The book provides insights into the minds of the engineers involved, the management, their work environment, the technical challenges they faced along the way and how they overcame them, how stress affected their personal lives, and much more. Anybody who wants to know what goes into making a computer should read this book.
That's the 2022 suggested reading list. It provides a variety of great options that I believe will give Opensource.com readers many hours of thought-provoking entertainment. Be sure to check out our previous reading lists for even more book recommendations.
* [2021 Opensource.com summer reading list][32]
* [2020 Opensource.com summer reading list][33]
* [2019 Opensource.com summer reading list][34]
* [2018 Open Organization summer reading list][35]
* [2016 Opensource.com summer reading list][36]
* [2015 Opensource.com summer reading list][37]
* [2014 Opensource.com summer reading list][38]
* [2013 Opensource.com summer reading list][39]
* [2012 Opensource.com summer reading list][40]
* [2011 Opensource.com summer reading list][41]
* [2010 Opensource.com summer reading list][42]
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/6/2022-opensourcecom-summer-reading-list
作者:[Joshua Allen Holm][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/holmja
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/tea-cup-mug-flowers-book-window.jpg
[2]: https://unsplash.com/@sixteenmilesout?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/tea?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://opensource.com/sites/default/files/2022-06/97_Things_Every_Java_Programmer_Should_Know_1.jpg
[5]: https://www.oreilly.com/library/view/97-things-every/9781491952689/
[6]: https://opensource.com/users/seth
[7]: https://opensource.com/sites/default/files/2022-06/A_City_is_Not_a_Computer_0.jpg
[8]: https://press.princeton.edu/books/paperback/9780691208053/a-city-is-not-a-computer
[9]: https://opensource.com/users/scottnesbitt
[10]: https://opensource.com/sites/default/files/2022-06/git_sync_murder_0.jpg
[11]: https://mwl.io/fiction/crime#gsm
[12]: https://opensource.com/users/holmja
[13]: https://opensource.com/sites/default/files/2022-06/Kick_Like_a_Girl.jpg
[14]: https://innerwings.org/books/kick-like-a-girl
[15]: https://opensource.com/users/holmja
[16]: https://opensource.com/sites/default/files/2022-06/Mine.jpg
[17]: https://www.minethebook.com/
[18]: https://opensource.com/users/bbehrens
[19]: https://opensource.com/sites/default/files/2022-06/Not_All_Fairy_Tales.jpg
[20]: https://kensbook.com/
[21]: https://opensource.com/users/holmja
[22]: https://en.wikipedia.org/wiki/Mystery_House
[23]: https://opensource.com/sites/default/files/2022-06/The_Soul_of_a_New_Machine.jpg
[24]: https://www.hachettebookgroup.com/titles/tracy-kidder/the-soul-of-a-new-machine/9780316204552/
[25]: https://opensource.com/users/gkamathe
[26]: https://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine
[27]: https://en.wikipedia.org/wiki/Bryan_Cantrill
[28]: http://dtrace.org/blogs/bmc/2019/02/10/reflecting-on-the-soul-of-a-new-machine/
[29]: https://en.wikipedia.org/wiki/Tracy_Kidder
[30]: https://www.pulitzer.org/winners/tracy-kidder
[31]: https://en.wikipedia.org/wiki/Data_General_Eclipse_MV/8000
[32]: https://opensource.com/article/21/6/2021-opensourcecom-summer-reading-list
[33]: https://opensource.com/article/20/6/summer-reading-list
[34]: https://opensource.com/article/19/6/summer-reading-list
[35]: https://opensource.com/open-organization/18/6/summer-reading-2018
[36]: https://opensource.com/life/16/6/2016-summer-reading-list
[37]: https://opensource.com/life/15/6/2015-summer-reading-list
[38]: https://opensource.com/life/14/6/annual-reading-list-2014
[39]: https://opensource.com/life/13/6/summer-reading-list-2013
[40]: https://opensource.com/life/12/7/your-2012-open-source-summer-reading
[41]: https://opensource.com/life/11/7/summer-reading-list
[42]: https://opensource.com/life/10/8/open-books-opensourcecom-summer-reading-list

View File

@ -1,140 +0,0 @@
[#]: subject: "The story behind Joplin, the open source note-taking app"
[#]: via: "https://opensource.com/article/22/9/joplin-interview"
[#]: author: "Richard Chambers https://opensource.com/users/20i"
[#]: collector: "lkxed"
[#]: translator: "MareDevi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
The story behind Joplin, the open source note-taking app
======
Laurent Cozic sat down with me to discuss how Joplin got started and what's next for the open source note-taking app.
In this interview, I met up with Laurent Cozic, creator of the note-taking app, Joplin. [Joplin][2] was a winner of the [20i][3] FOSS Awards, so I wanted to find out what makes it such a success, and how he achieved it.
**Could you summarize what Joplin does?**
[Joplin][4] is an open source note-taking app. It allows you to capture your thoughts and securely access them from any device.
**Obviously, there are other note-taking apps out there—but apart from it being free to use, what makes it different?**
The fact that it is open source is an important aspect for many of our users, because it means there is no vendor lock-in for the data, and that data can be easily exported and accessed in various ways.
We also focus on security and data privacy, in particular with the synchronization end-to-end encryption feature, and by being transparent about any connection that the application makes. We also work with security researchers to keep the app more secure.
Finally, Joplin can be customized in several different ways—through plugins, which can add new functionalities, and themes to customize the app appearance. We also expose a data API, which allows third-party applications to access Joplin data.
**[[ Related read 5 note-taking apps for Linux ]][5]**
**It's a competitive market, so what inspired you to build it?**
It happened organically. I started looking into it in 2016, as I was looking at existing commercial note-taking applications, and I didn't like that the notes, attachments, or tags could not easily be exported or manipulated by other tools.
This is probably due to vendor lock-in and partly a lack of motivation from the vendor, since they have no incentive to help users move their data to other apps. There is also an issue with the fact that these companies usually will keep the notes in plain text, and that can potentially cause issues in terms of data privacy and security.
So I decided to start creating a simple mobile and terminal application with sync capabilities to have my notes easily accessible on my devices. Later the desktop app was created and the project grew from there.
![Image of Joplin on Chrome OS.][6]
Image by: (Opensource.com, CC BY-SA 4.0)
**How long did Joplin take to make?**
I've been working on it on and off since 2016 but it wasn't full time. The past two years I've been focusing more on it.
**What advice might you have for someone setting out to create their own open source app?**
Pick a project you use yourself and technologies you enjoy working with.
Managing an open source project can be difficult sometimes, so there has to be this element of fun to make it worthwhile. Then I guess "release early, release often" applies here, so that you can gauge users' interest and whether it makes sense to spend time developing the project further.
**How many people are involved in Joplin's development?**
There are 3-4 people involved in the development. At the moment we also have six students working on the project as part of Google Summer of Code.
**Lots of people create open source projects, yet Joplin has been a resounding success for you. Could you offer creators any tips on how to get noticed?**
There's no simple formula, and to be honest I don't think I could replicate the success in a different project! You've got to be passionate about what you're doing but also be rigorous, be organized, make steady progress, ensure the code quality remains high, and have a lot of unit tests to prevent regressions.
Also be open to the user feedback you receive, and try to improve the project based on it.
Once you've got all that, the rest is probably down to luck—if it turns out you're working on a project that interests a lot of people, things might work out well!
**Once you get noticed, how do you keep that momentum going, if you don't have a traditional marketing budget?**
I think it's about listening to the community around the project. For example I never planned to have a forum but someone suggested it on GitHub, so I made one and it became a great way to share ideas, discuss features, provide support, and so on. The community is generally welcoming of newcomers too, which creates a kind of virtuous circle.
Next to this, it's important to communicate regularly about the project.
We don't have a public roadmap, because the ETA for most features is generally "I don't know", but I try to communicate about coming features, new releases, and so on. We also communicate about important events, the Google Summer of Code in particular, or when we have the chance to win something like the 20i FOSS Awards.
Finally, very soon we'll have an in-person meetup in London, which is another way to keep in touch with the community and collaborators.
**How does user feedback influence the roadmap?**
Significantly. Contributors will often work on something simply because they need the feature. But next to this, we also keep track of the features that seem most important to users, based on what we read about on the forum and on the GitHub issue tracker.
For example, the mobile app is now high priority because we frequently hear from users that its limitations and issues make it hard to use Joplin effectively.
![Image of Joplin being used on a Desktop.][8]
Image by: (Opensource.com, CC BY-SA 4.0)
**How do you keep up to date with the latest in dev and coding?**
Mostly by reading Hacker News!
**Do you have a personal favorite FOSS that you'd recommend?**
Among the less well-known projects, [SpeedCrunch][9] is very good as a calculator. It has a lot of features and it's great how it keeps a history of all previous calculations.
I also use [KeepassXC][10] as a password manager. It has been improving steadily over the past few years.
Finally, [Visual Studio Code][11] is great as a cross-platform text editor.
**I'd assumed that Joplin was named after Janis, but Wikipedia tells me it's Scott Joplin. What made you choose the name?**
I wanted to name it "jot-it" at first but I think the name was already taken.
Since I was listening to Scott Joplin ragtime music a lot back then (I was pretty much obsessed with it), I decided to use his name.
I think the meaning of a product name is not too important, as long as the name itself is easy to write, pronounce, remember, and perhaps is associated with something positive (or at least nothing negative).
And I think "Joplin" ticks all these boxes.
**Is there anything you can say about plans for Joplin? An exclusive tease of a new feature, perhaps?**
As mentioned earlier, we are very keen to make improvements to the mobile app, both in terms of UX design and new features.
We're also looking at creating a "Plugin Store" to make it easier to browse and install plugins.
**Thanks for your time Laurent— best of luck with the future of Joplin.**
*[This interview was originally published on the 20i blog and has been republished with permission.][12]*
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/joplin-interview
作者:[Richard Chambers][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/20i
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/wfh_work_home_laptop_work.png
[2]: https://joplinapp.org/
[3]: https://www.20i.com/foss-awards/winners
[4]: https://opensource.com/article/19/1/productivity-tool-joplin
[5]: https://opensource.com/article/22/8/note-taking-apps-linux
[6]: https://opensource.com/sites/default/files/2022-09/joplin-chrome-os.png
[7]: https://opensource.com/article/21/10/google-summer-code
[8]: https://opensource.com/sites/default/files/2022-09/joplin-desktop.png
[9]: https://heldercorreia.bitbucket.io/speedcrunch/
[10]: https://opensource.com/article/18/12/keepassx-security-best-practices
[11]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[12]: https://www.20i.com/blog/joplin-creator-laurent-cozic/

View File

@ -2,7 +2,7 @@
[#]: via: (https://itsfoss.com/nvidia-linux-mint/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (hwlife)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -2,7 +2,7 @@
[#]: via: (https://itsfoss.com/set-up-ssh-ubuntu/)
[#]: author: (Chris Patrick Carias Stas https://itsfoss.com/author/chris/)
[#]: collector: (lujun9972)
[#]: translator: (hwlife)
[#]: translator: (Donkey-Hao)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -220,7 +220,7 @@ via: https://itsfoss.com/set-up-ssh-ubuntu/
作者:[Chris Patrick Carias Stas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,267 +0,0 @@
[#]: subject: (Optimize Java serverless functions in Kubernetes)
[#]: via: (https://opensource.com/article/21/6/java-serverless-functions-kubernetes)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Optimize Java serverless functions in Kubernetes
======
Achieve faster startup and a smaller memory footprint to run serverless
functions on Kubernetes.
![Ship captain sailing the Kubernetes seas][1]
A faster startup and smaller memory footprint always matter in [Kubernetes][2] due to the expense of running thousands of application pods and the cost savings of doing it with fewer worker nodes and other resources. Memory is more important than throughput on containerized microservices on Kubernetes because:
* It's more expensive due to permanence (unlike CPU cycles)
* Microservices multiply the overhead cost
* One monolith application becomes _N_ microservices (e.g., 20 microservices ≈ 20GB)
This significantly impacts serverless function development and the Java deployment model. This is because many enterprise developers chose alternatives such as Go, Python, and Nodejs to overcome the performance bottleneck—until now, thanks to [Quarkus][3], a new Kubernetes-native Java stack. This article explains how to optimize Java performance to run serverless functions on Kubernetes using Quarkus.
### Container-first design
Traditional frameworks in the Java ecosystem come at a cost in terms of the memory and startup time required to initialize those frameworks, including configuration processing, classpath scanning, class loading, annotation processing, and building a metamodel of the world, which the framework requires to operate. This is multiplied over and over for different frameworks.
Quarkus helps fix these Java performance issues by "shifting left" almost all of the overhead to the build phase. By doing code and framework analysis, bytecode transformation, and dynamic metamodel generation only once, at build time, you end up with a highly optimized runtime executable that starts up super fast and doesn't require all the memory of a traditional startup because the work is done once, in the build phase.
![Quarkus Build phase][4]
(Daniel Oh, [CC BY-SA 4.0][5])
More importantly, Quarkus allows you to build a native executable file that provides [performance advantages][6], including amazingly fast boot time and incredibly small resident set size (RSS) memory, for instant scale-up and high-density memory utilization compared to the traditional cloud-native Java stack.
![Quarkus RSS and Boot Time Metrics][7]
(Daniel Oh, [CC BY-SA 4.0][5])
Here is a quick example of how you can build the native executable with a [Java serverless][8] function project using Quarkus.
### 1\. Create the Quarkus serverless Maven project
This command generates a Quarkus project (e.g., `quarkus-serverless-native`) to create a simple function:
```
$ mvn io.quarkus:quarkus-maven-plugin:1.13.4.Final:create \
       -DprojectGroupId=org.acme \
       -DprojectArtifactId=quarkus-serverless-native \
       -DclassName="org.acme.getting.started.GreetingResource"
```
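The generated project contains a minimal JAX-RS resource matching the `className` given above. As a rough sketch (the exact file Quarkus 1.13.x generates may differ slightly), it looks like this, and it is what later produces the `Hello RESTEasy` response:
```
package org.acme.getting.started;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A plain JAX-RS resource; Quarkus analyzes it at build time and bakes it
// into the native executable built in the next step.
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello RESTEasy";
    }
}
```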
### 2\. Build a native executable
You need a GraalVM to build a native executable for the Java application. You can choose any GraalVM distribution, such as [Oracle GraalVM Community Edition (CE)][9] and [Mandrel][10] (the downstream distribution of Oracle GraalVM CE). Mandrel is designed to support building Quarkus-native executables on OpenJDK 11.
Open `pom.xml`, and you will find this `native` profile. You'll use it to build a native executable:
```
<profiles>
    <profile>
        <id>native</id>
        <properties>
            <quarkus.package.type>native</quarkus.package.type>
        </properties>
    </profile>
</profiles>
```
> **Note:** You can install the GraalVM or Mandrel distribution locally. You can also download the Mandrel container image to build it (as I did), so you need to run a container engine (e.g., Docker) locally.
Assuming you have started your container runtime already, run one of the following Maven commands.
For [Docker][11]:
```
$ ./mvnw package -Pnative \
-Dquarkus.native.container-build=true \
-Dquarkus.native.container-runtime=docker
```
For [Podman][12]:
```
$ ./mvnw package -Pnative \
-Dquarkus.native.container-build=true \
-Dquarkus.native.container-runtime=podman
```
The output should end with `BUILD SUCCESS`.
![Native Build Logs][13]
(Daniel Oh, [CC BY-SA 4.0][5])
Run the native executable directly without Java Virtual Machine (JVM):
```
$ target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
The output will look like:
```
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
INFO  [io.quarkus] (main) quarkus-serverless-native 1.0.0-SNAPSHOT native
(powered by Quarkus xx.xx.xx.) Started in 0.019s. Listening on: http://0.0.0.0:8080
INFO [io.quarkus] (main) Profile prod activated.
INFO [io.quarkus] (main) Installed features: [cdi, kubernetes, resteasy]
```
Supersonic! That's _19_ _milliseconds_ to startup. The time might be different in your environment.
It also has extremely low memory usage, as the Linux `ps` utility reports. While the app is running, run this command in another terminal:
```
$ ps -o pid,rss,command -p $(pgrep -f runner)
```
You should see something like:
```
  PID    RSS COMMAND
10246  11360 target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
This process is using around _11MB_ of memory (RSS). Pretty compact!
> **Note:** The RSS and memory usage of any app, including Quarkus, will vary depending on your specific environment and will rise as application experiences load.
You can also access the function with a REST API. Then the output should be `Hello RESTEasy`:
```
$ curl localhost:8080/hello
Hello RESTEasy
```
### 3\. Deploy the functions to Knative service
If you haven't already, [create a namespace][14] (e.g., `quarkus-serverless-native`) on [OKD][15] (OpenShift Kubernetes Distribution) to deploy this native executable as a serverless function. Then add a `quarkus-openshift` extension for Knative service deployment:
```
$ ./mvnw -q quarkus:add-extension -Dextensions="openshift"
```
Append the following variables in `src/main/resources/application.properties` to configure Knative and Kubernetes resources:
```
quarkus.container-image.group=quarkus-serverless-native
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.native.container-build=true
quarkus.kubernetes-client.trust-certs=true
quarkus.kubernetes.deployment-target=knative
quarkus.kubernetes.deploy=true
quarkus.openshift.build-strategy=docker
```
Build the native executable, then deploy it to the OKD cluster directly:
```
$ ./mvnw clean package -Pnative
```
> **Note:** Make sure to log in to the right project (e.g., `quarkus-serverless-native`) using the `oc login` command ahead of time.
The output should end with `BUILD SUCCESS`. It will take a few minutes to complete a native binary build and deploy a new Knative service. After successfully creating the service, you should see a Knative service (KSVC) and revision (REV) using either the `kubectl` or `oc` command tool:
```
$ kubectl get ksvc
NAME                        URL   [...]
quarkus-serverless-native   http://quarkus-serverless-native-[...].SUBDOMAIN  True
$ kubectl get rev
NAME                              CONFIG NAME                 K8S SERVICE NAME                  GENERATION   READY   REASON
quarkus-serverless-native-00001   quarkus-serverless-native   quarkus-serverless-native-00001   1            True
```
### 4\. Access the native executable function
Retrieve the serverless function's endpoint by running this `kubectl` command:
```
$ kubectl get rt/quarkus-serverless-native
```
The output should look like:
```
NAME                         URL                                                                                                          READY   REASON
quarkus-serverless-native   http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN   True
```
Access the route `URL` with a `curl` command:
```
$ curl http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN/hello
```
In less than one second, you will get the same result as you got locally:
```
Hello RESTEasy
```
When you access the Quarkus running pod's logs in the OKD cluster, you will see the native executable is running as the Knative service.
![Native Quarkus Log][16]
(Daniel Oh, [CC BY-SA 4.0][5])
### What's next?
You can optimize Java serverless functions with GraalVM distributions to deploy them as serverless functions on Knative with Kubernetes. Quarkus enables this performance optimization using simple configurations in normal microservices.
The next article in this series will guide you on making portable functions across multiple serverless platforms with no code changes.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/java-serverless-functions-kubernetes
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://opensource.com/article/19/6/reasons-kubernetes
[3]: https://quarkus.io/
[4]: https://opensource.com/sites/default/files/uploads/quarkus-build.png (Quarkus Build phase)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://quarkus.io/blog/runtime-performance/
[7]: https://opensource.com/sites/default/files/uploads/quarkus-boot-metrics.png (Quarkus RSS and Boot Time Metrics)
[8]: https://opensource.com/article/21/5/what-serverless-java
[9]: https://www.graalvm.org/community/
[10]: https://github.com/graalvm/mandrel
[11]: https://www.docker.com/
[12]: https://podman.io/
[13]: https://opensource.com/sites/default/files/uploads/native-build-logs.png (Native Build Logs)
[14]: https://docs.okd.io/latest/applications/projects/configuring-project-creation.html
[15]: https://docs.okd.io/latest/welcome/index.html
[16]: https://opensource.com/sites/default/files/uploads/native-quarkus-log.png (Native Quarkus Log)

View File

@ -1,325 +0,0 @@
[#]: subject: (Try Linux on any operating system with VirtualBox)
[#]: via: (https://opensource.com/article/21/6/try-linux-virtualbox)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Try Linux on any operating system with VirtualBox
======
VirtualBox helps anyone—even a command line novice—set up a virtual
machine.
![Person programming on a laptop on a building][1]
VirtualBox makes it easy for anyone to try Linux. You don't even need experience with the command line to set up a simple virtual machine to tinker with Linux. I'm kind of a power user when it comes to virtual machines, but this article will show even novices how to virtualize a Linux system. In addition, it provides an overview of how to run and install a Linux system for testing purposes with the open source hypervisor [VirtualBox][2].
### Terms
Before starting, you should understand the difference between the two operating systems (OSes) in this setup:
* **Host system:** This is your actual OS on which you install VirtualBox.
* **Guest system:** This is the system you want to run virtualized on top of your host system.
Both systems, host and guest, must interact with each other when it comes to input/output, networking, file access, clipboard, audio, and video.
In this tutorial, I'll use Windows 10 as the _host system_ and [Fedora 33][3] as the _guest system_.
### Prerequisites
When we talk about virtualization, we actually mean [hardware-assisted virtualization][4]. Hardware-assisted virtualization requires a compatible CPU. Almost every ordinary x86 CPU from the last decade comes with this feature. AMD calls it **AMD-V**, and Intel calls it **VT-x**. The virtualization feature adds some additional CPU instructions, and it can be enabled or disabled in the BIOS.
To start with virtualization:
* Make sure that AMD-V or VT-x is enabled in the BIOS.
* Download and install [VirtualBox][5].
### Prepare the virtual machine
Download the image of the Linux distribution you want to try out. It does not matter if it's a 32-bit or 64-bit OS image. You can even start a 64-bit OS image on a 32-bit host system (with limitations in memory usage, of course) and vice versa.
> **Considerations:** If possible, choose a Linux distribution that comes with the [Logical Volume Manager][6] (LVM). LVM decouples the filesystem from the physical hard drives. This allows you to increase the size of your guest system's hard drive if you are running out of space.
Now, open VirtualBox and click on the yellow **New** button:
![VirtualBox New VM][7]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Next, configure how much memory the guest OS is allowed to use:
![Set VM memory size][9]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
My recommendation: **Don't skimp on memory!** When memory is low, the guest system will start paging memory from RAM to the hard drive, which severely degrades the system's performance and responsiveness. If the underlying host system starts paging, you might not notice. For a Linux workstation system with a graphical desktop environment, I recommend at least 4GB of memory.
Next, create the hard disk:
![Create virtual hard disk][10]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Choose the default option, **VDI**:
![Selecting hard disk file type][11]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
In this window, I recommend choosing **dynamically allocated**, as this allows you to increase the size later. If you choose **fixed size**, the disk will probably be faster, but you won't be able to modify it:
![Dynamically allocating hard disk][12]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
With a Linux distribution that uses LVM, you can start with a small hard disk. If you are running out of space, you can increase it on demand.
> **Note**: Fedora's website says [it requires][13] a minimum of 20GB free disk space. I highly recommend you stick to that specification. I chose 8GB here so that I can demonstrate how to increase it later. If you are new to Linux or inexperienced with the command line, choose 20GB.
![Setting hard disk size][14]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
After creating the hard drive, select the newly created virtual machine from the list in VirtualBox's main window and click on **Settings**. In the Settings menu, go to **System** and select the **Processor** tab. By default, VirtualBox assigns only one CPU core to the guest system. On a modern multicore CPU, it should not be any problem to assign at least two cores, which will speed up the guest system significantly:
![Assigning cores to guest system][15]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
#### Network adapter setup
The next thing to take care of is the network setup. By default, VirtualBox creates one NAT connection, which should be OK for most use cases:
![Network settings][16]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
You can create more than one network adapter. Here are the most common types:
* **NAT:** The NAT adapter performs a [network address translation][17]. From the outside, it looks like the host and the guest system use the same IP address. You are not able to access the guest system from within the host system over the network. (Although you could define [port forwarding][18] to access certain services.) When your host system has access to the internet, the guest system will have access, too. NAT requires no further configuration.
* _Choose **NAT** if you only need internet access for the guest system._
* **Bridged adapter:** Here, the guest and the host system share the same physical Ethernet device. Both systems will have independent IP addresses. From the outside, it looks like there are two separate systems in the network, both sharing the same physical Ethernet adapter. This setup is more flexible but requires more configuration.
* _Choose **Bridged adapter** if you want to share the guest system's network services._
* **Host-only adapter:** In this configuration, the guest system can only talk to the host or other guest systems running on the same host. The host system can also connect to the guest system. There is no internet nor physical network access for the guest.
* _Choose **Host-only adapter** for advanced security._
#### Assign the OS image
Navigate to **Storage** and select the virtual optical drive. Click on the **CD icon** on the right, and select **Choose a disk file…**. Then assign the downloaded Linux distribution image you want to install:
![Assigning OS image][19]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
### Install Linux
The virtual machine is now configured. Leave the **Settings** menu and go back to the main window. Click on the **Green arrow** (i.e., the start button). The virtual machine will start up and boot from the virtual optical drive, and you will find yourself in your Linux distribution's installer:
![VirtualBox Fedora installer][20]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
#### Partitioning
The installer will ask you for partitioning information during the installation process. Choose **Custom**:
![Selecting Custom partition configuration][21]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
> **Note:** I'm assuming you're creating this virtual machine just for testing purposes. Also you don't need to care about hibernation for your guest system, as this function is implicitly provided by VirtualBox. Therefore, you can omit the swap partition to save disk space on your host system. Keep in mind that you can add a swap partition later if needed. In [_An introduction to swap space on Linux systems_][22], David Both explains how to add a swap partition and choose the correct size.
Fedora 33 and later offer a [zram][23] partition, a compressed part of the memory used for paging and swap. The zram partition is resized on demand, and it is much faster than a hard disk swap partition.
To keep it simple, just add these two mount points:
![Adding mount points][24]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Apply the changes and proceed with the installation.
### Install VirtualBox Guest Additions
After you finish the installation, boot from the hard drive and log in. Now you can install VirtualBox Guest Additions, which include special device drivers and system applications that provide:
* Shared clipboard
* Shared folders
* Better performance
* Freely scalable window size
To install them, click on the top menu in **Devices** and select **Insert Guest Additions CD image…**:
![Selecting Guest Additions CD image][25]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
On most Linux distributions, the CD image with the Guest Additions is mounted automatically, and they are available in the file browser. Fedora will ask you if you want to run the installation script. Click **Run** and enter your credentials to grant the process root rights:
![Enabling Guest Additions autorun][26]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
When the installation is finished, reboot the system.
### LVM: Enlarge disk space
Creating an 8GB hard disk was a dumb decision, as Fedora quickly starts signaling that it is running out of space:
![Fedora hard disk running out of space][27]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
As I mentioned, a disk space of 20GB is recommended, and 8GB is the _absolute_ minimum for a Fedora 33 installation to boot up. A fresh installation with no additional software (except the VirtualBox Guest Additions) takes nearly the whole 8GB of available space. Don't open the GNOME Software center or anything else that might download files from the internet in this condition.
Luckily, I chose to use LVM, so I can easily fix this mishap.
To increase the filesystem's space within the virtual machine, you must first increase the virtual hard drive on your host system.
Shut down the virtual machine. If your host system is running Windows, open a command prompt and navigate to `C:\Program Files\Oracle\VirtualBox`. Resize the disk to 12,000MB with the following command:
```
VBoxManage.exe modifyhd "C:\Users\StephanA\VirtualBox VMs\Fedora_33\Fedora_33.vdi" --resize 12000
```
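If your host system runs Linux or macOS instead, `VBoxManage` is normally already on the PATH, so a roughly equivalent command looks like this (the path below is only an example; adjust it to wherever your `.vdi` file lives):
```
# Example path shown; point this at your own virtual disk file
VBoxManage modifyhd "$HOME/VirtualBox VMs/Fedora_33/Fedora_33.vdi" --resize 12000
```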
Boot the virtual machine and open the **Disks** utility. You should see the newly created unassigned free space. Select **Free Space** and click the **+** button:
![Free space before adding][28]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Now, create a new partition. Select the amount of free space you want to use:
![Creating a new partition and setting size][29]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
You don't want to create a filesystem or anything else on your new partition, so select **Other**:
![Selecting "other" for partition volume type][30]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Select **No Filesystem**:
![Setting "No filesystem" on new partition][31]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
The overview should now look like this:
![VirtualBox after adding new partition][32]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
There is a new partition device, **/dev/sda3**. Check your LVM volume group by typing `vgscan`:
![Checking LVM volume group by typing vgscan:][33]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
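The exact wording of the output varies between LVM versions, but on a default Fedora installation, where the volume group is called `fedora_localhost-live`, it looks roughly like this:
```
$ sudo vgscan
  Found volume group "fedora_localhost-live" using metadata type lvm2
```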
Now you have everything you need. Extend the volume group in the new partition:
```
vgextend fedora_localhost-live /dev/sda3
```
![vgextend command output][34]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Because the volume group is larger, you can increase the size of the logical volume. The command `vgdisplay` shows that it has 951 free extents available:
![vgdisplay command output][35]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Increase the logical volume by 951 extents:
```
lvextend -l+951 /dev/mapper/fedora_localhost--live-root
```
![lvextend command output][36]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
After you increase the logical volume, the last thing to do is to resize the filesystem:
```
resize2fs /dev/mapper/fedora_localhost--live-root
```
![resize2fs command output][37]
(Stephan Avenwedde, [CC BY-SA 4.0][8])
Done! Check the **Disk Usage Analyzer**, and you should see that the extended space is available for the filesystem.
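To recap, once the enlarged virtual disk shows up in the guest as a new partition (here **/dev/sda3**), the whole guest-side procedure boils down to these three commands, run as root. The volume group and logical volume names are the Fedora defaults used above and may differ on your system:
```
vgextend fedora_localhost-live /dev/sda3                  # add the new partition to the volume group
lvextend -l+951 /dev/mapper/fedora_localhost--live-root   # grow the logical volume by the free extents
resize2fs /dev/mapper/fedora_localhost--live-root         # grow the filesystem to fill the volume
```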
### Summary
With a virtual machine, you can check how a piece of software behaves with a specific operating system or a specific version of an operating system. Besides that, you can also try out any Linux distribution you want to test without worrying about breaking your system. For advanced users, VirtualBox offers a wide range of possibilities when it comes to testing, networking, and simulation.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/try-linux-virtualbox
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: https://www.virtualbox.org/
[3]: https://getfedora.org/
[4]: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
[5]: https://www.virtualbox.org/wiki/Downloads
[6]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
[7]: https://opensource.com/sites/default/files/uploads/virtualbox_new_vm.png (VirtualBox New VM)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/virtualbox_memory_size_1.png (Set VM memory size)
[10]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_1.png (Create virtual hard disk)
[11]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_2.png (Selecting hard disk file type)
[12]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_3.png (Dynamically allocating hard disk)
[13]: https://getfedora.org/en/workstation/download/
[14]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_4.png (Setting hard disk size)
[15]: https://opensource.com/sites/default/files/uploads/virtualbox_cpu_settings.png (Assigning cores to guest system)
[16]: https://opensource.com/sites/default/files/uploads/virtualbox_network_settings2.png (Network settings)
[17]: https://en.wikipedia.org/wiki/Network_address_translation
[18]: https://www.virtualbox.org/manual/ch06.html#natforward
[19]: https://opensource.com/sites/default/files/uploads/virtualbox_choose_image3.png (Assigning OS image)
[20]: https://opensource.com/sites/default/files/uploads/virtualbox_running.png (VirtualBox Fedora installer)
[21]: https://opensource.com/sites/default/files/uploads/virtualbox_partitioning_1.png (Selecting Custom partition configuration)
[22]: https://opensource.com/article/18/9/swap-space-linux-systems
[23]: https://fedoraproject.org/wiki/Changes/SwapOnZRAM
[24]: https://opensource.com/sites/default/files/uploads/virtualbox_partitioning_2.png (Adding mount points)
[25]: https://opensource.com/sites/default/files/uploads/virtualbox_guest_additions_2.png (Selecting Guest Additions CD image)
[26]: https://opensource.com/sites/default/files/uploads/virtualbox_guest_additions_autorun.png (Enabling Guest Additions autorun)
[27]: https://opensource.com/sites/default/files/uploads/virtualbox_disk_usage_1.png (Fedora hard disk running out of space)
[28]: https://opensource.com/sites/default/files/uploads/virtualbox_disks_before.png (Free space before adding)
[29]: https://opensource.com/sites/default/files/uploads/virtualbox_new_partition_1.png (Creating a new partition and setting size)
[30]: https://opensource.com/sites/default/files/uploads/virtualbox_new_partition_2.png (Selecting "other" for partition volume type)
[31]: https://opensource.com/sites/default/files/uploads/virtualbox_no_partition_3.png (Setting "No filesystem" on new partition)
[32]: https://opensource.com/sites/default/files/uploads/virtualbox_disk_after.png (VirtualBox after adding new partition)
[33]: https://opensource.com/sites/default/files/uploads/virtualbox_vgscan.png (Checking LVM volume group by typing vgscan:)
[34]: https://opensource.com/sites/default/files/uploads/virtualbox_vgextend_2.png (vgextend command output)
[35]: https://opensource.com/sites/default/files/uploads/virtualbox_vgdisplay.png (vgdisplay command output)
[36]: https://opensource.com/sites/default/files/uploads/virtualbox_lvextend.png (lvextend command output)
[37]: https://opensource.com/sites/default/files/uploads/virtualbox_resizefs.png (resize2fs command output)

View File

@ -1,151 +0,0 @@
[#]: subject: "Troubleshooting “Bash: Command Not Found” Error in Linux"
[#]: via: "https://itsfoss.com/bash-command-not-found/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Troubleshooting “Bash: Command Not Found” Error in Linux
======
_**This beginner tutorial shows how to go about fixing the Bash: command not found error on Debian, Ubuntu and other Linux distributions.**_
When you use commands in Linux, you expect to see an output. But sometimes, you'll encounter issues where the terminal shows a "command not found" error.
![][1]
There is no straightforward, single solution to this error. You have to do a little bit of troubleshooting on your own.
It's not too difficult, honestly. The error gives some hint already when it says "bash: command not found". Your shell (or Linux system) cannot find the command you entered.
There could be three possible reasons why it cannot find the command:
* It's a typo and the command name is misspelled
* The command is not even installed
* The command is basically an executable script and its location is not known
Let's go into detail on each possible root cause.
### Fixing “bash: command not found” error
![][2]
#### Method 1: Double check the command name (no, seriously)
It is human to make mistakes, especially while typing. It is possible that the command you entered has a typo (spelling mistake).
You should pay special attention to:
* The correct command name
* The spaces between the command and its options
* The use of 1 (numeral one), I (capital i) and l (lowercase L)
* Use of uppercase and lowercase characters
Take a look at the example below, where I have misspelled the common ls command.
![][3]
So, make double sure what you are typing.
#### Method 2: Ensure that the command is installed on your system
This is another common reason behind the command not found error. You cannot run a command if it is not installed already.
While your Linux distribution comes with a huge number of commands installed by default, it is not possible to pre-install all the command line tools in a system. If the command you are trying to run is not a popular, common command, you'll have to install it first.
You can use your distribution's package manager to install it.
![You may have to install the missing command][4]
In some cases, popular commands get discontinued and may no longer even be available to install. You'll have to find an alternative command to achieve the same result.
Take the example of the ifconfig command. This deprecated command was used for [getting IP address][5] and other network interface information. Older tutorials on the web still mention using this command, but you can no longer rely on it being available in newer Linux versions. It has been replaced by the ip tool.
![Some popular commands get discontinued over the time][1]
Occasionally, your system won't find even extremely common commands. This is often the case when you are running a Linux distribution in Docker containers. To cut down on the size of the operating system image, the containers often do not include even the most common Linux commands.
This is why Docker users stumble across things like the [ping command not found error][6].
![Docker containers often have only a few commands installed][7]
So, the solution is to either install the missing command or find a tool that could do the same thing you were trying to do with the missing command.
#### Method 3: Check if it is an executable script with correct path
This is a common mistake Linux rookies make while [running a shell script][8].
Even if you are in the same directory and try to run an executable script just by its name, it will show an error.
```
user@host:~/scripts# sample
-bash: sample: command not found
```
You need to either specify the shell interpreter explicitly or give the script's path (relative or absolute).
![][9]
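In other words, assuming the script is called `sample` and you are in the directory that contains it, either of these approaches works:
```
bash sample        # run it through the interpreter explicitly
chmod +x sample    # or make it executable once...
./sample           # ...and then run it with an explicit relative path
```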
If you are in some other directory and try to execute the shell script without giving the correct path to the file, it will complain about not finding the file.
![][10]
##### Adding it to the PATH
In some cases, you download software as a tar file, extract it, and find an executable file along with other program files. To run the program, you need to run the executable file.
But for that, you need to be in the same directory or specify the entire path to the executable file. This is tiresome.
Here, you can use the PATH variable. This variable holds a list of directories, and these directories contain the binary (executable) files of various Linux commands. When you run a command, your Linux system checks the directories listed in the PATH variable for the executable file of that command.
You can check the location of the binary of a command by using the `which` command:
![][11]
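For example, on a typical modern distribution:
```
$ which ls
/usr/bin/ls
```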
If you want to run an executable file or script from anywhere on the system, you need to add the location of the file to this PATH variable.
![][12]
The modified PATH variable then needs to be set in your shell's rc file so that the change is permanent.
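For example, to make executable scripts in a hypothetical `~/scripts` directory runnable from anywhere with Bash, you could do something like this:
```
# add the directory to PATH for the current session (assumes you keep scripts in ~/scripts)
export PATH="$PATH:$HOME/scripts"

# make the change permanent by appending the same line to ~/.bashrc
echo 'export PATH="$PATH:$HOME/scripts"' >> ~/.bashrc
source ~/.bashrc
```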
You get the gist here. It is important that your Linux system knows the location of the executable script. Either you give the path while running it, or you add its location to the PATH variable.
### Did it help you?
I understand that when you are new to Linux, things can be overwhelming. But as you come to understand the root cause of a problem, your knowledge gradually improves.
Here, there is no single, straightforward solution to the command not found error. I have given you some hints and pointers, and that should help you troubleshoot.
If you still have doubt or need help, please let me know in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/bash-command-not-found/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error.png?resize=741%2C291&ssl=1
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-command-not-found-error-1.png?resize=800%2C450&ssl=1
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-error.png?resize=723%2C234&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/command-not-found-debian.png?resize=741%2C348&ssl=1
[5]: https://itsfoss.com/check-ip-address-ubuntu/
[6]: https://linuxhandbook.com/ping-command-ubuntu/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ping-command-not-found-ubuntu.png?resize=786%2C367&ssl=1
[8]: https://itsfoss.com/run-shell-script-linux/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/bash-script-command-not-found-error-800x331.png?resize=800%2C331&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/script-file-not-found-error-800x259.png?resize=800%2C259&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/path-location.png?resize=800%2C241&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/adding-executable-to-PATH-variable-linux.png?resize=800%2C313&ssl=1

View File

@ -1,5 +1,5 @@
[#]: subject: "slimmer emacs with kitty"
[#]: via: "https://jao.io/blog/2022-06-08-slimmer-emacs-with-kitty.html"
[#]: via: "https://jao.io/blog/slimmer-emacs-with-kitty.html"
[#]: author: "jao https://jao.io"
[#]: collector: "lujun9972"
[#]: translator: " "
@ -69,7 +69,7 @@ The gist of it is pretty simple though, and it's basically distilled in [this se
--------------------------------------------------------------------------------
via: https://jao.io/blog/2022-06-08-slimmer-emacs-with-kitty.html
via: https://jao.io/blog/slimmer-emacs-with-kitty.html
作者:[jao][a]
选题:[lujun9972][b]
@ -87,8 +87,8 @@ via: https://jao.io/blog/2022-06-08-slimmer-emacs-with-kitty.html
[5]: https://sw.kovidgoyal.net/kitty/
[6]: https://codeberg.org/jao/elibs/src/branch/main/data/kitty.conf
[7]: https://en.wikipedia.org/wiki/HarfBuzz
[8]: tmp.aRLm0IGxe1#fn.1
[9]: tmp.aRLm0IGxe1#fnr.1
[8]: tmp.cmx0w4nr81#fn.1
[9]: tmp.cmx0w4nr81#fnr.1
[10]: https://codeberg.org/jao/elibs/src/main/init.el#L1595
[11]: https://jao.io/blog/tags.html
[12]: https://jao.io/blog/tag-emacs.html

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/6/container-orchestration-kubernetes"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lkxed"
[#]: translator: "Donkey"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -0,0 +1,117 @@
[#]: subject: "simple note taking"
[#]: via: "https://jao.io/blog/2022-06-19-simple-note-taking.html"
[#]: author: "jao https://jao.io"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
simple note taking
======
I was just watching [Prot's explanation][1] of his new package _denote_, a very elegant note-taking system with a stress on simplicity and, as the author puts it, low-tech requirements. Now, those are excellent qualities in my book, and i think i'd quickly become a _denote_ user if it weren't for the fact that i already have a homegrown set of utilities following a similar philosophy. Inevitably, they differ in some details, as is to be expected from software that has grown with me, as Prot's with him, during more than a decade, but they are similar in important ways.
I've had in mind writing a brief note on my notes utilities for a while, so i guess this is a good time for it: i can, after showing you mine, point you to a polished package following a similar philosophy and sidestep any temptation of doing anything similar with my little functions :)
![][2]
As you'll see in a moment, in some ways, my note taking system is even simpler than Prot's, while in others i rely on more sophisticated software, essentially meaning that where _denote_ is happy to use dired and filenames, i am using grep over the front-matter of the notes. So if you loved the filename-as-metadata idea in _denote_, you can skip the rest of this post!
These are the main ideas around which i built my note-taking workflow:
* Personally, i have such a dislike for non-human readable identifiers, that i cannot even stand those 20221234T142312 prefixes (for some reason, i find them quite hard to read and distracting). When i evolved my notes collection, i wanted my files to be named by their title and nothing more. I am also pretty happy to limit myself to org-mode files. So i wanted a directory of (often short) notes with names like `the-lisp-machine.org`, `david-foster-wallace.org` or `combinator-parsing-a-short-tutorial.org`.[1][3]
* I like tags, so all my notes, besides a title, are going to have attached a list of them (_denote_ puts them in the filename and inside the file's headers; i'm content with the latter, because, as you'll see in a moment, i have an easy way of searching through that contents).
* I'm not totally averse to hierarchies: besides tagging, i put my notes in a subdirectory indicating their broad category. I can then quickly narrow my searches to a general _theme_ if needed[2][4].
* As mentioned, i want to be able to search by the title and tag (besides more broadly by contents) of my notes. Since that's all information available in plain text in the files, `grep` and family (via their emacs lovely helpers) are all that is needed; but i can easily go a step further and use other indexers of plain text like, say, recoll (via my [consult-recoll package][5]).
* It must be easy to quickly create notes that link to any contents i'm seeing in my emacs session, be it text, web, pdf, email, or any other. That comes for free thanks to org and `org-capture`.
* I want the code i have to write to accomplish all the above to be short and sweet, let's say well below two hundred lines of code.
Turns out that i was able to write a little emacs lisp library doing all the above, thanks to the magic of org-mode and consult: you can find it over at my repo by the name of [jao-org-notes.el][6]. The implementation is quite simple and is based on having all note files in a parent directory (`jao-org-notes-dir`) with a subfolder for each of the main top-level categories, and, inside each of them, note files in org mode with a preamble that has the structure of this example:
```
#+title: magit tips
#+date: <2021-07-22 Thu>
#+filetags: git tips
```
The header above corresponds to the note in the file `emacs/magit-tips.org`. Now, it's very easy to write a new command to ask for a top-level category and a list of tags and insert a header like that in a new file: it's called [jao-org-notes-open-or-create][7] in my little lib, and with it one can define a new org template:
```
("N" "Note" plain (file jao-org-notes-open-or-create)
"\n- %a\n %i"
:jump-to-captured t)
```
that one can then add to `org-capture-templates` (above, i'm using "N" as its shortcut; in the package, this is done by [jao-org-notes-setup][8], which takes the desired shortcut as a parameter). I maintain a simple list of possible tags in the variable `jao-org-notes--tags`, whose value is persisted in the file denoted by the value `jao-org-notes-tags-cache-file`, so that we can remember newly-added tags; with that and the magic of emacs's completing read, handling tags is a breeze.
Now for search. These are text files, so if i want to search for contents, i just need grepping, for instance with `M-x rgrep` or, even better, `M-x consult-ripgrep`. That is what the command [jao-org-notes-grep][9] does.
But it is also very useful to be able to limit searches to the title and tags of the notes: that's what the command [jao-org-notes-open][10] does using `consult` and `ripgrep` by the very simple device of searching for regular expressions in the first few lines of each file that start with either `#+title:` or `#+filetags:` followed by the terms we're looking for. That's something one could already do with `rgrep` alone; what consult adds to the party is the ability of displaying the matching results nicely formatted:
![][11]
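Outside Emacs, a roughly equivalent query can be issued directly with ripgrep; this is only an illustrative sketch (the notes directory and the search term are made up):

```
# hypothetical notes directory; "magit" is just an example search term
rg --ignore-case --files-with-matches '^#\+(title|filetags):.*magit' ~/org/notes
```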
Links between notes are simply org `file:` links, and having a simple "backlinks" command is, well, simple if you don't want anything fancy[3][12]. A command to insert a new link to another note is so boring to almost not meriting mention (okay, almost: [jao-org-notes-insert-link][13]).
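For reference, such a link is plain org syntax, something along the lines of `[[file:emacs/magit-tips.org][magit tips]]` (the path here just reuses the earlier example).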
And that's about it. With those simple commands and in about 160 lines of code i find myself comfortably managing plain text notes, and easily finding contents within them. I add a bit of icing by asking [Recoll][14] to index my notes directory (as well as my email and PDFs): it is clever enough to parse org files, and give you back pointers to the sections in the files, and then issue queries with the comfort of a consult asynchronous command thanks to [consult-recoll][5] (the screenshot in the introduction is just me using it). It's a nice use case of how having little, uncomplicated packages that don't try to be too sophisticated and center on the functionality one really needs makes it very easy to combine solutions in beautiful ways[4][15].
### Footnotes:
[1][16]
I also hate with a passion those `:PROPERTIES:` drawers and other metadata embellishments so often used in org files, and wanted to avoid them as much as possible, so i settled with the only mildly annoying `#+title` and friends at the beginning of the files and nothing more. The usual caveat that that makes it more difficult to have unique names has proven a non-problem to me over the years.
[2][17]
Currently i use `work`, `books`, `computers`, `emacs`, `letters`, `maths`, and `physics`: as you see, i am not making a great effort on finding the perfect ontology of all knowledge; rather, i just use the first broad breakdown of the themes that interest me most at the moment.
[3][18]
Just look for the regular expression matching "[[file:" followed by the name of the current file. I find myself seldom needing this apparently very popular functionality, but it should be pretty easy to present the search results in a separate buffer if needed.
[4][19]
Another example would be how easy it becomes to incorporate web contents nicely formatted as text when one uses eww as a browser. Or how seamless it is taking notes on PDFs one's reading in emacs, or even externally in zathura (that's for a future blog post though! :)).
[Tags][20]: [emacs][21]
--------------------------------------------------------------------------------
via: https://jao.io/blog/2022-06-19-simple-note-taking.html
作者:[jao][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jao.io
[b]: https://github.com/lujun9972
[1]: https://www.youtube.com/watch?v=mLzFJcLpDFI
[2]: https://jao.io/img/consult-recoll.png
[3]: tmp.xKXexbfGmW#fn.1
[4]: tmp.xKXexbfGmW#fn.2
[5]: https://codeberg.org/jao/consult-recoll
[6]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el
[7]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L133
[8]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L174
[9]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L143
[10]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L126
[11]: https://jao.io/img/org-notes.png
[12]: tmp.xKXexbfGmW#fn.3
[13]: https://codeberg.org/jao/elibs/src/main/lib/doc/jao-org-notes.el#L161
[14]: https://www.lesbonscomptes.com/recoll/
[15]: tmp.xKXexbfGmW#fn.4
[16]: tmp.xKXexbfGmW#fnr.1
[17]: tmp.xKXexbfGmW#fnr.2
[18]: tmp.xKXexbfGmW#fnr.3
[19]: tmp.xKXexbfGmW#fnr.4
[20]: https://jao.io/blog/tags.html
[21]: https://jao.io/blog/tag-emacs.html

View File

@ -1,525 +0,0 @@
[#]: subject: "Python Microservices Using Flask on Kubernetes"
[#]: via: "https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Python Microservices Using Flask on Kubernetes
======
*Microservices follow domain driven design (DDD), irrespective of the platform on which they are developed. Python microservices are not an exception. The object-oriented features of Python 3 make it even easier to model services on the lines of DDD. Part 10 of this series demonstrates how to deploy FindService of user management system as a Python microservice on Kubernetes.*
The power of microservices architecture lies partly in its polyglot nature. The enterprises decompose their functionality into a set of microservices and each team chooses a platform of its choice.
Our user management system was already decomposed into four microservices, namely, AddService, FindService, SearchService and JournalService. The AddService was developed on the Java platform and deployed on the Kubernetes cluster for resilience and scalability. This doesn't mean that the remaining services are also developed on Java. We are free to choose a suitable platform for individual services.
Let us pick Python as the platform for developing the FindService. The model for FindService has already been designed (refer to the March 2022 issue of Open Source For You). We only need to convert this model into code and configuration.
### The Pythonic approach
Python is a general-purpose programming language that has been around for about three decades. In the early days, it was the first choice for automation scripting. However, with frameworks like Django and Flask, its popularity has been growing and it is now adopted in various other domains like enterprise application development. Data science and machine learning pushed the envelope further and Python is now one of the top three programming languages.
Many people attribute the success of Python to its ease of coding. This is only partially true. As long as your objective is to develop small scripts, Python is just like a toy. You love it. However, the moment you enter the domain of serious large-scale application development, you will have to handle lots of ifs and buts, and Python becomes as good as or as bad as any other platform. For example, take an object-oriented approach! Many Python developers may not even be aware of the fact that Python supports classes, inheritance, etc. Python does support full-blown object-oriented development, but in its own way — the Pythonic way! Let us explore it!
### The domain model
AddService adds the users to the system by saving the data into a MySQL database. The objective of the FindService is to offer a REST API to find the users by their name. The domain model is reproduced in Figure 1. It primarily consists of value objects like Name, PhoneNumber along with User entity and UserRepository.
![Figure 1: The domain model of FindService][1]
Let us begin with the Name. Since it is a value object, it must be validated by the time it is created and must remain immutable. The basic structure looks like this:
```
class Name:
    value: str
    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
As you can see, the Name consists of a value attribute of type str. As part of the post initialization, the value is validated.
Python 3.7 offers the @dataclass decorator, which provides many features of a data-carrying class out of the box, such as constructors and comparison operators.
Following is the decorated Name:
```
from dataclasses import dataclass
@dataclass
class Name:
    value: str
    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
The following code can create an object Name:
```
name = Name("Krishna")
```
The value attribute can be read or written as follows:
```
name.value = "Mohan"
print(name.value)
```
Another Name object can be compared as easily as follows:
```
other = Name("Mohan")
if name == other: print("same")
```
As you can see, the objects are compared by their values, not by their references. This is all out-of-the-box. We can also make the object immutable, by freezing. Here is the final version of Name as a value object:
```
from dataclasses import dataclass
@dataclass(frozen=True)
class Name:
    value: str
    def __post_init__(self):
        if self.value is None or len(self.value.strip()) < 8 or len(self.value.strip()) > 32:
            raise ValueError("Invalid Name")
```
The PhoneNumber also follows a similar approach, as it is also a value object:
```
@dataclass(frozen=True)
class PhoneNumber:
    value: int
    def __post_init__(self):
        if self.value < 9000000000:
            raise ValueError("Invalid Phone Number")
```
The User class is an entity, not a value object. In other words, the User is not immutable. Here is the structure:
```
from dataclasses import dataclass
import datetime
@dataclass
class User:
    _name: Name
    _phone: PhoneNumber
    _since: datetime.datetime
    def __post_init__(self):
        if self._name is None or self._phone is None:
            raise ValueError("Invalid user")
        if self._since is None:
            self._since = datetime.datetime.now()
```
You can observe that the User is not frozen as we want it to be mutable. However, we do not want all the properties mutable. The identity field like _name and fields like _since are not expected to be modified. Then, how can this be controlled?
Python 3 offers the so-called Descriptor protocol. It helps us in defining the getters and setters appropriately. Let us add the getters to all the three fields of User using the @property decorator:
```
@property
def name(self) -> Name:
    return self._name

@property
def phone(self) -> PhoneNumber:
    return self._phone

@property
def since(self) -> datetime.datetime:
    return self._since
```
And the setter for the phone field can be added using @<field>.setter decoration:
```
@phone.setter
def phone(self, phone: PhoneNumber) -> None:
    if phone is None:
        raise ValueError("Invalid phone")
    self._phone = phone
```
The User can also be given a method for easy print representation by overriding the __str__() function:
```
def __str__(self):
    return self.name.value + " [" + str(self.phone.value) + "] since " + str(self.since)
```
With this the entities and the value objects of the domain model are ready. Creating an exception class is as easy as follows:
```
class UserNotFoundException(Exception):
    pass
```
The only other one remaining in the domain model is the UserRepository. Python offers a useful module called abc for building abstract methods and abstract classes. Since UserRepository is only an interface, we can use the abc package.
Any Python class that extends abc.ABC becomes abstract. Any function with the @abc.abstractmethod decorator becomes an abstract function. Here is the resultant structure of UserRepository:
```
from abc import ABC, abstractmethod
class UserRepository(ABC):
    @abstractmethod
    def fetch(self, name: Name) -> User:
        pass
```
The UserRepository follows the repository pattern. In other words, it offers appropriate CRUD operations on the User entity without exposing the underlying data storage semantics. In our case, we need only fetch() operation since FindService only finds the users.
Since the UserRepository is an abstract class, we cannot create instance objects from this class. A concrete class must implement this abstract class for object creation.
### The data layer
The UserRepositoryImpl offers the concrete implementation of UserRepository:
```
class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User: pass
```
Since the AddService stores the data of the users in a MySQL database server, the UserRepositoryImpl also must connect to the same database server to retrieve the data. The code for connecting to the database is given below. Observe that we are using the connector library of MySQL.
```
from mysql.connector import connect, Error
```
```
class UserRepositoryImpl(UserRepository):
    def fetch(self, name: Name) -> User:
        try:
            with connect(
                host="mysqldb",
                user="root",
                password="admin",
                database="glarimy",
            ) as connection:
                with connection.cursor() as cursor:
                    cursor.execute("SELECT * FROM ums_users where name=%s", (name.value,))
                    row = cursor.fetchone()
                    if cursor.rowcount == -1:
                        raise UserNotFoundException()
                    else:
                        return User(Name(row[0]), PhoneNumber(row[1]), row[2])
        except Error as e:
            raise e
```
In the above snippet, we are connecting to a database server named mysqldb using root as the user and admin as the password, in order to use the database schema named glarimy. It is fine to have such information in the code for an illustration, but surely not a suggested approach in production as it exposes sensitive information.
The logic of the fetch() operation is quite intuitive. It executes a SELECT query against the ums_users table. Recollect that the AddService was writing the user data into the same table. In case the SELECT query returns no records, the fetch() function throws UserNotFoundException. Otherwise, it constructs the User entity from the record and returns it to the caller. Nothing unusual.
### The application layer
And finally, we need to build the application layer. The model is reproduced in Figure 2. It consists of just two classes: a controller and a DTO.
![Figure 2: The application layer of FindService][2]
As we already know, a DTO is just a data container without any business logic. It is primarily used for carrying data between the FindService and the outside world. We just offer a way to convert the UserRecord into a dictionary for JSON marshalling in the REST layer:
```
class UserRecord:
    def toJSON(self):
        return {
            "name": self.name,
            "phone": self.phone,
            "since": self.since
        }
```
The job of a controller is to convert DTOs to domain objects for invoking the domain services and vice versa, as can be observed in the find() operation.
```
class UserController:
    def __init__(self):
        self._repo = UserRepositoryImpl()
    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException as e:
            return None
```
The find() operation receives a name as a string, converts it into a Name object, and invokes the UserRepository to fetch the corresponding User object. If it is found, a UserRecord is created from the retrieved User object. Recollect that it is necessary to convert the domain objects into DTOs to hide the domain model from the external world.
The UserController need not have multiple instances. It can be a singleton as well. By overriding the __new__ operation, it can be modelled as a singleton:
```
class UserController:
    def __new__(self):
        if not hasattr(self, 'instance'):
            self.instance = super().__new__(self)
        return self.instance
    def __init__(self):
        self._repo = UserRepositoryImpl()
    def find(self, name: str):
        try:
            user: User = self._repo.fetch(Name(name))
            record: UserRecord = UserRecord()
            record.name = user.name.value
            record.phone = user.phone.value
            record.since = user.since
            return record
        except UserNotFoundException as e:
            return None
```
We are done with implementing the model of the FindService fully. The only task remaining is to expose it as a REST service.
### The REST API
Our FindService offers only one API and that is to find a user by name. The obvious URI is as follows:
```
GET /user/{name}
```
This API is expected to find the user with the supplied name and return the user's details, such as the phone number, in JSON format. In case no such user is found, the API is expected to return a 404 status.
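For illustration, a successful call and its JSON body might look roughly like this (the host, port, and field values are made up; the actual output depends on the data added earlier through the AddService):

```
$ curl -i http://localhost:8080/user/KrishnaMohan
HTTP/1.1 200 OK
Content-Type: application/json

{"name": "KrishnaMohan", "phone": 9848012345, "since": "Tue, 01 Mar 2022 10:20:30 GMT"}
```

A request for a name that does not exist would instead come back with a 404 status line.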
We can use the Flask framework to build the REST API. This framework was originally built for developing Web applications using Python. It is further extended to support the REST views besides the HTML views. We pick this framework for its simplicity.
Start by creating a Flask application:
```
from flask import Flask
app = Flask(__name__)
```
Then define the routes for the Flask application as simple functions:
```
@app.route("/user/<name>")
def get(name): pass
```
Observe that the @app.route is mapped to the API /user/<name> on one side and to the function get() on the other side.
As you may already have figured out, this get() function will be invoked every time the user accesses the API with a URI like http://server:port/user/Krishna. Flask is intelligent enough to extract Krishna as the name from the URI and pass it to the get() function.
The get() function is straightforward. It asks the controller to find the user and returns the same after marshalling it to the JSON format along with the usual HTTP headers. In case the controller returns None, the get() function returns the response with an appropriate HTTP status.
```
from flask import jsonify, abort
controller = UserController()
record = controller.find(name)
if record is None:
    abort(404)
else:
    resp = jsonify(record.toJSON())
    resp.status_code = 200
    return resp
```
And, finally, the Flask app needs to be served. We can use the waitress server for this purpose.
```
from waitress import serve
serve(app, host="0.0.0.0", port=8080)
```
In the above snippet, the app is served on port 8080 on all network interfaces (0.0.0.0).
The final code looks like this:
```
from flask import Flask, jsonify, abort
from waitress import serve
app = Flask(__name__)
@app.route("/user/<name>")
def get(name):
    controller = UserController()
    record = controller.find(name)
    if record is None:
        abort(404)
    else:
        resp = jsonify(record.toJSON())
        resp.status_code = 200
        return resp
serve(app, host="0.0.0.0", port=8080)
```
### Deployment
The FindService is now ready with code. It has its domain model, data layer, and application layer besides the REST API. The next step is to build the service, containerise it, and then deploy it on Kubernetes. The process is in no way different from any other service, though there are some Python-specific steps.
Before proceeding further, let us have a look at the folder/file structure:
```
+ ums-find-service
+ ums
- domain.py
- data.py
- app.py
- Dockerfile
- requirements.txt
- kube-find-deployment.yml
```
As you can observe, the whole work is under the ums-find-service folder. It contains the code in the ums folder and configurations in Dockerfile, requirements.txt and kube-find-deployment.yml files.
The domain.py consists of the domain model classes, data.py consists of UserRepositoryImpl and app.py consists of the rest of the code. Since we have seen the code already, let us move on to the configuration files.
The first one is the file named requirements.txt. It declares the external dependencies for the Python system to download and install. We need to populate it with every external Python module that we use in the FindService. As you know, we used MySQL connector, Flask and Waitress modules. Hence the following is the content of the requirements.txt.
```
Flask==2.1.1
Flask_RESTful
mysql-connector-python
waitress
```
The second step is to declare the containerisation manifest in the Dockerfile. Here it is:
```
FROM python:3.8-slim-buster
WORKDIR /ums
ADD ums /ums
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 8080
ENTRYPOINT ["python"]
CMD ["/ums/app.py"]
```
In summary, we used Python 3.8 as the baseline and moved our code from the ums folder to a corresponding folder in the Docker container, besides moving the requirements.txt. Then we instructed the container to run the pip3 install command. And, finally, we exposed port 8080 (since Waitress listens on that port).
In order to run the service, we instructed the container to use the following command:
```
python /ums/app.py
```
Once the Dockerfile is ready, run the following command from within the ums-find-service folder to create the Dockerised image:
```
docker build -t glarimy/ums-find-service .
```
It creates the Docker image, which can be found using the following command:
```
docker images
```
Push the image to Docker Hub as appropriate, logging in to Docker first if needed:
```
docker login
docker push glarimy/ums-find-service
```
And the last step is to build the manifest for the Kubernetes deployment.
We have already covered the way to set up the Kubernetes cluster, and deploy and use services, in the previous part. I am assuming that we still have the manifest file we used in the previous part for deploying the AddService, MySQL, Kafka and Zookeeper. We only need to add the following entries into the kube-find-deployment.yml file:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ums-find-service
  labels:
    app: ums-find-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ums-find-service
  template:
    metadata:
      labels:
        app: ums-find-service
    spec:
      containers:
        - name: ums-find-service
          image: glarimy/ums-find-service
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ums-find-service
  labels:
    name: ums-find-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: ums-find-service
```
The first part of the above manifest declares a deployment of FindService from the image glarimy/ums-find-service with three replicas. It also exposes port 8080. And the latter part of the manifest declares a Kubernetes service as the front-end for the FindService deployment. Recollect that the MySQL service in the name of mysqldb was already part of the above manifest from the previous part.
Run the following command to deploy the manifest on a Kubernetes cluster:
```
kubectl create -f kube-find-deployment.yml
```
Once the deployment is finished, you can verify the pods and services using the following command:
```
kubectl get services
```
It gives an output as shown in Figure 3.
![Figure 3: Kubernetes services][3]
It lists all the services running on the cluster. Make a note of the external-ip of the FindService and use the curl command to invoke the service:
```
curl http://10.98.45.187:8080/user/KrishnaMohan
```
Note that the IP address 10.98.45.187 corresponds to the FindService, as found in Figure 3.
If we have used the AddService to create a user named KrishnaMohan, the output of the above curl command looks like what is shown in Figure 4.
![Figure 4: FindService][4]
With the AddService and FindService, along with the required back-end services for storage and messaging, the architecture of the user management system (UMS) stands as shown in Figure 5, at this point. You can observe that the end-user uses the IP address of the ums-add-service for adding a new user, whereas it uses the IP address of the ums-find-service for finding existing users. Each of these Kubernetes services is backed by three pods running the corresponding containers. Also note that the same mysqldb service is used for storing and retrieving the user data.
![Figure 5: UMS with AddService and FindService][5]
### What about the other services?
The UMS consists of two more services, namely, SearchService and JournalService. We will design these services in the next part of this series on the Node platform, and deploy them on the same Kubernetes cluster to demonstrate the real power of polyglot microservices architecture. We will conclude by observing some design patterns pertaining to microservices.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/
作者:[Krishna Mohan Koyya][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/krishna-mohan-koyya/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-The-domain-model-of-FindService-1.png
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-2-The-application-layer-of-FindService.png
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-3-Kubernetes-services-1.png
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-4-FindService.png
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-5-UMS-with-AddService-and-FindService.png

View File

@ -1,323 +0,0 @@
[#]: subject: "How To Find Default Gateway IP Address In Linux And Unix From Commandline"
[#]: via: "https://ostechnix.com/find-default-gateway-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Find Default Gateway IP Address In Linux And Unix From Commandline
======
5 Ways To Find Gateway Or Router IP Address In Linux
A **gateway** is a node or a router that allows hosts on different networks to communicate with each other. Without a gateway, devices on your local network won't be able to reach hosts outside that network. To put this another way, the gateway acts as an access point that passes network data from a local network to a remote network. In this guide, we will see all the possible ways to **find the default gateway in Linux** and **Unix** from the command line.
#### Contents
* Find Default Gateway In Linux
  * 1. Find Default Gateway Using ip Command
  * 2. Display Default Gateway IP Address Using route Command
  * 3. View Gateway IP Address Using netstat Command
  * 4. Print Default Gateway IP Address Or Router IP Address Using routel Command
  * 5. Find Gateway From Ethernet Configuration Files
* Conclusion
### Find Default Gateway In Linux
There are various command line tools available to view the gateway IP address in Linux. The most commonly used tools are **ip**, **route**, and **netstat**. We will see how to check the default gateway using each tool, with examples.
#### 1. Find Default Gateway Using ip Command
The **ip** command is used to show and manipulate routing, network devices, interfaces and tunnels in Linux.
To find the default gateway or Router IP address, simply run:
```
$ ip route
```
Or,
```
$ ip r
```
Or,
```
$ ip route show
```
**Sample output:**
```
default via 192.168.1.101 dev eth0 proto static metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.20 metric 100
```
Did you see the line **"default via 192.168.1.101"** in the above output? This is the default gateway. So my default gateway is **192.168.1.101**.
You can use **-4** with `ip route` command to **display the IPv4 gateway** only:
```
$ ip -4 route
```
And, use `-6` to **display the IPv6 gateway** only:
```
$ ip -6 route
```
As you noticed in the output, the IP address and the subnet details are also shown. If you want to display ONLY the default gateway and exclude all other details from the output, you can use `awk` command with `ip route` like below.
To print the default gateway IP address with the `ip route` and `awk` commands, run:
```
$ ip route | awk '/^default/{print $3}'
```
Or,
```
$ ip route show default | awk '{print $3}'
```
This will list only the gateway.
**Sample output:**
```
192.168.1.101
```
![Find Default Gateway Using ip Command][1]
You can also use **[grep][2]** command with `ip route` to filter the default gateway.
```
$ ip route | grep default
default via 192.168.1.101 dev eth0 proto static metric 100
```
`ip route` is the recommended command to find the default gateway IP address in the latest Linux distributions. However, some of you may still be using legacy tools like **route** and `netstat`. Old habits die hard, right? The following sections explain how to determine the gateway in Linux using the `route` and `netstat` commands.
#### 2. Display Default Gateway IP Address Using route Command
The **route** command is used to show and manipulate routing table in older Linux distributions, for example RHEL 6, CentOS 6.
If you're using those older Linux distributions, you can use the `route` command to display the default gateway.
Please note that the `route` tool is deprecated and replaced with `ip route` command in the latest Linux distributions. If you still want to use `route` for any reason, you need to install it.
First, we need to check which package provides `route` command. To do so, run the following command on your RHEL-based system:
```
$ dnf provides route
```
**Sample output:**
```
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : @System
Matched from:
Filename : /usr/sbin/route
net-tools-2.0-0.52.20160912git.el8.x86_64 : Basic networking tools
Repo : baseos
Matched from:
Filename : /usr/sbin/route
```
As you can see in the above output, the net-tools package provides the `route` command. So, let us install it using command:
```
$ sudo dnf install net-tools
```
Now, run `route` command with `-n` flag to display the gateway IP address or router IP address in your Linux system:
```
$ route -n
```
**Sample output:**
```
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
```
![Display Default Gateway IP Address Using route Command][3]
As you see in the above output, the gateway IP address is 192.168.1.101. You will also see the two letters **"UG"** under the Flags column. The letter **"U"** indicates that the interface is **up**, and **"G"** stands for gateway.
#### 3. View Gateway IP Address Using netstat Command
**Netstat** prints information about the Linux networking subsystem. Using netstat tool, we can print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships in Linux and Unix systems.
Netstat is part of net-tools package, so make sure you've installed it in your Linux system. The following commands install net-tools package in RHEL-based systems:
```
$ sudo dnf install net-tools
```
To print the default gateway IP address using `netstat` command, run:
```
$ netstat -rn
```
**Sample output:**
```
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.1.101 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
```
![View Gateway IP Address Using netstat Command][4]
The `netstat` command's output is the same as the `route` command's output. As per the above output, the gateway IP address is 192.168.1.101; the "U" flag means the interface associated with the gateway is up, and "G" marks it as the gateway.
Please note that `netstat` is also deprecated and it is recommended to use **"ss"** command instead of netstat.
#### 4. Print Default Gateway IP Address Or Router IP Address Using routel Command
The **routel** command is a script that lists routes in a pretty output format, one that some might consider easier to interpret than the equivalent `ip route` listing.
The routel script ships with the iproute2 package on most distributions.
To print the default gateway or router IP address, run the routel script without any flags, like below:
```
$ routel
```
**Sample output:**
```
target gateway source proto scope dev tbl
default 192.168.1.101 static eth0
172.17.0.0/ 16 172.17.0.1 kernel linkdocker0
192.168.1.0/ 24 192.168.1.20 kernel link eth0
127.0.0.0/ 8 local 127.0.0.1 kernel host lo local
127.0.0.1 local 127.0.0.1 kernel host lo local
127.255.255.255 broadcast 127.0.0.1 kernel link lo local
172.17.0.1 local 172.17.0.1 kernel hostdocker0 local
172.17.255.255 broadcast 172.17.0.1 kernel linkdocker0 local
192.168.1.20 local 192.168.1.20 kernel host eth0 local
192.168.1.255 broadcast 192.168.1.20 kernel link eth0 local
::1 kernel lo
::/ 96 unreachable lo
::ffff:0.0.0.0/ 96 unreachable lo
2002:a00::/ 24 unreachable lo
2002:7f00::/ 24 unreachable lo
2002:a9fe::/ 32 unreachable lo
2002:ac10::/ 28 unreachable lo
2002:c0a8::/ 32 unreachable lo
2002:e000::/ 19 unreachable lo
3ffe:ffff::/ 32 unreachable lo
fe80::/ 64 kernel eth0
::1 local kernel lo local
fe80::d085:cff:fec7:c1c3 local kernel eth0 local
```
![Print Default Gateway IP Address Or Router IP Address Using routel Command][5]
To print only the default gateway, run routel with `grep` like below:
```
$ routel | grep default
default 192.168.1.101 static eth0
```
#### 5. Find Gateway From Ethernet Configuration Files
If you have **[configured static IP address in your Linux or Unix][6]** system, you can view the default gateway or router IP address by looking at the network configuration files.
In RPM-based systems like Fedora, RHEL, CentOS, AlmaLinux and Rocky Linux, the network interface card (**NIC**) configuration files are stored under the **/etc/sysconfig/network-scripts/** directory.
Find the name of the network card:
```
# ip link show
```
**Sample output:**
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d2:85:0c:c7:c1:c3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
```
The network card name is **eth0**. So let us open the configuration file of this NIC:
```
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
```
**Sample output:**
```
DEVICE=eth0
ONBOOT=yes
UUID=eb6b6a7c-37f5-11ed-a59a-a0e70bdf3dfb
BOOTPROTO=none
IPADDR=192.168.1.20
NETMASK=255.255.255.0
GATEWAY=192.168.1.101
DNS1=8.8.8.8
```
As you see above, the gateway IP is `192.168.1.101`.
In Debian, Ubuntu and its derivatives, all network configuration files are stored under **/etc/network/** directory.
```
$ cat /etc/network/interfaces
```
**Sample output:**
```
auto ens18
iface ens18 inet static
address 192.168.1.150
netmask 255.255.255.0
gateway 192.168.1.101
dns-nameservers 8.8.8.8
```
Please note that this method works only if the IP address is configured manually. For a DHCP-enabled network, you need to follow one of the previous four methods.
### Conclusion
In this guide, we listed five different ways to find the default gateway in Linux and Unix operating systems. We have also included sample commands to display the gateway/router IP address for each method. Hope this helps.
--------------------------------------------------------------------------------
via: https://ostechnix.com/find-default-gateway-linux/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/wp-content/uploads/2022/09/Find-Default-Gateway-Using-ip-Command.png
[2]: https://ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
[3]: https://ostechnix.com/wp-content/uploads/2022/09/Display-Default-Gateway-IP-Address-Using-route-Command.png
[4]: https://ostechnix.com/wp-content/uploads/2022/09/View-Gateway-IP-Address-Using-netstat-Command.png
[5]: https://ostechnix.com/wp-content/uploads/2022/09/Print-Default-Gateway-IP-Address-Or-Router-IP-Address-Using-routel-Command.png
[6]: https://ostechnix.com/configure-static-ip-address-linux-unix/

View File

@ -1,185 +0,0 @@
[#]: subject: "PyLint: The good, the bad, and the ugly"
[#]: via: "https://opensource.com/article/22/9/pylint-good-bad-ugly"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
PyLint: The good, the bad, and the ugly
======
Get the most out of PyLint.
![Python programming language logo with question marks][1]
Image by: Opensource.com
Hot take: PyLint is actually good!
"PyLint can save your life" is an exaggeration, but not as much as you might think! PyLint can keep you from really really hard to find and complicated bugs. At worst, it can save you the time of a test run. At best, it can help you avoid complicated production mistakes.
### The good
I'm embarrassed to say how common this can be. Naming tests is perpetually *weird*: Nothing cares about the name, and there's often not a natural name to be found. For instance, look at this code:
```
def test_add_small():
    # Math, am I right?
    assert 1 + 1 == 3
   
def test_add_large():
    assert 5 + 6 == 11
   
def test_add_small():
    assert 1 + 10 == 11
```
The test works:
```
collected 2 items                                                                        
test.py ..
2 passed
```
In reality, these files can be hundreds of lines long, and the person adding the new test might not be aware of all the names. Unless someone is looking at test output carefully, everything looks fine.
Worst of all, the *addition of the overriding test*, the *breakage of the overridden test*, and the *problem that results in prod* might be separated by days, months, or even years.
### PyLint finds it
But like a good friend, PyLint is there for you.
```
test.py:8:0: E0102: function already defined line 1
     (function-redefined)
```
### The bad
Like a 90s sitcom, the more you get into PyLint, the more it becomes problematic. This is completely reasonable code for an inventory modeling program:
```
"""Inventory abstractions"""
import attrs
@attrs.define
class Laptop:
    """A laptop"""
    ident: str
    cpu: str
```
It seems that PyLint has opinions (probably formed in the 90s) and is not afraid to state them as facts:
```
$ pylint laptop.py | sed -n '/^laptop/s/[^ ]*: //p'
R0903: Too few public methods (0/2) (too-few-public-methods)
```
### The ugly
Ever wanted to add your own unvetted opinion to a tool used by millions? PyLint has 12 million monthly downloads.
> "People will just disable the whole check if it's too picky." —PyLint issue 6987, July 3rd, 2022
The attitude it takes towards adding a check with potentially many false positives is...*"eh."*
### Making it work for you
PyLint is fine, but you need to interact with it carefully. Here are the three things I recommend to make PyLint work for you.
#### 1. Pin it
Pin the PyLint version you use to avoid any surprises!
In your `.toml` file:
```
[project.optional-dependencies]
pylint = ["pylint"]
```
In your code:
```
from unittest import mock
```
This corresponds with code like this:
```
# noxfile.py
...
@nox.session(python=VERSIONS[-1])
def refresh_deps(session):
    """Refresh the requirements-*.txt files"""
    session.install("pip-tools")
    for deps in [..., "pylint"]:
        session.run(
            "pip-compile",
            "--extra",
            deps,
            "pyproject.toml",
            "--output-file",
            f"requirements-{deps}.txt",
        )
```
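A run of that session leaves the exact versions recorded in `requirements-pylint.txt`; the generated file pins everything along these lines (the version numbers below are only illustrative):

```
# generated by pip-compile (illustrative excerpt)
astroid==2.12.10
    # via pylint
pylint==2.15.3
```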
#### 2. Default deny
Disable all checks. Then enable ones that you think have a high value-to-false-positive ratio. (Not just false-negative-to-false-positive ratio!)
```
# noxfile.py
...
@nox.session(python="3.10")
def lint(session):
    files = ["src/", "noxfile.py"]
    session.install("-r", "requirements-pylint.txt")
    session.install("-e", ".")
    session.run(
        "pylint",
        "--disable=all",
        *(f"--enable={checker}" for checker in checkers),
        "src",
    )
```
#### 3. Checkers
These are some of the checkers I like. They enforce consistency in the project and avoid some obvious mistakes.
```
checkers = [
    "missing-class-docstring",
    "missing-function-docstring",
    "missing-module-docstring",
    "function-redefined",
]
```
### Using PyLint
You can take just the good parts of PyLint. Run it in CI to keep consistency, and use the highest value checkers.
Lose the bad parts: Default deny checkers.
Avoid the ugly parts: Pin the version to avoid surprises.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/pylint-good-bad-ugly
作者:[Moshe Zadka][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/python_programming_question.png

View File

@ -0,0 +1,382 @@
[#]: subject: "14 Best Open Source WYSIWYG HTML Editors"
[#]: via: "https://itsfoss.com/open-source-wysiwyg-editors/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
14 Best Open Source WYSIWYG HTML Editors
======
WYSIWYG (What You See Is What You Get) editors are self-explanatory. Whatever you see while editing is exactly what the reader or user sees.
Whether you want to build your own content management system or aim to provide an editor to the end user of your application, an open-source WYSIWYG editor will help provide a secure, modern, and scalable experience. Of course, you also get the technical freedom to customize open-source WYSIWYG editors to meet your requirements.
Here, we look at some of the best open-source WYSIWYG editors.
### Things to Look For When Choosing a WYSIWYG HTML Editor
![best open source wysiwyg editors][1]
Some users need a document editor to be fast; others want it loaded with features.
Similarly, what are some of the key highlights that you should look at when selecting an HTML editor? Let me give you some pointers here:
* Is the editor lightweight?
* Does it have SEO-friendly features?
* How well does it let you collaborate?
* Does it offer auto-save functionality?
* Can you check spelling and grammar with it?
* How well does it handle images/galleries?
When selecting an open-source HTML editor for your app or website, you should look for these essential aspects.
Keeping these in mind, let me mention some of the best options to try.
**Note:** *The editors are in no particular order of ranking. You may choose the best for your use case.*
Table of Contents
* Things to Look For When Choosing a WYSIWYG HTML Editor
* 1. CKEditor
* 2. Froala
* 3. TinyMCE
* 4. Quilljs
* 5. Aloha Editor
* 6. Editor.js
* 7. Trix
* 8. Summernote
* 9. ContentTools
* 10. Toast UI Editor
* 11. Jodit
* 12. SCEditor
* 13. SunEditor
* 14. ProseMirror
* Picking The Best Open-Source WYSIWYG Editor
### 1. CKEditor
![ck5 editor][2]
#### Key Features:
* Autosave.
* Drag and drop support.
* Responsive images.
* Supports pasting from Word/GDocs while preserving the formatting.
* Autoformatting, HTML/Markdown support, Font Style customization.
* Image alt text.
* Real-time Collaboration (Premium only).
* Revision History (Premium only).
* Spell and grammar check (Premium only).
CKEditor 5 is a feature-rich and open-source WYSIWYG editing solution with great flexibility. The user interface looks modern. Hence, you may expect a modern user experience.
It offers a free edition and a premium plan with extra features. CKEditor is a popular option among enterprises and several publications with a custom Content Management System (CMS), for which they provide technical support and custom deployment options.
CKEditor's free edition should provide basic editing capabilities if you do not need an enterprise-grade offering. Check out its [GitHub page][3] to explore more.
[CKEditor 5][4]
### 2. Froala
![froala][5]
#### Key Features:
* Simple user interface and Responsive Design.
* Easy to integrate.
* HTML/Markdown support.
* Theme/Custom style support.
* Lightweight.
* Image Manager and alt text.
* Autosave.
Froala is an exciting web editor that you can easily integrate with your existing [open-source CMS][6] like WordPress.
It provides a simple user interface with the ability to extend its functionality through default plugins. You can use it as a simple editor or add more tools to the interface for a powerful editing experience.
You can self-host it, but to access its mobile apps and premium support, you must opt for one of the paid plans. Head to its [GitHub page][7] to explore more.
[Froala][8]
### 3. TinyMCE
![tinymce editor][9]
#### Key Features:
* Autosave.
* Lightweight.
* Emoticons.
* Manage images.
* Preview.
* Color picker tool.
TinyMCE is an incredibly popular option for users looking to use a solid editor with several integration options.
TinyMCE was the editor powering WordPress, with proven flexibility and ease of use for all users. Unless you want real-time collaboration and cloud deployments at your disposal, TinyMCE's free self-hosted edition should serve you well.
It is a lightweight option with essential features to work with. Check out more about it on its [GitHub page][10].
[TinyMCE][11]
### 4. Quilljs
![quilljs][12]
#### Key Features:
* Lightweight.
* Extend functionalities using extensions.
* Simple and easy to use.
Do you like Slack's in-app editor or LinkedIn's web editor? Quilljs is what they use to offer that experience.
If you are looking for a polished free, open-source WYSIWYG editor with no premium frills, Quill (or Quilljs) should be the perfect text editor. It is a lightweight editor with a minimal user interface that allows you to customize or add your extensions to scale their functionalities per your requirements.
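For a sense of how lightweight it is, embedding Quill on a page takes only a few lines. The sketch below follows the project's documented quickstart; the element id, placeholder text, and pinned version are just examples:

```
<link href="https://cdn.quilljs.com/1.3.6/quill.snow.css" rel="stylesheet">
<div id="editor"><p>Hello, world!</p></div>
<script src="https://cdn.quilljs.com/1.3.6/quill.js"></script>
<script>
  // attach an editor instance to the div above, using the bundled Snow theme
  var quill = new Quill('#editor', { theme: 'snow' });
</script>
```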
To explore its technical details, head to its [GitHub page][13].
[Quilljs][14]
### 5. Aloha Editor
![A Video from YouTube][15]
#### Key Features:
* Fast editor.
* Front-end editing.
* Supports clean copy/paste from Word.
* Easy integration.
* Plugin support.
* Customization for look and feel.
Aloha Editor is a simple and fast HTML5 WYSIWYG editor that lets you edit the content on the front end.
You can download and use it for free. But, if you need professional help, you can contact them for paid options. Its [GitHub page][16] should be the perfect place to explore its technical details.
[Aloha Editor][17]
### 6. Editor.js
![editor js 1][18]
#### Key Features:
* Block-style editing.
* Completely free and open-source.
* Plugin support.
* Collaborative editing (in roadmap).
Editor.js gives you the perks of a block-style editor. The headings, paragraphs, and other items are all separate blocks, which makes them editable while not affecting the rest of the content.
It is an entirely free and open-source project with no premium extras available for upgrade. However, there are several plugins to extend the features, and you can also explore its [GitHub page][19] for more info.
[Editor.js][20]
### 7. Trix
![trix editor][21]
**Note:** *This project hasn't seen any new activity for more than a year at the time of writing.*
Trix is an open-source project by the creators of Ruby on Rails.
If you want something different for a change, with the basic functionalities of a web editor, Trix can be a pick. The project describes that it is built for the modern web.
Trix is not a popular option, but it is a respectable project that lets tinkerers try something different for their website or app. You can explore more on its [GitHub page][22].
[Trix][23]
### 8. Summernote
![summernote][24]
#### Key Features:
* Lightweight.
* Simple user interface.
* Plugins supported.
Want something similar to TinyMCE but simpler? Summernote can be a good choice.
It provides the look and feel of a classic web editor without any fancy modern UX elements. The focus of this editor is to offer a simple and fast experience along with the ability to add plugins and connectors.
You also get to change the theme according to the Bootstrap version used. Yes, an editor built on Bootstrap. Explore more about it on its [GitHub page][25].
[Summernote][26]
### 9. ContentTools
![content tools][27]
#### Key Features:
* Easy-to-use.
* Completely free.
* Lightweight.
Want to edit HTML pages from the front end? Well, ContentTools lets you do that pretty quickly.
While it can be integrated with a CMS, it may not be a preferred pick for the job. You can take a look around at its [GitHub page][28] as well.
[ContentTools][29]
### 10. Toast UI Editor
![toast ui editor][30]
#### Key Features:
* Specially focused on Markdown editing/pages.
* Plugins supported.
* Live Preview.
Toast UI editor will be a perfect fit if you deal with Markdown documents to publish web pages.
It offers a live preview and a few essential options for edits. You also get a dark theme and plugin support for extended functions.
While it does provide useful features, it may not be a feature-rich editor for all. Learn more about it on its [GitHub page][31].
[Toast UI Editor][32]
### 11. Jodit
![jodit screenshot][33]
#### Key Features:
* Lightweight.
* TypeScript based.
* Plugin support.
Jodit is a TypeScript-based WYSIWYG editor that makes no use of additional libraries.
It is a simple and helpful editor with all the essential editing features, including drag-and-drop support and a plugin system to extend functionalities.
The user experience is much like WordPress's classic editor or TinyMCE. You can opt for its pro version to access additional plugins and technical support. Head to its [GitHub page][34] to explore technical details.
[Jodit][35]
### 12. SCEditor
![sceditor][36]
Key Features:
* Simple and easy to use.
* Completely free.
* Lightweight.
* Plugins support.
SCEditor is yet another simple open-source WYSIWYG editor. It may not be widely known, but it has been actively maintained for more than six years since its initial release.
By default, it does not feature drag-and-drop support, but you can add it using a plugin. There is scope for using multiple themes and customizing the icons as well. Learn more about it on its [GitHub page][37].
[SCEditor][38]
### 13. SunEditor
![suneditor][39]
#### Key Features:
* Feature-rich.
* Completely free.
* Plugin supported.
Like the last one, SunEditor is not especially well known, but it works well and offers a simple yet feature-rich experience.
It is based on pure JavaScript with no dependencies. You should be able to copy from Microsoft Word and Excel without issues.
Additionally, one can use KaTex (math plugin) as well. It gives you complete freedom with custom plugins as well. There are no premium extras here. Head to its [GitHub page][40] to check out its recent releases.
### 14. ProseMirror
![prosemirror][41]
#### Key Features:
* Collaboration capabilities.
* Modular.
* Simple.
* Plugins support.
ProseMirror is an exciting free choice for users who want collaborative editing capabilities. Most WYSIWYG editors offer the collaboration feature as a premium add-on. But here, you can work with others on the same document in real time (for free).
It provides a modular architecture that makes maintenance and development more accessible compared to others.
Explore more about it on its [GitHub page][42].
[ProseMirror][43]
### Picking The Best Open-Source WYSIWYG Editor
Depending on the type of use case, it is easy to pick a WYSIWYG, an open-source editor.
If you want to focus on the out-of-the-box experience and reduce efforts to maintain it, any option that provides premium technical support should be a good choice.
If you are more of a DIY user, you should do anything that serves your requirements.
Note that a popular option does not mean that it is a flawless editor for your requirements. Sometimes a more straightforward option is a better solution than a feature-rich editor.
*So, what would be your favorite open-source HTML editor?* *Let me know in the comments below.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-wysiwyg-editors/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/10/best-open-source-wysiwyg-editors.png
[2]: https://itsfoss.com/wp-content/uploads/2022/10/ck5-editor.webp
[3]: https://github.com/ckeditor/ckeditor5
[4]: https://ckeditor.com/ckeditor-5/
[5]: https://itsfoss.com/wp-content/uploads/2022/10/froala.jpg
[6]: https://itsfoss.com/open-source-cms/
[7]: https://github.com/froala
[8]: https://froala.com/wysiwyg-editor/
[9]: https://itsfoss.com/wp-content/uploads/2022/10/tinymce-editor.jpg
[10]: https://github.com/tinymce/tinymce
[11]: https://www.tiny.cloud/
[12]: https://itsfoss.com/wp-content/uploads/2022/10/quilljs.jpg
[13]: https://github.com/quilljs/quill
[14]: https://quilljs.com/
[15]: https://youtu.be/w_oXaW5Rrpc
[16]: https://github.com/alohaeditor/Aloha-Editor
[17]: https://www.alohaeditor.org/
[18]: https://itsfoss.com/wp-content/uploads/2022/10/editor-js-1.jpg
[19]: https://github.com/codex-team/editor.js
[20]: https://editorjs.io/
[21]: https://itsfoss.com/wp-content/uploads/2022/10/trix-editor.jpg
[22]: https://github.com/basecamp/trix
[23]: https://trix-editor.org/
[24]: https://itsfoss.com/wp-content/uploads/2022/10/summernote.jpg
[25]: https://github.com/summernote/summernote/
[26]: https://summernote.org/
[27]: https://itsfoss.com/wp-content/uploads/2022/10/content-tools.jpg
[28]: https://github.com/GetmeUK/ContentTools
[29]: https://getcontenttools.com/
[30]: https://itsfoss.com/wp-content/uploads/2022/10/toast-ui-editor.jpg
[31]: https://github.com/nhn/tui.editor
[32]: https://ui.toast.com/tui-editor
[33]: https://itsfoss.com/wp-content/uploads/2022/10/jodit-screenshot.jpg
[34]: https://github.com/xdan/jodit
[35]: https://xdsoft.net/jodit/
[36]: https://itsfoss.com/wp-content/uploads/2022/10/sceditor.jpg
[37]: https://github.com/samclarke/SCEditor
[38]: https://www.sceditor.com/
[39]: https://itsfoss.com/wp-content/uploads/2022/10/suneditor.png
[40]: https://github.com/JiHong88/SunEditor
[41]: https://itsfoss.com/wp-content/uploads/2022/10/prosemirror.jpg
[42]: https://github.com/ProseMirror/prosemirror
[43]: https://prosemirror.net/

View File

@ -1,93 +0,0 @@
[#]: subject: "How to Update Google Chrome on Ubuntu Linux"
[#]: via: "https://itsfoss.com/update-google-chrome-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Update Google Chrome on Ubuntu Linux
======
So, you managed to install Google Chrome browser on your Ubuntu system. And now you wonder how to keep the browser updated.
On Windows and macOS, when there is an update available on Chrome, you are notified in the browser itself and you can hit the update option from the browser.
Things are different in Linux. You don't update Chrome from the browser. You update it with the system updates.
Yes. When there is a new update available on Chrome, Ubuntu notifies you via the system updater tool.
![Ubuntu sends notifications when a new version of Chrome is available][1]
You just have to click on the Install Now button, enter your account's password when asked for it, and Chrome will be updated to the new version.
Let me tell you why you see the updates on the system level and how you can update Google Chrome in the command line.
### Method 1: Updating Google Chrome with system updates
How did you install Chrome in the first place? You got the deb installer file from the [Chrome website][2] and used it to [install Chrome on Ubuntu][3].
The thing is that when you do that, Google adds a repository entry into your system's sources list. This way, your system trusts the packages coming from the Google repository.
![Google Chrome repository is added to the Ubuntu system][4]
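On a typical setup, that entry lives in a file under `/etc/apt/sources.list.d/` and looks roughly like this (the exact file name and architecture field may differ on your system):

```
$ cat /etc/apt/sources.list.d/google-chrome.list
deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main
```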
For all such entries added to your system, the package updates are centralized through the Ubuntu Updater.
And this is why when there is an update available to Google Chrome (and other installed applications), your Ubuntu system sends you notification.
![Chrome update available with other applications via System Updater][5]
**Click the “Install Now” button and enter your password when asked for it**. Soon, the system will install all the upgradeable packages.
Depending on the update preference, the notification may not be immediate. If you want, you can manually run the updater tool and see what updates are available for your Ubuntu system.
![Run Software Updater to see what updates are available for your system][6]
### Method 2: Updating Chrome in the Ubuntu command line
If you prefer the terminal over the graphical interface, you can update Chrome with commands as well.
Open a terminal and run the following commands one by one:
```
sudo apt update
sudo apt --only-upgrade install google-chrome-stable
```
The first command updates the package cache so that your system is aware of what packages can be upgraded.
The second command [only updates the single package][7] which is Google Chrome (installed as google-chrome-stable).
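If you want to confirm that the upgrade went through, you can check the installed version afterwards (the version string will, of course, differ on your system):

```
google-chrome-stable --version
apt policy google-chrome-stable
```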
### Conclusion
As you can see, things are more streamlined in Ubuntu than in Windows. You get Chrome updated along with other system updates.
On a related note, you may learn about [removing google Chrome from Ubuntu][8] if you are unhappy with it.
Chrome is a fine browser. You can experiment with it by [using shortcuts in Chrome][9] as it makes the browsing experience even smoother.
Enjoy Chrome on Ubuntu!
--------------------------------------------------------------------------------
via: https://itsfoss.com/update-google-chrome-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2021/06/chrome-edge-update-ubuntu.png
[2]: https://www.google.com/chrome/
[3]: https://itsfoss.com/install-chrome-ubuntu/
[4]: https://itsfoss.com/wp-content/uploads/2021/06/google-chrome-repo-ubuntu.png
[5]: https://itsfoss.com/wp-content/uploads/2021/06/chrome-edge-update-ubuntu.png
[6]: https://itsfoss.com/wp-content/uploads/2022/04/software-updater-ubuntu-22-04.jpg
[7]: https://itsfoss.com/apt-upgrade-single-package/
[8]: https://itsfoss.com/uninstall-chrome-from-ubuntu/
[9]: https://itsfoss.com/google-chrome-shortcuts/

View File

@ -1,122 +0,0 @@
[#]: subject: "Xubuntu 22.10: Top New Features"
[#]: via: "https://www.debugpoint.com/xubuntu-22-10-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Xubuntu 22.10: Top New Features
======
Here's a quick summary of Xubuntu 22.10 “Kinetic Kudu” and its new features.
![Xubuntu 22.10 Desktop][1]
Quality takes time to build. It applies to all phases of life, including software.
Since the Xfce 4.16 release, many new features have been added to Xfce 4.17 (the development version). That includes core Xfce, native applications, and the adoption of GNOME 43, MATE 1.26 and libadwaita. Since Xubuntu also combines Xfce with GNOME and MATE components, it takes time to properly incorporate and test the changes.
In the Xubuntu 22.10 Kinetic Kudu release, you get to experience all the improvements made since December 2020: almost two years of bug fixes and enhancements.
Let's quickly check the schedule. Currently, the Xubuntu 22.10 beta is out and under testing (link to the ISO at the end of this page). The final release is expected on October 20, 2022.
### Xubuntu 22.10 New Features
#### Core updates and GNOME framework
At its core, Xubuntu 22.10 is powered by Linux Kernel 5.19 and based on Ubuntu 22.10. In addition, the Xfce desktop version is Xfce 4.17.
The 4.17 version is a development tag because it's a stepping stone for the next big release, Xfce 4.18, which is [planned for this Christmas.][2]
Let's talk about GNOME and related apps. For the first time, Xfce 4.17 in Xubuntu 22.10 gets libadwaita with the GNOME 43 updates. That means the default GNOME apps can render correctly under the Xfce desktop.
That being said, GNOME Software 43 looks great under the Xfce desktop in Xubuntu 22.10. If you compare it to the native Xfce look and GNOME apps with CSD/SSD (such as Disks), they all look neat.
I am surprised at how good Software 43 is with libadwaita/GTK4 rendered under Xfce desktop.
![Three different window decorations together in Xubuntu 22.10][3]
#### Xfce Applications
The Xfce desktop brings its own native set of applications. All the apps are bumped to version 4.17 from 4.16 in this release.
Noteworthy changes include the Xfce panel getting middle-click support for the tasklist plugin and binary time mode in the tray clock. The pulse audio plugin introduces a new recording indicator and can filter out multiple button press events.
Thunar file manager gets a massive set of under-the-hood features and bug fixes. The list is vast if you compare Thunar 4.16 to Thunar 4.17. Changes include an updated context menu, path bar, search, navigation and more. You can read the entire change log of Thunar [here][4].
Moreover, the Screenshooter screenshot application gets WebP support by default, the Bluetooth manager Blueman receives a profile switcher right in the system tray, and the Catfish file search tool is updated.
Here's the updated list of Xfce application versions and a link to their change logs (if you want to dig further).
* Appfinder [4.17.0][5]
* Catfish [4.16.4][6]
* Mousepad [0.5.10][7]
* Panel [4.17.3][8]
* PulseAudio Plugin [0.4.4][9]
* Ristretto [0.12.3][10]
* Screenshooter [1.9.11][11]
* Task Manager [1.5.4][12]
* Terminal [1.0.4][13]
* Thunar [4.17.9][14]
#### Look and Feel
The default elementary-xfce icon set (both light and dark) gets an update with extra-polished icons to give your Xfce desktop a new look. The default Greybird GTK theme gets the necessary improvements for window decorations and adds Openbox support.
One of the most important and visible changes you might notice is the ALT+TAB look. The icons are a little larger and easier on the eyes, for faster window switching against a dark background.
![Refreshed icon set sample in elementary-xfce with Xubuntu 22.10][15]
![ALT TAB is refreshed with larger icons][16]
The above changes align the default applications with the base [Ubuntu 22.10 release][17]. Here's a summary of the changes in Xubuntu 22.10.
### Summary
* Linux Kernel 5.19 and based on Ubuntu 22.10
* Xfce desktop version 4.17
* Native applications are all updated to 4.17
* The core aligns with GNOME 43, libadwaita, GTK4
* MATE apps bump to 1.26
* Mozilla Firefox web-browser 105.0
* Thunderbird email client 102.3
* LibreOffice 7.4.4.2
### Wrapping up
The most critical changes to the Xfce desktop as a whole are arriving in version 4.18, for example, initial Wayland support and updated glib and GTK packages. If all goes well, you can expect these fine changes in next year's April release of Xubuntu.
Finally, you can download the beta image from [this page][18] if you want to try it.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/xubuntu-22-10-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/Xubuntu-22.10-Desktop-1024x563.jpg
[2]: https://debugpointnews.com/xfce-4-18-announcement/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/10/Three-different-window-decorations-together-in-Xubuntu-22.10.jpg
[4]: https://gitlab.xfce.org/xfce/thunar/-/blob/master/NEWS
[5]: https://gitlab.xfce.org/xfce/xfce4-appfinder/-/blob/master/NEWS
[6]: https://gitlab.xfce.org/apps/catfish/-/blob/master/NEWS
[7]: https://gitlab.xfce.org/apps/mousepad/-/blob/master/NEWS
[8]: https://gitlab.xfce.org/xfce/xfce4-panel/-/blob/master/NEWS
[9]: https://gitlab.xfce.org/panel-plugins/xfce4-pulseaudio-plugin/-/blob/master/NEWS
[10]: https://gitlab.xfce.org/apps/ristretto/-/blob/master/NEWS
[11]: https://gitlab.xfce.org/apps/xfce4-screenshooter/-/blob/master/NEWS
[12]: https://gitlab.xfce.org/apps/xfce4-taskmanager/-/blob/master/NEWS
[13]: https://gitlab.xfce.org/apps/xfce4-terminal/-/blob/master/NEWS
[14]: https://gitlab.xfce.org/xfce/thunar/-/blob/master/NEWS
[15]: https://www.debugpoint.com/wp-content/uploads/2022/10/Refreshed-icon-set-sample-in-elementary-xfce-with-Xubuntu-22.10.jpg
[16]: https://www.debugpoint.com/wp-content/uploads/2022/10/ALT-TAB-is-refreshed-with-larger-icons.jpg
[17]: https://www.debugpoint.com/ubuntu-22-10/
[18]: https://cdimage.ubuntu.com/xubuntu/releases/kinetic/beta/

View File

@ -1,81 +0,0 @@
[#]: subject: "Easiest Way to Open Files as Root in GNOME Files"
[#]: via: "https://www.debugpoint.com/gnome-files-root-access/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Easiest Way to Open Files as Root in GNOME Files
======
Here's the simplest way to access a file or directory as root in GNOME Files.
![][1]
In Windows, you generally get an option to open a file or folder as “Open As Administrator” in the right-click context menu.
That feature is part of the file manager; i.e., for Windows, it's part of Windows Explorer. However, it is executed by the operating system and its permission control modules.
In Linux distributions and file managers, the situation is a little different. Each desktop has its own way of handling this.
Since modifying files and folders as admin (or root) is risky and may cause a broken system, the feature is not easily available to users via the GUI of file managers.
For example, KDE Plasma's default file manager Dolphin recently [added this feature][2]: when root privilege is required, it asks you via a PolicyKit KDE Agent (polkit) window, as shown below, rather than you opening or executing something as root from the file manager yourself.
It's worth mentioning that you cannot use “sudo dolphin” to run the file manager itself with root privilege.
![Dolphin root access after KIO with Polkit implementation][3]
In a way, it saves many unforeseen situations. But advanced users can always use sudo via the terminal to do their job.
### GNOME Files (Nautilus) and root access to files, directories
That being said, [GNOME Files][4] (aka Nautilus) has a way to open files and folders via root.
Here's how.
* Open GNOME Files or Nautilus.
* Then click on other locations at the left pane.
* Press CTRL+L to bring up the address bar.
* In the address bar, type in below and hit enter.
```
admin:///
```
* It will ask for the admin password; once you authenticate successfully, the system opens for you as admin.
* From here onwards, whatever you do is done as admin or root.
![Enter the location address as admin][5]
![Give admin password][6]
![Opening GNOME Files as root][7]
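If you prefer the terminal, you may also be able to open GNOME Files directly at that location, assuming the GVfs admin backend is available on your system:

```
nautilus admin:///
```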
But, as always, be careful what you do as an admin. It's easy to forget that you are still working as root after authenticating.
There's a reason why these options are not easily visible: to prevent you and many new Linux users from breaking their systems.
Cheers.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/gnome-files-root-access/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/nauroot-1024x576.jpg
[2]: https://www.debugpoint.com/dolphin-root-access/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/02/Dolphin-root-access-after-KIO-with-Polkit-implementation.jpg
[4]: https://wiki.gnome.org/Apps/Files
[5]: https://www.debugpoint.com/wp-content/uploads/2022/10/Enter-the-location-address-as-admin.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/Give-admin-password.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Opening-GNOME-Files-as-root.jpg

View File

@ -1,146 +0,0 @@
[#]: subject: "How to Enable Snap Support in Arch Linux"
[#]: via: "https://itsfoss.com/install-snap-arch-linux/"
[#]: author: "Pranav Krishna https://itsfoss.com/author/pranav/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Enable Snap Support in Arch Linux
======
Snap is a universal package format designed by Canonical, the parent company of Ubuntu. Some people do not like Snap, but it has some advantages.
Often, some applications are only available in the Snap format. This gives you a good enough reason to enable snap in Arch Linux.
I know that AUR has a vast collection of applications but the snap apps often come directly from the developers.
If you want to be able to install Snap applications in Arch Linux, you need to enable snap support first.
There are two ways to do it:
* Enable Snap support using an AUR helper (easier)
* Enable Snap support manually by getting the packages from AUR
Let's see how to do it.
### Method 1. Use an AUR helper to enable Snap
Snap is available in the Arch User Repository as the *snapd* package. You can install it easily using an AUR helper.
There are [many AUR helpers][1] out there, but *yay* is what I prefer because it has syntax similar to the [pacman command][2].
If you don't have an AUR helper installed already, install yay using the commands below (you need git beforehand):
```
git clone https://aur.archlinux.org/yay
cd yay
makepkg -si
```
![Installing yay][3]
Now that *yay* is installed, you can install snapd by:
```
yay -Sy snapd
```
![Installing snapd from AUR using yay][4]
Yay enables automatic updating of snapd whenever you [update your Arch Linux][5] system.
### Verify that snap works
To test if snap works fine, install and run the *hello-world* snap package.
```
sudo snap install hello-world
hello-world
(or)
sudo snap run hello-world
```
![The hello-world snap package executes][6]
If it runs fine, then you can install other snap packages easily.
### Method 2. Manually build the snap package from AUR
If you do not want to use an AUR helper, you can still get the snapd from the AUR. Let me show the detailed procedure.
You will need to install some build tools first.
```
sudo pacman -Sy git go go-tools python-docutils
```
![Installing Dependencies for snap][7]
Once you're done installing the dependencies, you can clone the AUR repository, as follows:
```
git clone https://aur.archlinux.org/snapd
cd snapd
```
![Cloning the repository][8]
Then make the snapd package:
```
makepkg -si
```
Enter yes when it asks to install other dependency packages.
![snapd manual install makepkg][9]
You have installed the snapd daemon. However, it needs to be enabled to auto-start at boot time.
```
sudo systemctl enable snapd --now
sudo systemctl enable snapd.apparmor --now #start snap applications
sudo ln -s /var/lib/snapd/snap /snap #optional: classic snap support
```
![Enable Snap at startup][10]
The major disadvantage of manually building a package is that you have to rebuild it manually every time a new update arrives, as sketched below. Using an AUR helper solves that problem for us.
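For reference, assuming you kept the cloned `snapd` directory around, a manual rebuild after a new release would look roughly like this:

```
cd snapd
git pull
makepkg -si
```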
### Conclusion
I prefer pacman and the AUR in Arch Linux. It's rare to see an application that is not in the AUR but is available in some other format. Still, using snap could be advantageous in some cases where you want the app directly from the source, like [installing Spotify on Arch][11] for example.
I hope you find this tutorial helpful. Let me know if you have any questions.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-snap-arch-linux/
作者:[Pranav Krishna][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pranav/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/best-aur-helpers/
[2]: https://itsfoss.com/pacman-command/
[3]: https://itsfoss.com/wp-content/uploads/2022/10/yay-makepkg.png
[4]: https://itsfoss.com/wp-content/uploads/2022/10/yay-install-snapd.png
[5]: https://itsfoss.com/update-arch-linux/
[6]: https://itsfoss.com/wp-content/uploads/2022/10/snap-hello-world-1.png
[7]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-dependencies.png
[8]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-clone.png
[9]: https://itsfoss.com/wp-content/uploads/2022/10/snapd-manual-install-makepkg-800x460.png
[10]: https://itsfoss.com/wp-content/uploads/2022/10/enable-snapd-startup-2.png
[11]: https://itsfoss.com/install-spotify-arch/

View File

@ -0,0 +1,88 @@
[#]: subject: "Kubuntu 22.10 Kinetic Kudu: Top New Features"
[#]: via: "https://www.debugpoint.com/kubuntu-22-10-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kubuntu 22.10 Kinetic Kudu: Top New Features
======
A brief summary of Kubuntu 22.10 “Kinetic Kudu” and additional information about the release.
![Kubuntu 22.10 Kinetic Kudu Desktop][1]
### Kubuntu 22.10: Top New Features
Among all the [great KDE Plasma-based distributions][2], Kubuntu is the best, because it brings stability to both Plasma and its Ubuntu core.
Kubuntu 22.10 is a short-term release based on Ubuntu 22.10, supported for nine months from release. Since short-term releases exist to adopt the latest technologies and remove obsolete ones, its feature list is minimal.
This release of Kubuntu features Linux Kernel 5.19, which brings run-time Average Power Limiting (RAPL) support for Intel's Raptor Lake and Alder Lake processors, updates for multiple ARM families in the mainline kernel, and the usual processor/GPU and file-system updates. Learn more about Kernel 5.19 features in [this article][3].
Compared to the prior [Kubuntu release 22.04 LTS][4] (with Plasma 5.24), you get the latest KDE Plasma 5.25 (final point release) desktop with all the bug fixes and updates.
[KDE Plasma 5.26][5], which has just been released, could not make it into this version. But I believe it should arrive as a point release, just not on release day.
Besides, Plasma 5.25 is not small in terms of features. It's, in fact, packed with cool new advancements. Especially if you are coming from an earlier Kubuntu version, you should be aware of these new items.
Firstly, Kubuntu 22.10 enables you to make your default panel “float”. We call it the Floating Panel. So, no more using the add-ons for this.
Secondly, the accent colour of your desktop can change based on the wallpapers tone. Now you can see your Kubuntu desktop changes dynamically when you enable it from Settings > Global Theme > Colours.
![KDE Plasma - Dynamic Accent Colour and Floating Panel Demo][6]
In addition, switching between dark and light modes becomes smoother thanks to this change. Also, in Kubuntu 22.10 with Wayland, you can now see and apply display-specific resolutions in the settings dropdown.
On the other hand, Discover is more friendly to Flatpak, with additional app details and an additional options button to notify you that there is still data for uninstalled apps.
![The app page gives more clarity in Plasma 5.25][7]
Furthermore, the KRunner launcher in Kubuntu now detects the search language and displays results accordingly. Also, the network manager applet now shows the Wi-Fi frequency alongside the access point name (this is helpful when you use the same access point name for the 2.4 GHz and 5 GHz bands).
All of these changes are powered by Qt 5.15.6 and Framework 5.98. If you want to learn more about Plasma 5.25, refer to the dedicated feature guide [here][8].
### Other features of Kubuntu 22.10
The core applications and packages bump up to their respective versions based on Ubuntu 22.10; here's a summary.
* Linux Kernel 5.19
* KDE Plasma 5.25.5 (hopefully will get 5.26 soon)
* KDE Framework 5.98
* Qt 5.15.6
* Firefox 105.0.1
* Thunderbird 102.3.2
* LibreOffice 7.4.2.3
* VLC Media Player 3.0.17
* Pipewire replacing PulseAudio
Finally, you can download the Kubuntu 22.10 BETA from the below links.
[https://cdimage.ubuntu.com/kubuntu/releases/kinetic/beta/][9]
While the developers are preparing for the final release (due on Oct 20, 2022), you can try it on a [virtual machine][10] or a physical system.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/kubuntu-22-10-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/Kubuntu-22.10-Kinetic-Kudu-Desktop.jpg
[2]: https://www.debugpoint.com/top-linux-distributions-kde-plasma/
[3]: https://www.debugpoint.com/linux-kernel-5-19/
[4]: https://www.debugpoint.com/kubuntu-22-04-lts/
[5]: https://www.debugpoint.com/kde-plasma-5-26/
[6]: https://youtu.be/npfHwMLXXHs
[7]: https://www.debugpoint.com/wp-content/uploads/2022/05/App-page-gives-more-clarity-in-Plasma-5.25.jpg
[8]: https://www.debugpoint.com/kde-plasma-5-25/
[9]: https://cdimage.ubuntu.com/kubuntu/releases/kinetic/beta/
[10]: https://www.debugpoint.com/tag/virtual-machine

View File

@ -0,0 +1,190 @@
[#]: subject: "Asynchronous programming in Rust"
[#]: via: "https://opensource.com/article/22/10/asynchronous-programming-rust"
[#]: author: "Stephan Avenwedde https://opensource.com/users/hansic99"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Asynchronous programming in Rust
======
Take a look at how async-await works in Rust.
![Ferris the crab under the sea, unofficial logo for Rust programming language][1]
Image by: Opensource.com
Asynchronous programming: Incredibly useful but difficult to learn. You can't avoid async programming if you want to create a fast and reactive application. Applications with a high amount of file or network I/O, or with a GUI that should always stay reactive, benefit tremendously from async programming. Tasks can be executed in the background while the user continues to interact with the application. Async programming is possible in many languages, each with different styles and syntax. [Rust][2] is no exception. In Rust, this feature is called *async-await*.
While *async-await* has been an integral part of Rust since version 1.39.0, most applications depend on community crates. In Rust, except for a larger binary, *async-await* comes with zero costs. This article gives you an insight into asynchronous programming in Rust.
### Under the hood
To get a basic understanding of *async-await* in Rust, you literally start in the middle.
The center of *async-await* is the [future][3] trait, which declares the method *poll* (I cover this in more detail below). If a value can be computed asynchronously, the related type should implement the *future* trait. The *poll* method is called repeatedly until the final value is available.
At this point, you could repeatedly call the *poll* method from your synchronous application manually in order to get the final value. However, since I'm talking about asynchronous programming, you can hand over this task to another component: the runtime. So before you can make use of the *async* syntax, a runtime must be present. I use the runtime from the [tokio][4] community crate in the following examples.
A handy way of making the tokio runtime available is to use the `#[tokio::main]` macro on your main function:
```
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main(){
    println!("Start!");
    sleep(Duration::from_secs(1)).await;
    println!("End after 1 second");
}
```
When the runtime is available, you can now *await* futures. Awaiting means that further execution stops at that point until the *future* has completed. The *await* method causes the runtime to invoke the *poll* method, which will drive the *future* to completion.
In the above example, tokio's [sleep][5] function returns a *future* that finishes when the specified duration has passed. By awaiting this future, the related *poll* method is repeatedly called until the *future* completes. Furthermore, the *main()* function also returns a *future* because of the `async` keyword before the **fn**.
So if you see a function marked with `async`:
```
async fn foo() -> usize { /**/ }
```
Then it is just syntactic sugar for:
```
fn foo() -> impl Future<Output = usize> { async { /**/ } }
```
### Pinning and boxing
To remove some of the shrouds and clouds of *async-await* in Rust, you must understand *pinning* and *boxing*.
If you are dealing with *async-await*, you will stumble over the terms boxing and pinning relatively quickly. Since I find the available explanations on the subject rather difficult to understand, I have set myself the goal of explaining the issue more simply.
Sometimes it is necessary to have objects that are guaranteed not to be moved in memory. This comes into effect when you have a self-referential type:
```
struct MustBePinned {
    a: i16,
    b: &i16  // meant to point at `a` above (illustrative; real self-referential code needs raw pointers)
}
```
If member **b** is a reference (pointer) to member **a** of the same instance, then reference **b** becomes invalid when the instance is moved because the location of member **a** has changed but **b** still points to the previous location. You can find a more comprehensive example of a *self-referential* type in the [Rust Async book][6]. All you need to know now is that an instance of *MustBePinned* should not be moved in memory. Types like *MustBePinned* do not implement the *Unpin* trait, which would allow them to move within memory safely. In other words, *MustBePinned* is *!Unpin*.
Back to the future: By default, a *future* is also *!Unpin*; thus, it should not be moved in memory. So how do you handle those types? You pin and box them.
The [Pin<T>][7] type wraps pointer types, guaranteeing that the values behind the pointer won't be moved. The **Pin<T>** type ensures this by not providing a mutable reference of the wrapped type. The type will be pinned for the lifetime of the object. If you accidentally pin a type that implements *Unpin* (which is safe to move), it won't have any effect.
In practice: If you want to return a *future* (*!Unpin*) from a function, you must box it. Using [Box<T>][8] causes the type to be allocated on the heap instead of the stack and thus ensures that it can outlive the current function without being moved. In particular, if you want to hand over a *future*, you can only hand over a pointer to it as the *future* must be of type **Pin<Box<dyn Future>>**.
Using *async-await*, you will certainly stumble upon this boxing and pinning syntax (a short sketch follows the summary below). To wrap this topic up, you just have to remember this:
* Rust does not know whether a type can be safely moved.
* Types that shouldn't be moved must be wrapped inside [Pin<T>][9].
* Most types are [Unpin][10]ned types. They implement the trait Unpin and can be freely moved within memory.
* If a type is wrapped inside [Pin<T>][11] and the wrapped type is !Unpin, it is not possible to get a mutable reference out of it.
* Futures created by the async keyword are !Unpin and thus must be pinned.
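Here is a minimal sketch of what this looks like in practice. It reuses the tokio runtime from the earlier examples; the `boxed_counter` function name is made up for illustration and simply returns an already pinned, heap-allocated future:

```
use std::future::Future;
use std::pin::Pin;

// Returns a pinned, boxed future. Because the future sits behind a
// Pin<Box<...>>, it can be stored or passed around without the
// underlying (possibly !Unpin) value ever being moved in memory.
fn boxed_counter() -> Pin<Box<dyn Future<Output = u32>>> {
    Box::pin(async {
        // Any asynchronous work could happen here.
        42u32
    })
}

#[tokio::main]
async fn main() {
    let fut = boxed_counter();
    println!("Result: {}", fut.await);
}
```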
### Future trait
In the [future][12] trait, everything comes together:
```
pub trait Future {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```
Here is a simple example of how to implement the *future* trait:
```
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

struct MyCounterFuture {
    cnt: u32,
    cnt_final: u32,
}

impl MyCounterFuture {
    pub fn new(final_value: u32) -> Self {
        Self {
            cnt: 0,
            cnt_final: final_value,
        }
    }
}

impl Future for MyCounterFuture {
    type Output = u32;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        self.cnt += 1;
        if self.cnt >= self.cnt_final {
            println!("Counting finished");
            return Poll::Ready(self.cnt_final);
        }

        // Ask the runtime to poll this future again.
        cx.waker().wake_by_ref();
        Poll::Pending
    }
}

#[tokio::main]
async fn main() {
    let my_counter = MyCounterFuture::new(42);
    let final_value = my_counter.await;
    println!("Final value: {}", final_value);
}
```
This is a manual implementation of the *future* trait: the *future* is initialized with a value up to which it shall count, stored in **cnt_final**. Each time the *poll* method is invoked, the internal value **cnt** gets incremented by one. If **cnt** is less than **cnt_final**, the future signals the [waker][13] of the runtime that the *future* is ready to be polled again. The return value `Poll::Pending` signals that the *future* has not completed yet. Once **cnt** >= **cnt_final**, the *poll* function returns with `Poll::Ready`, signaling that the *future* has completed and providing the final value.
This is just a simple example, and of course, there are other things to take care of. If you consider creating your own futures, I highly suggest reading the chapter [Async in depth][14] in the documentation of the tokio crate.
### Wrap up
Before I wrap things up, here is some additional information that I consider useful:
* Create a new pinned and boxed type using [Box::pin][15].
* The [futures][16] crate provides the type [BoxFuture][17] which lets you define a future as return type of a function.
* The [async_trait][18] crate allows you to define an async function in traits (which is otherwise not allowed).
* The [pin-utils][19] crate provides macros to pin values.
* The tokios [try_join!][20] macro (a)waits on multiple futures which return a [Result<T, E>][21].
Once the first hurdles have been overcome, async programming in Rust is straightforward. You don't even have to implement the *future* trait in your own types if you can outsource code that can be executed in parallel in an async function. In Rust, single-threaded and multi-threaded runtimes are available, so you can benefit from async programming even in embedded environments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/asynchronous-programming-rust
作者:[Stephan Avenwedde][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/rust_programming_crab_sea.png
[2]: https://opensource.com/article/20/12/learn-rust
[3]: https://doc.rust-lang.org/std/future/trait.Future.html
[4]: https://tokio.rs/
[5]: https://docs.rs/tokio/latest/tokio/time/fn.sleep.html
[6]: https://rust-lang.github.io/async-book/04_pinning/01_chapter.html
[7]: https://doc.rust-lang.org/std/pin/struct.Pin.html
[8]: https://doc.rust-lang.org/std/boxed/struct.Box.html
[9]: https://doc.rust-lang.org/std/pin/struct.Pin.html
[10]: https://doc.rust-lang.org/std/marker/trait.Unpin.html#
[11]: https://doc.rust-lang.org/std/pin/struct.Pin.html
[12]: https://doc.rust-lang.org/std/future/trait.Future.html
[13]: https://tokio.rs/tokio/tutorial/async#wakers
[14]: https://tokio.rs/tokio/tutorial/async
[15]: https://doc.rust-lang.org/std/boxed/struct.Box.html#method.pin
[16]: https://crates.io/crates/futures
[17]: https://docs.rs/futures/latest/futures/future/type.BoxFuture.html
[18]: https://docs.rs/async-trait/latest/async_trait/
[19]: https://crates.io/crates/pin-utils
[20]: https://docs.rs/tokio/latest/tokio/macro.try_join.html
[21]: https://doc.rust-lang.org/std/result/

View File

@ -0,0 +1,100 @@
[#]: subject: "Enjoy the Classic Snake Game in Your Linux Terminal"
[#]: via: "https://www.debugpoint.com/snake-game-linux-terminal/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Enjoy the Classic Snake Game in Your Linux Terminal
======
This is how you can install and play the classic Snake Game in Linux Terminal.
Remember the classic and simple snake game of old mobile phones? I remember playing it for hours. Hey, there were no other options at the time, right? Smartphones were not yet on the market. And all you had was this:
![Nokia 3310 with legacy snake game][1]
But over time, the Snake Game was replaced by more advanced graphical games with various options. But nothing beats that classic snake game.
And what if I told you that you could play this game in the Linux Terminal itself? Whether you are running Ubuntu Linux, Fedora Linux or Arch Linux doesn't matter. This game is available for most [distros][2].
![nsnake Game - Main Menu][3]
### Install nSnake Snake Game for Linux Terminal
You can install [this game][4] via the terminal using the below methods.
For Ubuntu, Linux Mint or other related distributions:
```
sudo apt install nsnake
```
For Fedora Linux and others:
```
sudo dnf install nsnake
```
For Arch Linux, this snake game is available in the [Arch User repository][5]. You can install it using the following steps.
* [Set up Yay AUR helper][6]
* Then open a terminal and run the below command
```
yay -S nsnake
```
The above command installs the packaged version of the game from the AUR, which might not be the latest. However, if you want the latest version, you may need to compile the source from GitHub. I have added the compilation instructions at the end of this page for your reference.
### Playing the game
Playing the game is very simple. Type nsnake in the terminal, which will launch the game.
To quit immediately, press q.
Following are the default key bindings.
* Arrow keys to move the snake
* q Quit the game
* p Pause the game
You can also configure the game in various ways, which are available via the main menu.
![nsnake Linux Terminal Snake Game Settings][7]
So, enjoy!
##### Compilation
To compile the latest version, use the following commands in all Linux distributions.
Oh, make sure you have `git` and `ncurses-devel` installed, which are the required packages for compilation.
```
git clone https://github.com/alexdantas/nSnake.git
cd nSnake
make
make install
```
So, do you like Snake Game? Do you prefer it over other terminal-based games? Share your views with other readers in the comment box below.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/snake-game-linux-terminal/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2021/12/Nokia-3310-with-legacy-snake-game.jpg
[2]: https://www.debugpoint.com/category/distributions
[3]: https://www.debugpoint.com/wp-content/uploads/2021/12/nsnake-Game-Main-Menu.jpg
[4]: https://github.com/alexdantas/nsnake
[5]: https://aur.archlinux.org/packages/nsnake/
[6]: https://www.debugpoint.com/2021/01/install-yay-arch/
[7]: https://www.debugpoint.com/wp-content/uploads/2021/12/nsnake-Linux-Terminal-Snake-Game-Settings.jpg

View File

@ -0,0 +1,479 @@
[#]: subject: "How To Monitor User Activity In Linux"
[#]: via: "https://ostechnix.com/monitor-user-activity-linux/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Monitor User Activity In Linux
======
As a Linux administrator, you need to keep track of all users' activities. When something goes wrong in the server, you can analyze and investigate the users' activities, and try to find the root cause of the problem. There are many ways to **monitor users in Linux**. In this guide, we are going to talk about **GNU accounting utilities** that can be used to **monitor the user activity in Linux**.
### What are Accounting utilities?
The accounting utilities provide useful information about system usage, such as connections, programs executed, and utilization of system resources in Linux. These accounting utilities can be installed using the **psacct** or **acct** package.
The psacct and acct packages are the same. On RPM-based systems, the package is available as psacct, and on DEB-based systems, it is available as acct.
You might wonder what the use of the psacct or acct utilities is. Generally, a user's command line history is stored in the **.bash_history** file in their $HOME directory. Some users might try to edit, modify or delete that history.
However, the accounting utilities will still be able to retrieve the users' activities even if they [cleared their command line history][1] completely, because **all process accounting files are owned by the root** user and normal users can't edit them.
### Install psacct or acct in Linux
The psacct/acct utilities are packaged for popular Linux distributions.
To install psacct in Alpine Linux, run:
```
$ sudo apk add psacct
```
To install acct in Arch Linux and its variants like EndeavourOS and Manjaro Linux, run:
```
$ sudo pacman -S acct
```
On Fedora, RHEL, and its clones like CentOS, AlmaLinux and Rocky Linux, run the following command to install psacct:
```
$ sudo dnf install psacct
```
In RHEL 6 and older versions, you should use `yum` instead of `dnf` to install psacct.
```
$ sudo yum install psacct
```
On Debian, Ubuntu, Linux Mint, install acct using command:
```
$ sudo apt install acct
```
To install acct on openSUSE, run:
```
$ sudo zypper install acct
```
### Start psacct/acct service
To enable and start the psacct service, run:
```
$ sudo systemctl enable psacct
```
```
$ sudo systemctl start psacct
```
To check if psacct service is loaded and active, run:
```
$ sudo systemctl status psacct
```
On DEB-based systems, the acct service will be automatically started after installing it.
You can verify whether acct service is started or not using command:
```
$ sudo systemctl status acct
```
**Sample output:**
```
● acct.service - Kernel process accounting
Loaded: loaded (/lib/systemd/system/acct.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2022-10-13 16:06:35 IST; 28s ago
Docs: man:accton(8)
Process: 3241 ExecStart=/usr/sbin/accton /var/log/account/pacct (code=exited, status=0/SUCCESS)
Main PID: 3241 (code=exited, status=0/SUCCESS)
CPU: 879us
Oct 13 16:06:35 ubuntu2204 systemd[1]: Starting Kernel process accounting...
Oct 13 16:06:35 ubuntu2204 accton[3241]: Turning on process accounting, file set to '/var/log/account/pacct'.
Oct 13 16:06:35 ubuntu2204 systemd[1]: Finished Kernel process accounting.
```
> **Download** - [Free eBook: "Nagios Monitoring Handbook"][2]
### Monitor User Activity in Linux using psacct or acct
The psacct (Process accounting) package contains following useful utilities to monitor the user and process activities.
* ac - Displays statistics about how long users have been logged on.
* lastcomm - Displays information about previously executed commands.
* accton - Turns process accounting on or off.
* dump-acct - Transforms the output file from the accton format to a human-readable format.
* dump-utmp - Prints utmp files in human-readable format.
* sa - Summarizes information about previously executed commands.
Let us learn how to monitor the activities of Linux users by using each utility with examples.
#### 1. The ac command examples
The **ac** utility will display the report of connect time in hours. It can tell you how long a user or group of users were connected to the system.
##### 1.1. Display total connect time of all users
```
$ ac
```
This command displays the total connect time of all users in hours.
```
total 52.91
```
![Display total connect time of all users][3]
##### 1.2. Show total connect time of all users day-wise
You can break this result down day-wise using the **-d** flag as shown below.
```
$ ac -d
```
**Sample output:**
```
May 11 total 4.29
May 13 total 3.23
May 14 total 7.66
May 15 total 8.97
May 16 total 0.52
May 20 total 4.09
May 24 total 1.32
Jun 9 total 15.18
Jun 10 total 2.97
Jun 22 total 2.61
Jul 19 total 1.95
Today total 0.29
```
![Show total connect of all users by day-wise][4]
##### 1.3. Get total connect time by user-wise
Also, you can display how long each user was connected with the system with **-p** flag.
```
$ ac -p
```
**Sample output:**
```
ostechnix 52.85
root 0.51
total 53.36
```
![Get total connect time by user-wise][5]
##### 1.4. Print total connect time of a specific user
You can also display an individual user's total login time.
```
$ ac ostechnix
```
**Sample output:**
```
total 52.95
```
##### 1.5. View total connect time of a certain user by day-wise
To display individual user's login time by day-wise, run:
```
$ ac -d ostechnix
```
**Sample output:**
```
May 11 total 4.29
May 13 total 3.23
May 14 total 7.66
May 15 total 8.97
May 16 total 0.01
May 20 total 4.09
May 24 total 1.32
Jun 9 total 15.18
Jun 10 total 2.97
Jun 22 total 2.61
Jul 19 total 1.95
Today total 0.68
```
![View total connect time of a certain user by day-wise][6]
For more details, refer the man pages.
```
$ man ac
```
#### 2. The lastcomm command examples
The **lastcomm** utility displays the list of previously executed commands. The most recently executed commands are listed first.
##### 2.1. Display previously executed commands
```
$ lastcomm
```
**Sample output:**
```
systemd-hostnam S root __ 0.06 secs Thu Oct 13 17:21
systemd-localed S root __ 0.06 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
awk ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
uname ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
sed ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
grep ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
grep ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
bash F ostechni pts/1 0.00 secs Thu Oct 13 17:22
[...]
```
##### 2.2. Print last executed commands of a specific user
The above command displays all users' commands. You can display the commands previously executed by a particular user using:
```
$ lastcomm ostechnix
```
**Sample output:**
```
less ostechni pts/1 0.00 secs Thu Oct 13 17:26
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:26
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:26
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:26
gdbus X ostechni __ 0.00 secs Thu Oct 13 17:24
lastcomm ostechni pts/1 0.00 secs Thu Oct 13 17:24
ac ostechni pts/1 0.00 secs Thu Oct 13 17:24
update-notifier F ostechni __ 0.00 secs Thu Oct 13 17:23
apport-checkrep ostechni __ 0.06 secs Thu Oct 13 17:23
apport-checkrep ostechni __ 0.05 secs Thu Oct 13 17:23
systemctl ostechni __ 0.00 secs Thu Oct 13 17:23
apt-check ostechni __ 0.81 secs Thu Oct 13 17:23
dpkg ostechni __ 0.00 secs Thu Oct 13 17:23
ischroot ostechni __ 0.00 secs Thu Oct 13 17:23
dpkg ostechni __ 0.00 secs Thu Oct 13 17:23
[...]
```
##### 2.3. Print the total number of command executions
Also, you can view how many times a particular command has been executed.
```
$ lastcomm apt
```
**Sample output:**
```
apt S root pts/2 0.70 secs Thu Oct 13 16:06
apt F root pts/2 0.00 secs Thu Oct 13 16:06
apt F root pts/2 0.00 secs Thu Oct 13 16:06
```
As you see in the above output, the `apt` command has been executed three times by the `root` user.
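Depending on your version of lastcomm, you can also combine filters, for example to list only the `apt` invocations made by a single user:

```
$ lastcomm --user root --command apt
```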
For more details, refer to the man pages.
```
$ man lastcomm
```
#### 3. The sa command examples
The **sa** utility summarizes information about previously executed commands.
##### 3.1. Print summary of all commands
```
$ sa
```
**Sample output:**
```
1522 1598.63re 0.23cp 0avio 32712k
139 570.90re 0.05cp 0avio 36877k ***other*
38 163.63re 0.05cp 0avio 111445k gdbus
3 0.05re 0.04cp 0avio 12015k apt-check
27 264.27re 0.02cp 0avio 0k kworker/dying*
2 51.87re 0.01cp 0avio 5310464k Docker Desktop
5 0.03re 0.01cp 0avio 785k snap-confine
8 59.48re 0.01cp 0avio 85838k gmain
5 103.94re 0.01cp 0avio 112720k dconf worker
24 3.38re 0.00cp 0avio 2937k systemd-udevd*
7 0.01re 0.00cp 0avio 36208k 5
3 1.51re 0.00cp 0avio 3672k systemd-timedat
2 0.00re 0.00cp 0avio 10236k apport-checkrep
2 0.01re 0.00cp 0avio 4316160k ThreadPoolForeg*
2 0.00re 0.00cp 0avio 8550k package-data-do
3 0.79re 0.00cp 0avio 2156k dbus-daemon
12 0.00re 0.00cp 0avio 39631k ffmpeg
[...]
```
##### 3.2. View number of processes and CPU minutes
To print the number of processes and number of CPU minutes on a per-user basis, run `sa` command with `-m` flag:
```
$ sa -m
```
**Sample output:**
```
1525 1598.63re 0.23cp 0avio 32651k
root 561 647.23re 0.09cp 0avio 3847k
ostechnix 825 780.79re 0.08cp 0avio 47788k
gdm 117 13.43re 0.06cp 0avio 63715k
colord 2 52.01re 0.00cp 0avio 89720k
geoclue 1 1.01re 0.00cp 0avio 70608k
jellyfin 12 0.00re 0.00cp 0avio 39631k
man 1 0.00re 0.00cp 0avio 3124k
kernoops 4 104.12re 0.00cp 0avio 3270k
sshd 1 0.05re 0.00cp 0avio 3856k
whoopsie 1 0.00re 0.00cp 0avio 8552k
```
##### 3.3. Print user id and command name
For each command in the accounting file, print the userid and command name using `-u` flag.
```
$ sa -u
```
**Sample output:**
```
root 0.00 cpu 693k mem 0 io accton
root 0.00 cpu 3668k mem 0 io systemd-tty-ask
root 0.00 cpu 3260k mem 0 io systemctl
root 0.01 cpu 3764k mem 0 io deb-systemd-inv
root 0.00 cpu 722k mem 0 io acct.postinst
root 0.00 cpu 704k mem 0 io rm
root 0.00 cpu 939k mem 0 io cp
root 0.00 cpu 704k mem 0 io rm
root 0.00 cpu 951k mem 0 io find
root 0.00 cpu 911k mem 0 io gzip
root 0.00 cpu 722k mem 0 io sh
root 0.00 cpu 748k mem 0 io install-info
root 0.00 cpu 911k mem 0 io gzip
[...]
```
For more details, refer to the man pages.
```
$ man sa
```
#### 4. The dump-acct and dump-utmp command examples
The **dump-acct** utility converts the accounting output file from the accton format to a human-readable format.
```
$ dump-acct /var/account/pacct
```
dump-utmp displays utmp files in human-readable format.
```
$ dump-utmp /var/run/utmp
```
For more details, refer to the man pages.
```
$ man dump-acct
```
```
$ man dump-utmp
```
#### 5. The accton command examples
The accton command allows you to turn process accounting on or off.
To turn on process accounting, run:
```
$ accton on
```
To turn it off, run:
```
$ accton off
```
For more details, refer to the man pages.
```
$ man accton
```
### Conclusion
Every Linux administrator should be aware of the GNU accounting utilities to keep an eye on all users. These utilities are quite helpful when troubleshooting.
**Resource:**
* [The GNU Accounting Utilities website][7]
--------------------------------------------------------------------------------
via: https://ostechnix.com/monitor-user-activity-linux/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://ostechnix.com/how-to-clear-command-line-history-in-linux/
[2]: https://ostechnix.tradepub.com/free/w_syst04/prgm.cgi
[3]: https://ostechnix.com/wp-content/uploads/2022/10/Display-total-connect-time-of-all-users.png
[4]: https://ostechnix.com/wp-content/uploads/2022/10/Show-total-connect-of-all-users-by-day-wise.png
[5]: https://ostechnix.com/wp-content/uploads/2022/10/Get-total-connect-time-by-user-wise.png
[6]: https://ostechnix.com/wp-content/uploads/2022/10/View-total-connect-time-of-a-certain-user-by-day-wise.png
[7]: https://www.gnu.org/software/acct/manual/accounting.html

View File

@ -0,0 +1,158 @@
[#]: subject: "Learn Bash base64 Encode and Decode With Examples"
[#]: via: "https://www.debugpoint.com/bash-base64-encode-decode/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Learn Bash base64 Encode and Decode With Examples
======
Want to learn about the base64 encode and decode method? Here in this tutorial, we explain the base64 encode and decode steps using bash shell scripting with various examples.
![][1]
The base64 encoding method allows data to be transmitted over any communication medium by converting binary data to text. This method is primarily used for encoding email content and attachments.
The Base64 method, in general, is a binary-to-text encoding scheme that represents 8-bit binary data in an ASCII string format. This has several advantages when transmitting or channelling data across various mediums, especially those that only reliably support text content. Hence, it is widely used on the World Wide Web. Probably the most common use case of this encoding scheme is email attachments.
As per the Base64 representation table, the binary data can be converted to 64 different ASCII characters which are easy to transmit and printable. This encoding method uses the letters A to Z and a to z, the digits 0 to 9, and the + and / characters.
That is a total of 64 ASCII characters to represent the binary values from `000000` to `111111`. Each non-final Base64 digit represents exactly 6 bits of data.
![Base64 Index Table][2]
### Bash base64 encode and decode
#### Syntax
Before you learn about the examples, here is the basic syntax.
```
base64 [OPTIONs] [INFILE] [OUTFILE]
```
* OPTIONS: You can provide any of the options or combine them as explained below.
* INFILE: Input can be picked up from standard input (like the command line) or from files.
* OUTFILE: You can redirect the output to standard output, like the terminal, or to a file.
| Arguments | Descriptions |
| :- | :- |
| -e or encode | This option is used to encode any data from standard input or any file. It is the default option. |
| -d or decode | This option is used to decode any encoded data from standard input or any file. |
| -n or noerrcheck | By default, base64 checks error while decoding any data. You can use n or noerrcheck option to ignore checking at the time of decoding. |
| -i, ignore-garbage | This option is used to ignore non-alphabet characters while decoding. |
| -u or help | This option is used to get information about the usage of this command. |
#### Example 1 A basic encode
In Linux, the base64 utility (part of GNU coreutils) is installed by default. Hence, you can use it from the command line easily. To simply encode a string or text, you can pass it through the command line via a pipe and get the encoded text. In this example, the string debugpoint.com is encoded to base64.
```
echo "debugpoint.com" | base64
```
![bash base64 encode and decode - example 1][3]
The result is the base64 encoded string.
#### Explanation
The encoding method uses several steps to convert the input. The input characters are converted to 8-bit binary values. The entire binary string is then split into 6-bit values, which are converted to decimal numbers.
Each decimal value is translated to the base64 character via the base64 index table.
In the above example, the first character, “d”, is converted to the binary value `01100100`. The first 6 bits are `011001`, which is 25 in decimal. The value 25 maps to Z in the base64 index table. This goes on for the entire stream of text. See the example below.
![Base64 Encode and Decode inner working][4]
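You can reproduce the first step of that calculation yourself by printing the bits of the character “d” (this assumes the `xxd` tool is available, which it usually is alongside vim; the exact spacing of the output may differ):

```
$ echo -n "d" | xxd -b
00000000: 01100100                                              d
```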
#### Example 2 A basic decode
To decode the string, simply pass the encoded value to base64 with the `--decode` option, and it will give you back the exact input string.
![bash base64 encode and decode - example 2 (decode the same example)][5]
#### Example 3 Encode a Text file
The same command can be used to encode a text file and redirect the output to another text file. Here's how.
```
base64 example3.txt > example3-encoded.txt
```
![Encode a text file][6]
#### Example 4 Decode a Text File
And to decode a text file that was encoded using base64, simply use the `--decode` or `-d` switch and pass on the text file name.
```
base64 -d example3-encoded.txt
```
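As with encoding, you can redirect the decoded output to a file instead of the terminal:

```
base64 -d example3-encoded.txt > example3-decoded.txt
```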
#### Example 5 Encode a custom input from the user
Using bash shell scripting, you can take input from the user via the terminal and encode it. To do that, you need to write a simple shell script, make it executable, and run it.
Here's a simple example that takes input from the user and displays the encoded string.
```
#!/bin/bash
#Sample program to take input, encode to base64 and display on terminal
#Example by www.debugpoint.com
echo "Enter text for encoding to base64:"
read input_text
output_text=$(echo -n "$input_text" | base64)
echo "The Base64 Encoded text is: $output_text"
```
![Custom input - base64 encode and decode using script][7]
#### Example 6 A Simple Authentication using base64
You can implement a simple authentication check using the above encode and decode methods. You can ask the user to enter a password or a secret code, then store the encoded secret in a file or compare it on the fly.
If the stored secret matches the user's input, the user is authenticated. This is a very basic way of checking credentials (Base64 is an encoding, not encryption), but it can be useful for simple cases.
```
#!/bin/bash
#Sample program to check a password against a stored base64-encoded secret
#Example by www.debugpoint.com
echo "Type your password"
read pwd1
decoded_text=$(echo 'U2lsZW5jZSBpcyBnb2xkZW4h' | base64 --decode)
if [[ "$pwd1" == "$decoded_text" ]]
then
echo "You are a valid user."
else
echo "You are NOT a valid user."
fi
```
![A Simple Authentication using bash base64][8]
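In case you are wondering where the hard-coded string comes from, it is simply the Base64 form of the secret, which you can generate the same way (the `-n` avoids encoding a trailing newline):
```
echo -n 'Silence is golden!' | base64
# U2lsZW5jZSBpcyBnb2xkZW4h
```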
### Conclusion
I hope these examples help you learn the basics of [Base64][9] encoding and decoding, along with a bit about its inner workings. Let me know in the comment box below if this helped you or if you need additional tutorials on this topic.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/bash-base64-encode-decode/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2021/11/base64example-1024x576.jpg
[2]: https://www.debugpoint.com/wp-content/uploads/2021/11/Base64-Index-Table.png
[3]: https://www.debugpoint.com/wp-content/uploads/2021/11/bash-base64-encode-and-decode-example-1.jpg
[4]: https://www.debugpoint.com/wp-content/uploads/2021/11/Base64-Encode-and-Decode-inner-working.png
[5]: https://www.debugpoint.com/wp-content/uploads/2021/11/bash-base64-encode-and-decode-example-2-decode-the-same-example.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2021/11/Encode-a-text-file.png
[7]: https://www.debugpoint.com/wp-content/uploads/2021/11/Custom-input-base64-encode-and-decode-using-script.png
[8]: https://www.debugpoint.com/wp-content/uploads/2021/11/A-Simple-Authentication-using-bash-base64.png
[9]: https://linux.die.net/man/1/base64

View File

@ -0,0 +1,93 @@
[#]: subject: "Ubuntu Budgie 22.10 Kinetic Kudu: Top New Features"
[#]: via: "https://www.debugpoint.com/ubuntu-budgie-22-10-features/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu Budgie 22.10 Kinetic Kudu: Top New Features
======
Here's a brief overview of the upcoming Ubuntu Budgie 22.10 release and its features.
![Ubuntu Budgie 22.10 KInetic Kudu Desktop][1]
### Ubuntu Budgie 22.10: Top New Features
The Budgie desktop has a dedicated fan base because of its fusion of simplicity, features and performance. Ubuntu Budgie 22.10 is the official Budgie flavour of Ubuntu, giving you the latest Ubuntu base with a stable Budgie desktop.
Ubuntu Budgie 22.10 is a short-term release, supported for nine months until July 2023.
This release of Ubuntu Budgie features Linux Kernel 5.19, which brings run-time Average Power Limiting (RAPL) support for Intel's Raptor Lake and Alder Lake processors, updates for multiple ARM families in the mainline kernel, and the usual processor/GPU and file-system updates. Learn more about Kernel 5.19 features in [this article][2].
At its heart, this version of Ubuntu Budgie is powered by Budgie Desktop 10.6.4, which brings updated applications and additional core tweaks. If you are coming from a prior Ubuntu Budgie version, i.e. 22.04 or 21.10, you will notice some changes, and you should be aware of them.
First of all, the Budgie Control Center gets modern protocols for screen sharing (both RDP and VNC), enhanced fractional scaling, and colour profile support for your monitor. In addition, the Control Center now shows the monitor refresh rate and adds support for Wacom tablets.
Secondly, Budgie Desktop Settings gets a global option to control applet spacing, along with additional refinements.
With the calendar launcher removed, the clock in the top bar now only opens the date/time settings. You can access the calendar from the applet at the extreme right of the top bar.
Ubuntu Budgie 22.10 has fundamentally changed its default application stack. Earlier (22.04 and prior), Budgie featured GNOME applications as the defaults for several desktop functions. Since GNOME has already moved to libadwaita/GTK4 with its own styling and theming, those apps wouldn't look consistent with Budgie's styling.
And that is a fair point, because the rounded corners really do look off next to Budgie/MATE apps.
![Budgie desktop with GTK4-libadwaita and MATE apps][3]
So, from this release onwards, the default app sets are slowly moving away from GNOME Apps to native or MATE apps/alternatives.
For example, GNOME Calculator and System Monitor are now replaced by MATE Calculator and MATE System Monitor. The Atril document viewer replaces Evince, font-manager replaces GNOME Font Viewer, and a list of other GNOME apps are in "wait and watch" mode; they may get replaced in upcoming Budgie desktop point releases.
However, the text editor is still Gedit in this version, which is already a [great choice][4]. And a new native screenshot application debuts with a tweak tool for Raspberry Pi devices.
If you want to learn more about this transition, read the discussion [here][5].
So, that's it for the core changes in this release; here's a quick summary of Ubuntu Budgie 22.10 and its other applications.
### Summary of the core items of Ubuntu Budgie 22.10
#### Core and major items
* [Ubuntu 22.10][6]
* Linux Kernel 5.19
* Budgie desktop version 10.6.4
* Firefox 105.0.1 (Snap version)
* Thunderbird 102.3.2
* LibreOffice 7.4.2.3
#### Other changes
* PipeWire replaces PulseAudio; the PulseAudio package is removed from the ISO.
* WebP support by default
* New Tweak Tool for ARM devices (Raspberry Pi)
* Tilix Terminal 1.9.5
* Transmission torrent client 3.0.0
* GNOME Software 43.0
Finally, you can download the Ubuntu Budgie 22.10 beta from the link below. You can also try out the Raspberry Pi beta image.
* [Link to the beta build][7]
* [Link to Raspberry Pi build][8]
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/ubuntu-budgie-22-10-features/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/Ubuntu-Budgie-22.10-KInetic-Kudu-Desktop-1024x578.jpg
[2]: https://www.debugpoint.com/linux-kernel-5-19/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/10/Budgie-desktop-with-GTK4-libadwaita-and-MATE-apps.jpg
[4]: https://www.debugpoint.com/gedit-features/
[5]: https://discourse.ubuntubudgie.org/t/default-applications-review-for-22-10-23-04-and-beyond/5883
[6]: https://www.debugpoint.com/ubuntu-22-10/
[7]: https://cdimage.ubuntu.com/ubuntu-budgie/releases/kinetic/beta/
[8]: https://sourceforge.net/projects/budgie-remix/files/budgie-raspi-22.10/

View File

@ -0,0 +1,102 @@
[#]: subject: "What you need to know about compiling code"
[#]: via: "https://opensource.com/article/22/10/compiling-code"
[#]: author: "Alan Smithee https://opensource.com/users/alansmithee"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What you need to know about compiling code
======
Use this handy mousetrap analogy to understand compiling code. Then download our new eBook, An open source developer's guide to building applications.
Source code must be compiled in order to run, and in open source software everyone has access to source code. Whether you've written code yourself and you want to compile and run it, or whether you've downloaded somebody's project to try it out, it's useful to know how to process source code through a [compiler][2], and also what exactly a compiler does with all that code.
### Build a better mousetrap
We don't usually think of a mousetrap as a computer, but believe it or not, it does share some similarities with the CPU running the device you're reading this article on. The classic (non-cat) mousetrap has two states: it's either set or released. You might consider that *on* (the kill bar is set and stores potential energy) and *off* (the kill bar has been triggered.) In a sense, a mousetrap is a computer that calculates the presence of a mouse. You might imagine this code, in an imaginary language, describing the process:
```
if mousetrap == 0 then
There's a mouse!
else
There's no mouse yet.
end
```
In other words, you can derive mouse data based on the state of a mousetrap. The mousetrap isn't foolproof, of course. There could be a mouse next to the mousetrap, and the mousetrap would still be registered as *on* because the mouse has not yet triggered the trap. So the program could use a few enhancements, but that's pretty typical.
### Switches
A mousetrap is ultimately a switch. You probably use a switch to turn on the lights in your house. A lot of information is stored in these mechanisms. For instance, people often assume that you're at home when the lights are on.
You could program actions based on the activity of lights on in your neighborhood. If all lights are out, then turn down your loud music because people have probably gone to bed.
A CPU uses the same logic, multiplied by several orders of measure, and shrunken to a microscopic level. When a CPU receives an electrical signal at a specific register, then some other register can be tripped, and then another, and so on. If those registers are made to be meaningful, then there's communication happening. Maybe a chip somewhere on the same motherboard becomes active, or an LED lights up, or a pixel on a screen changes color.
**[[ Related read 6 Python interpreters to try in 2022 ]][3]**
What comes around goes around. If you really want to detect a rodent in more places than the one spot you happen to have a mousetrap set, you could program an application to do just that. With a webcam and some rudimentary image recognition software, you could establish a baseline of what an empty kitchen looks like and then scan for changes. When a mouse enters the kitchen, there's a shift in the pixel values where there was previously no mouse. Log the data, or better yet trigger a drone that focuses in on the mouse, captures it, and moves it outside. You've built a better mousetrap through the magic of on and off signals.
### Compilers
A code compiler translates human-readable code into a machine language that speaks directly to the CPU. It's a complex process because CPUs are legitimately complex (even more complex than a mousetrap), but also because the process is more flexible than it strictly "needs" to be. Not all compilers are flexible. There are some compilers that have exactly one target, and they only accept code files in a specific layout, and so the process is relatively straight-forward.
Luckily, modern general-purpose compilers aren't simple. They allow you to write code in a variety of languages, and they let you link libraries in different ways, and they can target several different architectures. The [GNU C Compiler (GCC)][4] has over 50 lines of options in its `--help` output, and the LLVM `clang` compiler has over 1000 lines in its `--help` output. The GCC manual contains over 100,000 words.
You have lots of options when you compile code.
Of course, most people don't need to know all the possible options. There are sections in the GCC man page I've never read, because they're for Objective-C or Fortran or chip architectures I've never even heard of. But I value the ability to compile code for several different architectures, for 64-bit and 32-bit, and to run open source software on computers the rest of the industry has left behind.
### The compilation lifecycle
Just as importantly, there's real power to understanding the different stages of compiling code. Here's the lifecycle of a simple C program:
1. C source with macros (.c) is preprocessed with `cpp` to render an `.i` file.
2. C source code with expanded macros (.i) is translated with `gcc` to render an `.s` file.
3. A text file in assembly language (.s) is assembled with `as` into an `.o` file.
4. Binary object code with instructions for the CPU, and with offsets not tied to memory areas relative to other object files and libraries (*.o) is linked with `ld` to produce an executable.
5. The final binary file either has all required objects within it, or it's set to load linked dynamic libraries (*.so files).
And here's a simple demonstration you can try (with some adjustment for library paths):
```
$ cat << EOF >> hello.c
#include <stdio.h>
int main(void)
{ printf("hello world\n");
  return 0; }
EOF
$ cpp hello.c > hello.i
$ gcc -S hello.i
$ as -o hello.o hello.s
$ ld -static -o hello \
-L/usr/lib64/gcc/x86_64-slackware-linux/5.5.0/ \
/usr/lib64/crt1.o /usr/lib64/crti.o hello.o \
/usr/lib64/crtn.o  --start-group -lc -lgcc \
-lgcc_eh --end-group
$ ./hello
hello world
```
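For everyday builds, you don't need to drive each stage by hand; `gcc` runs the preprocessor, compiler, assembler, and linker in one step and picks the correct startup files and library paths for your system:
```
$ gcc -o hello hello.c
$ ./hello
hello world
```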
### Attainable knowledge
Computers have become amazingly powerful, and pleasantly user-friendly. Don't let that fool you into believing either of the two possible extremes: computers aren't as simple as mousetraps and light switches, but they also aren't beyond comprehension. You can learn about compiling code, about how to link, and compile for a different architecture. Once you know that, you can debug your code better. You can understand the code you download. You may even fix a bug or two. Or, in theory, you could build a better mousetrap. Or a CPU out of mousetraps. It's up to you.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/compiling-code
作者:[Alan Smithee][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alansmithee
[b]: https://github.com/lkxed
[2]: https://opensource.com/article/19/5/primer-assemblers-compilers-interpreters
[3]: https://opensource.com/article/22/9/python-interpreters-2022
[4]: https://opensource.com/article/22/5/gnu-c-compiler

View File

@ -0,0 +1,315 @@
[#]: subject: "13 Independent Linux Distros That are Built From Scratch"
[#]: via: "https://itsfoss.com/independent-linux-distros/"
[#]: author: "sreenath https://itsfoss.com/author/sreenath/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
13 Independent Linux Distros That are Built From Scratch
======
There are hundreds of Linux distributions available.
But most of them fall into these three categories: Debian, Red Hat (Fedora) and Arch Linux.
Using a distribution based on Debian/Ubuntu, Red Hat/SUSE or Arch Linux has its advantages. They are popular, and hence their package managers offer a huge range of software.
However, some users prefer to use Linux distributions built from scratch, independent of the DEB/RPM packaging systems.
In this article, we will list some of the best Linux distributions developed independently.
**Note:** Obviously, this list excludes popular options like Debian, Ubuntu, and Fedora, which are used as bases for creating new distros. Moreover, the distributions are in no particular order of ranking.
### 1. NixOS
![Image Credits: Distrowatch][1]
Initially released in 2003, NixOS is built on top of the Nix package manager. It provides two releases every year, usually scheduled in May and November.
NixOS may not be a distribution directly geared to new and average users. However, its unique approach to [package management][2] attracts various kinds of users.
Additionally, 32-bit systems are also supported.
##### Other Features:
* Builds packages in isolation
* Reliable upgrade with rollback feature
* Reproducible system configuration
[NixOS][3]
**Related**: [Advanced Linux Distributions for Expert Linux Users][4]
### 2. Gentoo Linux
![Image Credits: Distrowatch][5]
Gentoo Linux is an independently developed distribution aimed mainly at system experts. It is built for users who want the freedom to customize, fine-tune and optimize the operating system to suit their requirements.
Gentoo uses [Portage package management][6] that lets you create and install packages, often allowing you to optimize them for your hardware. **Chromium OS**, the open-source version of Chrome OS, uses Gentoo at its core.
Not to forget, Gentoo is one of those [distributions that still support 32-bit architectures][7].
##### Other Features:
* Incremental Updates
* Source-based approach to software management
* Concept of overlay repositories like GURU (Gentoo's user repository), where users can add packages not yet provided by Gentoo
[Gentoo Linux][8]
### 3. Void Linux
![Image Credits: Distrowatch][9]
Void Linux is a [rolling release distribution][10] with its own X Binary Package System (XBPS) for installing and removing software. It was created by **Juan Romero Pardines**, a former NetBSD developer.
It avoids systemd and instead uses runit as its init system. Furthermore, it gives you the option to use several [desktop environments][11].
##### Other Features:
* Minimal system requirements
* Offers an official repository for non-free packages
* Support for Raspberry Pi
* Integration of OpenBSD's LibreSSL software
* Support for musl C library
* 32-bit support
[Void Linux][12]
**Related:** [Not a Systemd Fan? Here are 13+ Systemd-Free Linux Distributions][13]
### 4. Solus Linux
![solus budgie 2022][14]
Formerly known as EvolveOS, Solus Linux is built from scratch and offers some exciting features. Its flagship edition features its own homegrown Budgie desktop environment.
Compared to other options, Solus Linux is one of the few independent distributions that new Linux users can use. It manages to be one of the [best Linux distributions][15] available.
It uses eopkg package management with a semi-rolling release model. As per the developers, Solus is exclusively developed for personal computing purposes.
##### Other Features:
* Available in Budgie, Gnome, MATE, and KDE Plasma editions
* Variety of software out of the box, which reduces setup efforts
[Solus Linux][16]
### 5. Mageia
![Image Credits: Distrowatch][17]
Mageia started as a fork of Mandriva Linux back in 2010. It aims to be a stable and secure operating system for desktop and server usage.
Mageia is a community-driven project supported by a non-profit organization and elected contributors. You will notice a major release every year.
##### Other Features
* Supports 32-bit system
* KDE Plasma, Gnome, and XFCE editions are available from the website
* Minimal system requirements
[Mageia][18]
**Related:** **[Linux Distros That Still Support 32-Bit Systems][19]**
### 6. Clear Linux
![Image Credits: Distrowatch][20]
Clear Linux is a distribution by Intel, primarily designed with performance and cloud use cases in mind.
One interesting thing about Clear Linux is that the operating system upgrades as a whole rather than as individual packages. So, even if you accidentally mess up the system, it should boot correctly, performing a factory reset to let you set it up again.
It is not geared toward personal use. But it can be a unique choice to try.
##### Other Features:
* Highly tuned for Intel platforms
* A strict separation between User and System files
* Constant vulnerability scanning
[Clear Linux OS][21]
### 7. PCLinuxOS
![Image Credits: Distrowatch][22]
PCLinuxOS is an x86_64 Linux distribution that uses APT-RPM packages. You can get KDE Plasma, Mate, and XFCE desktops, while it also offers several community editions featuring more desktops.
Locally installed versions of PCLinuxOS utilize the APT package management system thanks to [Synaptic package manager][23]. You can also find rpm packages from its repositories.
##### Other Features:
* mylivecd script allows the user to take a snapshot of their current hard drive installation (all settings, applications, documents, etc.) and compress it into an ISO CD/DVD/USB image.
* Additional support for over 85 languages.
[PCLinuxOS][24]
### 8. 4MLinux
![4m linux 2022][25]
[4MLinux][26] is a general-purpose Linux distribution with a strong focus on the following four **“M”**  of computing:
* Maintenance (system rescue Live CD)
* Multimedia (full support for a huge number of image, audio and video formats)
* Miniserver (DNS, FTP, HTTP, MySQL, NFS, Proxy, SMTP, SSH, and Telnet)
* Mystery (meaning a collection of classic Linux games)
It has a minimal system requirement and is available as a desktop and server version.
##### Other Features
* Support for large number of image, audio/video formats
* Small and general-purpose Linux distribution
[4MLinux][27]
### 9. Tiny Core Linux
![Image Credits: Distrowatch][28]
Tiny Core Linux focuses on providing a base system using BusyBox and FLTK. It is not a complete desktop, so do not expect it to run everything out of the box.
It represents only the core needed to boot into a very minimal X desktop, typically with wired internet access.
The user gets great control over everything, but it may not be an easy out-of-the-box experience for new Linux users.
##### Other Features
* Designed to run from a RAM copy created at boot time
* By default, operates like a cloud/internet client
* Users can run appbrowser to browse repositories and download applications
[Tiny Core Linux][29]
### 10. Linux From Scratch
![Image Credit: Reddit][30]
[Reddit][31]
Linux From Scratch is a way to install a working Linux system by building all its components manually. Once completed, it provides a compact, flexible and secure system and a greater understanding of the internal workings of the Linux-based operating systems.
If you need to dive deep into how a Linux system works and explore its nuts and bolts, Linux From Scratch is the project you need to try.
##### Other Features
* Customised Linux system, entirely from scratch
* Extremely flexible
* Offers added security because you compile everything yourself from source
[Linux From Scratch][32]
### 11. Slackware
![Image Credits: Distrowatch][33]
Slackware is the oldest distribution that is still actively maintained. Originally created in 1993 with Softlanding Linux System as its base, Slackware later became the base for many other Linux distributions.
Slackware aims at producing the most UNIX-like Linux distribution while keeping simplicity and stability.
##### Other Features
* Available for 32-bit and 64-bit systems
* Extensive online documentation
* Can run on anything from Pentium-class systems to the latest machines
[Slackware][34]
### 12. Alpine Linux
![alpine linux xfce 2022][35]
Alpine Linux is a community-developed operating system designed for routers, firewalls, VPNs, VoIP boxes, and servers. It began as a fork of the LEAF Project.
Alpine Linux uses apk-tools package management, initially written as a shell script and later written in C programming language. This is a minimal Linux distribution, which still supports 32-bit systems and can be installed as a run-from-RAM operating system.
##### Other Features:
* Provides a minimal container image of just 5 MB in size
* 2-year support for the main repository and support until the next stable release for the community repository
* Made around musl libc and Busybox with resource-efficient containers
[Alpine Linux][36]
### 13. KaOS
![Image Credits: Distrowatch][37]
KaOS is a Linux distribution built from scratch and inspired by Arch Linux. It uses [pacman for package management][38]. It is built with the philosophy “*One Desktop Environment (KDE Plasma), One Toolkit (Qt), One Architecture (x86_64)*“.
It has limited repositories, but still, it offers plenty of tools for a regular user.
##### Other Features:
* Most up-to-date Plasma desktop
* Tightly integrated rolling and transparent distribution for the modern desktop
[KaOS][39]
#### Wrapping Up
If you need a unique experience, these independent Linux distributions should serve the purpose.
However, if you want one of these to replace a mainstream distribution like Ubuntu on your desktop, you might want to think twice, considering that most (if not all) of the options above are not ideal for day-to-day desktop usage.
But then again, if you have a fair share of experience with Linux distributions, you can undoubtedly take up the task for an adventure!
*If you were to try one of these indie distros, which one would it be? Share with us in the comments.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/independent-linux-distros/
作者:[sreenath][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sreenath/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/10/nixos-2022.png
[2]: https://itsfoss.com/package-manager/
[3]: https://nixos.org/
[4]: https://itsfoss.com/advanced-linux-distros/
[5]: https://itsfoss.com/wp-content/uploads/2022/08/gentoo-linux-plasma.jpg
[6]: https://wiki.gentoo.org/wiki/Portage
[7]: https://itsfoss.com/32-bit-linux-distributions/
[8]: https://www.gentoo.org/
[9]: https://itsfoss.com/wp-content/uploads/2022/08/void-linux.jpg
[10]: https://itsfoss.com/rolling-release/
[11]: https://itsfoss.com/best-linux-desktop-environments/
[12]: https://voidlinux.org/
[13]: https://itsfoss.com/systemd-free-distros/
[14]: https://itsfoss.com/wp-content/uploads/2022/10/solus-budgie-2022.jpg
[15]: https://itsfoss.com/best-linux-distributions/
[16]: https://getsol.us/home/
[17]: https://itsfoss.com/wp-content/uploads/2022/08/mageia-1.jpg
[18]: https://www.mageia.org/en/
[19]: https://itsfoss.com/32-bit-linux-distributions/
[20]: https://itsfoss.com/wp-content/uploads/2022/08/clear-linux-desktop.png
[21]: https://clearlinux.org/
[22]: https://itsfoss.com/wp-content/uploads/2022/08/pclinuxos.png
[23]: https://itsfoss.com/synaptic-package-manager/
[24]: https://www.pclinuxos.com/
[25]: https://itsfoss.com/wp-content/uploads/2022/10/4m-linux-2022.jpg
[26]: https://itsfoss.com/4mlinux-review/
[27]: http://4mlinux.com/
[28]: https://itsfoss.com/wp-content/uploads/2022/03/tinycore.jpg
[29]: http://www.tinycorelinux.net/
[30]: https://itsfoss.com/wp-content/uploads/2022/08/enable-aur-e1659974408774.png
[31]: https://www.reddit.com/r/linuxmasterrace/comments/udi7ts/decided_to_try_lfs_in_a_vm_started_about_a_week/
[32]: https://www.linuxfromscratch.org/
[33]: https://itsfoss.com/wp-content/uploads/2022/10/slackware-scaled.jpg
[34]: http://www.slackware.com/
[35]: https://itsfoss.com/wp-content/uploads/2022/10/alpine-linux-xfce-2022.png
[36]: https://www.alpinelinux.org/
[37]: https://itsfoss.com/wp-content/uploads/2022/08/kaos-desktop.png
[38]: https://itsfoss.com/pacman-command/
[39]: https://kaosx.us/

View File

@ -0,0 +1,69 @@
[#]: subject: "Can Kubernetes help solve automation challenges?"
[#]: via: "https://opensource.com/article/22/10/kubernetes-solve-automation-challenges"
[#]: author: "Rom Adams https://opensource.com/users/romdalf"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Can Kubernetes help solve automation challenges?
======
Automation at the organization level has been an elusive goal, but Kubernetes might be able to change all that.
I started my automation journey when I adopted Gentoo Linux as my primary operating system in 2002. Twenty years later, automation is not yet a done deal. When I meet with customers and partners, they share automation wins within teams, but they also describe the challenges to achieving similar success at an organizational level.
Most IT organizations have the ability to provision a virtual machine end to end, reducing what used to be a four-week lead time to just five minutes. That level of automation is itself a complex workflow, requiring networking (IP address management, DNS, proxy, networking zones, and so on), identity access management, [hypervisor][2], storage, backup, updating the operating system, applying the latest configuration files, monitoring, security and hardening, and compliance benchmarking. Wow!
It's not easy to address the business need for high velocity, scaling, and on-demand automation. For instance, consider the classic webshop or an online government service to file tax returns. The workload has well-defined peaks that need to be absorbed.
A common approach for handling such a load is having an oversized server farm, ready to be used by a specialized team of IT professionals, monitoring the seasonal influx of customers or citizens. Everybody wants a just-in-time deployment of an entire stack. They want infrastructure running workloads within the context of a hybrid cloud scenario, using the model of "build-consume-trash" to optimize costs while benefiting from infinite elasticity.
In other words, everybody wants the utopian "cloud experience."
### Can the cloud really deliver?
All is not lost, thanks mainly to the way [Kubernetes][3] has been designed. The exponential adoption of Kubernetes fuels innovation, displacing standard legacy practices for managing platforms and applications. Kubernetes requires the use of Everything-as-Code (EaC) to define the desired state of all resources, from simple compute nodes to TLS certificates. Kubernetes compels the use of three major design constructs:
* A standard interface to reduce integration friction between internal and external components
* An API-first and API-only approach to standardize the CRUD (Create, Read, Update, Delete) operations of all its components
* Use of [YAML][4] as a common language to define all desired states of these components in a simple and readable way
These three key components are essentially the same requirements for choosing an automation platform, at least if you want to ease adoption by cross-functional teams. This also blurs the separation of duties between teams, helping to improve collaboration across silos, which is a good thing!
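As a concrete sketch, here is what such a desired state can look like in practice: a hypothetical `webshop` Deployment declared in YAML and applied from the shell. The resource name and the container image below are placeholders for illustration only:
```
# Declare the desired state: always keep three replicas of the webshop running
kubectl apply -f - << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webshop
  template:
    metadata:
      labels:
        app: webshop
    spec:
      containers:
      - name: webshop
        image: nginx:1.23   # placeholder image
        ports:
        - containerPort: 80
EOF
```
The same manifest works unchanged whether the cluster runs on-premises or at a cloud provider, which is exactly the portability the EaC approach is after.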
As a matter of fact, customers and partners adopting Kubernetes are ramping up to a state of hyper-automation. Kubernetes organically drives teams to adopt multiple [DevOps foundations and practices][5]—like EaC, [version control with Git][6], peer reviews, [documentation as code][7]—and encourages cross-functional collaboration. These practices help mature a team's automation skills, and they help a team get a good start in GitOps and CI/CD pipelines dealing with both application lifecycle and infrastructure.
### Making automation a reality
You read that right! The entire stack for complex systems like a webshop or government reporting can be defined in clear, understandable, universal terms that can be executed on any on-prem or cloud provider. An autoscaler with custom metrics can be defined to trigger a just-in-time deployment of your desired stack to address the influx of customers or citizens during seasonal peaks. When metrics are back to normal, and cloud compute resources don't have a reason to exist anymore, you trash them and return to regular operations, with a set of core assets on-prem taking over the business until the next surge.
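As a minimal sketch of that elasticity, the built-in `kubectl autoscale` command wires a HorizontalPodAutoscaler to the hypothetical Deployment above; it uses the standard CPU metric here, while custom metrics follow the same HorizontalPodAutoscaler API:
```
# Scale the webshop between 3 and 50 replicas based on average CPU utilization
kubectl autoscale deployment webshop --min=3 --max=50 --cpu-percent=70
```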
### The chicken and the egg paradox
Considering Kubernetes and cloud-native patterns, automation is a must. But it raises an important question: Can an organization adopt Kubernetes before addressing the automation strategy?
It might seem that starting with Kubernetes could inspire better automation, but that's not a foregone conclusion. A tool is not an answer to the problem of skills, practices, and culture. However, a well-designed platform can be a catalyst for learning, change, and cross-functional collaboration within an IT organization.
### Get started with Kubernetes
Even if you feel you missed the automation train, don't be afraid to start with Kubernetes on an easy, uncomplicated stack. Embrace the simplicity of this fantastic orchestrator and iterate with more complex needs once you've [mastered the initial steps][8].
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/10/kubernetes-solve-automation-challenges
作者:[Rom Adams][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/romdalf
[b]: https://github.com/lkxed
[2]: https://www.redhat.com/en/topics/virtualization/what-is-a-hypervisor?intcmp=7013a000002qLH8AAM
[3]: https://www.redhat.com/en/topics/containers/what-is-kubernetes?intcmp=7013a000002qLH8AAM
[4]: https://opensource.com/article/21/9/yaml-cheat-sheet
[5]: https://opensource.com/resources/devops
[6]: https://opensource.com/life/16/7/stumbling-git
[7]: https://opensource.com/article/21/3/devops-documentation
[8]: https://opensource.com/article/17/11/getting-started-kubernetes

View File

@ -0,0 +1,193 @@
[#]: subject: "Celebrating KDEs 26th Birthday With Some Inspiring Facts From Its Glorious Past!"
[#]: via: "https://itsfoss.com/kde-facts-trivia/"
[#]: author: "Avimanyu Bandyopadhyay https://www.gizmoquest.com"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Celebrating KDE's 26th Birthday With Some Inspiring Facts From Its Glorious Past!
======
Wishing a Very Happy Birthday to **KDE**!
Let us celebrate this moment by looking back on its glorious history with some inspiring facts on this legendary and much-loved Desktop Environment!
![kde birthday][1]
### KDE's Origin
**26 years ago**, [Matthias Ettrich][2] (a German computer scientist currently working at [HERE][3]) founded KDE.
![matthias 2950607241][4]
When Matthias was a student at the [Eberhard Karls University of Tübingen][5], he was not satisfied as a [Common Desktop Environment (CDE)][6] user.
CDE is a desktop environment for Unix.
However, he wanted an interface that was more comfortable, simpler, and easy to use, with a better look and feel.
So, in 1996, Matthias Ettrich announced the **Kool Desktop Environment (KDE)**, a GUI (graphical user interface) for Unix systems, built with Qt and C ++.
Note that the full form of KDE was an innocent pun on CDE at the time. These days it is not usually spelled out as "Kool Desktop Environment"; it is just KDE. You can read the [original announcement post][7] to get a dose of nostalgia.
**Trivia**: The official mascot of KDE is Konqi, who has a girlfriend named Katie. Previously, the mascot was a wizard named [Kandalf][8], but he was later replaced by Konqi because many people loved and preferred this charming and friendly dragon as the mascot!
![Konqi is KDE's mascot. Katie is his girlfriend and mascot of KDE women's project.][9]
[Konqi][10]
[Katie][11]
And here's how it looked with the KDE mascot:
![Screenshot of earlier version of KDE desktop][12]
### 13 Interesting and Inspiring Facts on KDE
We've looked back at some interesting and inspiring events from the last 26 years of the KDE project:
#### 1. Early Development Events
15 developers met in Arnsberg, Germany, in 1997, to work on the KDE project and discuss its future. This event came to be known as [KDE One][13] followed by [KDE Two][14] and [KDE Three][15] and so on in the later years. They even had [one][16] for a beta version.
#### 2. The KDE Free Qt Foundation Agreement
The foundation agreement for the [KDE Free Qt Foundation][17] was signed by [KDE e.V.][18] and [Trolltech][19], then owner of the Qt Foundation who [ensured the permanent availability][20] of Qt as free software.
#### 3. First Stable Version
![kde 1][21]
The [first stable version][22] of KDE was released in **1998**, in addition to highlighting an application development framework, the [KOM/OpenParts][23], and an office suite preview. You can check out the [KDE 1.x Screenshots][24].
#### 4. The KDE Women Initiative
The community women's group, [KDE Women][25], was created and announced in March 2001 with the primary goal of increasing the number of women in free software communities, particularly in KDE.
#### 5. 1 Million Commits
The community [reached 1 million commits][26] in July 2009, only 19 months after hitting 750,000 in December 2007 (and up from 500,000 in January 2006), during the era of the KDE 4 releases.
#### 6. Release Candidate of Development Platform Announced
A [release candidate][27] of KDE's development platform, consisting of the basic libraries and tools needed to develop KDE applications, was announced in October 2007.
#### 7. First KDE & Qt event in India
The [first conference][28] of the KDE and Qt community in India happened in Bengaluru in March 2011, and it became an annual event from then on.
#### 8. GCompris and KDE
In **December 2014**, the educational software suite [GCompris joined][29] the [project incubator of KDE community][30] (We have [previously][31] discussed GCompris, which is bundled with Escuelas Linux, a comprehensive educational distro for teachers and students).
#### 9. KDE Slimbooks
In **2016**, the KDE community partnered with a Spanish laptop retailer and [announced the launch of the KDE Slimbook][32], an ultrabook with KDE Plasma and KDE Applications pre-installed. Slimbook offers a pre-installed version of [KDE Neon][33] and [can be purchased from their website][34].
#### 10. Transition to GitLab
In **2019**, KDE [migrated][35] from Phabricator to GitLab to enhance the development process and give new contributors easy access to the workflow. However, KDE still uses Bugzilla for tracking bugs.
#### 11. Adopts Decentralized Matrix Protocol
KDE added Matrix bridge to the IRC channels and powered up its native chat clients using the open-source decentralized Matrix protocol in **2019**.
#### 12. KDE PinePhone
KDE developers teamed up with [PINE64][36] in **2020** to introduce a community edition PinePhone powered by KDE.
#### 13. Valve Picks KDE for Steam Deck
Steam Deck is undoubtedly a super trending Linux gaming console right now. And, Valve chose KDE as its desktop environment to make it work in **2021**.
### Today, KDE is Powered by Three Great Projects
#### KDE Plasma
Previously called Plasma Workspaces, KDE Plasma facilitates a unified workspace environment for running and managing applications on various devices like desktops, netbooks, tablets or even [smartphones][37].
Currently, [KDE Plasma 5.26][38] is the most recent version and was released some days ago. The KDE Plasma 5 project is the fifth generation of the desktop environment and is the successor to KDE Plasma 4.
#### KDE Applications
KDE Applications are a bundled set of applications and libraries designed by the KDE community. Most of these applications are cross-platform, though primarily made for Linux.
A very [nice][39] project in this category is Elisa, a music player focused on optimised integration with Plasma.
#### KDE Development Platform
The KDE Development Platform significantly empowers the above two initiatives: it is a collection of libraries and software frameworks released by KDE to promote better collaboration within the community on developing KDE software.
**Personal Note**: It was an honour to cover this article on KDE's birthday, and I would like to take this opportunity to briefly mention some of my personal favourite KDE-based apps and distros that I have used extensively in the past and continue to use.
Check out the entire [timeline][40] for a more comprehensive outline, or take a look at 19 years of visual changes in this interesting video:
![A Video from YouTube][41]
### Best KDE-Based Distributions
If you have heard all the good things about KDE, you should try out the distributions powered by KDE.
We have a [list of Linux distributions based on KDE][42], if you are curious.
*Hope you liked our favourite moments from KDE history on its 26th anniversary! Please share your thoughts and memorable experiences with KDE in the comments below.*
This article was originally published in 2018, and has been edited to reflect latest information.
--------------------------------------------------------------------------------
via: https://itsfoss.com/kde-facts-trivia/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.gizmoquest.com
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2022/10/kde-birthday.png
[2]: https://en.wikipedia.org/wiki/Matthias_Ettrich
[3]: https://here.com/
[4]: https://itsfoss.com/wp-content/uploads/2022/10/matthias-2950607241.jpg
[5]: https://www.uni-tuebingen.de/en
[6]: https://en.wikipedia.org/wiki/Common_Desktop_Environment
[7]: https://kde.org/announcements/announcement/
[8]: https://en.wikipedia.org/wiki/Konqi#Kandalf
[9]: https://itsfoss.com/wp-content/uploads/2018/10/Konqi-and-Katie.jpg
[10]: https://en.wikipedia.org/wiki/Konqi
[11]: https://community.kde.org/Katie
[12]: https://itsfoss.com/wp-content/uploads/2018/10/Konqi-from-the-early-days-who-replaced-Kandalf-right.jpg
[13]: https://community.kde.org/KDE_Project_History/KDE_One_(Developer_Meeting)
[14]: https://community.kde.org/KDE_Project_History/KDE_Two_(Developer_Meeting)
[15]: https://community.kde.org/KDE_Project_History/KDE_Three_(Developer_Meeting)
[16]: https://community.kde.org/KDE_Project_History/KDE_Three_Beta_(Developer_Meeting)
[17]: https://www.kde.org/community/whatiskde/kdefreeqtfoundation.php
[18]: https://www.kde.org/announcements/fsfe-associate-member.php
[19]: https://dot.kde.org/2007/02/28/trolltech-becomes-first-corporate-patron-kde
[20]: https://dot.kde.org/2016/01/13/qt-guaranteed-stay-free-and-open-%E2%80%93-legal-update
[21]: https://itsfoss.com/wp-content/uploads/2022/10/kde-1.jpg
[22]: https://www.kde.org/announcements/announce-1.0.php
[23]: https://www.kde.org/kdeslides/Usenix1998/sld016.htm
[24]: https://czechia.kde.org/screenshots/kde1shots.php
[25]: https://community.kde.org/KDE_Women
[26]: https://dot.kde.org/2009/07/20/kde-reaches-1000000-commits-its-subversion-repository
[27]: https://www.kde.org/announcements/announce-4.0-platform-rc1.php
[28]: https://dot.kde.org/2010/12/28/confkdein-first-kde-conference-india
[29]: https://dot.kde.org/2014/12/11/gcompris-joins-kde-incubator-and-launches-fundraiser
[30]: https://community.kde.org/Incubator
[31]: https://itsfoss.com/escuelas-linux/
[32]: https://dot.kde.org/2017/01/26/kde-and-slimbook-release-laptop-kde-fans
[33]: https://en.wikipedia.org/wiki/KDE_neon
[34]: https://slimbook.es/en/store/slimbook-kde
[35]: https://pointieststick.com/2020/05/23/this-week-in-kde-we-have-migrated-to-gitlab/
[36]: https://www.pine64.org
[37]: https://play.google.com/store/apps/details?id=org.kde.kdeconnect_tp
[38]: https://news.itsfoss.com/kde-plasma-5-26-release/
[39]: https://mgallienkde.wordpress.com/2018/10/09/0-3-release-of-elisa-music-player/
[40]: https://timeline.kde.org/
[41]: https://youtu.be/1UG4lQOMBC4
[42]: https://itsfoss.com/best-kde-distributions/

View File

@ -0,0 +1,230 @@
[#]: subject: "How To Restrict Access To Linux Servers Using TCP Wrappers"
[#]: via: "https://ostechnix.com/restrict-access-linux-servers-using-tcp-wrappers/"
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How To Restrict Access To Linux Servers Using TCP Wrappers
======
In this guide, we are going to learn **what TCP Wrappers is**, what it is used for, how to **install TCP Wrappers in Linux**, and how to **restrict access to Linux servers using TCP Wrappers**.
### What is TCP Wrappers?
**TCP Wrappers** (also known as **tcp_wrapper**) is an open source host-based ACL (Access Control List) system, which is used to restrict access to TCP network services based on hostname, IP address, network address, and so on. It decides which hosts are allowed to access a specific network service.
TCP Wrappers was developed by the Dutch programmer and physicist **Wietse Zweitze Venema** in 1990 at the Eindhoven University of Technology. He maintained it until 1995, and it was released under a BSD License in 2001.
### Is TCP Wrappers a replacement for Firewalls?
**No.** Please be aware that **TCP Wrappers is not a complete replacement for a properly configured firewall**. It is just a **valuable addition to enhance your Linux server's security**.
Some Linux distributions, such as Debian and Ubuntu, have dropped TCP Wrappers from their official repositories, because the last version of tcp_wrappers was released about 20 years ago. At that time, it was a very powerful tool to "block all traffic".
However, these days we can do the same thing using firewalls (iptables/nftables) for all traffic at the **network level**, or use similar filtering at the application level. TCP Wrappers, in contrast, blocks incoming connections at the application level only.
If you still prefer to use TCP Wrappers for any reason, it is always recommended to use TCP Wrappers in conjunction with a properly configured firewall and other security mechanisms and tools to harden your Linux server's security.
### Install TCP Wrappers in Linux
TCP Wrappers is available in the official repositories of most Linux operating systems.
Depending upon the Linux distribution you use, TCP Wrappers can be installed as shown below.
**On Arch-based systems**, make sure the [Community] repository is enabled and run the following command to install TCP Wrappers on Arch Linux and its variants, such as EndeavourOS and Manjaro Linux:
```
$ sudo pacman -S tcp-wrappers
```
**On Fedora, RHEL, CentOS, AlmaLinux and Rocky Linux:**
Make sure you've enabled the **[EPEL]** repository:
```
$ sudo dnf install epel-release
```
And then install TCP wrappers using command:
```
$ sudo dnf install tcp_wrappers
```
On RHEL 6 systems, you need to use yum instead of dnf to install TCP wrappers.
```
$ sudo yum install tcp_wrappers
```
### Configure TCP Wrappers
TCP Wrappers implements access control with the help of two configuration files:
* /etc/hosts.allow
* /etc/hosts.deny
These two access control list files decide whether or not specific clients are allowed to access your Linux server.
#### The /etc/hosts.allow file
The `/etc/hosts.allow` file contains the list of allowed or non-allowed hosts or networks. This means we can either allow or deny connections to network services by defining access rules in this file.
#### The /etc/hosts.deny file
The `/etc/hosts.deny` file contains the list of hosts or networks that are not allowed to access your Linux server. The access rules in this file can also be set up in `/etc/hosts.allow` with a **'deny'** option.
The typical syntax to define an access rule is:
```
daemon_list : client_list : option : option ...
```
Where,
* daemon_list - The name of a network service daemon, such as sshd, vsftpd or portmap.
* client_list - The comma-separated list of valid hostnames, IP addresses or network addresses.
* options - An optional action that specifies something to be done whenever a rule is matched.
The syntax is the same for both files.
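As an illustration of the option field, the `spawn` option (documented in the hosts_options(5) man page) can run a shell command whenever a rule matches; the log file path below is just a placeholder:
```
# /etc/hosts.allow - allow the local subnet to reach sshd and log every match
sshd : 192.168.43. : spawn /bin/echo "$(date) sshd request from %h" >> /var/log/tcpwrappers.log
```
Here `%h` expands to the client's host name or address at the time of the match.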
### Rules to remember
Before using TCP Wrappers, you need to know the following important rules. Please be mindful that TCP Wrappers consults only these two files (hosts.allow and hosts.deny).
* The access rules in the `/etc/hosts.allow` file are applied first. They take precedence over rules in the `/etc/hosts.deny` file. Therefore, if access to a service is allowed in `/etc/hosts.allow`, a rule denying access to that same service in `/etc/hosts.deny` is ignored.
* Only one rule per service is allowed in each file (`hosts.allow` and `hosts.deny`).
* The order of the rules is very important. Only the first matching rule for a given service will be taken into account. The same applies to both files.
* If there is no matching rule for a service in either file, or if neither file exists, then access to the service is granted to all remote hosts.
* Any change in either file takes effect immediately, without restarting the network services.
### Restrict Access To Linux Servers Using TCP Wrappers
The recommended approach to secure a Linux server is to **block all incoming connections**, and allow only a few specific hosts or networks.
To do so, edit **/etc/hosts.deny** file:
```
$ sudo vi /etc/hosts.deny
```
Add the following line. This line refuses connections to ALL services and ALL networks.
```
ALL: ALL
```
Then, edit **/etc/hosts.allow** file:
```
$ sudo vi /etc/hosts.allow
```
and allow the specific hosts or networks of your choice.
```
sshd: 192.168.43.192 192.168.43.193
```
You can also specify valid hostnames instead of IP address as shown below.
```
sshd: server1.ostechnix.lan server2.ostechnx.lan
```
Alternatively, you can do the same by defining all rules (both allow and deny) in `/etc/hosts.allow` file itself.
Edit **/etc/hosts.allow** file and add the following lines.
```
sshd: 192.168.43.192 192.168.43.193
sshd: ALL: DENY
```
In this case, you don't need to specify any rule in `/etc/hosts.deny` file.
As per the above rules, incoming SSH connections are denied for all hosts except the two hosts 192.168.43.192 and 192.168.43.193.
Now, try to SSH to your Linux server from any hosts except the above hosts, you will get the following error.
```
ssh_exchange_identification: read: Connection reset by peer
```
You can verify this from your Linux server's log files as shown below.
```
$ cat /var/log/secure
```
**Sample output:**
```
Jun 16 19:40:17 server sshd[15782]: refused connect from 192.168.43.150 (192.168.43.150)
```
Similarly, you can define rules for other services, say for example vsftpd, in `/etc/hosts.allow` file as shown below.
```
vsftpd: 192.168.43.192
vsftpd: ALL: DENY
```
Again, you don't need to define any rules in `/etc/hosts.deny` file. As per the above rule, a remote host with IP address 192.168.43.192 is allowed to access the Linux server via FTP. All other hosts will be denied.
Also, you can define the access rules in different formats in /etc/hosts.allow file as shown below.
```
sshd: 192.168.43.192 #Allow a single host for SSH service
sshd: 192.168.43.0/255.255.255.0 #Allow a /24 prefix for SSH
vsftpd: 192.168.43.192 #Allow a single host for FTP
vsftpd: 192.168.43.0/255.255.255.0 #Allow a /24 prefix for FTP
vsftpd: server1.ostechnix.lan #Allow a single host for FTP
```
#### Allow all hosts except a specific host
You can allow incoming connections from all hosts, but not from a specific host. Say for example, to allow incoming connections from all hosts in the **192.168.43** subnet, but not from the host **192.168.43.192**, add the following line in `/etc/hosts.allow` file.
```
ALL: 192.168.43. EXCEPT 192.168.43.192
```
In the above case, you don't need to add any rules in /etc/hosts.deny file.
Or you can specify the hostname instead of IP address as shown below.
```
ALL: .ostechnix.lan EXCEPT badhost.ostechnix.lan
```
For more details, refer to the man pages.
```
$ man tcpd
```
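If your system also provides the `tcpdmatch` utility (shipped with the TCP Wrappers package on many distributions), you can predict how your rules apply to a given client without making a real connection:
```
$ tcpdmatch sshd 192.168.43.192    # allowed by the rules above
$ tcpdmatch sshd 192.168.43.150    # denied by the rules above
```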
### Conclusion
As you can see, securing network services in your Linux systems with TCP Wrappers is easy! But keep in mind that TCP Wrapper is not a replacement for a firewall. It should be used in conjunction with firewalls and other security tools.
**Resource:**
* [Wikipedia][1]
--------------------------------------------------------------------------------
via: https://ostechnix.com/restrict-access-linux-servers-using-tcp-wrappers/
作者:[sk][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://ostechnix.com/author/sk/
[b]: https://github.com/lkxed
[1]: https://en.wikipedia.org/wiki/TCP_Wrapper

View File

@ -0,0 +1,111 @@
[#]: subject: "Install Gedit on Ubuntu 22.10 and Make it Default Text Editor"
[#]: via: "https://itsfoss.com/install-gedit-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install Gedit on Ubuntu 22.10 and Make it Default Text Editor
======
[GNOME has a brand new text editor][1] to replace the good old Gedit editor.
While it was already available with GNOME 42, Ubuntu 22.04 relied on Gedit.
This is changing in Ubuntu 22.10. GNOME Text Editor is the default here and Gedit is not even installed.
![Searching for text editor only brings GNOME Text Editor][2]
While the new editor is good enough, not everyone will like it. This is especially true if you use Gedit extensively with additional plugins.
If you are among those people, let me show you how to install Gedit on Ubuntu. I'll also share how you can make it the default text editor.
### Install Gedit on Ubuntu
This is actually a no-brainer. While Gedit is not installed by default, it is still available in the Ubuntu repositories.
So, all you have to do is use the apt command to install it:
```
sudo apt install gedit
```
Gedit is also available in the Software Center, but that is the Snap package. You could install it if you want.
![Gedit is also available in Ubuntus Snap Store][3]
#### Install Gedit Plugins (optional)
By default, Gedit gives you the option to access a few plugins. You can enable or disable them from Menu -> Preferences -> Plugins.
![Accessing plugins in Gedit][4]
You should see the available plugins here. The installed or in-use plugins are checked.
![See the available and installed plugins in Gedit][5]
However, you can take the plugin selection to the next level by installing the gedit-plugins meta package.
```
sudo apt install gedit-plugins
```
This will give you access to additional plugins like bookmarks, bracket completion, Python console and more.
![Additional Gedit plugins][6]
**Tip**: If you notice that Gedit looks a bit out of place because it lacks rounded bottom corners, you can install a GNOME extension called [Rounded Window Corners][7]. This will force rounded bottom corners for all applications, including Gedit.
### Make Gedit the default text editor
Alright! So you have installed Gedit, but text files still open in GNOME Text Editor on double click. To open a file with Gedit, you need to right-click it and then select the "Open With" option.
If you want Gedit to open text files all the time, you can set it as the default.
Right-click on a text file and choose the "Open With" option. Select Gedit here and enable the "Always use for this file type" option at the bottom.
![Set Gedit as the default text editor][8]
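If you prefer the terminal, the same association can be set with `xdg-mime`. The desktop file name below is the one used by the Gedit deb package; the Snap version uses a different name:
```
xdg-mime default org.gnome.gedit.desktop text/plain
xdg-mime query default text/plain    # verify; should print org.gnome.gedit.desktop
```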
### Remove Gedit
Don't feel Gedit is up to the mark? That's rare, but I am not judging you. To remove Gedit from Ubuntu, use the following command:
```
sudo apt remove gedit
```
You may also try uninstalling it from the software center.
### Conclusion
GNOME Text Editor is the next-gen, created-from-scratch editor that blends well with the new GNOME.
It's good enough for simple text editing. However, Gedit has a plugin ecosystem that gives it more features.
For those who use it extensively for coding and other stuff, installing Gedit is still an option in Ubuntu.
What about you? Will you stick with the default new text editor or would you go back to the good old Gedit?
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-gedit-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/gnome-text-editor/
[2]: https://itsfoss.com/wp-content/uploads/2022/10/text-editor-ubuntu.png
[3]: https://itsfoss.com/wp-content/uploads/2022/10/install-gedit-from-ubuntu-software-center.png
[4]: https://itsfoss.com/wp-content/uploads/2022/10/access-plugins-in-gedit.png
[5]: https://itsfoss.com/wp-content/uploads/2022/10/plugins-in-gedit.png
[6]: https://itsfoss.com/wp-content/uploads/2022/10/additional-plugins-gedit.png
[7]: https://extensions.gnome.org/extension/5237/rounded-window-corners/
[8]: https://itsfoss.com/wp-content/uploads/2022/10/set-gedit-default.png

View File

@ -0,0 +1,108 @@
[#]: subject: "How to Enable and Access USB Drive in VirtualBox"
[#]: via: "https://www.debugpoint.com/enable-usb-virtualbox/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Enable and Access USB Drive in VirtualBox
======
Here's a precise guide on how you can enable USB in Oracle VirtualBox.
![][1]
When you work in a virtual machine environment, a USB drive is usually plugged into the host system, but it is a little difficult to access its contents from the guest system.
In VirtualBox, you need to install an extension pack and enable some settings to access USB drives. Here's how.
This article assumes that you have already installed VirtualBox and also installed some Linux distribution or operating system inside it.
If not, check out the [articles here][2].
### Enable USB in VirtualBox 7.0
#### Install VirtualBox Extension Pack
* Open the VirtualBox download page and download the VirtualBox Extension pack for all supported platforms using [this link][3].
![Download the extension pack][4]
* Then click on `File > Tools > Extension Pack Manager`.
* Click on the `Install` button in the toolbar and select the downloaded .vbox-extpak file.
* Hit `Install`. Accept the terms, and give the admin password for the installation.
![install extension pack manager][5]
![install extension pack manager after accepting terms][6]
* After successful installation, you can see it in the installed list.
* Restart your host system. Restarting is mandatory.
#### Enable USB in the guest box
* Plug the USB stick that you want to access from the guest virtual machine into your host system.
* Start VirtualBox and right-click on the VM name where you want to enable USB. Select Settings.
![Launch settings for the virtual machine][7]
* On the left pane, click on USB. Then select the controller version. For example, you can select USB 3.0. Then click on the small plus icon to add a USB filter.
* In this list, you should see your USB stick name (which you plugged in). For this example, I can see my Transcend Jetflash drive, which I plugged in.
* Select it and press OK.
![Select the USB stick][8]
* Now, start your virtual machine. Open the file manager, and you should see the USB is enabled and mounted on your virtual machine.
* In this demonstration, you can see the Thunar file manager of my [Arch-Xfce][9] virtual machine is showing the contents of my USB stick.
![Enabling USB and accessing contents from VirtualBox][10]
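If you prefer the command line, the same USB settings can also be configured with the VBoxManage tool. This is only a sketch of the equivalent CLI steps; the VM name, filter name and IDs below are placeholders, so adjust them to match the output of `VBoxManage list usbhost` on your system:

```
# Enable the USB controller (USB 3.0/xHCI needs the extension pack); VM must be powered off
VBoxManage modifyvm "Fedora_33" --usb on --usbxhci on

# List USB devices on the host to find the vendor and product IDs of your stick
VBoxManage list usbhost

# Add a USB filter for the stick (name and IDs are placeholders)
VBoxManage usbfilter add 0 --target "Fedora_33" --name "My USB stick" --vendorid 8564 --productid 1000
```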
### Usage notes
Now, here are a couple of things you should remember.
* When you plug in the USB in the host system, keep it mounted. But do not open or access any file before launching the virtual machine.
* Once you start your virtual machine, the USB will be unmounted in the host system and auto-mounted in the guest system, i.e. your virtual machine.
* After you are done with the USB stick, make sure to eject or unmount it inside the virtual machine. It will then be accessible again on your host system.
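Once the virtual machine is running, you can also quickly confirm from inside the guest that the stick has been passed through before you start copying files. Assuming the usual `usbutils` package is available in the guest, something like this should do:

```
# List USB devices visible to the guest; the stick should appear here
lsusb

# Check that it also shows up as a block device ready to be mounted
lsblk
```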
### Wrapping Up
VirtualBox is a powerful utility and provides easy-to-use features to set up your virtual machines extensively. The steps are straightforward; just make sure your USB stick is detected properly by the host system for this to work.
Also, remember that USB stick detection via the extension pack is not related to the VirtualBox Guest Additions. They are completely unrelated and provide separate functions.
Finally, let me know if this guide helps you in the comment box.
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/enable-usb-virtualbox/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/usb-vbox-1024x576.jpg
[2]: https://www.debugpoint.com/tag/virtualbox
[3]: https://www.virtualbox.org/wiki/Downloads
[4]: https://www.debugpoint.com/wp-content/uploads/2022/10/Download-the-extension-pack.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/10/install-extension-pack-manager.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/install-extension-pack-manager-after-accepting-terms.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Launch-settings-for-the-virtual-machine.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/10/Select-the-USB-stick.jpg
[9]: https://www.debugpoint.com/xfce-arch-linux-install-4-16/
[10]: https://www.debugpoint.com/wp-content/uploads/2022/10/Enabling-USB-and-accessing-contents-from-VirtualBox.jpg

View File

@ -0,0 +1,143 @@
[#]: subject: "Using habits to practice open organization principles"
[#]: via: "https://opensource.com/open-organization/22/6/using-habits-practice-open-organization-principles"
[#]: author: "Ron McFarland https://opensource.com/users/ron-mcfarland"
[#]: collector: "lkxed"
[#]: translator: "Donkey-Hao"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
利用习惯练习开放式组织原则
======
你可以按照以下步骤,来养成符合开放文化的习惯,并改掉那些不符合开放文化的习惯。
![Selfcare, drinking tea on the porch][1]
很久以来,我就对习惯很感兴趣。几年前,我做了一次关于习惯利弊的演讲,并且介绍了如何改变坏习惯、养成好习惯。不久前,我阅读了 Art Markman 教授的 《Smart Thinking》一书这本书主要讨论的也是习惯。或许你会问习惯与 [开放式组织的原则][2] 有什么关系?这其中有一定的联系!我将会分成两篇文章,来解释你可以如何管理你的习惯。
在本文中,我们将讨论习惯如何工作的,以及更重要的是你如何去开始改变你的习惯。在下一篇文章中,我们将回顾 Markman 教授在他书中所表达的思想。
### 开放式组织原则和习惯的交集
设想你学习过开放式组织的原则,尽管你认为它们很有趣并且很有价值,但是你还没有对这些原则形成自己的习惯。以下就是你现实中会表现出来的样子。
* **社区**:如果你面对一项重要挑战,但是你不知道如何独自解决它,你很有可能会由于习惯而放弃这项挑战。养成与由志同道合的人组成的社区共同解决问题的习惯,不是更好吗?
* **协作**:假设你认为你不善于合作,你喜欢独立完成任务。你知道有一些需要合作才能完成的事情,但是你并没有参与合作的习惯。为了弥补这种情况,你必须养成与他人更多合作的习惯。
* **信息共享**:假如说你喜欢将你所做的事以及所知道的东西当作秘密。但是,你知道如果你不共享信息,你也无法从他人那里获取有用的信息。因此,你必须拥有共享信息的习惯。
* **包容性**:想象一下,你与你不熟悉的人,或者是在个性、文化还是语言上都与你不同的人一起工作,你会感到不自在。但是,你知道如果你想要成功的话,你必须要和各种各样的人一同工作。那你该如何培养包容的习惯呢?
* **适应能力**:假设当你所做的事情不再能达到你所希望的结果之后,你往往会拒绝改变。但是,你知道你必须适应这种情况,并重新调整你的努力,那你如何才能养成适应的习惯呢?
### 习惯是什么?
在我给出关于上述开放式组织原则的示例之前,我想先解释一下习惯的一些相关特征。
* 习惯是重复很多次的行为,最终习惯会成为你下意识的行为。
* 习惯是自动的并且当时会感觉良好。当一个人在养成习惯后,做习惯行为会使他感觉很好,但是当他跳出习惯做事时,会感到不舒服。或许之后他会再次考虑尝试。
* 一些习惯是有益的,并且能够节省你很多的能量。大脑只占身体质量的 2%,但是却会消耗 20% 的能量。因为大脑在思考和集中精力上需要消耗很多能量,你可以通过培养下意识的习惯来节省能量。
* 一些习惯对你有害,因此你渴望改变这些坏习惯。
* 所有的习惯都会给你回报,即使回报是短暂的。
* 习惯是基于你熟悉的事情和你知道的东西而形成的,即使你可能并不一定需要这个习惯。
### 养成习惯的 3 个步骤
1、提示触发器首先提示或者触发器会告诉大脑进入之前学习的习惯性行为的自动模式之中。这里的提示可以是某件事比如每天在确定的时间点、在确定的地点看到一包糖果或者看到电视购物节目亦或者看到某个特定的人。时间压力会触发你去做例行事项routine。在令人崩溃的环境下也会触发例行事项。简而言之某件事提醒你开始做一些固定的事情。
2、例行事项routine例行事项会被触发。一个例行事项是一系列的身体、心理或者情绪上的表现可以是非常复杂的也可以十分简单。诸如与心情相关的一些习惯可以在很短时间内被触发。
3、奖励最后一步是奖励奖励会帮助你的大脑计算一个特定的行为是否值得记住。奖励的范围很广泛可以是食物或者其他令你感到快乐的东西。
### 商业环境中的坏习惯
习惯不仅仅是个人行为。所有的组织或多或少都有一些好的坏的制度习惯。然而,一些组织会有先见之明地设计好他们的习惯,而其他组织却不会设计习惯,只是随着竞争或者担心落伍而演变。以下是一些组织的坏习惯示例:
* 总是晚提交报告
* 单独工作或者分组合作,然而采用相反的方法才合适
* 上级对下级施压很大
* 不关心销售额的下降
* 由于内卷,销售团队之间不协同合作
* 让一个健谈的人主导会议
### 逐步改变习惯
习惯不是一成不变的,你可以改变你的行为习惯。首先,要知道不能一下子改变所有坏习惯。相反,先找到一个关键的习惯进行改变,这会产生小而快速的奖励。请记住,改变了一个关键的习惯后,会产生连锁反应。
以下是你可以用来改变任何习惯的四步框架,其中还包括与开放式组织原则相关的习惯。
##### 第一步:调整例行事项
确定你的习惯循环和例行事项,例如,当面临一件你无法独自解决的重大挑战之时。例行事项(你表现出的行为)最容易确定,所以先从它下手:例如,“在我的组织中,没人愿意和别人讨论问题。大家都会早早地放弃”。决定好你想要调整、改变或者学习的事情:例如:“每次重大挑战到来的时候,我应该和他人讨论一下,并且尝试建立一个志同道合、有能力解决问题的社区。”
##### 第二步:有奖励的实验
奖励是很重要的,因为它会满足你强烈的渴望。但是,我们通常没有意识到强烈的渴望会驱动我们的行为。只有在事后,才会被我们察觉。比方说,开会时很多次你都想尽快离开会议室,避免讨论话题,即使内心清楚你应该弄明白如何解决问题。
要了解强烈的渴望是什么,你必须要实验。这可能会花费你几天、几周甚至更久的时间。你必须要感受到触发压力,才能完全识别它。例如,问问你自己当你试图推卸责任时的感受。
把你自己当作科学家,进行实验并收集数据。这是你调查研究的步骤:
1、第一个行为结束后开始调整后面的行为看看有没有奖励变化。例如如果你每次碰到自己无法解决的挑战时都放弃那么奖励就是不承担责任的解脱。更好的解决方法是与至少一个同样关心该问题的人讨论该问题。关键是要测试不同的假设以确定哪种渴望驱使你的日常生活。你真的想逃避责任吗
2、在经历四至五个不同的例行事项和奖励之后写下在收到每个奖励后立即想到的前三、四件事。例如你不会在面对挑战时放弃而是与其他人讨论这个问题。然后你决定可以做什么。
3、写下你的感受或渴望后设置一个 15 分钟的计时器。当计时器结束时,问问自己是否依旧渴望。在屈服于渴望之前,请休息一会儿并再考虑一两次这个问题。这会迫使你意识到这一刻,并帮助你稍后回忆起你当时的想法。
4、试着记住你在那一刻的想法和感受然后在例行事项结束 15 分钟后再回想一次。如果那时渴望已经消失,你就已经确定了回报是什么。
##### 第三步:分析出坏习惯的提示或触发器
坏习惯的提示信息很难鉴定,因为通常有太多信息干扰你未定型的行为。要在干扰中鉴别提示,你可以在你的坏习惯出现的时候,观察以下四个因素:
* **地点**:它在哪里发生?例如:“我最大的挑战在会议中出现。”
* **时间**:它什么时候出现?例如:“如果我累了,下午的会议就是在浪费时间,因为我没兴趣付出努力。”
* **感受**:你当时的情绪状态是怎样的?例如:“当我听到这个问题时,我感到压力山大并且很沮丧。”
* **人们**:当时有谁或者哪一类人在你周围,还是你是独自一人?例如:“在会议上,大多数人似乎对这个问题也不感兴趣。剩下的人主导会议讨论。”
##### 第四步:制定养成好习惯的计划
一旦你确定奖励可以驱动你的行为,某些提示会触发你的坏习惯,那你就可以开始改变你的行动。请跟随以下三个简单的步骤:
1、首先规划好习惯的提示。例如“在会议上我将发现并将我的注意力集中在重要的问题上。”
2、其次选择一种能带来相同回报的好行为但不会遭受你现在坏习惯的惩罚。例如“我将找到解决这个问题的方法并考虑我需要哪些资源和技能才能成功。当我创建一个能够成功解决问题的社区时我会感觉很棒。”
3、最后让你选择的行为成为深思熟虑的选择直到你不再需要考虑它就能下意识地做它了。例如“我将有意识地关注重要问题直到我可以不假思索地做到这一点。我会查看近期会议的安排表这样我就可以提前知道会发生什么。在每次会议开始前和会议期间我会问自己为什么我会来开会来确保我集中注意于重要的事情。”
##### 指定计划来避免忘记必做事项
为了成功地开始做你经常忘记的事情,请按照以下步骤:
1、 计划你想要做什么
2、 决定何时完成
3、 将计划分为必要的小任务
4、 用计时器或者日常计划进行提示,并开始每项任务
5、 按计划完成每个任务
6、 按时完成后就奖励自己
### 习惯的改变
习惯的改变需要很长时间。有时候互助小组会帮助你改变习惯。有时候,在低压力环境中,进行大量的练习和角色预演能够更好地帮助你改变。想要找到有效的奖励,你需要不断的尝试。
有时,习惯是更重要、更深层次问题的反映。在这些情况下,你可能需要专业帮助。但是,如果你有改变的愿望,并接受在此过程中会有一些小失败,你就可以控制任何习惯。
在本文中,我使用了使用 *提示-例行事项-奖励* 三个过程的社区开发示例。它同样可以应用于其他开放式组织的原则。我希望这篇文章能让你思考如何通过了解习惯如何运作、采取措施改变习惯,以及制定计划避免忘记你想做的事情,来管理习惯。无论是开放式组织原则,还是其他任何东西,你现在都可以判断出提示、常规和奖励。当提示出现时,这将引导你制定改变习惯的计划。
在我的下一篇文章中,我将通过 Art Markman 教授在《Smart Thinking》中观点来继续讨论习惯。
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/22/6/using-habits-practice-open-organization-principles
作者:[Ron McFarland][a]
选题:[lkxed][b]
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/coffee_tea_selfcare_wfh_porch_520.png
[2]: https://theopenorganization.org/definition/open-organization-definition/

View File

@ -0,0 +1,267 @@
[#]: subject: (Optimize Java serverless functions in Kubernetes)
[#]: via: (https://opensource.com/article/21/6/java-serverless-functions-kubernetes)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
[#]: collector: (lujun9972)
[#]: translator: (cool-summer-021)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
优化 Kubernetes 中的 Java 无服务器函数
======
在 Kubernetes 中以更快的启动速度和更小的内存占用运行无服务器函数
![Ship captain sailing the Kubernetes seas][1]
由于运行上千个应用程序集群所耗费的资源多,令它实现较少工作节点和资源占用所需成本也较高,所以在使用 [Kubernetes][2] 时,快速启动和较少的内存占用是至关重要的。在 Kubernetes 平台运行容器化微服务时,内存占用是比吞吐量更重要的考量因素,这是因为:
* 由于需要持续运行,所以耗费资源更多(不同于 CPU 周期)
* 微服务令管理成本成倍增加
* 一个庞大的应用程序变为若干个微服务的情况例如20个微服务占用的存储空间一共有20GB
这些情况极大影响了无服务器函数的发展和 Java 部署模型。到目前为止,许多企业开发人员选择 Go、Python 或 Node.js 这些替代方案来解决性能瓶颈,直到出现了 [Quarkus][3] 这种 Kubernetes 原生的 Java 堆栈,才有所改观。本文介绍如何在 Kubernetes 平台上借助 Quarkus 优化 Java 无服务器函数的性能。
### 容器优先的设计理念
由于 Java 生态系统中传统的框架都要进行框架的初始化包括配置文件的处理、classpath 的扫描、类加载、注解的处理以及构建元模型,这些过程都是必不可少的,所以它们都比较耗费资源。如果使用了几种不同的框架,所耗费的资源也是成倍增加。
Quarkus 通过“向左移动”,把所有的资源开销大的操作都转移到构建阶段,解决了这些 Java 性能问题。在构建阶段进行代码和框架分析、字节码转换和动态元模型生成,而且只有一次,结果是:运行时可执行文件经过高度优化,启动非常快,不需要经过那些传统的启动过程,全过程只在构建阶段执行一次。
![Quarkus Build phase][4]
(Daniel Oh, [CC BY-SA 4.0][5])
更重要的是Quarkus 支持构建原生可执行文件,它具有良好性能,包括快速启动和极小的 RSS 内存占用,跟传统的云原生 Java 栈相比,具备即时扩展的能力和高密度的内存利用。
![Quarkus RSS and Boot Time Metrics][7]
(Daniel Oh, [CC BY-SA 4.0][5])
这里有个例子,展示如何使用 Quarkus 将一个 [Java 无服务器][8]项目构建为本地可执行文件。
### 1\. 使用 Quarkus 创建无服务器 Maven 项目
以下命令会生成一个 Quarkus 项目(例如 `quarkus-serverless-native`),其中包含一个简单的函数:
```
$ mvn io.quarkus:quarkus-maven-plugin:1.13.4.Final:create \
       -DprojectGroupId=org.acme \
       -DprojectArtifactId=quarkus-serverless-native \
       -DclassName="org.acme.getting.started.GreetingResource"
```
### 2\. 构建一个本地可执行文件
你需要使用 GraalVM 为 Java 程序构建一个本地可执行文件。你可以选择 GraalVM 的任何发行版,例如 [Oracle GraalVM Community Edition (CE)][9] 或 [Mandrel][10]Oracle GraalVM CE 的下游发行版。Mandrel 是为支持 OpenJDK 11 上的 Quarkus-native 可执行文件的构建而设计的。
打开 `pom.xml`,你将发现其中的 `native` 设置。你将使用它来构建本地可执行文件。
```
<profiles>
    <profile>
        <id>native</id>
        <properties>
            <quarkus.package.type>native</quarkus.package.type>
        </properties>
    </profile>
</profiles>
```
> **注意:** 你可以在本地安装 GraalVM 或 Mandrel 发行版。你也可以下载 Mandrel 容器映像来构建它(像我那样),因此你还需要在本地运行一个容器引擎(例如 Docker
假设你已经启动了容器运行时,此时需要运行以下 Maven 命令:
使用 [Docker][11] 作为容器引擎
```
$ ./mvnw package -Pnative \
-Dquarkus.native.container-build=true \
-Dquarkus.native.container-runtime=docker
```
使用 [Podman][12] 作为容器引擎
```
$ ./mvnw package -Pnative \
-Dquarkus.native.container-build=true \
-Dquarkus.native.container-runtime=podman
```
输出信息结尾应当是 `BUILD SUCCESS`
![Native Build Logs][13]
(Daniel Oh, [CC BY-SA 4.0][5])
不借助 JVM 直接运行本地可执行文件:
```
$ target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
输出信息类似于:
```
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \  
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/  
INFO  [io.quarkus] (main) quarkus-serverless-native 1.0.0-SNAPSHOT native
(powered by Quarkus xx.xx.xx.) Started in 0.019s. Listening on: http://0.0.0.0:8080
INFO [io.quarkus] (main) Profile prod activated.
INFO [io.quarkus] (main) Installed features: [cdi, kubernetes, resteasy]
```
简直是超音速!启动只花了 19 毫秒。你的运行时间可能稍有不同。
使用 Linux 的 `ps` 工具检测一下,结果内存占用还是很低。检测的方法是:在应用程序运行期间,另外打开一个终端,运行如下命令:
```
$ ps -o pid,rss,command -p $(pgrep -f runner)
```
输出结果类似于:
```
  PID    RSS COMMAND
10246  11360 target/quarkus-serverless-native-1.0.0-SNAPSHOT-runner
```
该进程只占 11MB 内存。非常小!
> **注意:** 各种应用程序(包括 Quarkus的驻留集大小和内存占用都因运行环境而异并随着应用程序负载的增加而上升。
你也可以使用 REST API 访问这个函数。输出结果应该是 `Hello RESTEasy`:
```
$ curl localhost:8080/hello
Hello RESTEasy
```
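在部署到集群之前,你也可以先在本地把这个原生可执行文件打包成容器镜像来验证。以下只是一个示意,假设项目中存在由 Quarkus 生成的 `src/main/docker/Dockerfile.native` 文件,镜像名称也只是示例:

```
$ docker build -f src/main/docker/Dockerfile.native -t quarkus-serverless-native .
$ docker run -i --rm -p 8080:8080 quarkus-serverless-native
```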
### 3\. 把函数部署到 Knative 服务
如果你还没有在 [OKD][15]OpenShift 的 Kubernetes 发行版)上[创建命名空间][14],现在就创建一个(例如 `quarkus-serverless-native`),以便把这个本地可执行文件部署为无服务器函数。然后添加 `quarkus-openshift` 扩展:
```
$ ./mvnw -q quarkus:add-extension -Dextensions="openshift"
```
在 `src/main/resources/application.properties` 文件中添加以下内容,配置 Knative 和 Kubernetes 的相关资源:
```
quarkus.container-image.group=quarkus-serverless-native
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.native.container-build=true
quarkus.kubernetes-client.trust-certs=true
quarkus.kubernetes.deployment-target=knative
quarkus.kubernetes.deploy=true
quarkus.openshift.build-strategy=docker
```
构建本地可执行文件,并把它直接部署到 OKD 集群:
```
$ ./mvnw clean package -Pnative
```
> **注意:** 提前使用 `oc login` 命令,确保登录的是正确的项目(例如 `quarkus-serverless-native`)。
输出信息结尾应当是 `BUILD SUCCESS`。完成一个本地二进制文件的构建并部署为 Knative 服务需要花费几分钟。成功创建服务后,使用 `kubectl``oc` 命令工具,可以查看 Knative 服务和版本信息:
```
$ kubectl get ksvc
NAME                        URL   [...]
quarkus-serverless-native   http://quarkus-serverless-native-[...].SUBDOMAIN  True
$ kubectl get rev
NAME                              CONFIG NAME                 K8S SERVICE NAME                  GENERATION   READY   REASON
quarkus-serverless-native-00001   quarkus-serverless-native   quarkus-serverless-native-00001   1            True
```
### 4\. 访问本地可执行函数
运行 `kubectl` 命令查找无服务器函数的路由route
```
$ kubectl get rt/quarkus-serverless-native
```
输出信息类似于:
```
NAME                         URL                                                                                                          READY   REASON
quarkus-serverless-native   http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN   True
```
用 `curl` 命令访问上述信息中的 `URL` 字段:
```
$ curl http://quarkus-serverless-restapi-quarkus-serverless-native.SUBDOMAIN/hello
```
不到一秒钟,你就会得到跟本地运行时一样的结果:
```
Hello RESTEasy
```
当你在 OKD 集群中查看正在运行的 Quarkus Pod 的日志时,你会发现本地可执行文件正在以 Knative 服务的形式运行。
![Native Quarkus Log][16]
(Daniel Oh, [CC BY-SA 4.0][5])
### 下一步呢?
你可以借助 GraalVM 发行版优化 Java 无服务器函数,并通过 Knative 把它们作为无服务器函数部署到 Kubernetes 上。Quarkus 也支持用简单的配置在普通的微服务中实现这些性能优化。
本系列的下一篇文章将指导你在不更改代码的情况下跨多个无服务器平台实现可移植函数。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/java-serverless-functions-kubernetes
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://opensource.com/article/19/6/reasons-kubernetes
[3]: https://quarkus.io/
[4]: https://opensource.com/sites/default/files/uploads/quarkus-build.png (Quarkus Build phase)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://quarkus.io/blog/runtime-performance/
[7]: https://opensource.com/sites/default/files/uploads/quarkus-boot-metrics.png (Quarkus RSS and Boot Time Metrics)
[8]: https://opensource.com/article/21/5/what-serverless-java
[9]: https://www.graalvm.org/community/
[10]: https://github.com/graalvm/mandrel
[11]: https://www.docker.com/
[12]: https://podman.io/
[13]: https://opensource.com/sites/default/files/uploads/native-build-logs.png (Native Build Logs)
[14]: https://docs.okd.io/latest/applications/projects/configuring-project-creation.html
[15]: https://docs.okd.io/latest/welcome/index.html
[16]: https://opensource.com/sites/default/files/uploads/native-quarkus-log.png (Native Quarkus Log)

View File

@ -0,0 +1,264 @@
[#]: subject: (Try Linux on any operating system with VirtualBox)
[#]: via: (https://opensource.com/article/21/6/try-linux-virtualbox)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
[#]: collector: (lujun9972)
[#]: translator: (chai001125)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
使用 VirtualBox 安装 Linux 虚拟机
======
VirtualBox 能帮助任何人(即使是命令行新手)安装一个新的虚拟机。
![Person programming on a laptop on a building][1]
VirtualBox 能让任何人都可以轻松安装 Linux 虚拟机。你不需要有使用命令行的经验,就可以自己安装一个简单的 Linux 虚拟机。我在虚拟机方面涉猎很多,但这篇文章是面向新手的,将展示如何安装一个 Linux 虚拟机。此外,这篇文章还概述了如何使用开源的虚拟机管理程序 [VirtualBox][2],来安装并运行一个用于测试的 Linux 系统。
### 一些术语
在开始之前你需要了解在本安装教程中的两个操作系统OS之间的区别
* **主机系统host system** 这指的是你安装 VirtualBox 的操作系统(即本机的操作系统)。
* **访客系统guest system** 这指的是你想要在主机系统之上运行的虚拟化系统。
在输入/输出、网络、文件访问、剪贴板、音频和视频方面,主机系统和访客系统都必须能够交互。
在本教程中,我将使用 Windows 10 作为 _主机系统_[Fedora 33][3] 作为 _访客系统_
### 安装前的准备
当我们谈论虚拟化时,实际上,我们指的是 [硬件辅助虚拟化][4]。硬件辅助虚拟化需要兼容的 CPU。过去十年来几乎每个普通的 x86 CPU 都有这一功能。AMD 公司称这样的 x86 CPU 是具有 **AMD 虚拟化技术AMD-V** 的处理器,英特尔公司则称其是具有 **Intel 虚拟化技术VT-x** 的处理器。虚拟化功能增加了一些额外的 CPU 指令,你可以在 BIOS 中启用或禁用这些指令。
在安装虚拟机之前:
* 确保在 BIOS 中启用了虚拟化技术AMD-V 或 VT-x
* 下载并安装好 [VirtualBox][5]。
### 准备虚拟机
下载你要用的 Linux 发行版的镜像文件。下载 32 位还是 64 位的操作系统映像都没有关系,因为在 32 位的主机系统上也可以启动 64 位的操作系统映像(当然内存的使用会受限),反之亦然。
> **注意事项:** 如果可以的话,请下载附带有 [逻辑卷管理器][6]LVM的 Linux 发行版。LVM 会将文件系统与物理硬盘驱动器解耦。如果你的空间不足时,这能够让你增加访客系统的硬盘驱动器的大小。
现在,打开 VirtualBox然后单击黄色的**新建**按钮:
![VirtualBox New VM][7]
接下来,配置访客操作系统允许使用多少内存:
![Set VM memory size][9]
我的建议是:**不要吝啬访客操作系统的内存!**
当访客操作系统的内存不足时它会开始把内存页从内存RAM换出到硬盘hard drive这会极大地恶化系统的性能和响应能力。而如果是底层的主机系统开始分页你很可能都不会注意到。对于具有图形化桌面环境的 Linux 工作站系统,我建议至少分配 4GB 内存。
接下来,创建虚拟磁盘:
![Create virtual hard disk][10]
虚拟磁盘的格式选择默认的选项 **VDIVirtualBox 磁盘映像)** 就可以了:
![Selecting hard disk file type][11]
在以下的窗口中,我建议选择**动态分配**,因为这允许你在之后增加虚拟磁盘的大小。如果你选择了**固定大小**,磁盘的速度可能会更快,但你将无法修改虚拟磁盘的大小了:
![Dynamically allocating hard disk][12]
建议你使用附带有逻辑卷管理器LVM的 Linux 发行版,这样你就可以先创建一个较小的硬盘。如果之后你的访客系统的空间快用完了,你可以按需增加磁盘的大小。
> **注意**:我选择的访客系统为 Fedora在 Fedora 的官网说明:[Fedora 至少需要分配 20GB 的空闲磁盘空间][13]。我强烈建议你遵守该规范。在这里,我选择了 8GB以便稍后演示如何用命令行增加磁盘空间。如果你是 Linux 新手,或者对命令行没有经验,请依旧选择 20GB。
![Setting hard disk size][14]
创建好硬盘驱动器后,从 VirtualBox 主窗口的列表中选择新创建的虚拟机,然后单击**设置**。在设置菜单中,点击**系统**,然后选择**处理器**标签。默认情况下VirtualBox 只向访客系统分配一个 CPU 内核。在现代多核 CPU 计算机上,分配至少两个内核是没有任何问题的,这能显著地加快访客系统的速度:
![Assigning cores to guest system][15]
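如果你更喜欢命令行,内存和 CPU 的设置也可以用 VBoxManage 完成。下面只是一个示意,虚拟机名称 `Fedora_33` 为假设的例子,且执行时虚拟机需要处于关机状态:

```
# 为虚拟机分配 4GB 内存和 2 个 CPU 内核(名称为示例)
VBoxManage modifyvm "Fedora_33" --memory 4096 --cpus 2
```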
#### 设置网络适配器
接下来,要处理的是网络设置。默认情况下, VirtualBox 会创建一个 NAT 连接,这对于大多数情况来说,是没有问题、不用做其他更改的:
![Network settings][16]
你也可以创建多个网络适配器。以下是网络适配器最常见的类型:
* **NAT** NAT适配器能自动执行 [网络地址转换][17]。从外部看,主机和访客系统使用着相同的 IP 地址。你无法通过网络从主机系统内访问访客系统。(尽管,你也可以通过定义 [端口转发][18]来访问某些服务。当你的主机系统可以访问互联网时则你的访客系统也可以访问互联网。NAT 不再需要进一步的配置。
* _如果你只需要让访客系统接入互联网就可以的话请选择 **NAT**。_
* **桥接适配器Bridged adapter** 在此配置中,访客系统和主机系统可以共享相同的物理以太网设备。这两个系统都将拥有独立的 IP 地址。从外部看,网络中会有两个独立的系统,它们共享相同的物理以太网适配器。这种设置更灵活,但需要更多的配置。
* _如果你想要共享访客系统的网络服务的话请选择 **桥接适配器**。_
* **仅限主机的适配器Host-only adapter** 在此配置中,访客系统只能与主机,或在同一主机上运行的其他访客系统,相互通信。主机系统也可以连接到访客系统。但访客系统不能接入互联网或物理网络。
* _如果你想要获得高安全性请选择 **仅限主机的适配器**。_
#### 分配操作系统映像
在设置菜单中,点击**存储**,然后选择虚拟光盘驱动器。单击右侧的 **光盘图标**,然后点击**选择一个磁盘文件...**,然后分配你想要安装的、已下载的 Linux 发行版映像:
![Assigning OS image][19]
### 安装 Linux
现在,就已经配置好了虚拟机。右上角关闭**设置**菜单,返回主窗口。点击**绿色箭头**(即“开始”按钮)。虚拟机将从虚拟光盘驱动器启动,你将发现你已经进入到 Linux 发行版的安装程序中:
![VirtualBox Fedora installer][20]
#### 设置分区
安装程序将在安装过程中要求你提供分区信息。选择**自定义Custom**
![Selecting Custom partition configuration][21]
> **注意:** 我假设,你创建这一虚拟机的目的是为了测试。此外,你也无需关心访客系统的休眠,因为此功能会由 VirtualBox 来隐式地提供。因此,你可以省略交换分区,以节省主机系统的磁盘空间。请记住,如果你需要的话,你可以稍后自己添加交换分区。在 [Linux 系统交换空间的介绍][22] 这篇文章中,作者 David Both 进一步解释了如何添加交换分区,并选择交换分区正确的大小。
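如果你以后确实需要交换空间,除了添加交换分区,也可以添加一个交换文件。下面是一个简单的示意,假设访客系统的根文件系统是 ext4在 Btrfs 上创建交换文件需要额外的步骤),大小 2GB 也只是示例:

```
# 创建并启用一个 2GB 的交换文件(假设根文件系统为 ext4
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# 如需开机自动启用,可以把这一行追加到 /etc/fstab
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```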
Fedora 33 及之后的版本提供了一个 [zram 分区][23]zram 是内存中的一块压缩区域,用于存放分页和交换数据。zram 分区可以按需地调整大小,并且它比硬盘上的交换分区快得多。
为了简单我们只添加以下两个挂载点mount point
![Adding mount points][24]
保存更改,接下来我们继续安装。
### 安装 VirtualBox 增强功能 Guest Additions
完成安装后,从硬盘驱动器启动,并登录到虚拟机。现在,你可以安装 VirtualBox 增强功能,其中包括特殊的设备驱动程序和系统应用程序,他们能提供以下内容:
* 共享剪贴板
* 共享文件夹
* 更好的性能
* 可自由扩展的窗口大小
点击顶部菜单栏的**设备**,然后选择**插入增强功能的CD映像...**,来安装 VirtualBox 增强功能:
![Selecting Guest Additions CD image][25]
在大多数 Linux 发行版上,带有增强功能的 CD 映像会自动挂载,并且能够在文件管理器中找到。Fedora 会问你是否要运行安装脚本。单击**运行**,并授予该安装进程 root 权限:
![Enabling Guest Additions autorun][26]
安装完成后,需要重新启动系统。
### LVM扩大磁盘空间
我在之前给 Fedora 虚拟机分配了 8GB 硬盘空间,是一个愚蠢的决定,因为 Fedora 很快就会告警空间不足:
![Fedora hard disk running out of space][27]
正如我提到的Fedora 官网建议安装时分配 20GB 的磁盘空间8GB 只是 Fedora 33 安装并启动所需的最小空间。在没有安装其他软件的情况下(除了 VirtualBox 增强功能),系统就几乎占满了全部 8GB 可用空间。这时候,不要打开 GNOME 软件中心或任何其他可能从互联网下载文件的东西。
幸运的是,我选择了附带有 LVM 的 Fedora这样我就可以用命令行轻松地修复这个问题。
要增加虚拟机中文件系统的空间,你必须先增加主机系统上分配的虚拟硬盘驱动器。
关闭虚拟机。如果你的主机系统运行的是 Windows请打开终端并进入到 `C:\Program Files\Oracle\VirtualBox` 目录下。使用以下命令,将磁盘大小扩大到 12,000MB
```
VBoxManage.exe modifyhd "C:\Users\StephanA\VirtualBox VMs\Fedora_33\Fedora_33.vdi" --resize 12000
```
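如果你的主机系统是 Linux对应的命令大致如下。这里只是一个示意`modifymedium` 是 `modifyhd` 的新写法,磁盘文件路径为假设的默认位置,请按你的实际路径调整:

```
# 在 Linux 主机上把虚拟磁盘扩大到 12000MB路径为示例
VBoxManage modifymedium disk "$HOME/VirtualBox VMs/Fedora_33/Fedora_33.vdi" --resize 12000
```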
然后启动虚拟机,并打开**磁盘**工具。你可以看到刚刚新增的、尚未分配的可用空间。选择**可用空间**,然后单击 **+** 按钮:
![Free space before adding][28]
现在,创建一个新的分区。选择你要使用的可用空间的大小:
![Creating a new partition and setting size][29]
我们不需要在新分区上创建文件系统或任何其他内容,因此选择**其他**
![Selecting "other" for partition volume type][30]
选择**无文件系统**
![Setting "No filesystem" on new partition][31]
现在,磁盘空间应该如下图所示:
![VirtualBox after adding new partition][32]
虚拟机有了一个新的分区设备 **/dev/sda3**。输入 `vgscan` 检查你的 LVM 卷组,可以找到 **fedora_localhost-live** 这一 LVM 卷组:
![Checking LVM volume group by typing vgscan:][33]
现在,已经万事俱备了。用新分区 **/dev/sda3** 来扩展卷组 **fedora_localhost-live**
```
vgextend fedora_localhost-live /dev/sda3
```
![vgextend command output][34]
现在卷组变大了,你就可以增加逻辑卷的大小了。`vgdisplay` 命令显示共有 951 个空闲扩展extent可用
![vgdisplay command output][35]
将逻辑卷增加 951 个扩展:
```
lvextend -l+951 /dev/mapper/fedora_localhost--live-root
```
![lvextend command output][36]
在增加了逻辑卷后,最后一件事就是调整文件系统的大小:
```
resize2fs /dev/mapper/fedora_localhost--live-root
```
![resize2fs command output][37]
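如果想在终端里再确认一下,也可以用 `df` 查看根文件系统的新容量(输出会因环境而异,此处仅作示意):

```
# 查看根文件系统的总容量和已用空间
df -h /
```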
这样磁盘空间就增加完成了!检查**磁盘使用分析器**,你就可以看到扩展空间已经可用于文件系统了。
### 总结
使用虚拟机,你可以检查软件在某个特定操作系统或特定版本的操作系统上是如何运行的。除此之外,你还可以尝试任何想测试的 Linux 发行版而不必担心损坏系统。对于高级用户来说VirtualBox 在测试、网络和模拟方面提供了广泛的可能性。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/try-linux-virtualbox
作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[chai001125](https://github.com/chai001125)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: https://www.virtualbox.org/
[3]: https://getfedora.org/
[4]: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
[5]: https://www.virtualbox.org/wiki/Downloads
[6]: https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
[7]: https://opensource.com/sites/default/files/uploads/virtualbox_new_vm.png (VirtualBox New VM)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/virtualbox_memory_size_1.png (Set VM memory size)
[10]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_1.png (Create virtual hard disk)
[11]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_2.png (Selecting hard disk file type)
[12]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_3.png (Dynamically allocating hard disk)
[13]: https://getfedora.org/en/workstation/download/
[14]: https://opensource.com/sites/default/files/uploads/virtualbox_create_hd_4.png (Setting hard disk size)
[15]: https://opensource.com/sites/default/files/uploads/virtualbox_cpu_settings.png (Assigning cores to guest system)
[16]: https://opensource.com/sites/default/files/uploads/virtualbox_network_settings2.png (Network settings)
[17]: https://en.wikipedia.org/wiki/Network_address_translation
[18]: https://www.virtualbox.org/manual/ch06.html#natforward
[19]: https://opensource.com/sites/default/files/uploads/virtualbox_choose_image3.png (Assigning OS image)
[20]: https://opensource.com/sites/default/files/uploads/virtualbox_running.png (VirtualBox Fedora installer)
[21]: https://opensource.com/sites/default/files/uploads/virtualbox_partitioning_1.png (Selecting Custom partition configuration)
[22]: https://opensource.com/article/18/9/swap-space-linux-systems
[23]: https://fedoraproject.org/wiki/Changes/SwapOnZRAM
[24]: https://opensource.com/sites/default/files/uploads/virtualbox_partitioning_2.png (Adding mount points)
[25]: https://opensource.com/sites/default/files/uploads/virtualbox_guest_additions_2.png (Selecting Guest Additions CD image)
[26]: https://opensource.com/sites/default/files/uploads/virtualbox_guest_additions_autorun.png (Enabling Guest Additions autorun)
[27]: https://opensource.com/sites/default/files/uploads/virtualbox_disk_usage_1.png (Fedora hard disk running out of space)
[28]: https://opensource.com/sites/default/files/uploads/virtualbox_disks_before.png (Free space before adding)
[29]: https://opensource.com/sites/default/files/uploads/virtualbox_new_partition_1.png (Creating a new partition and setting size)
[30]: https://opensource.com/sites/default/files/uploads/virtualbox_new_partition_2.png (Selecting "other" for partition volume type)
[31]: https://opensource.com/sites/default/files/uploads/virtualbox_no_partition_3.png (Setting "No filesystem" on new partition)
[32]: https://opensource.com/sites/default/files/uploads/virtualbox_disk_after.png (VirtualBox after adding new partition)
[33]: https://opensource.com/sites/default/files/uploads/virtualbox_vgscan.png (Checking LVM volume group by typing vgscan:)
[34]: https://opensource.com/sites/default/files/uploads/virtualbox_vgextend_2.png (vgextend command output)
[35]: https://opensource.com/sites/default/files/uploads/virtualbox_vgdisplay.png (vgdisplay command output)
[36]: https://opensource.com/sites/default/files/uploads/virtualbox_lvextend.png (lvextend command output)
[37]: https://opensource.com/sites/default/files/uploads/virtualbox_resizefs.png (resize2fs command output)

View File

@ -1,195 +0,0 @@
[#]: subject: (Write your first web component)
[#]: via: (https://opensource.com/article/21/7/web-components)
[#]: author: (Ramakrishna Pattnaik https://opensource.com/users/rkpattnaik780)
[#]: collector: (lujun9972)
[#]: translator: (cool-summer-021)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
开发第一个 Web 组件
======
不要做重复的工作;
基于浏览器开发 Web App 时,需要制作一些可重用的模块。
![Digital creative of a browser on the internet][1]
Web 组件是一系列开源技术(例如 JavaScript 和 HTML你可以用它创建一些 Web App 中可重用的自定义元素。你创建的组件是独立于其他代码的,所以这些组件可以方便地在多个项目中重用。
首先,它是一个平台标准,所有主流的浏览器都支持它。
### Web 组件中包含什么?
* **定制元素:** 支持定义HTML元素的新类别。
* **Shadow DOM:** 提供一种将一个隐藏的、独立的[文档对象模型][2] (DOM) 附加到一个元素的方法。它通过保留从页面的其他代码分离出来的样式、标记结构和行为特征对 Web 组件进行封装。它确保 Web 组件内样式不会被外部样式覆盖反之亦然Web 组件内样式也不会“泄露”到页面的其他部分。
* **HTML 模板:** 支持定义可重用的 DOM 元素。可重用 DOM 元素和它的内容不会呈现在 DOM 内,但仍然可以通过 JavaScript 被引用。
### 开发你的第一个 Web 组件
你可以借助你最喜欢的文本编辑器和 JavaScript 写一个简单的 Web 组件。本指南使用引导程序生成简单的样式,并创建一个简易的卡片式的 Web 组件,给定了位置信息,该组件就能显示该位置的温度。组件使用了 [Open Weather API][3],你需要先注册,然后创建 APPID/APIKey才能正常使用。
调用该组件,需要给出位置的经度和纬度:
```
<weather-card longitude='85.8245' latitude='20.296' />
```
创建一个名为 **weather-card.js** 的文件,这个文件包含 Web 组件的所有代码。首先,需要定义你的组件,创建一个模板元素,并在其中加入一些简单的 HTML 标签:
```
const template = document.createElement('template');
template.innerHTML = `
  <div class="card">
    <div class="card-body"></div>
  </div>
`
```
定义 WebComponent 类及其构造函数:
```
class WeatherCard extends HTMLElement {
  constructor() {
    super();
    this._shadowRoot = this.attachShadow({ 'mode': 'open' });
    this._shadowRoot.appendChild(template.content.cloneNode(true));
  }
  ….
}
```
构造函数中,附加了 shadowRoot 属性,并将它设置为开启模式。然后这个模板就包含了 shadowRoot 属性。
接着,写获取属性的函数。对于经度和纬度,你需要向 Open Weather API 发送 GET 请求。这些功能需要在 `connectedCallback` 函数中完成。你可以使用 `getAttribute` 方法访问相应的属性,或定义读取属性的方法,把他们绑定到本对象中。
```
get longitude() {
  return this.getAttribute('longitude');
}
get latitude() {
  return this.getAttribute('latitude');
}
```
现在定义 `connectedCallBack` 方法,它的功能是在需要时获取天气数据:
```
connectedCallback() {
  var xmlHttp = new XMLHttpRequest();
  const url = `http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}&lon=${this.longitude}&appid=API_KEY`
  xmlHttp.open("GET", url, false);
  xmlHttp.send(null);
  this.$card = this._shadowRoot.querySelector('.card-body');
  let responseObj = JSON.parse(xmlHttp.responseText);
  let $townName = document.createElement('p');
  $townName.innerHTML = `Town: ${responseObj.name}`;
  this._shadowRoot.appendChild($townName);
  let $temperature = document.createElement('p');
  $temperature.innerHTML = `${parseInt(responseObj.main.temp - 273)} &deg;C`
  this._shadowRoot.appendChild($temperature);
}
```
一旦获取到天气数据,附加的 HTML 元素就添加进了模板。至此,完成了类的定义。
最后,使用 `window.customElements.define` 方法定义并注册一个新的自定义元素:
```
window.customElements.define('weather-card', WeatherCard);
```
其中,第一个参数是自定义元素的名称,第二个参数是所定义的类。这里是[整个组件的链接][5]。
你的第一个 Web 组件的代码已完成!现在应该把它放入 DOM。为了把它放入 DOM你需要在 HTML 文件(**index.html**)中载入指向 Web 组件的 JavaScript 脚本。
```
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
</head>
<body>
<weather-card longitude='85.8245' latitude='20.296'/>
  <script src='./weather-card.js'></script>
</body>
</html>
```
这就是显示在浏览器中的 Web 组件:
![Web component displayed in a browser][6]
(Ramakrishna Pattnaik, [CC BY-SA 4.0][7])
由于 Web 组件中只包含 HTML、CSS 和 JavaScript它们本来就是浏览器所支持的并且可以无瑕疵地跟前端框架例如 React 和 Vue一同使用。下面这段简单的代码展现的是它跟一个由 [Create React App][8] 引导的一个简单的 React App 的整合方法。如果你需要,可以引入前面定义的 **weather-card.js**,把它作为一个组件使用:
```
import './App.css';
import './weather-card';
function App() {
  return (
  <weather-card longitude='85.8245' latitude='20.296'></weather-card>
  );
}
export default App;
```
### Web 组件的生命周期
一切组件都遵循从初始化到移除的生命周期法则。每个生命周期事件都有相应的方法你可以借助这些方法令组件更好地工作。Web 组件的生命周期事件包括:
* **Constructor:** Web 组件的构造函数在它被挂载前调用,意味着在元素附加到文档对象前被创建。它用于初始化本地状态、绑定事件处理器以及创建 Shadow DOM。在构造函数中必须调用 `super()`,执行父类的构造函数。
* **ConnectedCallBack:** 当一个元素被挂载(插入 DOM 树)时调用。该函数处理创建 DOM 节点的初始化过程中的相关事宜大多数情况下用于类似于网络请求的操作。React 开发者可以将它与 `componentDidMount` 相关联。
* **attributeChangedCallback:** 这个方法接收三个参数:`name`, `oldValue``newValue`。组件的任一属性发生变化,就会执行这个方法。属性由静态 `observedAttributes` 方法声明:
```
static get observedAttributes() {
  return ['name', '_id'];
}
```
一旦属性名或 `_id` 改变,就会调用 `attributeChangedCallback` 方法。
* **DisconnectedCallBack:**当一个元素从 DOM 树移除,会执行这个方法。它相当于 React 中的 `componentWillUnmount`。它可以用于释放不能由垃圾回收机制自动清除的资源,比如 DOM 事件的取消订阅、停用计时器或取消所有已注册的回调方法。
* **AdoptedCallback:** 每次自定义元素移动到一个新文档时调用。只有在处理 IFrame 时会发生这种情况。
### 模块化开源
Web 组件对于开发 Web App 很有用。无论你是熟练使用 JavaScript 的老手,还是初学者,无论你的目标客户使用哪种浏览器,借助这种开源标准创建可重用的代码都是一件可以轻松完成的事。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/7/web-components
作者:[Ramakrishna Pattnaik][a]
选题:[lujun9972][b]
译者:[cool-summer-021](https://github.com/cool-summer-021)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rkpattnaik780
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://en.wikipedia.org/wiki/Document_Object_Model
[3]: https://openweathermap.org/api
[4]: http://api.openweathermap.org/data/2.5/weather?lat=${this.latitude}\&lon=${this.longitude}\&appid=API\_KEY\`
[5]: https://gist.github.com/rkpattnaik780/acc683d3796102c26c1abb03369e31f8
[6]: https://opensource.com/sites/default/files/uploads/webcomponent.png (Web component displayed in a browser)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://create-react-app.dev/docs/getting-started/

View File

@ -1,205 +0,0 @@
[#]: subject: "GUI Apps for Package Management in Arch Linux"
[#]: via: "https://itsfoss.com/arch-linux-gui-package-managers/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
# Arch Linux 中用于包管理的 GUI 应用
[安装 Arch Linux][1] 被认为具有挑战性。这就是为什么[有几个基于 Arch 的发行版][2]通过提供图形化的安装程序使事情变得简单。
即使你设法安装了 Arch Linux你也会注意到它严重依赖命令行。如果你需要安装应用或更新系统那么必须打开终端。
是的! Arch Linux 没有软件中心。我知道,这让很多人感到震惊。
如果你对使用命令行管理应用感到不舒服,你可以安装一个 GUI 工具。这有助于在舒适的 GUI 中搜索包以及安装和删除它们。
想知道你应该使用 [pacman 命令][3]的哪个图形前端?我有一些建议可以帮助你入门。
**请注意,某些软件管理器是特定于桌面环境的。**
### 1. Apper
![使用 Apper 安装 Firefox][4]
Apper 是使用 PackageKit 的最小化 Qt5 应用和包管理器,它还支持 AppStream 和自动更新。但是,**没有 AUR 支持**。
要从官方仓库安装它,请使用以下命令。
```
sudo pacman -Syu apper
```
[GitLab 上的应用][5]
### 2. 深度应用商店
![使用深度应用商店安装 Firefox][6]
深度应用商店是深度桌面环境的应用商店,使用 DTKQT5构建使用 PackageKit支持 AppStream同时提供系统更新通知。 **没有 AUR 支持**
要安装它,请使用以下命令。
```
sudo pacman -Syu deepin-store
```
[Github 上的深度商店][7]
### 3. Discover
![使用 Discover 安装 Firefox][8]
Discover 不需要为 KDE Plasma 用户介绍。它是一个使用 PackageKit 的基于 Qt 的应用管理器,支持 AppStream、Flatpak 和固件更新。
为了安装 Flatpak 和固件更新,需要分别安装 Discover 的 `flatpak``fwupd` 包。
它没有 AUR 支持。
```
sudo pacman -Syu discover packagekit-qt5
```
[GitLab 上的 Discover][9]
### 4. GNOME PackageKit
![Installing Firefox using GNOME PackageKit][10]
GNOMEPackageKit 是一个使用 PackageKit 的 GTK3 包管理器,支持 AppStream。不幸的是**没有 AUR 支持**。
要从官方仓库安装它,请使用以下命令。
```
sudo pacman -Syu gnome-packagekit
```
[freedesktop 上的 PackageKit][11]
### 5. GNOME 软件
![Installing Firefox using GNOME Software][12]
GNOME 软件不需要向 GNOME 桌面用户介绍。它是使用 PackageKit 的 GTK4 应用管理器,支持 AppStream、Flatpak 和固件更新。
它没有 AUR 支持。要安装来自 GNOME 软件的 Flatpak 和固件更新,需要分别安装 `flatpak``fwupd` 包。
安装它使用:
```
sudo pacman -Syu gnome-software-packagekit-plugin gnome-software
```
[GitLab 上的 GNOME 软件][13]
### 6. tkPacman
![使用 tkPacman 安装 Firefox][14]
它是用 Tcl 编写的 Tk pacman 包装器。界面类似于 [Synaptic 包管理器][15]。
由于没有 GTK/Qt 依赖,它非常轻量级,因为它使用 Tcl/Tk GUI 工具包。
它不支持 AUR这很讽刺因为你需要从 [AUR][16] 安装它。你需要事先安装一个 [AUR 助手][17],如 yay。
```
yay -Syu tkpacman
```
[Sourceforge 上的 tkPacman][18]
### 7. Octopi
![使用 Octopi 安装 Firefox][19]
可以认为它是 tkPacman 的更好看的表亲。它使用 Qt5 和 Alpm还支持 Appstream 和 **AUR (通过 yay)**
你还可以获得桌面通知、仓库编辑器和缓存清理器。它的界面类似于 Synaptic 包管理器。
要从 AUR 安装它,请使用以下命令。
```
yay -Syu octopi
```
[GitHub 上的 Octopi][20]
### 8. Pamac
![使用 Pamac 安装 Firefox][21]
Pamac 是 Manjaro Linux 的图形包管理器。它基于 GTK3 和 Alpm**支持 AUR、Appstream、Flatpak 和 Snap**。
Pamac 还支持自动下载更新和降级软件包。
它是 Arch Linux 衍生版中使用最广泛的应用。但因为 [DDoS AUR 网页][22]而臭名昭著。
[在 Arch Linux 上安装 Pamac][23] 有几种方法。最简单的方法是使用 AUR 助手。
```
yay -Syu pamac-aur
```
[GitLab 上的 Pamac][24]
### 总结
要删除任何上面 GUI 包管理器以及依赖项和配置文件,请使用以下命令将 _packagename_ 替换为要删除的包的名称。
```
sudo pacman -Rns packagename
```
这样看来Arch Linux 也可以在不接触终端的情况下使用合适的工具。
还有一些其他应用程序也使用终端用户界面 (TUI)。一些例子是 [pcurses][25]、[cylon][26]、[pacseek][27] 和 [yup][28]。但是,这篇文章只讨论那些有适当的 GUI 的软件。
**注意:** PackageKit 默认打开系统权限,否则[不推荐][29]用于一般用途。因为如果用户是 wheel 组的一部分,更新或安装任何软件都不需要密码。
**你看到了在 Arch Linux 上使用 GUI 软件中心的几种选择。现在是时候决定使用其中一个了。你会选择哪一个Pamac 或 OctoPi 还是其他?现在就在下面留言吧**。
---
via: https://itsfoss.com/arch-linux-gui-package-managers/
作者:[Anuj Sharma][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者 ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/anuj/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-arch-linux/
[2]: https://itsfoss.com/arch-based-linux-distros/
[3]: https://itsfoss.com/pacman-command/
[4]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[5]: https://invent.kde.org/system/apper
[6]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[7]: https://github.com/dekzi/dde-store
[8]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[9]: https://invent.kde.org/plasma/discover
[10]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[11]: https://freedesktop.org/software/PackageKit/index.html
[12]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[13]: https://gitlab.gnome.org/GNOME/gnome-software
[14]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[15]: https://itsfoss.com/synaptic-package-manager/
[16]: https://itsfoss.com/aur-arch-linux/
[17]: https://itsfoss.com/best-aur-helpers/
[18]: https://sourceforge.net/projects/tkpacman
[19]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[20]: https://github.com/aarnt/octopi
[21]: https://itsfoss.com/wp-content/uploads/2022/09/apper-arch-install-firefox.png
[22]: https://gitlab.manjaro.org/applications/pamac/-/issues/1017
[23]: https://itsfoss.com/install-pamac-arch-linux/
[24]: https://gitlab.manjaro.org/applications/pamac
[25]: https://github.com/schuay/pcurses
[26]: https://github.com/gavinlyonsrepo/cylon
[27]: https://github.com/moson-mo/pacseek
[28]: https://github.com/ericm/yup
[29]: https://bugs.archlinux.org/task/50459

View File

@ -0,0 +1,81 @@
[#]: subject: "Easiest Way to Open Files as Root in GNOME Files"
[#]: via: "https://www.debugpoint.com/gnome-files-root-access/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
在 GNOME 文件中以 Root 身份打开文件的最简单方法
======
这是在 GNOME Files 中以 root 身份访问文件或目录的最简单方法。
![][1]
在 Windows 中,你通常可以通过右键上下文菜单中的“以管理员身份打开”选项来打开文件或文件夹。
该功能是文件管理器的一部分,在 Windows 中就是 Windows 资源管理器。但实际的权限检查是由操作系统及其权限控制模块来执行的。
在 Linux 发行版和文件管理器中,情况略有不同。不同的桌面有自己的处理方式。
由于以管理员(或 root身份修改文件和文件夹是有风险的并且可能导致系统损坏因此用户无法通过文件管理器的 GUI 轻松使用该功能。
例如KDE Plasma 的默认文件管理器 Dolphin 最近[添加了此功能][2]:当需要 root 权限时,它会通过 PolicyKit KDE Agent (polkit) 窗口向你确认,如下所示;而不是反过来,由你主动在文件管理器中以 root 身份去打开或执行某些东西。
值得一提的是,你不能使用 “sudo dolphin” 以 root 权限运行文件管理器本身。
![使用 Polkit 实现 KIO 后的 Dolphin root 访问权限][3]
在某种程度上,这能让你避免许多意想不到的麻烦。当然,高级用户总是可以通过终端使用 sudo 来完成他们的工作。
### GNOME Files (Nautilus) 和对文件、目录的 root 访问权限
话虽如此,[GNOME Files][4](又名 Nautilus有一种方法可以通过 root 打开文件和文件夹。
以下是方法。
* 打开 GNOME Files 或 Nautilus。
* 然后单击左侧窗格中的其他位置。
* 按 CTRL+L 调出地址栏。
* 在地址栏中,输入下面的内容并回车。
```
admin:///
```
* 它会要求输入管理员密码。验证成功后,你就能以管理员身份访问系统。
* 现在,从这里开始,无论你做什么,它都是管理员或 root。
![以管理员身份输入位置地址][5]
![输入管理员密码][6]
![以 root 身份打开 GNOME Files][7]
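顺带一提admin:// 这个 GVfs 后端通常也接受具体路径,例如直接以管理员身份打开 /etc 目录。具体行为可能因 GVfs 版本而异,以下仅作示意:

```
admin:///etc
```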
但是,与往常一样,请小心你以管理员身份所做的每一件事。在以 root 身份通过验证之后,人们往往很容易忘记这一点。
这些选项之所以不容易被发现,是有原因的:就是为了防止你和许多 Linux 新手不小心弄坏自己的系统。
干杯。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/gnome-files-root-access/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.debugpoint.com/wp-content/uploads/2022/10/nauroot-1024x576.jpg
[2]: https://www.debugpoint.com/dolphin-root-access/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/02/Dolphin-root-access-after-KIO-with-Polkit-implementation.jpg
[4]: https://wiki.gnome.org/Apps/Files
[5]: https://www.debugpoint.com/wp-content/uploads/2022/10/Enter-the-location-address-as-admin.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/10/Give-admin-password.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/10/Opening-GNOME-Files-as-root.jpg

View File

@ -0,0 +1,125 @@
[#]: subject: "How to Set Static IP Address on Ubuntu Server 22.04"
[#]: via: "https://www.linuxtechi.com/static-ip-address-on-ubuntu-server/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何在 Ubuntu Server 22.04 上设置静态 IP 地址
======
在这篇文章中,我们将介绍如何在 Ubuntu Server 22.04 上设置静态 IP 地址。
强烈建议在 Linux 服务器上使用静态 IP因为它在重启后保持不变。静态 IP 对邮件服务器、Web 服务器和文件服务器等服务起着重要作用。
##### 先决条件
* 最小安装的 Ubuntu Server 22.04
* 具有 sudo 管理员权限的普通用户
在 Ubuntu Server 22.04 中,网络由 netplan 程序控制,因此我们将使用 netplan 在 Ubuntu Server 上配置静态 IP 地址。
注意:我们不能使用 [nmcli 程序][1],因为它不是 Ubuntu Server 上默认安装的一部分。
### 在 Ubuntu Server 22.04 上设置静态 IP 地址
登录到你的 Ubuntu Server 22.04,查找 netplan 配置文件。它位于 /etc/netplan 目录下。
```
$ cd /etc/netplan/
$ ls -l
total 4
-rw-r--r-- 1 root root 116 Oct 12 04:03 00-installer-config.yaml
$
```
运行以下 cat 命令以查看 “00-installer-config.yaml” 的内容
注意:配置文件的名称可能因你的设置而异。由于它是一个 yaml 文件,因此请确保在编辑时保持缩进和语法。
```
$ cat 00-installer-config.yaml
```
输出:
![Default-Content-netplan-ubuntu-server][2]
从上面的输出可以看出,我们有一个 ens33 接口,它正在从 DHCP 服务器获取 IP。查看接口名称的另一种方法是使用 ip 命令。
现在,要用静态 IP 取代 DHCP请使用 vi 或 nano 编辑器编辑 netplan 配置文件,并添加以下内容。
```
$ sudo vi 00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  renderer: networkd
  ethernets:
    ens33:
      addresses:
        - 192.168.1.247/24
      nameservers:
        addresses: [4.2.2.2, 8.8.8.8]
      routes:
        - to: default
          via: 192.168.1.1
  version: 2
```
保存并关闭文件。
![Updated-Netplan-Config-File-Content-Ubuntu-Server][3]
在上面的文件中,我们使用了以下内容,
* ens33 为接口名称
* addresses 用于设置静态 IP 地址
* nameservers 用于指定 DNS 服务器的 IP
* routes 用于指定默认网关
注意:根据你的环境更改 IP 详细信息和接口名称。
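提示:如果担心配置有误导致远程连接中断,可以先运行 `netplan try` 来测试。它会临时应用配置并等待你确认,如果在超时前没有确认,就会自动回滚(这是 netplan 自带的功能,此处仅作提示):

```
$ sudo netplan try
```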
要使上述修改生效,请使用以下 netplan 命令应用这些更改:
```
$ sudo netplan apply
```
运行以下 ip 命令查看接口上的 ip 地址:
```
$ ip addr show ens33
```
要查看默认路由,请运行:
```
$ ip route show
```
上述命令的输出。
![ip-addr-route-command-output-ubuntu-server][4]
完美!以上命令的输出确认静态 IP 和路由均已配置成功。
这就是这篇文章的全部内容。请在下面的评论部分发表你的问题和反馈。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/static-ip-address-on-ubuntu-server/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/configure-ip-with-nmcli-command-linux/
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/10/Default-Content-netplan-ubuntu-server.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/10/Updated-Netplan-Config-File-Content-Ubuntu-Server.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/10/ip-addr-route-command-output-ubuntu-server.png