mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-03-21 02:10:11 +08:00, commit a60d6b232c
@ -0,0 +1,244 @@
[#]: collector: (lujun9972)
[#]: translator: (Chao-zhi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13099-1.html)
[#]: subject: (Getting Started With Pacman Commands in Arch-based Linux Distributions)
[#]: via: (https://itsfoss.com/pacman-command/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)

Arch Linux 的 pacman 命令入门
======

> 这本初学者指南向你展示了在 Linux 中可以使用 pacman 命令做什么,如何使用它们来查找新的软件包,安装和升级新的软件包,以及清理你的系统。

[pacman][1] 包管理器是 [Arch Linux][2] 和其他主要发行版(如 Red Hat 和 Ubuntu/Debian)之间的主要区别之一。它结合了简单的二进制包格式和易于使用的 [构建系统][3]。`pacman` 的目标是方便地管理软件包,无论它是来自 [官方库][4] 还是用户自己构建的软件库。

如果你曾经使用过 Ubuntu 或基于 Debian 的发行版,那么你可能使用过 `apt-get` 或 `apt` 命令。`pacman` 在 Arch Linux 中就是对应的命令。如果你 [刚刚安装了 Arch Linux][5],那么首先要做的 [几件事][6] 之一就是学习使用 `pacman` 命令。

在这个初学者指南中,我将解释一些基本的 `pacman` 命令的用法,你可以用这些命令来管理基于 Arch Linux 的系统。

### Arch Linux 用户应该知道的几个重要的 pacman 命令

![pacman 命令][7]

与其他包管理器一样,`pacman` 可以将包列表与软件库同步,它能够自动解决所有所需的依赖项,使用户可以通过一条简单的命令下载并安装软件。

#### 通过 pacman 安装软件

你可以用以下形式的命令来安装一个或者多个软件包:

```
pacman -S 软件包名1 软件包名2 ...
```

![安装一个包][8]

`-S` 选项的意思是<ruby>同步<rt>synchronization</rt></ruby>,即 `pacman` 在安装之前先与软件库进行同步。

`pacman` 数据库根据安装的原因将安装的包分为两组:

* **显式安装**:由 `pacman -S` 或 `-U` 命令直接安装的包
* **依赖安装**:由于被其他显式安装的包所 [依赖][9],而被自动安装的包。
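如果想按安装原因查看或调整这两类包,可以使用 `pacman` 的查询选项。下面是一个补充示例(非原文内容,仅供参考):

```
# 列出显式安装的包
pacman -Qe

# 列出作为依赖自动安装的包
pacman -Qd

# 将某个包的安装原因改为“显式安装”(“软件包名”为占位符)
pacman -D --asexplicit 软件包名
```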
#### 卸载已安装的软件包

卸载一个包,但保留它的所有依赖:

```
pacman -R 软件包名
```

![移除一个包][10]

删除一个包,以及其不被其他包所需要的依赖项:

```
pacman -Rs 软件包名
```

如果所依赖的包已经被删除,下面这条命令可以删除所有不再被需要的依赖项:

```
pacman -Qdtq | pacman -Rs -
```

#### 升级软件包

`pacman` 提供了一个简单的办法来 [升级 Arch Linux][11]。你只需要一条命令就可以升级所有已安装的软件包。这可能需要一段时间,这取决于系统的新旧程度。

以下命令可以同步存储库数据库,*并且* 更新系统的所有软件包,但不包括不在软件库中的“本地安装的”包:

```
pacman -Syu
```

* `S` 代表同步
* `y` 代表更新本地存储库
* `u` 代表系统更新

也就是说,同步到中央软件库(主程序包数据库),刷新主程序包数据库的本地副本,然后执行系统更新(通过更新所有有更新版本可用的程序包)。

![系统更新][12]

> 注意!
>
> 对于 Arch Linux 用户,在系统升级前,建议你访问 [Arch Linux 主页][2] 查看最新消息,以了解异常更新的情况。如果系统更新需要人工干预,主页上将发布相关的新闻。你也可以订阅 [RSS 源][13] 或 [Arch 的声明邮件][14]。
>
> 在升级基础软件(如 kernel、xorg、systemd 或 glibc)之前,请注意查看相应的 [论坛][15],以了解大家报告的各种问题。
>
> 在 Arch 和 Manjaro 等滚动发行版中不支持**部分升级**。这意味着,当新的库版本被推送到软件库时,软件库中的所有包都需要根据库版本进行升级。例如,如果两个包依赖于同一个库,则仅升级其中一个包可能会破坏依赖于该库旧版本的另一个包。

#### 用 Pacman 查找包

`pacman` 使用 `-Q` 选项查询本地包数据库,使用 `-S` 选项查询同步数据库,使用 `-F` 选项查询文件数据库。

`pacman` 可以在数据库中搜索包,搜索范围包括包的名称和描述:

```
pacman -Ss 字符串1 字符串2 ...
```

![查找一个包][16]

查找已经被安装的包:

```
pacman -Qs 字符串1 字符串2 ...
```

根据文件名,在远程软件包中查找它所属的包:

```
pacman -F 字符串1 字符串2 ...
```
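需要注意的是,`-F` 查询的是文件数据库,使用前通常要先同步一次文件数据库。下面是一个补充示例(非原文内容):

```
# 先同步文件数据库
pacman -Fy

# 再根据文件名查找它所属的包
pacman -F 文件名
```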
查看一个包的依赖树:

```
pactree 软件包名
```
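`pactree` 不随 `pacman` 本体一起提供,它来自 `pacman-contrib` 包。如果系统中还没有这个命令,可以先安装它(补充说明,非原文内容):

```
pacman -S pacman-contrib
```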
#### 清除包缓存

`pacman` 将其下载的包存储在 `/var/cache/pacman/pkg/` 中,并且不会自动删除旧版本或已卸载的版本。这有一些优点:

1. 它允许 [降级][17] 一个包,而不需要通过其他来源检索以前的版本。
2. 已卸载的软件包可以轻松地直接从缓存文件夹重新安装。

但是,有必要定期清理缓存,以防止文件夹不断增大。

[pacman-contrib][19] 包中提供的 [paccache(8)][18] 脚本默认会删除已安装和未安装包的所有缓存版本,但最近的 3 个版本除外:

```
paccache -r
```
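`paccache` 还可以按需调整保留的版本数量。下面是两个常见用法的补充示例(非原文内容;`-k` 指定保留的版本数,`-u` 表示只处理已卸载的包):

```
# 每个包只保留最近 1 个缓存版本
paccache -rk1

# 删除已卸载的包的全部缓存版本
paccache -ruk0
```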
![清除缓存][20]

要删除当前未安装的所有缓存包和未使用的同步数据库,请执行:

```
pacman -Sc
```

要从缓存中删除所有文件,请使用清除选项两次,这是最激进的方法,不会在缓存文件夹中留下任何内容:

```
pacman -Scc
```

#### 安装本地或者第三方的包

安装不是来自远程存储库的“本地”包:

```
pacman -U 本地软件包路径.pkg.tar.xz
```

安装官方存储库中未包含的“远程”软件包:

```
pacman -U http://www.example.com/repo/example.pkg.tar.xz
```

### 额外内容:用 pacman 排除常见错误

下面是使用 `pacman` 管理包时可能遇到的一些常见错误。

#### 提交事务失败(文件冲突)

如果你看到以下报错:

```
error: could not prepare transaction
error: failed to commit transaction (conflicting files)
package: /path/to/file exists in filesystem
Errors occurred, no packages were upgraded.
```

这是因为 `pacman` 检测到了文件冲突,它不会替你覆盖文件。

解决这个问题的一个安全方法是首先检查是否有另一个包拥有这个文件(`pacman -Qo 文件路径`)。如果该文件属于另一个包,请提交错误报告。如果文件不属于任何包,请重命名这个“存在于文件系统中”的文件,然后重新执行更新命令。如果一切顺利,之后就可以删除这个文件。

你也可以显式地运行 `pacman -S --overwrite 要覆盖的文件模式`,强制 `pacman` 覆盖与给定模式匹配的文件,而不必手动重命名、事后再删除属于该包的所有文件。

#### 提交事务失败(包无效或损坏)

在 `/var/cache/pacman/pkg/` 中查找 `.part` 文件(部分下载的包),并将其删除。这类问题通常是由 `pacman.conf` 文件中自定义的 `XferCommand` 引起的。
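可以用类似下面的命令找到并删除这些残留的 `.part` 文件(补充示例,非原文内容,删除前请自行确认):

```
find /var/cache/pacman/pkg/ -name "*.part" -delete
```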
#### 初始化事务失败(无法锁定数据库)

当 `pacman` 要修改包数据库时,例如安装包时,它会在 `/var/lib/pacman/db.lck` 处创建一个锁文件。这可以防止 `pacman` 的另一个实例同时尝试更改包数据库。

如果 `pacman` 在更改数据库时被中断,这个过时的锁文件可能仍然保留。如果你确定没有 `pacman` 实例正在运行,那么请删除锁文件。

检查进程是否持有锁定文件:

```
lsof /var/lib/pacman/db.lck
```

如果上述命令未返回任何内容,则可以删除锁文件:

```
rm /var/lib/pacman/db.lck
```

如果你发现 `lsof` 命令输出了使用锁文件的进程的 PID,请先杀死这个进程,然后删除锁文件。

我希望你喜欢我对 `pacman` 基础命令的介绍。

--------------------------------------------------------------------------------

via: https://itsfoss.com/pacman-command/

作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[Chao-zhi](https://github.com/Chao-zhi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.archlinux.org/pacman/
[2]: https://www.archlinux.org/
[3]: https://wiki.archlinux.org/index.php/Arch_Build_System
[4]: https://wiki.archlinux.org/index.php/Official_repositories
[5]: https://itsfoss.com/install-arch-linux/
[6]: https://itsfoss.com/things-to-do-after-installing-arch-linux/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/essential-pacman-commands.jpg?ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-S.png?ssl=1
[9]: https://wiki.archlinux.org/index.php/Dependency
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-R.png?ssl=1
[11]: https://itsfoss.com/update-arch-linux/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Syu.png?ssl=1
[13]: https://www.archlinux.org/feeds/news/
[14]: https://mailman.archlinux.org/mailman/listinfo/arch-announce/
[15]: https://bbs.archlinux.org/
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Ss.png?ssl=1
[17]: https://wiki.archlinux.org/index.php/Downgrade
[18]: https://jlk.fjfi.cvut.cz/arch/manpages/man/paccache.8
[19]: https://www.archlinux.org/packages/?name=pacman-contrib
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-paccache-r.png?ssl=1
@ -0,0 +1,176 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13100-1.html)
[#]: subject: (Why I use the D programming language for scripting)
[#]: via: (https://opensource.com/article/21/1/d-scripting)
[#]: author: (Lawrence Aberba https://opensource.com/users/aberba)

我为什么要用 D 语言写脚本?
======

> D 语言以系统编程语言而闻名,但它也是编写脚本的一个很好的选择。

![][1]

D 语言由于其静态类型和元编程能力,经常被宣传为系统编程语言。然而,它也是一种非常高效的脚本语言。

由于 Python 在自动化任务和快速实现原型想法方面的灵活性,它通常被选为脚本语言。这使得 Python 对系统管理员、[管理者][2]和一般的开发人员非常有吸引力,因为它可以自动完成他们可能需要手动完成的重复性任务。

我们自然也希望其他脚本语言具有 Python 的这些特性和能力。以下是我认为 D 是一个不错的选择的两个原因。

### 1、D 很容易读和写

作为一种类似于 C 的语言,D 对大多数程序员来说应该很熟悉。任何使用过 JavaScript、Java、PHP 或 Python 的人都很容易上手 D 语言。

如果你还没有安装 D,请先[安装 D 编译器][3],这样你就可以[运行本文中的 D 代码][4]。你也可以使用[在线 D 编辑器][5]。

下面是一个 D 代码的例子,它从一个名为 `words.txt` 的文件中读取单词,并在命令行中打印出来。`words.txt` 的内容如下:

```
open
source
is
cool
```
用 D 语言写这个脚本:

```
#!/usr/bin/env rdmd
// file print_words.d

// import the D standard library
import std;

void main(){
    // open the file
    File("./words.txt")

        //iterate by line
        .byLine

        // print each word
        .each!writeln;
}
```

这段代码以 [释伴][6] 开头,它将使用 [rdmd][7] 来运行这段代码,`rdmd` 是 D 编译器自带的用来编译并运行代码的工具。假设你运行的是 Unix 或 Linux,在运行这个脚本之前,你必须使用 `chmod` 命令使其可执行:

```
chmod u+x print_words.d
```
现在脚本是可执行的,你可以运行它:

```
./print_words.d
```

这将在你的命令行中打印以下内容:

```
open
source
is
cool
```

恭喜你,你写了第一个 D 语言脚本。你可以看到 D 是如何让你按顺序链式调用函数,这让阅读代码的感觉很自然,类似于你在头脑中思考问题的方式。这个[功能让 D 成为我最喜欢的编程语言][8]。

试着再写一个脚本:一个非营利组织的管理员有一个捐款的文本文件,每笔金额都是单独的一行。管理员想把前 10 笔捐款相加,然后打印出金额:

```
#!/usr/bin/env rdmd
// file sum_donations.d

import std;

void main()
{
    double total = 0;

    // open the file
    File("monies.txt")

        // iterate by line
        .byLine

        // pick first 10 lines
        .take(10)

        // remove new line characters (\n)
        .map!(strip)

        // convert each to double
        .map!(to!double)

        // add element to total
        .tee!((x) { total += x; })

        // print each number
        .each!writeln;

    // print total
    writeln("total: ", total);
}
```

与 `each` 一起使用的 `!` 操作符是[模板参数][9]的语法。
### 2、D 是快速原型设计的好帮手

D 语言很灵活,可以快速地把代码组合在一起并让它跑起来。它的标准库中包含了丰富的实用函数,用于执行操作数据(JSON、CSV、文本等)之类的常见任务;它还带有一套丰富的通用算法,用于迭代、搜索、比较和修改数据。这些精心设计的算法通过定义通用的 [基于范围的接口][10],面向序列进行处理。

上面的脚本展示了 D 中的链式函数调用如何为顺序处理和操作数据提供简洁的表达方式。D 的另一个吸引人的地方是它不断增长的第三方包生态,可用于执行各种常见任务。一个例子是,使用 [Vibe.d][11] web 框架构建一个简单的 web 服务器非常容易。下面是一个例子:

```
#!/usr/bin/env dub
/+ dub.sdl:
dependency "vibe-d" version="~>0.8.0"
+/
void main()
{
    import vibe.d;
    listenHTTP(":8080", (req, res) {
        res.writeBody("Hello, World: " ~ req.path);
    });
    runApplication();
}
```

它使用官方的 D 软件包管理器 [Dub][12],从 [D 软件包仓库][13]中获取 Vibe.d Web 框架。Dub 负责下载 Vibe.d 包,然后编译它,并在本地主机的 8080 端口上启动一个 web 服务器。
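补充说明(非原文内容):由于脚本开头有 `#!/usr/bin/env dub` 这行释伴,你可以像前面的 rdmd 脚本一样直接运行它,也可以用 `dub` 的单文件模式运行。下面的示例假设脚本保存为 `web_server.d`(文件名是假设的):

```
# 赋予可执行权限后直接运行
chmod u+x web_server.d
./web_server.d

# 或者用 dub 的单文件模式运行
dub run --single web_server.d
```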
### 尝试一下 D 语言

这些只是你可能想用 D 来写脚本的几个原因。

D 是一种非常适合开发的语言。它很容易从 D 的下载页面安装,所以去下载编译器,看看示例,并亲自体验一下 D 语言吧。

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/1/d-scripting

作者:[Lawrence Aberba][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/aberba
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://opensource.com/article/20/3/automating-community-management-python
[3]: https://tour.dlang.org/tour/en/welcome/install-d-locally
[4]: https://tour.dlang.org/tour/en/welcome/run-d-program-locally
[5]: https://run.dlang.io/
[6]: https://en.wikipedia.org/wiki/Shebang_(Unix)
[7]: https://dlang.org/rdmd.html
[8]: https://opensource.com/article/20/7/d-programming
[9]: http://ddili.org/ders/d.en/templates.html
[10]: http://ddili.org/ders/d.en/ranges.html
[11]: https://vibed.org
[12]: https://dub.pm/getting_started
[13]: https://code.dlang.org
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: ( chensanle )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,128 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension)
[#]: via: (https://itsfoss.com/material-shell/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension
======

There is something about tiling windows that attracts many people. Perhaps it looks good, or perhaps it is time-saving if you are a fan of [keyboard shortcuts in Linux][1]. Or maybe it’s the challenge of using the uncommon tiling windows.

![Tiling Windows in Linux | Image Source][2]

From i3 to [Sway][3], there are so many tiling window managers available for the Linux desktop. Configuring a tiling window manager itself involves a steep learning curve.

This is why projects like [Regolith desktop][4] exist, giving you a preconfigured tiling desktop so that you can get started with tiling windows with less effort.

Let me introduce you to a similar project named Material Shell that makes using the tiling feature even easier than [Regolith][5].

### Material Shell GNOME Extension: Convert GNOME desktop into a tiling window manager

[Material Shell][6] is a GNOME extension, and that’s the best thing about it. This means that you don’t have to log out and log in to another desktop environment or window manager. You can enable or disable it from within your current session.

I’ll list the features of Material Shell, but it will be easier to see it in action:

[Subscribe to our YouTube channel for more Linux videos][7]

The project is called Material Shell because it follows the [Material Design][8] guidelines and thus gives the applications an aesthetically pleasing interface. Here are its main features:

#### Intuitive interface

Material Shell adds a left panel for quick access. On this panel, you can find the system tray at the bottom and the search and workspaces on the top.

All the new apps are added to the current workspace. You can create a new workspace and switch to it to organize your running apps into categories. This is the essential concept of workspaces anyway.

In Material Shell, every workspace can be visualized as a row with several apps rather than a box with several apps in it.

#### Tiling windows

In a workspace, you can see all your opened applications on the top all the time. By default, the applications open to take the entire screen, as they do on the regular GNOME desktop. You can change the layout to split it in half, into multiple columns, or into a grid of apps using the layout changer in the top right corner.

This video shows all the above features at a glance:

#### Persistent layout and workspaces

That’s not all. Material Shell also remembers the workspaces and windows you open, so that you don’t have to reorganize your layout again. This is a good feature to have, as it saves time if you are particular about which application goes where.

#### Hotkeys/Keyboard shortcuts

Like any tiling window manager, you can use keyboard shortcuts to navigate between applications and workspaces.

* `Super+W` Navigate to the upper workspace.
* `Super+S` Navigate to the lower workspace.
* `Super+A` Focus the window at the left of the current window.
* `Super+D` Focus the window at the right of the current window.
* `Super+1`, `Super+2` … `Super+0` Navigate to a specific workspace.
* `Super+Q` Kill the currently focused window.
* `Super+[MouseDrag]` Move a window around.
* `Super+Shift+A` Move the current window to the left.
* `Super+Shift+D` Move the current window to the right.
* `Super+Shift+W` Move the current window to the upper workspace.
* `Super+Shift+S` Move the current window to the lower workspace.

### Installing Material Shell

Warning!

Tiling windows could be confusing for many users. You should be familiar with GNOME Extensions to use it. Avoid trying it if you are absolutely new to Linux or if you panic easily when anything changes in your system.

Material Shell is a GNOME extension. So, please [check your desktop environment][9] to make sure you are running _**GNOME 3.34 or a higher version**_.
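If you are not sure which version you are on, one quick way to check from a terminal is to ask GNOME Shell itself (assuming a standard GNOME desktop where the `gnome-shell` binary is available):

```
gnome-shell --version
```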
I would also like to add that tiling windows could be confusing for many users.

Apart from that, I noticed that disabling Material Shell removes the top bar from Firefox and the Ubuntu dock. You can get the dock back by disabling/enabling the Ubuntu dock extension from the Extensions app in GNOME. I haven’t tried it, but I guess these problems should also go away after a system reboot.

I hope you know [how to use GNOME extensions][10]. The easiest way is to just [open this link in the browser][11], install the GNOME extension browser plugin, and then enable the Material Shell extension.

![][12]

If you don’t like it, you can disable it from the same extension link you used earlier or use the GNOME Extensions app:

![][13]
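If you prefer the terminal, recent GNOME releases also ship a `gnome-extensions` command-line tool that can list, enable, and disable extensions. A rough sketch (the exact UUID is whatever `gnome-extensions list` reports for Material Shell on your system):

```
# find the Material Shell UUID among installed extensions
gnome-extensions list

# enable or disable it by UUID (placeholder shown here)
gnome-extensions enable <material-shell-uuid>
gnome-extensions disable <material-shell-uuid>
```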
**To tile or not?**

I use multiple screens, and I found that Material Shell doesn’t work well with multiple monitors. This is something the developer(s) can improve in the future.

Apart from that, it’s really easy to get started with tiling windows with Material Shell. If you try Material Shell and like it, appreciate the project by [giving it a star or sponsoring it on GitHub][14].

For some reason, tiling windows are getting popular. The recently released [Pop OS 20.04][15] also added tiling window features.

But as I mentioned previously, tiling layouts are not for everyone, and they could confuse many people.

How about you? Do you prefer tiling windows, or do you prefer the classic desktop layout?

--------------------------------------------------------------------------------

via: https://itsfoss.com/material-shell/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-shortcuts/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/sway-window-manager/
[4]: https://itsfoss.com/regolith-linux-desktop/
[5]: https://regolith-linux.org/
[6]: https://material-shell.com
[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[8]: https://material.io/
[9]: https://itsfoss.com/find-desktop-environment/
[10]: https://itsfoss.com/gnome-shell-extensions/
[11]: https://extensions.gnome.org/extension/3357/material-shell/
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1
[14]: https://github.com/material-shell/material-shell
[15]: https://itsfoss.com/pop-os-20-04-review/
@ -1,184 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why I use the D programming language for scripting)
[#]: via: (https://opensource.com/article/21/1/d-scripting)
[#]: author: (Lawrence Aberba https://opensource.com/users/aberba)

Why I use the D programming language for scripting
======
The D programming language is best known as a system programming
language, but it's also a great option for scripting.
![Business woman on laptop sitting in front of window][1]

The D programming language is often advertised as a system programming language due to its static typing and metaprogramming capabilities. However, it's also a very productive scripting language.

Python is commonly chosen for scripting due to its flexibility for automating tasks and quickly prototyping ideas. This makes Python very appealing to sysadmins, [managers][2], and developers in general for automating recurring tasks that they might otherwise have to do manually.

It is reasonable to expect any other script-writing language to have these Python traits and capabilities. Here are two reasons why I believe D is a good option.

### 1\. D is easy to read and write

As a C-like language, D should be familiar to most programmers. Anyone who uses JavaScript, Java, PHP, or Python will know their way around D.

If you don't already have D installed, [install a D compiler][3] so that you can [run the D code][4] in this article. You may also use the [online D editor][5].

Here is an example of D code that reads words from a file named `words.txt` and prints them on the command line. The contents of `words.txt`:

```
open
source
is
cool
```

Write the script in D:

```
#!/usr/bin/env rdmd
// file print_words.d

// import the D standard library
import std;

void main(){
    // open the file
    File("./words.txt")

        //iterate by line
        .byLine

        // print each word
        .each!writeln;
}
```
This code is prefixed with a [shebang][6] that will run the code using [rdmd][7], a tool that comes with the D compiler to compile and run code. Assuming you are running Unix or Linux, before you can run this script, you must make it executable by using the `chmod` command:

```
chmod u+x print_words.d
```

Now that the script is executable, you can run it:

```
./print_words.d
```

This should print the following on your command line:

```
open
source
is
cool
```

Congratulations! You've written your first D script. You can see how D enables you to chain functions in sequence to make reading the code feel natural, similar to how you think about problems in your mind. This [feature makes D my favorite programming language][8].
Try writing another script: A nonprofit manager has a text file of donations with each amount on separate lines. The manager wants to sum the first 10 donations and print the amounts:

```
#!/usr/bin/env rdmd
// file sum_donations.d

import std;

void main()
{
    double total = 0;

    // open the file
    File("monies.txt")

        // iterate by line
        .byLine

        // pick first 10 lines
        .take(10)

        // remove new line characters (\n)
        .map!(strip)

        // convert each to double
        .map!(to!double)

        // add element to total
        .tee!((x) { total += x; })

        // print each number
        .each!writeln;

    // print total
    writeln("total: ", total);
}
```

The `!` operator used with `each` is the syntax of a [template argument][9].

### 2\. D is great for quick prototyping

D is flexible for hammering code together really quickly and making it work. Its standard library is rich with utility functions for performing common tasks, such as manipulating data (JSON, CSV, text, etc.). It also comes with a rich set of generic algorithms for iterating, searching, comparing, and mutating data. These cleverly crafted algorithms are oriented towards processing sequences by defining generic [range-based interfaces][10].

The script above shows how chaining functions in D provides a gist of sequential processing and manipulating data. Another appeal of D is its growing ecosystem of third-party packages for performing common tasks. An example is how easy it is to build a simple web server using the [Vibe.d][11] web framework. Here's an example:

```
#!/usr/bin/env dub
/+ dub.sdl:
dependency "vibe-d" version="~>0.8.0"
+/
void main()
{
    import vibe.d;
    listenHTTP(":8080", (req, res) {
        res.writeBody("Hello, World: " ~ req.path);
    });
    runApplication();
}
```
This uses the official D package manager, [Dub][12], to fetch the vibe.d web framework from the [D package repository][13]. Dub takes care of downloading the Vibe.d package, then compiling and spinning up a web server on localhost port 8080.

### Give D a try

These are only a couple of reasons why you might want to use D for writing scripts.

D is a great language for development. It's easy to install from the D download page, so download the compiler, take a look at the examples, and experience D for yourself.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/1/d-scripting

作者:[Lawrence Aberba][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/aberba
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://opensource.com/article/20/3/automating-community-management-python
[3]: https://tour.dlang.org/tour/en/welcome/install-d-locally
[4]: https://tour.dlang.org/tour/en/welcome/run-d-program-locally
[5]: https://run.dlang.io/
[6]: https://en.wikipedia.org/wiki/Shebang_(Unix)
[7]: https://dlang.org/rdmd.html
[8]: https://opensource.com/article/20/7/d-programming
[9]: http://ddili.org/ders/d.en/templates.html
[10]: http://ddili.org/ders/d.en/ranges.html
[11]: https://vibed.org
[12]: https://dub.pm/getting_started
[13]: https://code.dlang.org
@ -0,0 +1,76 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 open source tools that make Linux the ideal workstation)
[#]: via: (https://opensource.com/article/21/2/linux-workday)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

3 open source tools that make Linux the ideal workstation
======
Linux has everything you think you need and more for you to have a
productive workday.
![Person using a laptop][1]

In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. Today, I'll share with you why Linux is a great choice for your workday.

Everyone wants to be productive during the workday. If your workday generally involves working on documents, presentations, and spreadsheets, then you might be accustomed to a specific routine. The problem is that _usual routine_ is usually dictated by one or two specific applications, whether it's a certain office suite or a desktop OS. Of course, just because something's a habit doesn't mean it's ideal, and yet it tends to persist unquestioned, even to the point of influencing the very structure of how a business is run.

### Working smarter

Many office applications these days run in the cloud, so you can work with the same constraints on Linux if you really want to. However, because many of the typical big-name office applications aren't cultural expectations on Linux, you might find yourself inspired to explore other options. As anyone eager to get out of their "comfort zone" knows, this kind of subtle disruption can be surprisingly useful. All too often, you don't know what you're doing inefficiently because you haven't actually tried doing things differently. Force yourself to explore other options, and you never know what you'll find. You don't even have to know exactly what you're looking for.

### LibreOffice

One of the most obvious open source office stalwarts on Linux (or any other platform) is [LibreOffice][2]. It features several components, including a word processor, presentation software, a spreadsheet, relational database interface, vector drawing, and more. It can import many document formats from other popular office applications, so transitioning to LibreOffice from another tool is usually easy.

There's more to LibreOffice than just being a great office suite, however. LibreOffice has macro support, so resourceful users can automate repetitive tasks. It also features terminal commands so you can perform many tasks without ever launching the LibreOffice interface.

Imagine, for instance, opening 21 documents, navigating to the **File** menu, to the **Export** or **Print** menu item, and exporting the file to PDF or EPUB. That's over 84 clicks, at the very least, and probably an hour of work. Compare that to opening a folder of documents and converting all of them to PDF or EPUB with just one swift command or menu action. The conversion would run in the background while you work on other things. You'd be finished in a quarter of the time, possibly less.

```
$ libreoffice --headless --convert-to epub *.docx
```
It's the little improvements that Linux encourages, not explicitly but implicitly, through its toolset and the ease with which you can customize your environment and workflow.

### Abiword and Gnumeric

Sometimes, a big office suite is exactly what you _don't_ need. If you prefer to keep your office work simple, you might do better with a lightweight and task-specific application. For instance, I mostly write articles in a text editor because I know all styles are discarded during conversion to HTML. But there are times when a word processor is useful, either to open a document someone has sent to me or because I want a quick and easy way to generate some nicely styled text.

[Abiword][3] is a simple word processor with basic support for popular document formats and all the essential features you'd expect from a word processor. It isn't meant as a full office suite, and that's its best feature. While there's no such thing as too many options, there definitely is such a thing as information overload, and that's exactly what a full office suite or word processor is sometimes guilty of. If you're looking to avoid that, then use something simple instead.

Similarly, the [Gnumeric][4] project provides a simple spreadsheet application. Gnumeric avoids any features that aren't strictly necessary for a spreadsheet, so you still get a robust formula syntax, plenty of functions, and all the options you need for styling and manipulating cells. I don't do much with spreadsheets, so I find myself quite happy with Gnumeric on the rare occasions I need to review or process data in a ledger.

### Pandoc

It's possible to get even more minimal with specialized commands and document processors. The `pandoc` command specializes in document conversion. It's like the `libreoffice --headless` command, except with ten times the number of document formats to work with. You can even generate presentations with it! If part of your work is taking source text from one document and formatting it for several modes of delivery, then Pandoc is a necessity, and so you should [download our cheat sheet][5].
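To give a feel for it, here are a few illustrative one-liners (file names are placeholders; Pandoc picks the output format from the extension or the `-t` flag):

```
# convert a Markdown draft to EPUB and DOCX
pandoc article.md -o article.epub
pandoc article.md -o article.docx

# turn the same source into an HTML slide deck
pandoc -t revealjs -s article.md -o slides.html
```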
Broadly, Pandoc is representative of a completely different way of working. It gets you away from the confines of office applications. It separates you from trying to get your thoughts down into typed words and deciding what font those words ought to use, all at the same time. Working in plain text and then converting to all of your delivery targets afterward lets you work with any application you want, whether it's a notepad on your mobile device, a simple text editor on whatever computer you happen to be sitting in front of, or a text editor in the cloud.

### Look for the alternatives

There are lots of unexpected alternatives available for Linux. You can find them by taking a step back from what you're doing, analyzing your work process, assessing your required results, and investigating new applications that claim to do just the things you rely upon.

Changing the tools you use, your workflow, and your daily routine can be disorienting, especially when you don't know exactly where it is you're looking to go. But the advantage to Linux is that you're afforded the opportunity to re-evaluate the assumptions you've subconsciously developed over years of computer usage. If you look hard enough for an answer, you'll eventually realize what the question was in the first place. And oftentimes, you'll end up appreciating what you learn.

--------------------------------------------------------------------------------

via: https://opensource.com/article/21/2/linux-workday

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: http://libreoffice.org
[3]: https://www.abisource.com
[4]: http://www.gnumeric.org
[5]: https://opensource.com/article/20/5/pandoc-cheat-sheet
@ -0,0 +1,219 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fedora Aarch64 on the SolidRun HoneyComb LX2K)
[#]: via: (https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/)
[#]: author: (John Boero https://fedoramagazine.org/author/boeroboy/)

Fedora Aarch64 on the SolidRun HoneyComb LX2K
======

![][1]

Photo by [Tim Mossholder][2] on [Unsplash][3]

Almost a year has passed since the [HoneyComb][4] development kit was released by SolidRun. I remember reading about this Mini-ITX Arm workstation board being released and thinking “what a great idea.” Then I saw the price and realized this isn’t just another Raspberry Pi killer. Currently that price is $750 USD plus shipping and duty. Niche devices like the HoneyComb aren’t mass produced like the simpler Pi is, and they pack in quite a bit of high end tech. Eventually COVID lockdown boredom got the best of me and I put a build together. Adding a case and RAM, the build ended up costing about $1100 shipped to London. This is a recount of my experiences and the current state of using Fedora on this fun bit of hardware.

First and foremost, the tech packed into this board is impressive. It’s not about to kill a Xeon workstation in raw performance but it’s going to wallop it in performance/watt efficiency. Essentially this is a powerful server in the energy footprint of a small laptop. It’s also a powerful hybrid of compute and network functionality, combining powerful network features in a carrier board with a modular daughter card sporting a 16-core A72 with 2 ECC-capable DDR4 SO-DIMM slots. The carrier board comes in a few editions, giving flexibility to swap or upgrade your RAM + CPU options. I purchased the edition pictured below with 16 cores, 32GB (non-ECC), 512GB NVMe, and 4x10Gbe. For an extra $250 you can add the 100Gbe option if you’re building a 5G deployment or an ISP for a small country (bottom right of board). Imagine this jacked into a 100Gb uplink port acting as proxy, tls inspector, router, or storage for a large 10gb TOR switch.

![][5]

When I ordered it I didn’t fully understand the network coprocessor included from NXP. NXP is the company that makes the unique [LX2160A][6] CPU/SOC for this, as well as the configurable ports and offload engine that enable handling up to 150Gb/s of network traffic without the CPU breaking a sweat. Here is a list of options from NXP’s Layerscape user manual.

![Configure ports in switch, LAG, MUX mode, or straight NICs.][7]

I have a 10gb network in my home attic via a Ubiquiti ES-16-XG so I was eager to see how much this board could push. I also have a QNAP connected via 10gb which rarely manages to saturate the line, so could this also be a NAS replacement? It turned out I needed to sort out drivers and get a stable install first. Since the board has been out for a year, I had some catching up to do. SolidRun keeps an active Discord on [Developer-Ecosystem][8] which was immensely helpful, as install wasn’t as straightforward as previous blogs have mentioned. I’ve always been cursed. If you’ve ever seen Pure Luck, I’m bound to hit every hardware glitch.

![][9]

For starters, you can add a GPU and install graphically, or install via USB console. I started with a spare GPU (Radeon Pro WX2100) intending to build a headless box, which in the end over-complicated things. If you need to swap parts or re-flash a BIOS via the microSD card, you’ll need to swap display, keyboard + mouse. Chaos. Much simpler just to plug into the micro USB console port and access it via /dev/ttyUSB0 for that picture-in-picture experience. It’s really great to have the open ended PCIe3-x8 slot but I’ll keep it open for now. Note that the board does not support PCIe Atomics so some devices may have compatibility issues.

Now comes the fun part. BIOS is not built-in here. You’ll need to [build][10] it from source for your RAM speed and install it via microSDHC. At first this seems annoying, but then you realize that with a removable BIOS installer it’s pretty hard to brick this thing. Not bad. The good news is the latest UEFI builds have worked well for me. Just remember that every time you re-flash your BIOS you’ll need to set everything up again. This was enough to boot Fedora aarch64 from USB. The board offers 64GB of eMMC flash which you can install to if you like. I immediately benched it to find it reads about 165MB/s and writes 55MB/s, which is practical speed for embedded usage, but I’ll definitely be installing to NVMe instead. I had an older Samsung 950 Pro in my spares from a previous Linux box but I encountered major issues with it even with the widely documented kernel param workaround:

```
nvme_core.default_ps_max_latency_us=0
```

In the end I upgraded my main workstation so I could repurpose its existing Samsung EVO 960 for the HoneyComb, which worked much better.
After some fidgeting I was able to install Fedora, but it became apparent that the integrated network ports still don’t work with the mainline kernel. The NXP tech is great but requires a custom kernel build and tooling. Some earlier blogs got around this with a USB->RJ45 Ethernet adapter which works fine. Hopefully network support will be mainlined soon, but for now I snagged a kernel SRPM from the helpful engineers on Discord. With the custom kernel the 1Gbe NIC worked fine, but it turns out the SFP+ ports need more configuration. They won’t be recognized as interfaces until you use NXP’s _restool_ utility to map ports to their usage. In this case just a runtime mapping of _dpmac -> dpni_ was required. This is NXP’s way of mapping a MAC to a network interface via IOCTL commands. The restool binary isn’t provided either and must be built from source. It then layers on management scripts which use cheeky $arg0 references for redirection to call the restool binary with complex arguments.

Since I was starting to accumulate quite a few custom packages it was apparent that a COPR repo was needed to simplify this for Fedora. If you’re not familiar with COPR, I think it’s one of Fedora’s finest resources. This repo contains the uefi build (currently failing build), a 5.10.5 kernel built with network support, and the restool binary with supporting scripts. I also added a oneshot systemd unit to enable the SFP+ ports on boot:

```
systemctl enable --now dpmac@7.service
systemctl enable --now dpmac@8.service
systemctl enable --now dpmac@9.service
systemctl enable --now dpmac@10.service
```

Now each SFP+ port will boot configured as eth1-4, with eth0 being the 1Gb. NetworkManager will struggle unless these are consistent, and if you change the service start order the eth devices will re-order. I actually put a sleep $@ in each activation so they are consistent and don’t have locking issues. Unfortunately it adds 10 seconds to boot time. This has been fixed in the latest kernel and won’t be an issue once mainlined.

![][15]

I’d love to explore the built-in LAG features but this still needs to be coded into the restool options. I’ll save it for later. In the meantime I managed a single 10gb link as primary, and a 3×10 LACP Team for kicks. Eventually I changed to 4×10 LACP via copper SFP+ cables mounted in the attic.

### Energy Efficiency

Now with a stable environment it’s time to raise some hell. It’s really nice to see PWM support was recently added for the CPU fan, which sounds like a mini jet engine without it. Now the sound level is perfectly manageable and thermal control is automatic. Time to test drive with a power meter. Total power usage is consistently between 20-40 watts (usually in the low 20s), which is really impressive. I tried a few _tuned_ profiles which didn’t seem to have much effect on energy. If you add a power-hungry GPU or device that can obviously increase, but for a dev server it’s perfect and well below the Z600 workstations I have next to it, which consume 160-250 watts each when fired up.

### Remote Access

I’m an old soul so I still prefer KDE with Xorg and NX via X2go server. I can access SSH or a full GUI at native performance without a GPU. This lets me get a feel for performance and thermal stats, and also helps to evaluate the device as a workstation or potential VDI. The version of KDE shipped with the aarch64 server spin doesn’t seem to recognize some sensors, but that seems to be because of KDE’s latest widget changes which I’d have to dig into.

![X2go KDE session over SSH][16]

Cockpit support is also outstanding out of the box. If SSH and X2go remote access aren’t your thing, Cockpit provides a great remote management platform with a growing list of plugins. Everything works great in my experience.

![Cockpit behaves as expected.][17]

All I needed to do now was shift into high gear with jumbo frames. MTU 1500 yields me an iperf of about 2-4Gbps, bottlenecked at CPU0. Ain’t nobody got time for that. Set MTU 9000 and suddenly it gets the full 10Gbps both ways with time to spare on the CPU. Again, it would be nice to use the hardware assisted LAG since the device is supposed to handle up to 150Gbps duplex no sweat (with the 100Gbe QSFP option), which is nice given the Ubiquiti ES-16-XG tops out at 160Gbps full duplex (10gb/16 ports).
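For reference, this is roughly how a jumbo-frame MTU can be set persistently with NetworkManager (the connection name below is a placeholder for your own):

```
# set MTU 9000 on an existing connection, then re-activate it
nmcli connection modify "my-10g-link" 802-3-ethernet.mtu 9000
nmcli connection up "my-10g-link"
```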
### Storage

As a storage solution this hardware provides great value in a small thermal window and energy saving footprint. I could accomplish similar performance with an old x86 box for cheap, but the energy usage alone would eclipse any savings in short order. By comparison I’ve seen some consumer NAS devices offer 10Gbe and NVMe cache sharing an inadequate number of PCIe2 lanes and bottlenecked at the bus. This is fully customizable, and since the energy footprint is similar to a small laptop, a small UPS backup should allow full writeback cache mode for maximum performance. This would make a great oVirt NFS or iSCSI storage pool if needed. I would pair it with a nice NAS case or rack mount case with bays. Some vendors such as [Bamboo][18] are actually building server options around this platform as we speak.

The board has 4 SATA3 ports but if I were truly going to build a NAS with this I would probably add a RAID card that makes best use of the PCIe3-x8 slot, which thankfully is open ended. Why some hardware vendors choose to include close-ended PCIe 8x,4x slots is beyond me. Future models will ship with a physical x16 slot but only 8x electrically. Some users on the SolidRun Discord talk about bifurcation and splitting out the 8 PCIe lanes, which is an option as well. Note that some of those lanes are also reserved for NVMe, SATA, and network. The CEX7 form factor and interchangeable carrier board presents interesting possibilities later as the NXP LX2160A docs claim to support up to 24 lanes. For a dev board it’s perfectly fine as-is.

### Network Perf

For now I’ve managed to rig up a 4×10 LACP Team with NetworkManager for full load balancing. This same setup can be done with a QSFP+ breakout cable. The KDE nm Network widget still doesn’t support Teams, but I can set them up via nm-connection-editor or Cockpit. Automation could be achieved with _nmcli_ and _teamdctl_. An iperf3 test shows the connection maxing out at about 13Gbps to/from the 2×10 LACP team on my workstation. I know that iperf isn’t a true indication of real-world usage but it’s fun for benchmarks and tuning nonetheless. This did in fact require a lot of tuning and at this point I feel like I could fill a book just with iperf stats.

```
$ iperf3 -c honeycomb -P 4 --cport 5000 -R
Connecting to host honeycomb, port 5201
Reverse mode, remote host honeycomb is sending
[  5] local 192.168.2.10 port 5000 connected to 192.168.2.4 port 5201
[  7] local 192.168.2.10 port 5001 connected to 192.168.2.4 port 5201
[  9] local 192.168.2.10 port 5002 connected to 192.168.2.4 port 5201
[ 11] local 192.168.2.10 port 5003 connected to 192.168.2.4 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[  7]   1.00-2.00   sec   382 MBytes  3.21 Gbits/sec
[  9]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[ 11]   1.00-2.00   sec   383 MBytes  3.21 Gbits/sec
[SUM]   1.00-2.00   sec  1.49 GBytes  12.8 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
(TRUNCATED)
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec   380 MBytes  3.18 Gbits/sec
[  7]   2.00-3.00   sec   380 MBytes  3.19 Gbits/sec
[  9]   2.00-3.00   sec   380 MBytes  3.18 Gbits/sec
[ 11]   2.00-3.00   sec   380 MBytes  3.19 Gbits/sec
[SUM]   2.00-3.00   sec  1.48 GBytes  12.7 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  3.67 GBytes  3.16 Gbits/sec    1  sender
[  5]   0.00-10.00  sec  3.67 GBytes  3.15 Gbits/sec        receiver
[  7]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec    7  sender
[  7]   0.00-10.00  sec  3.67 GBytes  3.15 Gbits/sec        receiver
[  9]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec   36  sender
[  9]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec        receiver
[ 11]   0.00-10.00  sec  3.69 GBytes  3.17 Gbits/sec    1  sender
[ 11]   0.00-10.00  sec  3.68 GBytes  3.16 Gbits/sec        receiver
[SUM]   0.00-10.00  sec  14.7 GBytes  12.6 Gbits/sec   45  sender
[SUM]   0.00-10.00  sec  14.7 GBytes  12.6 Gbits/sec        receiver

iperf Done
```

### Notes on iperf3

I struggled with LACP Team configuration for hours, having done this before with an HP cluster on the same switch. I’d heard stories about bonds being old news, with team support adding better load balancing to single TCP flows. This still seems bogus, as you still can’t load balance a single flow with a team in my experience. Also LACP claims to be fully automated and easier to set up than traditional load balanced trunks, but I find the opposite to be true. For all it claims to automate, you still need to have hashing algorithms configured correctly at switches and host. With a few quirks along the way I once accidentally left a team in broadcast mode (not LACP), which registered duplicate packets on the iperf server and made it look like a single connection was getting double bandwidth. That mistake caused confusion as I tried to reproduce it with LACP.

Then I finally found the LACP hash settings in Ubiquiti’s new firmware GUI. It’s hidden behind a tiny pencil icon on each LAG. I managed to set my LAGs to hash on Src+Dest IP+port when they were defaulting to MAC/port. Still I was only seeing traffic on one slave of my 2×10 team even with parallel clients. Eventually I tried parallel clients with -V and it all made sense. By default iperf3 client ports are ephemeral but they follow an even sequence: 42174, 42176, 42178, 42180, etc… If your lb hash across a pair of sequential MACs includes src+dst port but those ports are always even, you’ll never hit the other interface with an odd MAC. How crazy is that for iperf to do? I tried looking at the source for iperf3 and I don’t even see how that could be happening. Instead if you specify a client port as well as parallel clients, they use a straight sequence: 50000, 50001, 50002, 50003, etc. With odd+even numbers in client ports, I’m finally able to LB across all interfaces in all LAG groups. This setup would scale out well with more clients on the network.

![Proper LACP load balancing.][19]

Everything could probably be tuned a bit better but for now it is excellent performance and it puts my QNAP to shame. I’ll continue experimenting with the network co-processor and seeing if I can enable the native LAG support for even better performance. Across the network I would expect a practical peak of about 40 Gbps raw, which is great.

![][20]

### Virtualization

What about virt? One of the best parts about having 16 A72 cores is support for Aarch64 VMs at full speed using KVM, which you won’t be able to do on x86. I can use this single box to spin up a dozen or so VMs at a time for CI automation and testing, or just to test our latest HashiCorp builds with aarch64 builds on COPR. Qemu on x86 without KVM can emulate aarch64 but crawls by comparison. I’ve not yet tried to add it to an oVirt cluster, but it’s really snappy and proves more cost effective than spinning up Arm VMs in a cloud. One of the use cases for this environment is NFV, and I think it fits it perfectly so long as you pair it with ECC RAM, which I skipped as I’m not running anything critical. If anybody wants to test drive a VM, DM me and I’ll try to get you some temp access.

![Virtual Machines in Cockpit][21]

### Benchmarks

[Phoronix][22] has already done quite a few benchmarks on [OpenBenchmarking.org][23] but I wanted to rerun them with the latest versions on my own Fedora 33 build for consistency. I also wanted to compare them to my Xeons, which is not really a fair comparison. Both use DDR4 with similar clock speeds – around 2Ghz – but different architectures and caches obviously yield different results. Also the Xeons are dual socket, which is a huge cooling advantage for single threaded workloads. You can watch one process bounce between the coolest CPU sockets. The Honeycomb doesn’t have this luxury and has a smaller fan, but the clock speed is playing it safe and slow at 2Ghz, so I would bet the SoC has room to run faster if cooling were adjusted. I also haven’t played with the PWM settings to adjust the fan speed up just in case. Benchmarks were performed using the tuned profile network-throughput.
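Switching tuned profiles is a one-liner, in case you want to reproduce the setup (assuming the tuned package and service are installed):

```
sudo tuned-adm profile network-throughput
tuned-adm active
```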
Strangely some single core operations seem to actually perform better on the Honeycomb than they do on my Xeons. I tried single-threaded zstd compression with the default level 3 on a few files and found it actually performs consistently better on the Honeycomb. However, using the actual pts/compress-zstd benchmark with the multithreaded option turns the tables. The 16 cores still manage an impressive **2073** MB/s:

```
Zstd Compression 1.4.5:
    pts/compress-zstd-1.2.1 [Compression Level: 3]
    Test 1 of 1
    Estimated Trial Run Count:    3
    Estimated Time To Completion: 9 Minutes [22:41 UTC]
        Started Run 1 @ 22:33:02
        Started Run 2 @ 22:33:53
        Started Run 3 @ 22:34:37
    Compression Level: 3:
        2079.3
        2067.5
        2073.9
    Average: 2073.57 MB/s
```

For an apples to oranges comparison, my 2×10 core Xeon E5-2660 v3 box does **2790** MB/s, so 2073 seems perfectly respectable as a potential workstation. Paired with a midrange GPU this device would also make a great video transcoder or media server. Some users have asked about mining but I wouldn’t use one of these for mining crypto currency. The lack of PCIe atomics means certain OpenCL and CUDA features might not be supported, and with only 8 PCIe lanes exposed you’re fairly limited. That said it could potentially make a great mobile ML, VR, IoT, or vision development platform. The possibilities are pretty open as the whole package is very well balanced and flexible.

### Conclusion

I wasn’t organized enough this year to arrange a FOSDEM visit but this is something I would have loved to talk about. I’m definitely glad I tried it out. Special thanks to Jon Nettleton and the folks on SolidRun’s Discord for the help and troubleshooting. The kit is powerful and potentially replaces a lot of energy waste in my home lab. It provides a great Arm platform for development and it’s great to see how solid Fedora’s alternative architecture support is. I got my Linux start on Gentoo back in the day, but Fedora really has upped its arch game. I’m really glad I didn’t have to sit waiting for compilation on a proprietary platform. I look forward to the remaining patches being mainlined into the Fedora kernel and I hope to see a few more generations use this package, especially as Apple goes all in on Arm. It will also be interesting to see what features emerge if Nvidia’s Arm acquisition goes through.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/fedora-aarch64-on-the-solidrun-honeycomb-lx2k/

作者:[John Boero][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/boeroboy/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/02/honeycomb-fed-aarch64-816x346.jpg
[2]: https://unsplash.com/@timmossholder?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/honeycombs?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: http://solid-run.com/arm-servers-networking-platforms/honeycomb-workstation/#overview
[5]: https://www.solid-run.com/wp-content/uploads/2020/11/HoneyComb-layout-front.png
[6]: https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/layerscape-processors/layerscape-lx2160a-processor:LX2160A
[7]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-894x1024.png
[8]: https://discord.com/channels/620838168794497044
[9]: https://i.imgflip.com/11c7o.gif
[10]: https://github.com/SolidRun/lx2160a_uefi
[15]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/image-2-1024x403.png
[16]: https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/Screenshot_20210202_112051-1024x713.jpg
[17]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-2-1024x722.png
[18]: https://www.bamboosystems.io/b1000n/
[19]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-4-1024x245.png
[20]: http://systems.cs.columbia.edu/files/kvm-arm-logo.png
[21]: https://fedoramagazine.org/wp-content/uploads/2021/02/image-1024x717.png
[22]: https://www.phoronix.com/scan.php?page=news_item&px=SolidRun-ClearFog-ARM-ITX
[23]: https://openbenchmarking.org/result/1905313-JONA-190527343&obr_sor=y&obr_rro=y&obr_hgv=ClearFog-ITX
@ -0,0 +1,290 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to set up custom sensors in Home Assistant)
|
||||
[#]: via: (https://opensource.com/article/21/2/home-assistant-custom-sensors)
|
||||
[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
|
||||
|
||||
How to set up custom sensors in Home Assistant
|
||||
======
|
||||
Dive into the YAML files to set up custom sensors in the sixth article
|
||||
in this home automation series.
|
||||
![Computer screen with files or windows open][1]
|
||||
|
||||
In the last article in this series about home automation, I started digging into Home Assistant. I [set up a Zigbee integration][2] with a Sonoff Zigbee Bridge and installed a few add-ons, including Node-RED, File Editor, Mosquitto broker, and Samba. I wrapped up by walking through Node-RED's configuration, which I will use heavily later on in this series. The four articles before that one discussed [what Home Assistant is][3], why you may want [local control][4], some of the [communication protocols][5] for smart home components, and how to [install Home Assistant][6] in a virtual machine (VM) using libvirt.
|
||||
|
||||
In this sixth article, I'll walk through the YAML configuration files. This is largely unnecessary if you are just using the integrations supported in the user interface (UI). However, there are times, particularly if you are pulling in custom sensor data, where you have to get your hands dirty with the configuration files.
|
||||
|
||||
Let's dive in.
|
||||
|
||||
### Examine the configuration files
|
||||
|
||||
There are several potential configuration files you will want to investigate. Although everything I am about to show you _can_ be done in the main configuration.yaml file, it can help to split your configuration into dedicated files, especially with large installations.
|
||||
|
||||
Below I will walk through how I configure my system. For my custom sensors, I use the ESP8266 chipset, which is very maker-friendly. I primarily use [Tasmota][7] for my custom firmware, but I also have some components running [ESPHome][8]. Configuring firmware is outside the scope of this article. For now, I will assume you set up your devices with some custom firmware (or you wrote your own with [Arduino IDE][9] ).
|
||||
|
||||
#### The /config/configuration.yaml file
|
||||
|
||||
Configuration.yaml is the main file Home Assistant reads. For the following, use the File Editor you installed in the previous article. If you do not see File Editor in the left sidebar, enable it by going back into the **Supervisor** settings and clicking on **File Editor**. You should see a screen like this:
|
||||
|
||||
![Install File Editor][10]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
Make sure **Show in sidebar** is toggled on. I also always toggle on the **Watchdog** setting for any add-ons I use frequently.
|
||||
|
||||
Once that is completed, launch File Editor. There is a folder icon in the top-left header bar. This is the navigation icon. The `/config` folder is where the configuration files you are concerned with are stored. If you click on the folder icon, you will see a few important files:
|
||||
|
||||
![Configuration split files][12]
|
||||
|
||||
The following is a default configuration.yaml:
|
||||
|
||||
![Default Home Assistant configuration.yaml][13]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
The notation `script: !include scripts.yaml` indicates that Home Assistant should reference the contents of scripts.yaml anytime it needs the definition of a script object. You'll notice that each of these files correlates to files observed when the folder icon is clicked.
|
||||
|
||||
I added three lines to my configuration.yaml:
|
||||
|
||||
|
||||
```
|
||||
input_boolean: !include input_boolean.yaml
|
||||
binary_sensor: !include binary_sensor.yaml
|
||||
sensor: !include sensor.yaml
|
||||
```
|
||||
|
||||
As a quick aside, I configured my MQTT settings (see Home Assistant's [MQTT documentation][14] for more details) in the configuration.yaml file:
|
||||
|
||||
|
||||
```
|
||||
mqtt:
|
||||
discovery: true
|
||||
discovery_prefix: homeassistant
|
||||
broker: 192.168.11.11
|
||||
username: mqtt
|
||||
password: superpassword
|
||||
```
|
||||
|
||||
If you make an edit, don't forget to click on the Disk icon to save your work.
|
||||
|
||||
![Save icon in Home Assistant config][15]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
#### The /config/binary_sensor.yaml file
|
||||
|
||||
After you name your file in configuration.yaml, you'll have to create it. In the File Editor, click on the folder icon again. There is a small icon of a piece of paper with a **+** sign in its center. Click on it to bring up this dialog:
|
||||
|
||||
![Create config file][16]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
I have three main types of [binary sensors][17]: door, motion, and power. A binary sensor has only two states: on or off. All my binary sensors send their data to MQTT. See my article on [cloud vs. local control][4] for more information about MQTT.
|
||||
|
||||
My binary_sensor.yaml file looks like this:
|
||||
|
||||
|
||||
```
|
||||
- platform: mqtt
|
||||
state_topic: "BRMotion/state/PIR1"
|
||||
name: "BRMotion"
|
||||
qos: 1
|
||||
payload_on: "ON"
|
||||
payload_off: "OFF"
|
||||
device_class: motion
|
||||
|
||||
- platform: mqtt
|
||||
state_topic: "IRBlaster/state/PROJECTOR"
|
||||
name: "ProjectorStatus"
|
||||
qos: 1
|
||||
payload_on: "ON"
|
||||
payload_off: "OFF"
|
||||
device_class: power
|
||||
|
||||
- platform: mqtt
|
||||
state_topic: "MainHallway/state/DOOR"
|
||||
name: "FrontDoor"
|
||||
qos: 1
|
||||
payload_on: "open"
|
||||
payload_off: "closed"
|
||||
device_class: door
|
||||
```
|
||||
|
||||
Take a look at the definitions. Since `platform` is self-explanatory, start with `state_topic`.
|
||||
|
||||
* `state_topic`, as the name implies, is the topic where the device's state is published. This means anyone subscribed to the topic will be notified any time the state changes. This path is completely arbitrary, so you can name it anything you like. I tend to use the convention `location/state/object`, as this makes sense for me. I want to be able to reference all devices in a location, and for me, this layout is the easiest to remember. Grouping by device type is also a valid organizational layout.
|
||||
|
||||
* `name` is the string used to reference the device inside Home Assistant. It is normally referenced by `type.name`, as seen in this card in the Home Assistant [Lovelace][18] interface:
|
||||
|
||||
![Binary sensor card][19]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
  * `qos`, short for quality of service, sets the delivery guarantee the MQTT client and broker use for messages on that topic: 0 (at most once), 1 (at least once), or 2 (exactly once).
|
||||
|
||||
* `payload_on` and `payload_off` are determined by the firmware. These sections tell Home Assistant what text the device will send to indicate its current state.
|
||||
|
||||
* `device_class:` There are multiple possibilities for a device class. Refer to the [Home Assistant documentation][17] for more information and a description of each type available.
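To test a definition like the motion sensor above before the real device is wired up, you can publish a matching payload by hand. A rough sketch using `mosquitto_pub` (from the Mosquitto client tools) with the broker credentials shown earlier in configuration.yaml:

```
# Simulate the PIR motion sensor reporting motion
mosquitto_pub -h 192.168.11.11 -u mqtt -P superpassword \
  -t "BRMotion/state/PIR1" -m "ON"

# ...and clearing again
mosquitto_pub -h 192.168.11.11 -u mqtt -P superpassword \
  -t "BRMotion/state/PIR1" -m "OFF"
```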
|
||||
|
||||
|
||||
|
||||
|
||||
#### The /config/sensor.yaml file
|
||||
|
||||
This file differs from binary_sensor.yaml in one very important way: The sensors within this configuration file can have vastly different data inside their payloads. Take a look at one of the more tricky bits of sensor data, temperature.
|
||||
|
||||
Here is the definition for my DHT temperature sensor:
|
||||
|
||||
|
||||
```
|
||||
- platform: mqtt
|
||||
state_topic: "Steve_Desk_Sensor/tele/SENSOR"
|
||||
name: "Steve Desk Temperature"
|
||||
value_template: '{{ value_json.DHT11.Temperature }}'
|
||||
|
||||
- platform: mqtt
|
||||
state_topic: "Steve_Desk_Sensor/tele/SENSOR"
|
||||
name: "Steve Desk Humidity"
|
||||
value_template: '{{ value_json.DHT11.Humidity }}'
|
||||
```
|
||||
|
||||
You'll notice two things right from the start. First, there are two definitions for the same `state_topic`. This is because this sensor publishes three different statistics.
|
||||
|
||||
Second, there is a new definition of `value_template`. Most sensors, whether custom or not, send their data inside a JSON payload. The template tells Home Assistant where the important information is in the JSON file. The following shows the raw JSON coming from my homemade sensor. (I used the program `jq` to make the JSON more readable.)
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
"Time": "2020-12-23T16:59:11",
|
||||
"DHT11": {
|
||||
"Temperature": 24.8,
|
||||
"Humidity": 32.4,
|
||||
"DewPoint": 7.1
|
||||
},
|
||||
"BH1750": {
|
||||
"Illuminance": 24
|
||||
},
|
||||
"TempUnit": "C"
|
||||
}
|
||||
```
|
||||
|
||||
There are a few things to note here. First, as the sensor data is stored in a time-based data store, every reading has a `Time` entry. Second, there are two different sensors attached to this output. This is because I have both a DHT11 temperature sensor and a BH1750 light sensor attached to the same ESP8266 chip. Finally, my temperature is reported in Celsius.
|
||||
|
||||
Hopefully, the Home Assistant definitions will make a little more sense now. `value_json` is just a standard name given to any JSON object ingested by Home Assistant. The format of the `value_template` is `value_json.<component>.<data point>`.
|
||||
|
||||
For example, to retrieve the dewpoint:
|
||||
|
||||
|
||||
```
|
||||
value_template: '{{ value_json.DHT11.DewPoint }}'
|
||||
```
|
||||
|
||||
While you can dump this information to a file from within Home Assistant, I use Tasmota's `Console` to see the data it is publishing. (If you want me to do an article on Tasmota, please let me know in the comments below.)
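If you'd rather watch the raw payloads from a terminal, `mosquitto_sub` piped into `jq` does the same job. A minimal sketch, again using the broker credentials from configuration.yaml and the topic defined above:

```
# Print every telemetry payload from the desk sensor, pretty-printed
mosquitto_sub -h 192.168.11.11 -u mqtt -P superpassword \
  -t "Steve_Desk_Sensor/tele/SENSOR" | jq .
```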
|
||||
|
||||
As a side note, I also keep tabs on my local Home Assistant resource usage. To do so, I put this in my sensor.yaml file:
|
||||
|
||||
|
||||
```
|
||||
- platform: systemmonitor
|
||||
resources:
|
||||
- type: disk_use_percent
|
||||
arg: /
|
||||
- type: memory_free
|
||||
- type: memory_use
|
||||
- type: processor_use
|
||||
```
|
||||
|
||||
While this is technically not a sensor, I put it here, as I think of it as a data sensor. For more information, see the Home Assistant's [system monitoring][20] documentation.
|
||||
|
||||
#### The /config/input_boolean file
|
||||
|
||||
This last section is pretty easy to set up, and I use it for a wide variety of applications. An input boolean is used to track the status of something. It's either on or off, home or away, etc. I use these quite extensively in my automations.
|
||||
|
||||
My definitions are:
|
||||
|
||||
|
||||
```
|
||||
steve_home:
|
||||
name: steve
|
||||
steve_in_bed:
|
||||
name: 'steve in bed'
|
||||
guest_home:
|
||||
|
||||
kitchen_override:
|
||||
name: kitchen
|
||||
kitchen_fan_override:
|
||||
name: kitchen_fan
|
||||
laundryroom_override:
|
||||
name: laundryroom
|
||||
bathroom_override:
|
||||
name: bathroom
|
||||
hallway_override:
|
||||
name: hallway
|
||||
livingroom_override:
|
||||
name: livingroom
|
||||
ensuite_bathroom_override:
|
||||
name: ensuite_bathroom
|
||||
steve_desk_light_override:
|
||||
name: steve_desk_light
|
||||
projector_led_override:
|
||||
name: projector_led
|
||||
|
||||
project_power_status:
|
||||
name: 'Projector Power Status'
|
||||
tv_power_status:
|
||||
name: 'TV Power Status'
|
||||
bed_time:
|
||||
name: "It's Bedtime"
|
||||
```
|
||||
|
||||
I use some of these directly in the Lovelace UI. I create little badges that I put at the top of each of the pages I have in the UI:
|
||||
|
||||
![Home Assistant options in Lovelace UI][21]
|
||||
|
||||
(Steve Ovens, [CC BY-SA 4.0][11])
|
||||
|
||||
These can be used to determine whether I am home, if a guest is in my house, and so on. Clicking on one of these badges allows me to toggle the boolean, and this object can be read by automations to make decisions about how the “smart devices” react to a person's presence (if at all). I'll revisit the booleans in a future article when I examine Node-RED in more detail.
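As a taste of how an automation can consume one of these booleans, here is a rough sketch of an automations.yaml entry that only fires when `input_boolean.steve_home` is on. The trigger and light entity IDs (`binary_sensor.brmotion`, `light.steve_desk_light`) are illustrative placeholders, not taken from the actual setup:

```
# automations.yaml (sketch)
- alias: "Desk light on motion, but only when Steve is home"
  trigger:
    - platform: state
      entity_id: binary_sensor.brmotion
      to: "on"
  condition:
    - condition: state
      entity_id: input_boolean.steve_home
      state: "on"
  action:
    - service: light.turn_on
      entity_id: light.steve_desk_light
```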
|
||||
|
||||
### Wrapping up
|
||||
|
||||
In this article, I looked at the YAML configuration files and added a few custom sensors into the mix. You are well on the way to getting some functioning automation with Home Assistant and Node-RED. In the next article, I'll dive into some basic Node-RED flows and introduce some basic automations.
|
||||
|
||||
Stick around; I've got plenty more to cover, and as always, leave a comment below if you would like me to examine something specific. If I can, I'll be sure to incorporate the answers to your questions into future articles.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/home-assistant-custom-sensors
|
||||
|
||||
作者:[Steve Ovens][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/stratusss
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
|
||||
[2]: https://opensource.com/article/21/1/home-automation-5-homeassistant-addons
|
||||
[3]: https://opensource.com/article/20/11/home-assistant
|
||||
[4]: https://opensource.com/article/20/11/cloud-vs-local-home-automation
|
||||
[5]: https://opensource.com/article/20/11/home-automation-part-3
|
||||
[6]: https://opensource.com/article/20/12/home-assistant
|
||||
[7]: https://tasmota.github.io/docs/
|
||||
[8]: https://esphome.io/
|
||||
[9]: https://create.arduino.cc/projecthub/Niv_the_anonymous/esp8266-beginner-tutorial-project-6414c8
|
||||
[10]: https://opensource.com/sites/default/files/uploads/ha-setup22-file-editor-settings.png (Install File Editor)
|
||||
[11]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[12]: https://opensource.com/sites/default/files/uploads/ha-setup29-configuration-split-files1.png (Configuration split files)
|
||||
[13]: https://opensource.com/sites/default/files/uploads/ha-setup28-configuration-yaml.png (Default Home Assistant configuration.yaml)
|
||||
[14]: https://www.home-assistant.io/docs/mqtt/broker
|
||||
[15]: https://opensource.com/sites/default/files/uploads/ha-setup23-configuration-yaml2.png (Save icon in Home Assistant config)
|
||||
[16]: https://opensource.com/sites/default/files/uploads/ha-setup24-new-config-file.png (Create config file)
|
||||
[17]: https://www.home-assistant.io/integrations/binary_sensor/
|
||||
[18]: https://www.home-assistant.io/lovelace/
|
||||
[19]: https://opensource.com/sites/default/files/uploads/ha-setup25-bindary_sensor_card.png (Binary sensor card)
|
||||
[20]: https://www.home-assistant.io/integrations/systemmonitor
|
||||
[21]: https://opensource.com/sites/default/files/uploads/ha-setup25-input-booleans.png (Home Assistant options in Lovelace UI)
|
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Why choose Plausible for an open source alternative to Google Analytics)
|
||||
[#]: via: (https://opensource.com/article/21/2/plausible)
|
||||
[#]: author: (Ben Rometsch https://opensource.com/users/flagsmith)
|
||||
|
||||
Why choose Plausible for an open source alternative to Google Analytics
|
||||
======
|
||||
Plausible is gaining attention and users as a viable, effective
|
||||
alternative to Google Analytics.
|
||||
![Analytics: Charts and Graphs][1]
|
||||
|
||||
Taking on the might of Google Analytics may seem like a big challenge. In fact, you could say it doesn't sound plausible… But that's exactly what [Plausible.io][2] has done with great success, signing up thousands of new users since 2018.
|
||||
|
||||
Plausible's co-founders Uku Taht and Marko Saric recently appeared on [The Craft of Open Source][3] podcast to talk about the project and how they:
|
||||
|
||||
* Created a viable alternative to Google Analytics
|
||||
* Gained so much momentum in less than two years
|
||||
* Achieved their goals by open sourcing the project
|
||||
|
||||
|
||||
|
||||
Read on for a summary of their conversation with podcast host and Flagsmith founder Ben Rometsch.
|
||||
|
||||
### How Plausible got started
|
||||
|
||||
In winter 2018, Uku started coding a project that he thought was desperately needed—a viable, effective alternative to Google Analytics—after becoming disillusioned with the direction Google products were heading and the fact that all other data solutions seemed to use Google as a "data-handling middleman."
|
||||
|
||||
Uku's first instinct was to focus on the analytics side of things using existing database solutions. Right away, he faced some challenges. The first attempt, using PostgreSQL, was technically naïve, as it became overwhelmed and inefficient pretty quickly. Therefore, his goal morphed into making an analytics product that can handle large quantities of data points with no discernable decline in performance. To cut a long story short, Uku succeeded, and Plausible can now ingest more than 80 million records per month.
|
||||
|
||||
The first version of Plausible was released in summer 2019. In March 2020, Marko came on board to head up the project's communications and marketing side. Since then, its popularity has grown with considerable momentum.
|
||||
|
||||
### Why open source?
|
||||
|
||||
Uku was keen to follow the "indie hacker" route of software development: create a product, put it out there, and see how it grows. Open source makes sense in this respect because you can quickly grow a community and gain popularity.
|
||||
|
||||
But Plausible didn't start out as open source. Uku was initially concerned about the software's sensitive code, such as billing code, but he soon realized that this was of no use to people without the API token.
|
||||
|
||||
Now, Plausible is fully open source under [AGPL][4], which they chose instead of the MIT License. Uku explains that under an MIT License, anyone can do anything to the code without restriction. Under AGPL, if someone changes the code, they must open source their changes and contribute the code back to the community. This means that large corporations cannot take the original code, build from it, then reap all the rewards. They must share it, making for a more level playing field. For instance, if a company wanted to plug in their billing or login system, they would be legally obliged to publish the code.
|
||||
|
||||
During the podcast, Uku asked me about Flagsmith's license, which is currently the highly permissive BSD 3-Clause license, though I am about to move some features behind a more restrictive license. So far, the Flagsmith community has been understanding of the change, as they realize this will lead to more and better features.
|
||||
|
||||
### Plausible vs. Google Analytics
|
||||
|
||||
Uku says, in his opinion, the spirit of open source is that the code should be open for commercial use by anyone and shared with the community, but you can keep back a closed-source API module as a proprietary add-on. In this way, Plausible and other companies can cater to different use-cases by creating and selling bespoke API add-on licenses.
|
||||
|
||||
Marko is a developer by trade, but from the marketing side of things, he worked to get the project covered on sites such as Hacker News and Lobsters and built a Twitter presence to help generate momentum. The buzz created by this publicity also meant that the project took off on GitHub, going from 500 to 4,300 stars. As traffic grew, Plausible appeared on GitHub's trending list, which helped its popularity snowball.
|
||||
|
||||
Marko also focused heavily on publishing and promoting blog posts. This strategy paid off, as four or five posts went viral within the first six months, and he used those spikes to amplify the marketing message and accelerate growth.
|
||||
|
||||
The biggest challenge in Plausible's growth was getting people to switch from Google Analytics. The project's main goal was to create a web analytics product that is useful, efficient, and accurate. It also needed to be compliant with regulations and offer a high degree of privacy for both the business and website visitors.
|
||||
|
||||
Plausible is now running on more than 8,000 websites. From talking to customers, Uku estimates that around 90% of them would have run Google Analytics.
|
||||
|
||||
Plausible runs on a standard software-as-a-service (SaaS) subscription model. To make things fairer, it charges per page view on a monthly basis, rather than charging per website. This can prove tricky with seasonal websites, say e-commerce sites that spike at the holidays or US election sites that spike once every four years. These can cause pricing problems under the monthly subscription model, but it generally works well for most sites.
|
||||
|
||||
### Check out the podcast
|
||||
|
||||
To discover more about how Uku and Marko grew the open source Plausible project at a phenomenal rate and made it into a commercial success, [listen to the podcast][3] and check out [other episodes][5] to learn more about "the ins-and-outs of the open source software community."
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/2/plausible
|
||||
|
||||
作者:[Ben Rometsch][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/flagsmith
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs)
|
||||
[2]: https://plausible.io/
|
||||
[3]: https://www.flagsmith.com/podcast/02-plausible
|
||||
[4]: https://www.gnu.org/licenses/agpl-3.0.en.html
|
||||
[5]: https://www.flagsmith.com/podcast
|
@ -0,0 +1,108 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Viper Browser: A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism)
|
||||
[#]: via: (https://itsfoss.com/viper-browser/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Viper Browser: A Lightweight Qt5-based Web Browser With A Focus on Privacy and Minimalism
|
||||
======
|
||||
|
||||
_**Brief: Viper Browser is a Qt-based browser that offers a simple user experience keeping privacy in mind.**_
|
||||
|
||||
While the majority of the popular browsers run on top of Chromium, unique alternatives like [Firefox][1], [Beaker Browser][2], and some other [chrome alternatives][3] should not cease to exist.
|
||||
|
||||
Especially considering Google’s recent move toward stripping [Google Chrome-specific features from Chromium][4], citing abuse as the reason.
|
||||
|
||||
While on the lookout for more Chrome alternatives, I came across an interesting project, “[Viper Browser][5]”, thanks to a reader’s suggestion on [Mastodon][6].
|
||||
|
||||
### Viper Browser: An Open-Source Qt5-based Browser
|
||||
|
||||
_**Note**: Viper Browser is a fairly new project with a couple of contributors. It lacks certain features, which I’ll mention as you read on._
|
||||
|
||||
![][7]
|
||||
|
||||
Viper is an interesting web browser that focuses on being a powerful yet lightweight option while utilizing [QtWebEngine][8].
|
||||
|
||||
QtWebEngine borrows the code from Chromium but it does not include the binaries and services that connect to the Google platform.
|
||||
|
||||
I spent some time using it for my daily browsing activities, and I must say that I’m quite interested. Not just because it is simple to use (how complicated can a browser be?), but because it also focuses on enhancing your privacy by letting you add different ad-blocking options along with some other useful settings.
|
||||
|
||||
![][9]
|
||||
|
||||
Even though I think it is not meant for everyone, it is still worth taking a look. Let me highlight the features briefly before you can proceed trying it out.
|
||||
|
||||
### Features of Viper Browser
|
||||
|
||||
![][10]
|
||||
|
||||
I’ll list some of the key features that you can find useful:
|
||||
|
||||
* Ability to manage cookies
|
||||
* Multiple preset options to choose different Adblocker networks
|
||||
* Simple and easy to use
|
||||
* Privacy-friendly default search engine – [Startpage][11] (you can change this)
|
||||
* Ability to add user scripts
|
||||
* Ability to add new user agents
|
||||
* Option to disable JavaScript
|
||||
* Ability to prevent images from loading up
|
||||
|
||||
|
||||
|
||||
In addition to all these highlights, you can easily tweak the privacy settings to remove your history, clear cookies when exiting, and more.
|
||||
|
||||
![][12]
|
||||
|
||||
### Installing Viper Browser on Linux
|
||||
|
||||
It just offers an AppImage file on its [releases section][13] that you can utilize to test on any Linux distribution.
|
||||
|
||||
In case you need help, you may refer to our guide on [using AppImage file on Linux][14] as well. If you’re curious, you can explore more about it on [GitHub][5].
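For reference, running an AppImage generally comes down to marking it executable and launching it; the filename below is a placeholder for whatever the releases page currently offers:

```
# Make the downloaded AppImage executable, then launch it
chmod +x ViperBrowser-x86_64.AppImage
./ViperBrowser-x86_64.AppImage
```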
|
||||
|
||||
[Viper Browser][5]
|
||||
|
||||
### My Thoughts on Using Viper Browser
|
||||
|
||||
I don’t think it is something that could replace your current browser immediately but if you are interested to test out new projects that are trying to offer Chrome alternatives, this is surely one of them.
|
||||
|
||||
When I tried logging in to my Google account, it blocked me, saying the browser is potentially insecure or unsupported. So if you rely on your Google account, that is disappointing news.
|
||||
|
||||
However, other social media platforms work just fine, along with YouTube (without signing in). Netflix is not supported, but overall the browsing experience is quite fast and usable.
|
||||
|
||||
You can install user scripts, but Chrome extensions aren’t supported yet. Of course, that is either intentional or something that will be addressed as development progresses, considering it is meant to be a privacy-friendly web browser.
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
Considering that this is a lesser-known project that may still interest some of you, do you have any suggestions for us to take a look at? An open source project that deserves coverage?
|
||||
|
||||
Let me know in the comments down below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/viper-browser/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.mozilla.org/en-US/firefox/new/
|
||||
[2]: https://itsfoss.com/beaker-browser-1-release/
|
||||
[3]: https://itsfoss.com/open-source-browsers-linux/
|
||||
[4]: https://www.bleepingcomputer.com/news/google/google-to-kill-chrome-sync-feature-in-third-party-browsers/
|
||||
[5]: https://github.com/LeFroid/Viper-Browser
|
||||
[6]: https://mastodon.social/web/accounts/199851
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser.png?resize=800%2C583&ssl=1
|
||||
[8]: https://wiki.qt.io/QtWebEngine
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser-setup.jpg?resize=793%2C600&ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-preferences.jpg?resize=800%2C660&ssl=1
|
||||
[11]: https://www.startpage.com
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/02/viper-browser-tools.jpg?resize=800%2C262&ssl=1
|
||||
[13]: https://github.com/LeFroid/Viper-Browser/releases
|
||||
[14]: https://itsfoss.com/use-appimage-linux/
|
@ -1,250 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Chao-zhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting Started With Pacman Commands in Arch-based Linux Distributions)
|
||||
[#]: via: (https://itsfoss.com/pacman-command/)
|
||||
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
|
||||
|
||||
基于 Arch 的 Linux 发行版中 Pacman 命令入门
|
||||
======
|
||||
|
||||
_**简介:这本初学者指南向您展示了在 Linux 中可以使用 pacman 命令做什么,如何使用它们来查找新的软件包,安装和升级新的软件包,以及清理您的系统。**_
|
||||
|
||||
[pacman][1] 包管理器是 [Arch Linux][2] 和其他主要发行版如 Red Hat 和 Ubuntu/Debian 之间的主要区别之一。它结合了简单的二进制包格式和易于使用的[构建系统 ][3]。pacman 的目标是方便地管理软件包,无论它是来自[官方库 ][4] 还是用户自己的构建软件库。
|
||||
|
||||
如果您曾经使用过 Ubuntu 或基于 debian 的 发行版,那么您可能使用过 apt-get 或 apt 命令。pacman 在 Arch Linux 中是同样的。如果你[刚刚安装了 Arch Linux][5],在安装 Arch Linux 后,首先要做的[几件事 ][6] 之一就是学习使用 pacman 命令。
|
||||
|
||||
在这个初学者指南中,我将解释一些基本的 pacman 命令的用法,你应该知道如何用这些命令来管理你的基于 Archlinux 的系统。
|
||||
|
||||
### Arch Linux 用户应该知道的几个重要的 pacman 命令
|
||||
|
||||
![][7]
|
||||
|
||||
与其他包管理器一样,pacman 可以将包列表与软件库同步,它能够自动解决所有所需的依赖项,以使得用户可以通过一个简单的命令下载和安装软件。
|
||||
|
||||
#### 通过 pacman 安装软件
|
||||
|
||||
你可以用一下形式的代码来安装一个或者多个软件包:
|
||||
|
||||
```
|
||||
pacman -S _package_name1_ _package_name2_ ...
|
||||
```
|
||||
|
||||
![安装一个包 ][8]
|
||||
|
||||
-S 选项的意思是同步 (synchronization),它的意思是 pacman 在安装之前先与软件库进行同步。
|
||||
|
||||
pacman 数据库根据安装的原因将安装的包分为两组:
|
||||
|
||||
* **显式安装**:由 pacman -S 或 -U 命令直接安装的包
|
||||
* **依赖安装**:由于被其他显式安装的包所[依赖 ][9],而被自动安装的包。
|
||||
|
||||
|
||||
|
||||
#### 卸载已安装的软件包
|
||||
|
||||
卸载一个包,并且删除它的所有依赖。
|
||||
|
||||
```
|
||||
pacman -R package_name_
|
||||
```
|
||||
|
||||
![移除一个包 ][10]
|
||||
|
||||
删除一个包,以及其不被其他包所需要的依赖项:
|
||||
|
||||
```
|
||||
pacman -Rs _package_name_
|
||||
```
|
||||
|
||||
删除所有不再需要的依赖项。比如,需要这个依赖的包已经被删除了。
|
||||
|
||||
```
|
||||
pacman -Qdtq | pacman -Rs -
|
||||
```
|
||||
|
||||
#### 升级软件包
|
||||
|
||||
Pacman 提供了一个简单的办法来[升级 Arch Linux][11]。你只需要一条命令就可以升级所有已安装的软件包。这可能需要一段时间,这取决于系统的新旧程度。
|
||||
|
||||
以下命令可以同步存储库数据库 _并且_ 更新系统的所有软件包,但不包括不在软件库中的“本地安装的”包:
|
||||
|
||||
```
|
||||
pacman -Syu
|
||||
```
|
||||
|
||||
* S 代表同步
|
||||
* y 代表更新本地存储库
|
||||
* u 代表系统更新
|
||||
|
||||
|
||||
|
||||
也就是说,同步到中央存储库(主程序包数据库),刷新主程序包数据库的本地副本,然后执行系统更新(通过更新所有有更新版本可用的程序包)。
|
||||
|
||||
![系统更新 ][12]
|
||||
|
||||
注意!
|
||||
|
||||
Arch Linux 用户在系统升级前,建议您访问 [Arch-Linux 主页 ][2] 查看最新消息,以了解异常更新的情况。如果系统更新需要人工干预,主页上将发布相关的新闻。您也可以订阅 [RSS feed][13] 或 [Arch 的声明邮件 ][14]。
|
||||
|
||||
在升级基础软件(如 kernel、xorg、systemd 或 glibc) 之前,请注意查看相应的 [论坛 ][15],以了解大家报告的各种问题。
|
||||
|
||||
**在 Arch 和 Manjaro 等滚动发行版中不支持仅部分升级**。这意味着,当新的库版本被推送到存储库时,存储库中的所有包都需要根据库进行升级。例如,如果两个包依赖于同一个库,则仅升级一个包可能会破坏依赖于库的旧版本的另一个包。
|
||||
|
||||
#### 用 Pacman 查找包
|
||||
|
||||
Pacman 使用 -Q 选项查询本地包数据库,使用 -S 选项查询同步数据库,使用 -F 选项查询文件数据库。
|
||||
|
||||
Pacman 可以在数据库中搜索包,包括包的名称和描述:
|
||||
|
||||
```
|
||||
pacman -Ss _string1_ _string2_ ...
|
||||
```
|
||||
|
||||
![查找一个包 ][16]
|
||||
|
||||
查找已经被安装的包:
|
||||
|
||||
```
|
||||
pacman -Qs _string1_ _string2_ ...
|
||||
```
|
||||
|
||||
根据文件名在远程数据库中查找它所属的包:
|
||||
|
||||
```
|
||||
pacman -F _string1_ _string2_ ...
|
||||
```
|
||||
|
||||
查看一个包的依赖树:
|
||||
|
||||
```
|
||||
pactree _package_naenter code hereme_
|
||||
```
|
||||
|
||||
#### 清除包缓存
|
||||
|
||||
Pacman 将其下载的包存储在 /var/cache/Pacman/pkg/ 中,并且不会自动删除旧版本或卸载的版本。这有一些优点:
|
||||
|
||||
1。它允许[降级 ][17] 一个包,而不需要通过其他来源检索以前的版本。
|
||||
2。已卸载的软件包可以轻松地直接从缓存文件夹重新安装。
|
||||
|
||||
|
||||
|
||||
但是,有必要定期清理缓存以防止文件夹增大。
|
||||
|
||||
[pacman contrib][19] 包中提供的 [paccache(8)][18] 脚本默认情况下会删除已安装和未安装包的所有缓存版本,但最近 3 个版本除外:
|
||||
|
||||
```
|
||||
paccache -r
|
||||
```
|
||||
|
||||
![清除缓存 ][20]
|
||||
|
||||
要删除当前未安装的所有缓存包和未使用的同步数据库,请执行:
|
||||
|
||||
```
|
||||
pacman -Sc
|
||||
```
|
||||
|
||||
要从缓存中删除所有文件,请使用 clean 选项两次,这是最激进的方法,不会在缓存文件夹中留下任何内容:
|
||||
|
||||
```
|
||||
pacman -Scc
|
||||
```
|
||||
|
||||
#### 安装本地或者第三方的包
|
||||
|
||||
安装不是来自远程存储库的“本地”包:
|
||||
|
||||
```
|
||||
pacman -U _/path/to/package/package_name-version.pkg.tar.xz_
|
||||
```
|
||||
|
||||
安装官方存储库中未包含的“远程”软件包:
|
||||
|
||||
```
|
||||
pacman -U http://www.example.com/repo/example.pkg.tar.xz
|
||||
```
|
||||
|
||||
### 额外内容:用 pacman 排除常见错误
|
||||
|
||||
下面是使用 pacman 管理包时可能遇到的一些常见错误。
|
||||
|
||||
#### 提交事务失败(文件冲突)
|
||||
|
||||
如果你看到以下报错:
|
||||
|
||||
```
|
||||
error: could not prepare transaction
|
||||
error: failed to commit transaction (conflicting files)
|
||||
package: /path/to/file exists in filesystem
|
||||
Errors occurred, no packages were upgraded.
|
||||
```
|
||||
|
||||
这是因为 pacman 检测到文件冲突,不会为您覆盖文件。
|
||||
|
||||
解决这个问题的一个安全方法是首先检查另一个包是否拥有这个文件 (pacman-Qo\path/to/file)。如果该文件属于另一个包,请提交错误报告。如果文件不属于另一个包,请重命名“存在于文件系统中”的文件,然后重新发出 update 命令。如果一切顺利,文件可能会被删除。
|
||||
|
||||
您可以显式地运行 **pacman -S –overwrite glob package**,强制 pacman 覆盖与 _glob_ 匹配的文件,而不是手动重命名并在以后删除属于所讨论的包的所有文件。
|
||||
|
||||
#### 提交事务失败(包无效或损坏)
|
||||
|
||||
在 /var/cache/pacman/pkg/ 中查找 .part 文件(部分下载的包),并将其删除。这通常是由在 pacman.conf 文件中使用自定义 XferCommand 引起的。
|
||||
|
||||
#### 初始化事务失败(无法锁定数据库)
|
||||
|
||||
当 pacman 要修改包数据库时,例如安装包时,它会在 /var/lib/pacman/db.lck 处创建一个锁文件。这可以防止 pacman 的另一个实例同时尝试更改包数据库。
|
||||
|
||||
如果 pacman 在更改数据库时被中断,这个过时的锁文件可能仍然保留。如果您确定没有 pacman 实例正在运行,那么请删除锁文件。
|
||||
|
||||
检查进程是否持有锁定文件:
|
||||
|
||||
```
|
||||
lsof /var/lib/pacman/db.lck
|
||||
```
|
||||
|
||||
如果上述命令未返回任何内容,则可以删除锁文件:
|
||||
|
||||
```
|
||||
rm /var/lib/pacman/db.lck
|
||||
```
|
||||
|
||||
如果您发现 lsof 命令输出了使用锁文件的进程的 PID,请先杀死这个进程,然后删除锁文件。
|
||||
|
||||
我希望你喜欢我对 Pacman 基础命令的解释。请在下面留言,不要忘记订阅我们的社交媒体。注意安全!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/pacman-command/
|
||||
|
||||
作者:[Dimitrios Savvopoulos][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Chao-zhi](https://github.com/Chao-zhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/dimitrios/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.archlinux.org/pacman/
|
||||
[2]: https://www.archlinux.org/
|
||||
[3]: https://wiki.archlinux.org/index.php/Arch_Build_System
|
||||
[4]: https://wiki.archlinux.org/index.php/Official_repositories
|
||||
[5]: https://itsfoss.com/install-arch-linux/
|
||||
[6]: https://itsfoss.com/things-to-do-after-installing-arch-linux/
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/essential-pacman-commands.jpg?ssl=1
|
||||
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-S.png?ssl=1
|
||||
[9]: https://wiki.archlinux.org/index.php/Dependency
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-R.png?ssl=1
|
||||
[11]: https://itsfoss.com/update-arch-linux/
|
||||
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Syu.png?ssl=1
|
||||
[13]: https://www.archlinux.org/feeds/news/
|
||||
[14]: https://mailman.archlinux.org/mailman/listinfo/arch-announce/
|
||||
[15]: https://bbs.archlinux.org/
|
||||
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-pacman-Ss.png?ssl=1
|
||||
[17]: https://wiki.archlinux.org/index.php/Downgrade
|
||||
[18]: https://jlk.fjfi.cvut.cz/arch/manpages/man/paccache.8
|
||||
[19]: https://www.archlinux.org/packages/?name=pacman-contrib
|
||||
[20]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/sudo-paccache-r.png?ssl=1
|
@ -0,0 +1,130 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (Chao-zhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Give Your GNOME Desktop a Tiling Makeover With Material Shell GNOME Extension)
|
||||
[#]: via: (https://itsfoss.com/material-shell/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
使用 Material Shell 扩展将你的 GNOME 桌面打造成平铺式风格
|
||||
======
|
||||
|
||||
平铺式窗口的特性吸引了很多人的追捧。也许是因为它很好看,也许是因为它能让 [Linux 快捷键][1] 玩家提高效率,又或者是因为使用不同寻常的平铺式窗口本身就是一种挑战。
|
||||
|
||||
![Tiling Windows in Linux | Image Source][2]
|
||||
|
||||
从 i3 到 [Sway][3],linux 桌面拥有各种各样的平铺式窗口管理器。配置一个平铺式窗口管理器需要一个陡峭的学习曲线。
|
||||
|
||||
这就是为什么像 [Regolith desktop][4] 这样的项目会存在,它给你提供一个已经配置好的平铺式桌面。所以你不需要做大多的准备就可以直接开始使用。
|
||||
|
||||
让我给你介绍一个相似的项目 ——Material Shell。它可以让你用上平铺式桌面,甚至比 [Regolith][5] 还简单。
|
||||
|
||||
### Material Shell 扩展:将 GNOME 桌面转变成平铺式窗口管理器
|
||||
|
||||
[Material Shell][6] 是一个 GNOME 扩展,这正是它最大的优点。这意味着你不需要注销并登录到其他桌面环境,只需要启用或禁用这个扩展,就可以自如地切换你的工作环境。
|
||||
|
||||
我会列出 Material Shell 的各种特性,但是也许视频更容易让你理解:
|
||||
|
||||
[Subscribe to our YouTube channel for more Linux videos][7]
|
||||
|
||||
这个项目之所以叫做 Material Shell,是因为它遵循 [Material Design][8] 原则,因此这个应用拥有一个美观的界面,这也是它最重要的特性之一。
|
||||
|
||||
#### 直观的界面
|
||||
|
||||
Material Shell 添加了一个左侧面板,可以快速访问。在此面板上,您可以在底部找到系统托盘,在顶部找到搜索和工作区。
|
||||
|
||||
所有新打开的应用都会添加到当前工作区中。您也可以创建新的工作区并切换到该工作区,以将正在运行的应用分类。其实这就是工作区最初的意义。
|
||||
|
||||
在 Material Shell 中,每个工作区都可以显示为具有多个应用程序的行列,而不是包含多个应用程序的程序框。
|
||||
|
||||
#### 平铺式窗口
|
||||
|
||||
在工作区中,你可以看到所有打开的应用程序都在顶部。默认情况下,应用程序会像在 GNOME desktop 中那样铺满整个屏幕。你可以使用右上角的布局改变器来改变布局,将其分成两半、多列或多个应用网格。
|
||||
|
||||
这段视频一目了然的显示了以上所有功能:
|
||||
|
||||
<!-- 丢了个视频链接,我不知道怎么添加 -->
|
||||
|
||||
#### 固定布局和工作区
|
||||
|
||||
Material Shell 会记住你打开的工作区和窗口,这样你就不必重新组织你的布局。这是一个很好的特性,因为如果您对应用程序的位置有要求的话,它可以节省时间。
|
||||
|
||||
#### 热键/快捷键
|
||||
|
||||
像任何平铺窗口管理器一样,您可以使用键盘快捷键在应用程序和工作区之间切换。
|
||||
|
||||
* `Super+W` 切换到上个工作区;
|
||||
* `Super+S` 切换到下个工作区;
|
||||
* `Super+A` 切换到左边的窗口;
|
||||
* `Super+D` 切换到右边的窗口;
|
||||
* `Super+1`,`Super+2` … `Super+0` 切换到某个指定的工作区;
|
||||
* `Super+Q` 关闭当前窗口;
|
||||
* `Super+[MouseDrag]` 移动窗口;
|
||||
* `Super+Shift+A` 将当前窗口左移;
|
||||
* `Super+Shift+D` 将当前窗口右移;
|
||||
* `Super+Shift+W` 将当前窗口上移;
|
||||
* `Super+Shift+S` 将当前窗口下移。
|
||||
|
||||
|
||||
|
||||
### 安装 Material Shell
|
||||
|
||||
警告!
|
||||
|
||||
对于大多数用户来说,平铺式窗口可能会导致混乱。你最好先熟悉如何使用 GNOME 扩展。如果你是 Linux 新手或者你害怕你的系统发生翻天覆地的变化,你应当避免使用这个扩展。
|
||||
|
||||
Material Shell 是一个 GNOME 扩展。所以,请你[检查你的桌面环境 ][9] 确保它是 _**GNOME 3.34 或者更高的版本**_。
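如果不确定自己的 GNOME Shell 版本,可以在终端运行下面这条命令查看(这里假设你使用的就是 GNOME Shell):

```
gnome-shell --version
```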
|
||||
|
||||
我还想补充一点,平铺窗口可能会让许多用户感到困惑。
|
||||
|
||||
除此之外,我注意到在禁用 Material Shell 之后,它会导致 Firefox 和 Ubuntu dock 的顶栏消失。你可以在 GNOME 的扩展应用程序中禁用/启用 Ubuntu 的 dock 扩展来使其变回原来的样子。我想这些问题也应该在系统重启后消失,虽然我没试过。
|
||||
|
||||
我希望你知道[如何使用 GNOME 扩展 ][10]。最简单的办法就是[在浏览器中打开这个链接 ][11],安装 GNOME 扩展浏览器插件并且启用 Material Shell 扩展。
|
||||
|
||||
![][12]
|
||||
|
||||
如果你不喜欢这个扩展,你也可以在同样的链接中禁用它。或者在 GNOME 扩展程序中禁用它。
|
||||
|
||||
![][13]
|
||||
|
||||
**用不用平铺式?**
|
||||
|
||||
我使用多个电脑屏幕,我发现 Material Shell 不适用于多个屏幕的情况。这是开发者将来可以改进的地方。
|
||||
|
||||
除了这个毛病以外,Material Shell 是个让你开始使用平铺式窗口的好东西。如果你尝试了 Material Shell 并且喜欢它,请通过[给它一个星或在 GitHub 上赞助它 ][14] 来鼓励这个项目。
|
||||
|
||||
由于某些原因,平铺式窗口越来越受欢迎。最近发布的 [Pop OS 20.04][15] 也增加了平铺式窗口的功能。
|
||||
|
||||
但正如我前面提到的,平铺布局并不适合所有人,它可能会让很多人感到困惑。
|
||||
|
||||
你呢?你是喜欢平铺窗口还是喜欢经典的桌面布局?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/material-shell/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[Chao-zhi](https://github.com/Chao-zhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/ubuntu-shortcuts/
|
||||
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/09/linux-ricing-example-800x450.jpg?resize=800%2C450&ssl=1
|
||||
[3]: https://itsfoss.com/sway-window-manager/
|
||||
[4]: https://itsfoss.com/regolith-linux-desktop/
|
||||
[5]: https://regolith-linux.org/
|
||||
[6]: https://material-shell.com
|
||||
[7]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
|
||||
[8]: https://material.io/
|
||||
[9]: https://itsfoss.com/find-desktop-environment/
|
||||
[10]: https://itsfoss.com/gnome-shell-extensions/
|
||||
[11]: https://extensions.gnome.org/extension/3357/material-shell/
|
||||
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/install-material-shell.png?resize=800%2C307&ssl=1
|
||||
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/material-shell-gnome-extension.png?resize=799%2C497&ssl=1
|
||||
[14]: https://github.com/material-shell/material-shell
|
||||
[15]: https://itsfoss.com/pop-os-20-04-review/
|