Merge pull request #20 from LCTT/master

update from LCTT
This commit is contained in:
perfiffer 2021-09-28 07:55:02 +08:00 committed by GitHub
commit c4a526530f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
41 changed files with 5232 additions and 1887 deletions

View File

@ -0,0 +1,123 @@
[#]: subject: (Can We Recommend Linux for Gaming in 2021?)
[#]: via: (https://news.itsfoss.com/linux-for-gaming-opinion/)
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: (perfiffer)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13826-1.html)
2021 年了,我们能推荐使用 Linux 来玩游戏吗?
======
> Linux 每天都在不断进步,为现代游戏提供适当的图形支持。但是,我们能推荐用 Linux 来玩游戏吗?
![](https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/04/Gaming-On-Linux.jpg?w=1200&ssl=1)
你经常会听到 Linux 爱好者称赞 Linux 上改进的游戏功能。是的,考虑到在 Linux 桌面上支持现代游戏所取得的进步Linux 已经在游戏方面获得了很大的提升。
甚至 Lutris 的创造者在我们的采访中也提到 [Linux 在游戏方面取得的进步简直令人难以置信][1]。
但是,这有什么值得大肆宣传的吗?我们能向游戏玩家推荐 Linux 吗? Linux 适合玩游戏吗?
在此文中,我想分享一些关于在 Linux 系统上玩游戏的事情,并分享我对它的看法。
### 你可以在 Linux 上玩游戏吗?是的!
如果有人曾经告诉过你,不能在 Linux 上玩游戏,**那是不对的**。
你可以在 Linux 上玩各种游戏而不会出现任何大的障碍。而且,在大多数情况下,它是可玩的,并且会提供很好的游戏体验。
事实上,如果你不知道从哪里开始,我们有一份 [Linux 游戏终极指南][2] 提供给你。
### 需要一个特定的 Linux 发行版才能玩游戏吗?
并非如此。这取决于你想获得多么方便的体验。
例如,如果你希望 Linux 发行版能够与你的图形驱动程序很好地配合,并获得最新的硬件支持,那么有一些发行版可以做到。同样,如果你只是想用集成的 GPU 玩原生的 Linux 独立游戏,任何发行版都可以。
因此,在你开启 Linux 游戏之旅的同时,会有一些因素影响你对发行版的选择。
不用担心,为了帮助你,我们提供了一份有用的 [最佳 Linux 游戏发行版列表][3]。
### Linux 上的虚拟现实游戏,唉……
![][4]
我确信 VR 游戏还没有完全普及。但是,如果你想要在 VR 头盔上获得激动人心的体验,那么**选择 Linux 作为你的首选平台可能不是一个好主意**。
你没有在 Linux 上获得便利体验所需的驱动程序或应用程序。没有任何发行版可以帮助你解决此问题。
如果你想了解有关虚拟现实状态的详细信息,可以看看 [Boiling Steam][5] 上的博客文章和 [GamingOnLinux][6] 上的使用 Valve 的 VR 头盔的有趣体验。
我已经提供了这些博客文章的链接以供参考,但总而言之 —— 如果你想体验 VR 游戏,请避免使用 Linux如果你实在太闲请随意尝试
### 可以在 Linux 上玩 Windows 系统的游戏吗?
可以,也不可以。
你可以使用 [Steam Play 来玩 Windows 专属的游戏][7],但是它也存在一些问题,并不是每个游戏都可以运行。
例如,我最终还是使用 Windows 来玩《[地平线 4][8]》。如果你喜欢汽车模拟或赛车游戏,这是一款你可能不想错过的杰作。
或许我们在不久的将来可以看到它通过 Steam Play 完美的运行,谁知道呢?
因此,可以肯定的是,你会遇到许多类似的游戏,可能根本无法运行。这是残酷的事实。
而且,要知道该游戏是否可以在 Linux 上运行,请前往 [ProtonDB][9] 并搜索该游戏,看看它是否至少具有 “**黄金**” 状态。
### 带有反作弊引擎的多人游戏可以吗?
![][10]
许多游戏玩家更喜欢玩多人游戏,如《[Apex Legends][11]》、《[彩虹六号:围攻][12]》和《[堡垒之夜][13]》。
然而,一些依赖于反作弊引擎的流行游戏还不能在 Linux 上运行。它仍然是一项进行中的工作,可能在未来的 Linux 内核版本中实现 —— 但目前还没有。
请注意,像 《[反恐精英:全球攻势][14]》、《Dota 2》、《军团要塞 2》、《[英灵神殿][15]》等多人游戏提供原生 Linux 支持并且运行良好!
### 我会推荐使用 Linux 来玩游戏吗?
![][16]
考虑到你可以玩很多 Windows 专属的游戏、原生的独立游戏,以及 Linux 原生支持的各种 AAA 游戏,我能推荐初次使用者尝试在 Linux 上玩游戏。
但是,需要**注意**的是 —— 我建议你列出你想玩的游戏列表,以确保它在 Linux 上运行没有任何问题。否则,你最终都可能浪费大量时间进行故障排除而没有结果。
不要忘记,我相信 Linux 上的 VR 游戏是一个很大的问题。
而且,如果你想探索所有最新最好的游戏,我建议你坚持使用 Windows 的游戏电脑。
**虽然我应该鼓励更多的用户采用 Linux 作为游戏平台,但我不会忽视为什么普通消费者仍然喜欢使用 Windows 机器来玩游戏的客观事实。**
你怎么认为呢?你同意我的想法吗?欢迎在下方的评论区分享你的想法!
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/linux-for-gaming-opinion/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[perfiffer](https://github.com/perfiffer)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/lutris-creator-interview/
[2]: https://itsfoss.com/linux-gaming-guide/
[3]: https://itsfoss.com/linux-gaming-distributions/
[4]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/04/linux-gaming-vr.jpg?w=1200&ssl=1
[5]: https://boilingsteam.com/the-state-of-virtual-reality-on-linux/
[6]: https://www.gamingonlinux.com/2020/08/my-experiences-of-valves-vr-on-linux
[7]: https://itsfoss.com/steam-play/
[8]: https://forzamotorsport.net/en-US/games/fh4
[9]: https://www.protondb.com/
[10]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/04/linux-gaming-illustration.jpg?w=1200&ssl=1
[11]: https://www.ea.com/games/apex-legends
[12]: https://www.ubisoft.com/en-us/game/rainbow-six/siege
[13]: https://www.epicgames.com/fortnite/en-US/home
[14]: https://store.steampowered.com/app/730/CounterStrike_Global_Offensive/
[15]: https://store.steampowered.com/app/892970/Valheim/
[16]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/04/gaming-on-linux-support.jpg?w=1200&ssl=1

View File

@ -0,0 +1,143 @@
[#]: subject: "Linux Jargon Buster: What is sudo rm -rf? Why is it Dangerous?"
[#]: via: "https://itsfoss.com/sudo-rm-rf/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13813-1.html"
Linux 黑话解释:什么是 sudo rm -rf为什么如此危险
======
![][11]
当你刚接触 Linux 时,你会经常遇到这样的建议:永远不要运行 `sudo rm -rf /`。在 Linux 世界里,更是围绕着 `sudo rm -rf` 有很多梗。
![][1]
但似乎对于它也有一些混乱的认识。在 [清理 Ubuntu 以腾出空间][2] 的教程中,我建议运行一些涉及 `sudo` 和 `rm -rf` 的命令。一位读者问我,如果 `sudo rm -rf` 是一个不应该运行的危险的 Linux 命令,我为什么要建议这样做。
因此,我想到了写一篇 Linux 黑话解释,以消除误解。
### sudo rm -rf 在做什么?
让我们按步骤来学习。
`rm` 命令用于 [在 Linux 命令行中删除文件和目录][3]。
```
$ rm agatha
$
```
但是因为有只读的 [文件权限][4],有些文件不会被立即删除。它们必须用选项 `-f` 强制删除。
```
$ rm books
rm: remove write-protected regular file 'books'? y
$ rm -f christie
$
```
另外,`rm` 命令不能被用来直接删除目录(文件夹)。你必须在 `rm` 命令中使用递归选项 `-r`
```
$ rm new_dir
rm: cannot remove 'new_dir': Is a directory
```
因此最终,`rm -rf` 命令意味着递归地、强制删除指定的目录。
```
$ rm -r new_dir
rm: remove write-protected regular file 'new_dir/books'? ^C
$ rm -rf new_dir
$
```
下面是上述所有命令的截图。
![解释 rm 命令的例子][5]
如果你在 `rm -rf` 命令前加入 `sudo`,你就是在删除具有 root 权限的文件。这意味着你可以删除由 [root 用户][6] 拥有的系统文件。
### 所以sudo rm -rf 是一个危险的 Linux 命令?
嗯,任何删除东西的命令都可能是危险的,如果你不确定你正在删除什么。
把 `rm -rf` 命令看作一把刀。刀是一个危险的东西吗?有可能。如果你用刀切蔬菜,那是好事。如果你用刀切手指,那当然是不好的。
`rm -rf` 命令也是如此。它本身并不危险。它只是用来删除文件的。但是,如果你在不知情的情况下用它来删除重要文件,那就有问题了。
现在来看看 `sudo rm -rf /`
你知道,使用 `sudo`,你是以 root 身份运行一个命令,这允许你对系统进行任何改变。
`/` 是根目录的符号。`/var` 表示根目录下的 `var` 目录。`/var/log/apt` 指的是根目录下 `var` 目录中 `log` 目录下的 `apt` 目录。
![Linux 目录层次表示法][7]
按照 [Linux 目录层次结构][8]Linux 文件系统中的一切都从根目录开始。如果你删除了根目录,你基本上就是删除了系统中的所有文件。
这就是为什么建议不要运行 `sudo rm -rf /` 命令,因为你会抹去你的整个 Linux 系统。
请注意,在某些情况下,你可能正在运行像 `sudo rm -rf /var/log/apt` 这样的命令,这可能是没问题的。同样,你必须注意你正在删除的东西,就像你必须注意你正在用刀切割的东西一样。
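顺带一提,如果你想更稳妥一些,可以先列出目标内容,再用 `-I` 选项让 `rm` 在递归删除前要求确认(下面的目录只是沿用上文的例子):

```
# 先看看将要删除的东西
ls -la /var/log/apt

# -I 会在递归删除或一次删除超过 3 个文件前要求确认一次
sudo rm -rI /var/log/apt
```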
### 我在玩火:如果我运行 sudo rm -rf /,看看会发生什么呢?
大多数 Linux 发行版都提供了一个故障安全保护,防止意外删除根目录。
```
$ sudo rm -rf /
[sudo] password for abhishek:
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe
```
我的意思是,人是会打错字的。如果你不小心打了 `/ var/log/apt`,而不是 `/var/log/apt`,那么 `/` 和 `var` 之间的空格就意味着你给出了 `/` 和 `var` 两个目录去删除,也就是说你将会删掉整个根目录。LCTT 译注:我真干过,键盘敲的飞起,结果多敲了一个空格,然后就丢了半个文件系统 —— 那时候 Linux 还没这种故障安全保护。)
![使用 sudo rm -rf 时要注意][9]
别担心。你的 Linux 系统会照顾到这种意外。
现在,如果你一心想用 `sudo rm -rf /` 来破坏你的系统呢?那你必须按照它的要求,配合使用 `--no-preserve-root` 选项。
不,请不要自己这样做。让我做给你看看。
所以,我在一个虚拟机中运行了 elementary OS。我运行 `sudo rm -rf / --no-preserve-root`,你可以在下面的视频中看到灯光熄灭(大约 1 分钟)。
![video](https://vimeo.com/594025609)
### 清楚了么?
Linux 有一个活跃的社区,大多数人都会帮助新用户。之所以说是大多数,是因为也有一些邪恶的坏人潜伏着捣乱新用户。他们经常会建议初学者对其所面临的最简单的问题运行 `rm -rf /`。我认为这些白痴在这种邪恶行为中得到了某种至上主义的满足。我会立即将他们从我管理的论坛和群组中踢出去。
我希望这篇文章能让你更清楚地了解这些情况。你可能仍然有一些困惑,特别是因为它涉及到根目录、文件权限和其他新用户可能不熟悉的东西。如果是这样的话,请在评论区告诉我你的疑惑,我会尽力去解决。
最后,请记住:<ruby>不要喝酒胡搞<rt>Don't drink and root</rt></ruby>。在运行你的 Linux 系统时要安全驾驶。
--------------------------------------------------------------------------------
via: https://itsfoss.com/sudo-rm-rf/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/04/sudo-rm-rf.gif?resize=400%2C225&ssl=1
[2]: https://itsfoss.com/free-up-space-ubuntu-linux/
[3]: https://linuxhandbook.com/remove-files-directories/
[4]: https://linuxhandbook.com/linux-file-permissions/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/rm-rf-command-example-800x487.png?resize=800%2C487&ssl=1
[6]: https://itsfoss.com/root-user-ubuntu/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/linux-directory-structure.png?resize=800%2C400&ssl=1
[8]: https://linuxhandbook.com/linux-directory-structure/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/sudo-rm-rf-example.png?resize=798%2C346&ssl=1
[10]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/dont-drink-and-root.jpg?resize=800%2C450&ssl=1

View File

@ -0,0 +1,78 @@
[#]: subject: "Apps for daily needs part 5: video editors"
[#]: via: "https://fedoramagazine.org/apps-for-daily-needs-part-5-video-editors/"
[#]: author: "Arman Arisman https://fedoramagazine.org/author/armanwu/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13808-1.html"
满足日常需求的应用(五):视频编辑器
======
![][1]
视频编辑已经成为一种流行的活动。人们出于各种原因需要视频编辑,不管是工作、教育或仅仅是一种爱好。现在也有很多平台可以在互联网上分享视频,以及几乎所有的社交媒体和聊天工具都提供分享视频的功能。本文将介绍一些你可以在 Fedora Linux 上使用的开源视频编辑器。你可能需要安装提到的这些软件才能使用。如果你不熟悉如何在 Fedora Linux 中添加软件包,请参阅我之前的文章《[安装 Fedora 34 工作站后要做的事情][4]》。这里列出了视频编辑器类别的一些日常需求的应用程序。
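举个例子,下文提到的大部分编辑器都可以直接用 `dnf` 安装(这里以 Kdenlive 为例,具体包名请以实际仓库为准):

```
# 在 Fedora Linux 上安装 Kdenlive
sudo dnf install kdenlive
```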
### Kdenlive
当有人问起 Linux 上的开源视频编辑器时,经常出现的答案是 Kdenlive。它是一个在开源用户群体中非常流行的视频编辑器。这是因为它的功能对于一般用途来说是足够的而且对于非专业人士来说也很容易使用。
Kdenlive 支持多轨编辑因此你可以将多个来源的音频、视频、图像和文本结合起来。这个应用程序还支持各种视频和音频格式这样你就不必事先转换它们。此外Kdenlive 提供了各种各样的效果和转场以支持你的创造力来制作很酷的视频。Kdenlive 提供的一些功能有:用于创建 2D 字幕的字幕器、音频和视频范围、代理编辑、时间线预览、关键帧效果等等。
![][5]
更多信息可在此链接中获得:<https://kdenlive.org/en/>
### Shotcut
Shotcut 与 Kdenlive 的功能大致相同。这个应用程序是一个通用的视频编辑器。它有一个相当简单的界面,但功能齐全,可以满足你视频编辑工作的各种需要。
Shotcut 拥有一套完整的视频编辑功能提供了从简单编辑到高级处理的各种能力。它还支持各种视频、音频和图像格式。你不需要担心你的编辑历史因为这个应用程序有无限的撤销和重做功能。Shotcut 还提供了各种视频和音频效果因此你可以自由地发挥创造力来制作你的视频作品。它提供的一些功能有音频过滤器、音频混合、交叉淡化的音视频溶解过渡、音调发生器、速度变化、视频合成、3 路色轮、轨道合成/混合模式、视频过滤器等。
![][6]
更多信息可在此链接中获得:<https://shotcut.org/>
### Pitivi
如果你想要一个具有直观和简洁用户界面的视频编辑器Pitivi 将是正确的选择。你会对它的外观感到舒适并且不难找到你需要的功能。这个应用程序被归类为非常容易学习特别是如果你需要一个用于简单编辑需求的应用程序时。然而Pitivi 仍然提供了种种功能,如修剪 & 切割、混音、关键帧音频效果、音频波形、音量关键帧曲线、视频过渡等。
![][7]
更多信息可在此链接中获得:<https://www.pitivi.org/>
### Cinelerra
Cinelerra 是一个已经开发了很久的视频编辑器。它为你的视频工作提供了大量的功能如内置帧渲染、各种视频效果、无限的层、8K 支持、多相机支持、视频音频同步、渲染农场、动态图形、实时预览等。这个应用程序可能不适合那些刚开始学习的人。我认为你需要一段时间来适应这个界面,特别是如果你已经熟悉了其他流行的视频编辑应用程序。但 Cinelerra 仍是一个有趣的选择,可以作为你的视频编辑器。
![][8]
更多信息请见此链接:<http://cinelerra.org/>
### 总结
这篇文章介绍了四个在 Fedora Linux 上可用的视频编辑器应用,以满足你的日常需求。实际上,在 Fedora Linux 上还有很多其他的视频编辑器可以使用。你也可以使用 OliveFedora Linux 仓库、OpenShotrpmfusion-free、Flowbladerpmfusion-free等等。每个视频编辑器都有自己的优势。有些在色彩校正方面比较好而有些在各种转场和效果方面比较好。当涉及到如何轻松地添加文本时有些则更好。请选择适合你需求的应用程序。希望这篇文章可以帮助你选择正确的视频编辑器。如果你有使用这些应用程序的经验请在评论中分享你的经验。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/apps-for-daily-needs-part-5-video-editors/
作者:[Arman Arisman][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/FedoraMagz-Apps-5-video-816x345.jpg
[2]: https://unsplash.com/@brookecagle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/meeting-on-cafe-computer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/things-to-do-after-installing-fedora-34-workstation/
[5]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-kdenlive-1024x576.png
[6]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-shotcut-1024x576.png
[7]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-pitivi-1024x576.png
[8]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-cinelerra-1024x576.png
[9]: https://www.olivevideoeditor.org/

View File

@ -3,31 +3,31 @@
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13806-1.html"
用 Linux sed 命令替换智能引号
======
> 用你喜欢的 sed 版本去除“智能”引号。
![](https://img.linux.net.cn/data/attachment/album/202109/21/151406chun5nyumy8wyu5y.png)
在排版学中,一对引号传统上是朝向彼此的。它们看起来像这样:
“智能引号”
随着计算机在二十世纪中期的普及,这种朝向往往被放弃了。计算机的原始字符集没有太多的空间,所以在 ASCII 规范中,两个双引号和两个单引号被缩减为各一个是合理的。如今,通用的字符集是 Unicode有足够的空间容纳许多花哨的引号和撇号但许多人已经习惯了开头和结尾引号都只有一个字符的极简主义。此外计算机实际上将不同种类的引号和撇号视为不同的字符。换句话说对计算机来说右双引号与左双引号或直引号是不同的。
### 用 sed 替换智能引号
计算机并不是打字机。当你按下键盘上的一个键时,你不是在按一个带有印章的控制杆。你只是按下一个按钮,向你的计算机发送一个信号,计算机将其解释为一个显示特定预定义字符的请求。这个请求取决于你的键盘映射。作为一个 Dvorak 打字员,我目睹了人们在发现我的键盘上的 “asdf” 在屏幕上产生 “aoeu” 时脸上的困惑。你也可能按了一些特殊的组合键来产生字符,如 ™ 或 ß 或 ≠,这甚至没有印在你的键盘上。
每个字母或字符不管它是否印在你的键盘上都有一个编码。字符编码可以用不同的方式表达但对计算机来说Unicode 序列 u2018 和 u2019 产生 **** 和 ****,而代码 u201c 和 u201d 产生 **“** 和 **”** 字符。知道这些“秘密”代码意味着你可以使用 [sed][2] 这样的命令以编程方式替换它们。任何版本的 sed 都可以,所以你可以使用 GNU sed 或 BSD sed甚至是 [Busybox][3] sed。
每个字母或字符不管它是否印在你的键盘上都有一个编码。字符编码可以用不同的方式表达但对计算机来说Unicode 序列 u2018 和 u2019 产生 ````,而代码 u201c 和 u201d 产生 `“``”` 字符。知道这些“秘密”代码意味着你可以使用 [sed][2] 这样的命令以编程方式替换它们。任何版本的 sed 都可以,所以你可以使用 GNU sed 或 BSD sed甚至是 [Busybox][3] sed。
下面是我使用的简单的 shell 脚本:
```
#!/bin/sh
# GNU All-Permissive License
@ -39,7 +39,6 @@ $SED -i -e "s/[$SDQUO]/\'/g" -e "s/[$RDQUO]/\"/g" "${1}"
将此脚本保存为 `fixquotes.sh`,然后创建一个包含智能引号的单独测试文件:
```
Single quote
“Double quote”
@ -47,7 +46,6 @@ $SED -i -e "s/[$SDQUO]/\'/g" -e "s/[$RDQUO]/\"/g" "${1}"
运行该脚本,然后使用 [cat][4] 命令查看结果:
```
$ sh ./fixquotes.sh test.txt
$ cat test.txt
@ -68,7 +66,7 @@ via: https://opensource.com/article/21/9/sed-replace-smart-quotes
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,32 +3,30 @@
[#]: author: "Mateus Rodrigues Costa https://fedoramagazine.org/author/mateusrodcosta/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13803-1.html"
如何用 rpm-ostree 数据库检查更新信息和更新日志
======
![][1]
照片由 [Dan-Cristian Pădureț][2] 发布在 [Unsplash][3]
这篇文章将教你如何使用 `rpm-ostree` 数据库及其子命令检查更新、检查更改的软件包和阅读更新日志。
这些命令将在 Fedora Silverblue 上进行演示,并且应该在任何使用 `rpm-ostree` 的操作系统上工作。
### 简介
假设你对不可更改的系统感兴趣。在基于容器技术构建用例时使用只读的基本系统听起来非常有吸引力,它会说服你选择使用 `rpm-ostree` 的发行版。
你现在发现自己在 [Fedora Silverblue][4](或其他类似的发行版)上,你想检查更新。但你遇到了一个问题。虽然你可以通过 GNOME Software 找到 Fedora Silverblue 上的更新包,但你实际上无法阅读它们的更新日志。你也不能 [使用 dnf updateinfo 在命令行上读取它们][5],因为主机系统上没有 DNF。
那么,你应该怎么做呢?嗯,`rpm-ostree` 有一些子命令可以在这种情况下提供帮助。
### 检查更新
第一步是检查更新。只需运行:
```
$ rpm-ostree upgrade --check
@ -41,9 +39,9 @@ AvailableUpdate:
Diff: 4 upgraded
```
请注意,虽然它没有在输出中告诉更新的软件包,但它显示了更新的提交为 `d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4`。这在后面会很有用。
接下来你需要做的是找到你正在运行的当前部署的提交。运行 `rpm-ostree status` 以获得当前部署的<ruby>基提交<rt>BaseCommit</rt></ruby>
```
$ rpm-ostree status
@ -58,9 +56,9 @@ Deployments:
...
```
对于这个例子,BaseCommit 是 _e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e_
对于这个例子,基提交是 `e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e`。
现在你可以用 `rpm-ostree db diff [commit1] [commit2]` 找到这两个提交的差异。在这个命令中,`[commit1]` 将是当前部署的基提交,`[commit2]` 将是升级检查命令中的提交
```
$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
@ -70,7 +68,7 @@ Upgraded:
soundtouch 2.1.1-6.fc34 -> 2.1.2-1.fc34
```
`diff` 输出显示 `soundtouch` 被更新了,并指出了版本号。通过在前面的命令中加入 `--changelogs` 来查看更新日志:
```
$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4 --changelogs
@ -90,7 +88,7 @@ Upgraded:
### 总结
使用 `rpm-ostree db`,你现在可以拥有相当于 `dnf check-update``dnf updateinfo` 的功能。
如果你想检查你所安装的更新的详细信息,这将非常有用。
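作为回顾,下面把上述步骤串成一个简单的流程示意(其中的提交哈希只是占位符,请换成你自己的输出):

```
# 1. 检查可用更新,记下输出中的提交哈希
rpm-ostree upgrade --check

# 2. 查看当前部署,记下基提交
rpm-ostree status

# 3. 对比两个提交,并查看更新日志
rpm-ostree db diff <基提交> <新提交> --changelogs
```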
@ -101,7 +99,7 @@ via: https://fedoramagazine.org/how-to-check-for-update-info-and-changelogs-with
作者:[Mateus Rodrigues Costa][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,22 +3,22 @@
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13802-1.html"
在 Linux 中使用 OBS 和 Wayland 进行屏幕录制
======
[大量可用于 Linux 的屏幕录像机][1]。但是当涉及到支持 [Wayland][2] 时,几乎所有的都不能用。
这是个问题,因为许多新发布的版本都再次默认切换到 Wayland 显示管理器。而如果像屏幕录像机这样基本的东西不能工作,就会给人留下不好的体验。
[GNOME 的内置屏幕录像机][3] 可以工作,但它是隐藏的,没有 GUI也没有办法配置和控制记录内容。此外,还有一个叫 [Kooha][4] 的工具,但它一直在屏幕上显示一个计时器。
只是为了录制屏幕而 [在 Xorg 和 Wayland 之间切换][5],这不是很方便。
这种情况下,我很高兴地得知,由于 Pipewire 的帮助,在 OBS Studio v27 中支持了 Wayland。但即使是这样也不是很简单因此我将向你展示使用 [OBS Studio][6] 在 Wayland 上录制屏幕的步骤。
### 使用 OBS 在 Wayland 上进行屏幕录制
@ -30,7 +30,7 @@
你应该先安装 OBS Studio v27。它已经包含在 Ubuntu 21.10 中,我会在本教程中使用它。
要在 Ubuntu 18.04、20.04、Linux Mint 20 等系统上安装 OBS Studio 27请使用 [官方的 OBS Studio PPA][8]。
打开终端,逐一使用以下命令:
@ -56,7 +56,7 @@ sudo apt install obs-studio
![Do you see PipeWire option in the screen sources?][10]
**如果没看到,请退出 OBS Studio**。这很正常。至少在 Ubuntu 下OBS Studio 不会自动切换到使用 Wayland。对此有一个修复方法。
打开一个终端,使用以下命令:
@ -74,7 +74,7 @@ obs
![][10]
你这次用 `QT_QPA_PLATFORM` 变量明确要求 OBS Studio 使用 Wayland。
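也就是说,一种常见的做法是像下面这样在启动时临时指定该变量(仅作示意):

```
# 临时以 Wayland 平台启动 OBS Studio
QT_QPA_PLATFORM=wayland obs
```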
选择 PipeWire 作为源,然后它要求你选择一个显示屏幕。选择它并点击分享按钮。
@ -88,7 +88,7 @@ obs
这很好。你刚刚验证了你可以在 Wayland 上录制屏幕。但每次设置环境变量并从终端启动 OBS 并不方便。
你可以做的是**把这个变量导出到你的 `~/.bash_profile`(对你而言)或 `/etc/profile`(对系统中的所有用户而言)。**
```
export QT_QPA_PLATFORM=wayland
@ -105,7 +105,7 @@ via: https://itsfoss.com/screen-record-obs-wayland/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,116 @@
[#]: subject: "Watch commands and tasks with the Linux watch command"
[#]: via: "https://opensource.com/article/21/9/linux-watch-command"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13810-1.html"
用 Linux 的 watch 命令观察命令和任务
======
> 了解 watch 命令如何让你知道任务已完成或命令已执行。
![](https://img.linux.net.cn/data/attachment/album/202109/22/104541ddfgzpvud5ga55sp.jpg)
有很多时候,你需要等待一些事情的完成,比如:
* 一个文件的下载。
* 创建或解压一个 [tar][2] 文件。
* 一个 [Ansible][3] 作业。
其中一些进程有进度指示,但有时进程是通过一层抽象运行的,衡量进度的唯一方法是通过其副作用。其中一些可能是:
* 一个正在下载的文件不断增长。
* 一个从 tarball 中提取的目录被文件填满了。
* Ansible 作业构建了一个[容器][4]。
你可以用这样的命令查询所有这些:
```
$ ls -l downloaded-file
$ find . | wc -l
$ podman ps
$ docker ps
```
但是反复运行这些命令,即使是利用 [Bash 历史][5] 和**向上箭头**的便利,也是很乏味的。
另一种方法是写一个小的 Bash 脚本来为你自动执行这些命令:
```
while :
do
docker ps
sleep 2
done
```
但这样的脚本写起来也会很繁琐。你可以写一个小的通用脚本,并将其打包,这样它就可以一直被你使用。幸运的是,其他开源的开发者已经有了这样的经验和做法。
那就是 `watch` 这个命令。
### 安装 watch
`watch` 命令是 [procps-ng 包][6]的一部分,所以如果你是在 Linux 上,你已经安装了它。
在 macOS 上,使用 [MacPorts][7] 或 [Homebrew][8] 安装 `watch`。在 Windows 上,使用 [Chocolatey][9]。
### 使用 watch
`watch` 命令定期运行一个命令并显示其输出。它有一些文本终端的特性,所以只有最新的输出才会出现在屏幕上。
最简单的用法是:`watch <command>`。
例如,在 `docker ps` 命令前加上 `watch`,就可以这样操作:
```
$ watch docker ps
```
`watch` 命令,以及一些创造性的 Unix 命令行技巧,可以生成临时的仪表盘。例如,要计算审计事件:
```
$ watch 'grep audit: /var/log/kern.log |wc -l'
```
在最后一个例子中,如果有一个可视化的指示,表明审计事件的数量发生了变化,这可能是有用的。如果变化是预期的,但你想让一些东西看起来“不同”,`watch --differences` 就很好用。它可以高亮显示与上次运行的任何差异。如果你在多个文件中搜索,这一点尤其有效,所以你可以很容易地看到哪个文件发生了变化。
如果没有预期的变化,你可以使用 `watch --differences=permanent` 要求它们被“永久”高亮显示,以便知道哪些变化需要调查。这通常是更有用的。
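下面是两个简单的用法示意(被监视的命令仅作演示,可以换成任何你关心的命令):

```
# 高亮显示与上一次运行之间的差异
watch --differences 'docker ps'

# 永久高亮自启动以来发生过变化的位置
watch --differences=permanent 'grep audit: /var/log/kern.log | wc -l'
```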
### 控制频率
最后,有时该命令可能是资源密集型的,不应运行得太频繁。`-n` 参数控制频率。`watch` 默认使用 2 秒间隔,但是 `watch -n 10` 可能适合于资源密集型的情况,比如在子目录的任何文件中搜索一个模式:
```
$ watch -n 10 'find . -type f | xargs grep suspicious-pattern'
```
### 用 watch 观察一个命令
`watch` 命令对于许多临时性的系统管理任务非常有用,在这些任务中,你需要在没有进度条的情况下等待一些耗时的步骤,然后再进入下一个步骤。尽管这种情况并不理想,但 `watch` 可以使情况稍微好转。它让你有时间为工作做回顾性笔记!下载 [备忘录][10],让有用的语法和选项触手可及。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/linux-watch-command
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6 (Clock, pen, and notepad on a desk)
[2]: https://opensource.com/article/17/7/how-unzip-targz-file
[3]: https://opensource.com/resources/what-ansible
[4]: https://opensource.com/resources/what-docker
[5]: https://opensource.com/article/20/6/bash-history-control
[6]: https://opensource.com/article/21/8/linux-procps-ng
[7]: https://opensource.com/article/20/11/macports
[8]: https://opensource.com/article/20/6/homebrew-mac
[9]: https://opensource.com/article/20/3/chocolatey
[10]: https://opensource.com/downloads/watch-cheat-sheet

View File

@ -0,0 +1,194 @@
[#]: subject: "How to Install Kali Linux in VMware"
[#]: via: "https://itsfoss.com/install-kali-linux-vmware/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: " "
[#]: url: " "
如何在 VMware 中安装 Kali Linux
======
![](https://img.linux.net.cn/data/attachment/album/202109/21/144206sjhgutyjgtu6m22z.jpg)
Kali Linux 是 [用于学习和练习黑客攻击和渗透测试的 Linux 发行版][1] 的不二之选。
而且,如果你经常捣鼓 Linux 发行版,出于好奇心,你可能已经尝试过它。
> **警告!**
>
> 本文介绍的内容仅供学习 Kali Linux 的安装,请勿使用 Kali Linux 进行任何非授权的行为。Kali Linux 应该用于在授权的情况下,对授权的目标进行合理的渗透测试,以了解其脆弱性并加以防范。本文作译者和本站均不对非授权和非法的使用及其造成的后果负责。
然而,无论你用它做什么,它都不能替代正规成熟的桌面 Linux 操作系统。因此,(至少对于初学者来说)建议使用虚拟机程序(如 VMware来安装 Kali Linux。
通过虚拟机,你可以把 Kali Linux 作为你的 Windows 或 Linux 系统中的一个常规应用程序来使用,就像在你的系统中运行 VLC 或 Skype 一样。
有一些免费的虚拟化工具可供使用。你可以 [在 Oracle VirtualBox 上安装 Kali Linux][2] ,也可以使用 VMWare Workstation。
本教程重点介绍 VMWare。
### 在 Windows 和 Linux 的 VMware 上安装 Kali Linux
> **非 FOSS 警报!**
>
> VMWare 不是开源软件。
对于本教程,我假定你使用的是 Windows这是考虑到大多数 VMware 用户喜欢使用 Windows 10/11。
然而,除了在 Windows 上安装 VMWare 的部分,本 **教程对 Linux 也是有效的**。你可以 [轻松地在 Ubuntu 和其他 Linux 发行版上安装 VMWare][3]。
#### 步骤 1安装 VMWare Workstation Player在 Windows 上)
如果你的系统上已经安装了 VMware你可以跳到安装 Kali Linux 的步骤。
前往 [VMWare 的 Workstation Player 官方网页][4],然后点击 “Download For Free” 按钮。
![][5]
接下来,你可以选择版本(如果你想要特定的版本或遇到最新版本的 bug然后点击 “Go to Downloads”。
![][6]
然后你就会看到 Windows 和 Linux 版本的下载按钮。你需要点击 “Windows 64-bit” 的按钮,因为这就是我们在这里需要的。
![][7]
顺便提一句,它不支持 32 位系统。
最后,当你得到下载的 .exe 文件时,启动它以开始安装过程。你需要点击 “Next” 来开始安装 VMware。
![][8]
接下来,你需要同意这些政策和条件才能继续。
![][9]
现在,你可以选择安装的路径。理想情况下,保持默认设置。但是,如果你在虚拟机中需要更好的键盘响应/屏幕上的键盘性能,你可能想启用 “<ruby>增强型键盘驱动程序<rt>Enhanced Keyboard Driver</rt></ruby>”。
![][10]
进入下一步,你可以选择禁用每次启动程序时的更新检查(可能很烦人),并禁用向 VMware 发送数据,这是其用户体验改进计划的一部分。
![][11]
如果你想使用桌面和开始菜单的快捷方式进行快速访问,你可以勾选这些设置,或像我一样将其取消。
![][12]
现在,继续以开始安装。
![][13]
这可能需要一些时间,完成后,你会看到另一个窗口,让你完成这个过程,并让你选择输入一个许可证密钥。如果你想获得商业许可,你需要 VMware Workstation 专业版,否则,该 Player 版本对个人使用是免费的。
![][14]
> **注意!**
>
> 请确保你的系统已经启用了虚拟化功能。最近的 VMWare 的 Windows 版本要求你明确启用虚拟化以使用虚拟机。
#### 步骤 2在 VMware 上安装 Kali Linux
开始时,你需要下载 Kali Linux 的镜像文件。而且如果你打算在虚拟机上使用它Kali Linux 会提供一个单独的 ISO 文件。
![][15]
前往其 [官方下载页面][16],下载可用的预构建的 VMware 镜像。
![][17]
你可以直接下载 .7z 文件或利用 Torrent一般来说速度更快。在这两种情况下你也可以用提供的 SHA256 值检查文件的完整性。
下载完成,你需要将文件解压到你选择的任何路径。
![][18]
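如果你更习惯用命令行,也可以像下面这样校验并解压(文件名仅为示例,请以你实际下载的文件为准;解压 .7z 需要安装 p7zip

```
# 校验下载文件的 SHA256 值,与官网给出的值对比
sha256sum kali-linux-2021.3-vmware-amd64.7z

# 解压 7z 压缩包
7z x kali-linux-2021.3-vmware-amd64.7z
```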
打开 VMware Workstation Player然后点击 “<ruby>打开一个虚拟机<rt>Open a Virtual Machine</rt></ruby>”。现在,寻找你提取的文件夹。然后浏览它,直到你找到一个扩展名为 .vmx 的文件。
比如说,`Kali-Linux-2021.3-vmware-amd64.vmx`。
![][19]
选择 .vmx 文件来打开该虚拟机。它应该直接出现在你的 VMware Player 中。
你可以选择以默认设置启动虚拟机。或者,如果你想调整分配给虚拟机的硬件,可以在启动前随意改变设置。
![][20]
根据你的计算机硬件,你应该分配更多的内存和至少一半的处理器核心,以获得流畅的性能。
在这种情况下,我有 16GB 的内存和一个四核处理器。因此,为这个虚拟机分配近 7GB 的内存和两个内核是安全的。
![][21]
虽然你可以分配更多的资源,但它可能会影响你的宿主机操作系统在工作时的性能。所以,建议在这两者之间保持平衡。
现在,保存设置并点击 “<ruby>播放虚拟机<rt>Play virtual machine</rt></ruby>”,在 VMware 中启动 Kali Linux。
当它开始加载时,你可能会看到一些提示,告诉你可以通过调整一些虚拟机设置来提高性能。
你不是必须这样做,但如果你注意到性能问题,你可以禁用<ruby>侧通道缓解措施<rt>side-channel mitigations</rt></ruby>(用于增强安全性)来提高虚拟机的性能。
另外,你可能会被提示下载并 [安装 VMware tools for Linux][22];你需要这样做以获得良好的虚拟机体验。
完成之后,你就会看到 Kali Linux 的登录界面。
![][23]
考虑到你启动了一个预先建立的 VMware 虚拟机,你需要输入默认的登录名和密码来继续。
- 用户名:`kali`
- 密码: `kali`
![][24]
就是这样!你已经完成了在 VMware 上安装 Kali Linux。现在你所要做的就是开始探索了
### 接下来呢?
这里有一些你可以利用的提示:
* 如果剪贴板共享和文件共享不工作请在访客系统Kali Linux上 [安装 VMWare tools][22]。
* 如果你是新手,请查看这个 [Kali Linux 工具列表][25]。
如果你觉得这个教程有帮助,欢迎分享你的想法。你是否喜欢在不使用 VMware 镜像的情况下安装 Kali Linux请在下面的评论中告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-kali-linux-vmware/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-hacking-penetration-testing/
[2]: https://itsfoss.com/install-kali-linux-virtualbox/
[3]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[4]: https://www.vmware.com/products/workstation-player.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-download.png?resize=732%2C486&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-download-1.png?resize=800%2C292&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-download-final.png?resize=800%2C212&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-setup-1.png?resize=692%2C465&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-setup-license.png?resize=629%2C443&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-setup-2.png?resize=638%2C440&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-workstation-tracking.png?resize=618%2C473&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-workstation-shortcuts.png?resize=595%2C445&ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-install.png?resize=620%2C474&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-installed.png?resize=589%2C441&ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-image-kali.png?resize=800%2C488&ssl=1
[16]: https://www.kali.org/get-kali/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-kali-linux-image-download.png?resize=800%2C764&ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/extract-vmware-image.png?resize=617%2C359&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-kali-linux-image-folder.png?resize=800%2C498&ssl=1
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/virtual-machine-settings-kali.png?resize=800%2C652&ssl=1
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/kali-vm-settings.png?resize=800%2C329&ssl=1
[22]: https://itsfoss.com/install-vmware-tools-linux/
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/kali-linux-vm-login.png?resize=800%2C540&ssl=1
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-kali-linux.png?resize=800%2C537&ssl=1
[25]: https://itsfoss.com/best-kali-linux-tools/

View File

@ -0,0 +1,126 @@
[#]: subject: "Start using YAML now"
[#]: via: "https://opensource.com/article/21/9/intro-yaml"
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13811-1.html"
YAML 使用入门
======
> 什么是 YAML为什么我们现在应该开始使用它
![](https://img.linux.net.cn/data/attachment/album/202109/23/095242fw0qzzp5fe6e565z.jpg)
[YAML](https://yaml.org/)<ruby>YAML 不是标记语言<rt>YAML Ain't Markup Language</rt></ruby>)是一种适宜人类阅读的数据序列化语言。它的语法简单而易于阅读。它不包含引号、打开和关闭的标签或大括号。它不包含任何可能使人类难以解析嵌套规则的东西。你只需看一下 YAML 文档,就能知道它在说什么。
### YAML 特性
YAML 有一些超级特性,使其优于其他序列化格式:
* 易于略读。
* 易于使用。
* 可在编程语言之间移植。
* 敏捷语言的原生数据结构。
* 支持通用工具的一致模型。
* 支持一次性处理。
* 表现力和可扩展性。
我将通过一些例子进一步向你展示 YAML 的强大。
你能弄清楚下面发生了什么吗?
```
---
# My grocery list
groceries:
- Milk
- Eggs
- Bread
- Butter
...
```
上面的例子包含了一个简单的杂货购物清单,它是一个完全格式化的 YAML 文档。在 YAML 中,字符串不加引号,而列表需要简单的连字符和空格。一个 YAML 文档以 `---` 开始,以 `...` 结束但它们是可选的。YAML中的注释以 `#` 开始。
缩进是 YAML 的关键。缩进必须包含空格,而不是制表符。虽然所需的空格数量是灵活的,但保持一致是个好主意。
### 基本元素
#### 集合
YAML 有两种类型的集合。列表(用于序列)和字典(用于映射)。列表是键值对,每个值都在一个新的行中,以连字符和空格开始。字典也是键值对,每个值都是一个映射,包含一个键、一个冒号和空格以及一个值。
例如:
```
# My List
groceries:
- Milk
- Eggs
- Bread
- Butter
# My dictionary
contact:
  name: Ayush Sharma
  email: myemail@example.com
```
列表和字典经常被结合起来,以提供更复杂的数据结构。列表可以包含字典,而字典可以包含列表。
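下面是一个把两者结合起来的小示例(内容为虚构,仅作说明):

```
# 一个字典,其值是一个列表,而列表的每一项又是一个字典
contacts:
  - name: Ayush Sharma
    email: myemail@example.com
  - name: Example Person
    email: example@example.com
```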
#### 字符串
YAML 中的字符串不需要加引号。多行字符串可以用 `|``>` 来定义。前者保留了换行符,而后者则没有。
例如:
```
my_string: |
  This is my string.
  It can contain many lines.
  Newlines are preserved.
```
```
my_string_2: >
  This is my string.
  This can also contain many lines.
  Newlines aren't preserved and all lines are folded.
```
#### 锚点
YAML 可以通过节点锚点来获得可重复的数据块。`&` 字符定义了一个数据块,以后可以用 `*` 来引用。例如:
```
billing_address: &add1
  house: B1
  street: My Street
shipping_address: *add1
```
至此,你对 YAML 的了解就足以让你开始工作了。你可以使用在线 YAML 解析器来测试。如果你每天都与 YAML 打交道,那么 [这个方便的备忘单][3] 会对你有所帮助。
_这篇文章最初发表在[作者的个人博客][4]上并经授权改编。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/intro-yaml
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayushsharma
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: mailto:myemail@example.com
[3]: https://yaml.org/refcard.html
[4]: https://notes.ayushsharma.in/2021/08/introduction-to-yaml

View File

@ -0,0 +1,235 @@
[#]: subject: "How to Install Ubuntu Desktop on Raspberry Pi 4"
[#]: via: "https://itsfoss.com/install-ubuntu-desktop-raspberry-pi/"
[#]: author: "Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "turbokernel"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13817-1.html"
如何在树莓派 4 上安装 Ubuntu 桌面系统
======
> 本教程将详细告诉你在树莓派 4 设备上如何安装 Ubuntu 桌面。
![](https://img.linux.net.cn/data/attachment/album/202109/25/084015z4cfiiy8e1ezmmz0.jpg)
革命性的<ruby>树莓派<rt>Raspberry Pi</rt></ruby>是最受欢迎的单板计算机。它拥有基于 Debian 的操作系统,叫做 <ruby>[树莓派操作系统][1]<rt>Raspberry Pi OS</rt></ruby>(原名 Raspbian。
还有其他几个 [可用于树莓派的操作系统][2],但几乎所有的都是轻量级的,适合于树莓派设备的小尺寸和低端硬件。
随着标榜 8GB 内存和支持 4K 显示的树莓派 4B 的推出,情况发生了变化。其目的是将树莓派作为常规桌面使用,并在更大程度上成功地做到了这一点。
在 4B 型号之前,你可以 [在树莓派上安装 Ubuntu 服务器][3],但桌面版本却无法使用。然而,**Ubuntu 现在为树莓派 4 提供了官方的桌面镜像**。
在本教程中,我将展示在树莓派 4 上安装 Ubuntu 桌面的步骤。
首先,快速了解一下运行要求。
### 在树莓派 4 上运行 Ubuntu 的要求
![][4]
以下是你需要的东西:
1. 一个能够联网的 Linux 或 Windows 系统。
2. [树莓派镜像工具][5] :树莓派的官方开源工具,可以在你的 SD 卡上写入发行版镜像。
3. Micro SD 卡:最低使用 16GB 的存储卡,推荐使用 32GB 的版本。
4. 一个基于 USB 的 Micro SD 卡读卡器(如果你的电脑没有读卡器)。
5. 树莓派 4 必备配件,如 HDMI 兼容显示器、[Micro HDMI 连接到标准 HDMIA/M 接口的电缆][6]、[电源(建议使用官方适配器)][7]、USB 的有线/无线键盘和鼠标/触摸板。
最好能够提前 [详细阅读树莓派的要求][8] 。
现在,闲话少叙,让我快速带领你完成 SD 卡的镜像准备。
### 为树莓派准备 Ubuntu 桌面镜像
树莓派提供了一个 GUI 应用程序,用于将 ISO 镜像写入 SD 卡中。**这个工具还可以自动下载兼容的操作系统,如 Ubuntu、树莓派操作系统等**。
![下载并将操作系统放入 SD 卡的官方工具][9]
你可以从官方网站上下载这个工具的 Ubuntu、Windows 和 macOS 版本:
- [下载树莓派镜像工具][10]
在 Ubuntu 和其他 Linux 发行版上,你也可以使用 Snap 安装它:
```
sudo snap install rpi-imager
```
安装完毕后,运行该工具。当你看到下面的界面时,选择 “<ruby>选择操作系统<rt>CHOOSE OS</rt></ruby>”:
![镜像工具:选择首选操作系统][11]
在“<ruby>操作系统<rt>Operating System</rt></ruby>”下,选择 “<ruby>其它通用的操作系统<rt>Other general purpose OS</rt></ruby>”:
![镜像工具: 其他通用的操作系统][12]
现在,选择 “Ubuntu”
![镜像工具:发行版 - Ubuntu][13]
接下来,选择 “Ubuntu Desktop 21.04RPI 4/400”,如下图所示。
![镜像工具:发行版 - Ubuntu 21.04][14]
> **注意:**
>
> 如果你没有一个稳定的网络连接,你可以 [从 Ubuntu 的网站上单独下载 Ubuntu 的树莓派镜像][15]。在镜像工具中,在选择操作系统时,从底部选择“<ruby>使用自定义<rt>Use custom</rt></ruby>”选项。你也可以使用 Etcher 将镜像写入到 SD 卡上。
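如果你单独下载了镜像文件并且习惯使用命令行,也可以用 `dd` 写入(下面只是一个示意,设备名和文件名均为假设;写错设备会清掉其上的所有数据,请务必先用 `lsblk` 确认):

```
# 确认 SD 卡对应的设备名(这里假设为 /dev/sdX
lsblk

# 解压并写入镜像(文件名仅为示例)
xzcat ubuntu-desktop-raspi.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
```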
将 Micro SD 卡插入读卡器中,等待它挂载。选择“<ruby>存储设备<rt>Storage</rt></ruby>”下的 “<ruby>选择存储设备<rt>CHOOSE STORAGE</rt></ruby>”:
![镜像工具选择存储设备SD 卡)][16]
你应该可以根据存储空间大小,识别你的 Micro SD 卡。这里,我使用的是 32GB 的卡:
![镜像工具:选择 SD 卡][17]
现在点击“<ruby>写入<rt>WRITE</rt></ruby>”:
![镜像工具:镜像写入][18]
如果你已经备份了 SD 卡上的内容或是一张新卡,你可以直接进行:
![镜像工具:镜像写入确认][19]
由于这需要 [sudo][20] 的权限,你必须输入密码。如果你从终端运行 `sudo rpi-imager`,就不会出现这种情况:
![镜像工具:镜像写入授权需要密码][21]
如果你的 SD 卡有点旧,这将需要一些时间。如果它是一个新的高速 SD 卡,就无需很长时间:
![镜像工具:写入镜像][22]
为确保镜像写入成功,我不建议跳过验证:
![镜像工具:验证写入][23]
写入结束后,会有以下确认提示:
![镜像工具:写入成功][24]
现在,从你的系统中安全移除 SD 卡。
### 在树莓派上使用装有 Ubuntu 的 MicroSD 卡
已经成功了一半了。与常规的 Ubuntu 安装不同无需创建一个临场安装环境。Ubuntu 已经安装在 SD 卡上了,而且几乎可以直接使用了。让我们来看看这里还剩下什么。
#### 第 1 步:将 SD 卡插入树莓派中
对于第一次使用的用户来说,有时会有点困惑,不知道那个卡槽到底在哪里?不用担心。它位于电路板背面的左手边。下面是一个插入卡后的倒置视图。
![树莓派 4B 板倒置,插入 Micro SD 卡][25]
按照这个方向将卡慢慢插入板子下面的卡槽,轻轻地插,直到它不再往前移动。你可能还会听到一点咔嚓声来确认。这意味着它已经完美地插入了。
![树莓派 SD 插槽在板子背面的左侧][26]
当你把它插进去的时候,你可能会注意到在插槽中有两个小针脚调整了自己的位置(如上图所示),但这没关系。一旦插入,卡看起来会有一点突出。这就是它应该有的样子。
![树莓派 SD 卡插入时有一小部分可见][27]
#### 第 2 步:设置树莓派
我无需在这里详细介绍。
保证电源线接头、微型 HDMI 线接头、键盘和鼠标接头(有线/无线)都牢固地连接到树莓派板的相关端口。
确保显示器和电源插头也已正确连接,然后再去打开电源插座。我不建议把适配器插到带电的插座上,可以参考 [电弧][28] 的相关说明。
确认了以上两个步骤后,你就可以 [打开树莓派设备的电源][29]。
#### 第 3 步:在树莓派上 Ubuntu 桌面的首次运行
当你打开树莓派的电源,你需要在初次运行时进行一些基本配置。你只需按照屏幕上的指示操作即可。
选择你的语言、键盘布局、连接到 WiFi 等:
![选择语言][30]
![选择键盘布局][31]
![选择 WiFi][32]
你可以根据需求选择时区:
![选择时区][33]
然后创建用户和密码:
![输入所需的用户名和密码][34]
之后的步骤将配置一些东西,这个过程需要一些时间:
![完成 Ubuntu 设置][35]
![完成 Ubuntu 设置][36]
系统会重新启动之前需要一些时间,最终,你将会来到 Ubuntu 的登录界面:
![Ubuntu 的登录界面][37]
现在,你可以开始享受树莓派上的 Ubuntu 桌面了:
![树莓派上的 Ubuntu 桌面][38]
### 总结
我注意到**一个暂时的异常情况**。在进行安装时,我的显示器左侧有一个红色的闪烁边界。这种闪烁(也有不同的颜色)在屏幕的随机部分也能注意到。但在重启和第一次启动后,它就消失了。
很高兴能够看到它在树莓派上运行,我非常需要 Ubuntu 开始为树莓派等流行的 ARM 设备提供支持。
希望这个教程对你有所帮助。如果你有问题或建议,请在评论中告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-ubuntu-desktop-raspberry-pi/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/
[2]: https://itsfoss.com/raspberry-pi-os/
[3]: https://itsfoss.com/install-ubuntu-server-raspberry-pi/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-desktop-raspberry-pi.png?resize=800%2C450&ssl=1
[5]: https://github.com/raspberrypi/rpi-imager
[6]: https://www.raspberrypi.org/products/micro-hdmi-to-standard-hdmi-a-cable/
[7]: https://www.raspberrypi.org/products/type-c-power-supply/
[8]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/raspberry-pi-imager-tool.webp?resize=680%2C448&ssl=1
[10]: https://www.raspberrypi.org/software/
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-choose-os.webp?resize=681%2C443&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-other-general-purpose-os.webp?resize=679%2C440&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-os-ubuntu.webp?resize=677%2C440&ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-os-ubuntu-21-04.webp?resize=677%2C440&ssl=1
[15]: https://ubuntu.com/download/raspberry-pi
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-choose-storage.webp?resize=677%2C438&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-choose-sd-card.webp?resize=790%2C450&ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-image-write.webp?resize=676%2C437&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-image-write-confirm.webp?resize=679%2C440&ssl=1
[20]: https://itsfoss.com/add-sudo-user-ubuntu/
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-image-write-password.webp?resize=380%2C227&ssl=1
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-writing-image.webp?resize=673%2C438&ssl=1
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-verifying-changes.webp?resize=677%2C440&ssl=1
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-write-successful.webp?resize=675%2C442&ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-inverted-micro-sd-card-inserted.webp?resize=800%2C572&ssl=1
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/raspberry-pi-sd-slot-left-side-middle-below-board.webp?resize=632%2C324&ssl=1
[27]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-sd-card-inserted.webp?resize=650%2C432&ssl=1
[28]: https://www.electricianatlanta.net/what-is-electrical-arcing-and-why-is-it-dangerous/
[29]: https://itsfoss.com/turn-on-raspberry-pi/
[30]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run.webp?resize=800%2C451&ssl=1
[31]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-2.webp?resize=800%2C600&ssl=1
[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-3.webp?resize=800%2C600&ssl=1
[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-4.webp?resize=800%2C600&ssl=1
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-5.webp?resize=800%2C600&ssl=1
[35]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-6.webp?resize=800%2C600&ssl=1
[36]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-7.webp?resize=800%2C600&ssl=1
[37]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-login-screen.webp?resize=800%2C600&ssl=1
[38]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-21-04-post-setup-desktop.webp?resize=800%2C450&ssl=1

View File

@ -0,0 +1,177 @@
[#]: subject: "Use this Linux command-line tool to learn more about your NVMe drives"
[#]: via: "https://opensource.com/article/21/9/nvme-cli"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "turbokernel"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13822-1.html"
使用 Linux 命令行工具来了解你的 NVMe 驱动器
======
> nvme-cli 命令拥有诸多实用的选项,且它是控制和管理数据的一种很好的方式。
![](https://img.linux.net.cn/data/attachment/album/202109/26/102441ux8cy36gy1vggykz.jpg)
NVMe 是指<ruby>非易失性内存规范<rt>Non-Volatile Memory Express</rt></ruby>,它规范了软件和存储通过 PCIe 和其他协议(包括 TCP进行通信的方式。它是由非营利组织领导的 [开放规范][2],并定义了几种形式的固态存储。
我的笔记本电脑有一个 NVMe 驱动器,我的台式机也有。而且它们的速度很快。我喜欢我的电脑启动的速度,以及它们读写数据的速度。几乎没有延迟。
没过多久我就对驱动这种超高速存储的技术产生了好奇所以我做了一些调查。我了解到NVMe 驱动器消耗的电力更少,而提供的数据访问速度甚至比 SATA 的 SSD 驱动器快得多。这很有趣,但我想知道更多关于我的特定 NVMe 驱动器的信息,我想知道它们与其他驱动器有何区别。我可以安全地擦除驱动器吗?我怎样才能检查它的完整性?
带着这些问题我在互联网上搜索,发现了一个开源项目,其中有一系列管理 NVMe 驱动器的工具。它被称为 [nvme-cli][3]。
### 安装 nvme-cli
你可以从你的发行版的包管理器中安装 `nvme-cli`。例如,在 Fedora、CentOS 或类似系统上:
```
$ sudo dnf install nvme-cli
```
在 Debian、Mint、Elementary 和类似系统上:
```
$ sudo apt install nvme-cli
```
### 探索 NVMe 驱动器
在安装 `nvme-cli` 后,我想探索我的驱动器。`nvme-cli` 没有手册页,但你可以通过输入 `nvme help` 获得很多帮助:
```
$ nvme help
nvme-1.14
usage: nvme <command> [<device>] [<args>]
The '<device>' may be either an NVMe character device (ex: /dev/nvme0) or an
nvme block device (ex: /dev/nvme0n1).
The following are all implemented sub-commands:
list List all NVMe devices and namespaces on machine
list-subsys List nvme subsystems
id-ctrl Send NVMe Identify Controller
id-ns Send NVMe Identify Namespace, display structure
id-ns-granularity Send NVMe Identify Namespace Granularity List, display structure
list-ns Send NVMe Identify List, display structure
list-ctrl Send NVMe Identify Controller List, display structure
nvm-id-ctrl Send NVMe Identify Controller NVM Command Set, display structure
primary-ctrl-caps Send NVMe Identify Primary Controller Capabilities
[...]
```
### 列出所有的 NVMe 驱动器
`sudo nvme list` 命令列出你机器上所有的 NVMe 设备和命名空间。我用它在 `/dev/nvme0n1` 找到了一个 NVMe 驱动器。下面是命令输出结果:
```
$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
--------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S42GMY9M141281 SAMSUNG MZVLB256HAHQ-000L7 1
214.68 GB / 256.06 GB 512 B + 0 B 0L2QEXD7
```
我有一个名为 `nvme0n1` 的驱动器。它列出了序列号、品牌、容量、固件版本等等。
通过使用 `id-ctrl` 子命令,你可以得到更多关于该硬盘和它所支持的特性的信息:
```
$ sudo nvme id-ctrl /dev/nvme0n1
NVME Identify Controller:
vid : 0x144d
ssvid : 0x144d
sn : S42GMY9M141281
mn : SAMSUNG MZVLB256HAHQ-000L7
fr : 0L2QEXD7
rab : 2
ieee : 002538
cmic : 0
mdts : 9
cntlid : 0x4
ver : 0x10200
rtd3r : 0x186a0
rtd3e : 0x7a1200
[...]
```
### 驱动器健康
你可以通过 `smart-log` 子命令来了解硬盘的整体健康状况:
```
$ sudo nvme smart-log /dev/nvme0n1
Smart Log for NVME device:nvme0n1 namespace-id:ffffffff
critical_warning : 0
temperature : 21 C
available_spare : 100%
available_spare_threshold : 10%
percentage_used : 2%
endurance group critical warning summary: 0
data_units_read : 5,749,452
data_units_written : 10,602,948
host_read_commands : 77,809,121
host_write_commands : 153,405,213
controller_busy_time : 756
power_cycles : 1,719
power_on_hours : 1,311
unsafe_shutdowns : 129
media_errors : 0
num_err_log_entries : 1,243
Warning Temperature Time : 0
Critical Composite Temperature Time : 0
Temperature Sensor 1 : 21 C
Temperature Sensor 2 : 22 C
Thermal Management T1 Trans Count : 0
Thermal Management T2 Trans Count : 0
Thermal Management T1 Total Time : 0
Thermal Management T2 Total Time : 0
```
这为你提供了硬盘的当前温度、到目前为止的使用时间、不安全的关机次数等等。
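如果你想持续关注温度等指标,可以配合 `watch` 之类的工具定期刷新(仅作示意,设备名请以你的实际情况为准):

```
# 每 10 秒刷新一次 SMART 健康信息
sudo watch -n 10 nvme smart-log /dev/nvme0n1
```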
### 格式化一个 NVMe 驱动器
你可以用 `nvme-cli` 格式化一个 NVMe 驱动器,但要注意:这将删除驱动器上的所有数据!如果你的硬盘上有重要的数据,你必须在这样做之前将其备份,否则你**将会**丢失数据。子命令是 `format`
```
$ sudo nvme format /dev/nvme0nX
```
(为了安全起见,我用 `X` 替换了驱动器的实际位置,以防止复制粘贴的错误。将 `X` 改为 `1``nvme list` 结果中列出的实际位置。)
### 安全地擦除 NVMe 驱动器
当你准备出售或处理你的 NVMe 电脑时,你可能想安全地擦除驱动器。这里的警告与格式化过程中的警告相同。首先要备份重要的数据,因为这个命令会删除这些数据!
```
$ sudo nvme sanitize /dev/nvme0nX
```
### 尝试 nvme-cli
`nvme-cli` 命令是在 [GPLv2][4] 许可下发布的。它是一个强大的命令,有很多有用的选项,用来有效地控制和管理数据。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/nvme-cli
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://nvmexpress.org/
[3]: https://github.com/linux-nvme/nvme-cli
[4]: https://github.com/linux-nvme/nvme-cli/blob/master/LICENSE

View File

@ -0,0 +1,146 @@
[#]: subject: "Run containers on your Mac with Lima"
[#]: via: "https://opensource.com/article/21/9/run-containers-mac-lima"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13824-1.html"
用 Lima 在你的 Mac 上运行容器
======
> Lima 可以帮助克服在 Mac 上运行容器的挑战。
![](https://img.linux.net.cn/data/attachment/album/202109/27/091509kx8u9uqdzcz8c6ud.jpg)
在你的 Mac 上运行容器可能是一个挑战。毕竟,容器是基于 Linux 特有的技术,如控制组和命名空间。
幸运的是macOS 拥有一个内置的<ruby>虚拟机监控程序<rt>hypervisor</rt></ruby>,允许在 Mac 上运行虚拟机VM。虚拟机监控程序是一个底层的内核功能而不是一个面向用户的功能。
hyperkit 是一个可以使用 macOS 虚拟机监控程序运行虚拟机的 [开源项目][2]。hyperkit 被设计成一个“极简化”的虚拟机运行器。与 VirtualBox 不同,它没有花哨的 UI 功能来管理虚拟机。
你可以获取 hyperkit再配上一个运行着容器管理器的极简 Linux 发行版,然后自己把所有部分组合在一起。但这会有很多变动组件,而且听起来工作量很大。特别是如果你还想通过使用 `vpnkit`(一个用于创建让虚拟机网络感觉更像主机网络一部分的开源项目)使网络连接更加无缝。
### Lima
当 [lima 项目][3] 已经解决了这些细节问题时,就没有理由再去做这些努力了。让 `lima` 运行的最简单方法之一是使用 [Homebrew][4]。你可以用这个命令安装 `lima`
```
$ brew install lima
```
安装过程可能需要一些时间,完成之后就可以享受一些乐趣了。为了让 `lima` 知道你已经准备好了,你需要启动它。下面是命令:
```
$ limactl start
```
如果这是你第一次运行,你会被问到是否接受默认值,或者是否要改变其中的任何一项。默认值是非常安全的,但我喜欢冒险一点。因此,我进入编辑器,把以下内容:
```
- location: "~"
# CAUTION: `writable` SHOULD be false for the home directory.
# Setting `writable` to true is possible but untested and dangerous.
writable: false
```
变成:
```
- location: "~"
# I *also* like to live dangerously -- Austin Powers
writable: true
```
正如评论中所说,这可能是危险的。可悲的是,许多现有的工作流程都依赖于挂载是可读写的。
默认情况下,`lima` 运行 `containerd` 来管理容器。`containerd` 本身是一个非常简洁的管理器。虽然通常会使用一个封装的守护进程(如 `dockerd`)来提供更好的使用体验,但也有另一种方法。
### nerdctl 工具
`nerdctl` 工具是 Docker 客户端的直接替换,它将这些功能放在客户端,而不是服务器上。`lima` 工具允许无需在本地安装就可以直接从虚拟机内部运行 `nerdctl`
做完这些后,可以运行一个容器了!这个容器将运行一个 HTTP 服务器。你可以在你的 Mac 上创建这些文件:
```
$ ls
index.html
$ cat index.html
hello
```
现在,挂载并转发端口:
```
$ lima nerdctl run --rm -it -p 8000:8000 -v $(pwd):/html --entrypoint bash python
root@9486145449ab:/#
```
在容器内,运行一个简单的 Web 服务器:
```
$ lima nerdctl run --rm -it -p 8000:8000 -v $(pwd):/html --entrypoint bash python
root@9486145449ab:/# cd /html/
root@9486145449ab:/html# python -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
在另一个终端,你可以检查一切看起来都很好:
```
$ curl localhost:8000
hello
```
回到容器上,有一条记录 HTTP 客户端连接的日志信息:
```
10.4.0.1 - - [09/Sep/2021 14:59:08] "GET / HTTP/1.1" 200 -
```
一个文件是不够的,所以还要再完善一下。在服务器上按下 `CTRL-C`,并添加另一个文件:
```
^C
Keyboard interrupt received, exiting.
root@9486145449ab:/html# echo goodbye > foo.html
root@9486145449ab:/html# python -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
检查你是否能看到新的文件:
```
$ curl localhost:8000/foo.html
goodbye
```
### 总结
总结一下,安装 `lima` 需要一些时间,但完成后,你可以做以下事情:
* 运行容器。
* 将你的主目录中的任意子目录挂载到容器中。
* 编辑这些目录中的文件。
* 运行网络服务器,在 Mac 程序看来,它们是在 localhost 上运行的。
这些都是通过 `lima nerdctl` 实现的。
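另外一个小技巧(可选,按需使用):给 `lima nerdctl` 设置一个别名,可以让命令更接近你熟悉的 Docker 工作流:

```
# 加入你的 shell 配置文件(例如 ~/.zshrc 或 ~/.bashrc
alias nerdctl="lima nerdctl"
```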
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/run-containers-mac-lima
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_2015-2-osdc-lead.png?itok=kAfHrBoy (Containers for shipping overseas)
[2]: https://www.docker.com/blog/docker-unikernels-open-source/
[3]: https://github.com/lima-vm/lima
[4]: https://brew.sh/

View File

@ -0,0 +1,164 @@
[#]: subject: "GNOME 41 Released: The Most Popular Linux Desktop Environment Gets Better"
[#]: via: "https://news.itsfoss.com/gnome-41-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13816-1.html"
GNOME 41 发布:最受欢迎的 Linux 桌面环境的精细打磨
======
> GNOME 41 是一次有价值的升级,它带来了新的应用程序、功能和细微的视觉改进。
![](https://img.linux.net.cn/data/attachment/album/202109/24/130703iznp8p53dbd1kktz.jpg)
现在 GNOME 41 稳定版终于发布了。
虽然 GNOME 40 带来了不少激进的改变,让许多用户不得不去适应新的工作流程,但 GNOME 41 似乎避免了这个问题。
在 GNOME 41 中没有明显的工作流程变化,但增加了新的功能,做了全面的改进。
GNOME 41 的测试版已经发布了一段时间了。而且,为了发现它值得关注的地方,我们在发布前就使用 [GNOME OS][1] 试用了它的稳定版。
### GNOME 41 有什么新功能?
GNOME 41 并没有给你带来任何新的视觉感受,但是一些有用的改进可以帮助你改善或控制工作流程。
此外,还升级了一些 GNOME 应用程序。
让我提一下 GNOME 41 的主要亮点。
#### GNOME 41 软件的上下文板块
![][3]
每个版本中,用户都期待着对 GNOME “<ruby>软件<rt>Software</rt></ruby>”的改进。
虽然他们一直在朝着正确的方向改进它但它需要一次视觉上的重新打造。而且这一次GNOME 41 带来了急需的 UI 更新。
软件商店的描述性更强了,看起来应该对新用户有吸引力。它使用表情符号/创意图标来对应用程序进行分类,使软件中心变得更时尚。
就像 [Apps for GNOME][4] 门户网站一样,“软件”的应用程序屏幕包括了更多的细节,以尽可能地告知用户,而不需要参考项目页面或其网站。
![][5]
换句话说,这些添加到应用程序页面的上下文板块,提供了有关设备支持、安全/许可、年龄等级、下载的大小、项目等信息。
你还可以为某些应用程序(如 GIMP选择可用的附加组件以便一次都安装上。这样你就可以节省寻找附加组件和单独安装它们的时间了。
事实证明GNOME 41 “软件”比以前更好用了。
#### 新的多任务选项
![][6]
GNOME 41 打造了新的多任务设置以帮助你改善工作流程。
你可以通过切换热角来快速打开“<ruby>活动概览<rt>Activities Overview</rt></ruby>”。还添加了一个拖动窗口到边缘时调整其大小的能力。
根据你的需求,你可以设置一个固定的可用工作空间的数量,当然也可以保持动态数量。
除此以外,你还可以调整这些功能:
* 多显示器工作区
* 应用程序切换行为
当你有多个显示器时,你可以选择将工作空间限制在一个屏幕上,或在连接的显示器上连续显示。
而且,当你在切换应用程序并浏览它们时,你可以自定义只在同一工作区或在所有工作区预览应用程序。
#### 节电设置
![][7]
在 GNOME 41 中,现在有一个有效节省电力的性能调整。这对于笔记本用户手动调整其性能,或者当一个应用程序要求切换模式以节省电力时,是非常有用的。
![][8]
#### GNOME “日历”的改进
GNOME “<ruby>日历<rt>Calendar</rt></ruby>”现在可以打开 ICS 文件及导入活动。
#### 触摸板手势
为了获得无缝的工作流程体验,你可以利用三指垂直向上/向下滑动来打开“活动概览”,以及利用三指水平向右/向左滑动在工作空间之间导航。
很高兴看到他们把重点放在改善使用触摸板的工作流程上,这类似于 [elementary OS 6 的功能][9]。
#### GNOME 连接应用
![][10]
添加了一个新的“<ruby>连接<rt>Connections</rt></ruby>”应用程序,可以连接到远程计算机,不管是什么平台。
我看到这个应用程序仍然是一个 alpha 版本,但也许随着接下来的几次更新,就会看到这个应用程序的完成版本。
我还没有试过它是否可以工作,但也许值得再写一篇简短的文章来告诉你如何使用它。
#### SIP/VoIP 支持
在 [GNOME 41 测试版][11] 中,我发现了对 SIP/VoIP 的支持。
如果你是一个商业用户或者经常打国际电话,你现在可以直接从 GNOME 41 的拨号盘上拨打 VoIP 电话了。
不幸的是,在使用带有 GNOME 41 稳定版的 GNOME OS 时,我无法找到包含的“<ruby>通话<rt>Calls</rt></ruby>”应用程序。所以,我无法截图给你看。
#### GNOME Web / Epiphany 的改进
![][12]
GNOME Web即 Epiphany 浏览器)最近进行了很多很棒的改进。
在 GNOME 41 中Epiphany 浏览器现在利用 AdGuard 的脚本来阻止 YouTube 广告。别忘了,它还增加了对 Epiphany canary 构建版的支持。
#### 其他改进
在底层,有一些细微但重要的变化带来了更好、更快的用户体验。
例如,你可能会注意到,在应用程序/窗口的标题区域,图标更加醒目。这是为了提高清晰度和增强外观。
同样地GNOME 应用程序和功能也有许多改进,你在使用它们时可能会发现:
* GNOME “<ruby>地图<rt>Map</rt></ruby>”现在以一种用户友好的方式显示平均海平面。
* Nautilus 文件管理器进行了改进,支持有密码保护的压缩文件,并能够让你切换启用/禁用自动清理垃圾的功能
* “<ruby>音乐<rt>Music</rt></ruby>”应用程序的用户界面进行了更新
* GNOME 文本编辑器有了更多功能
* GTK 更新至 4.4.0
* 增加 libadwaita以潜在地改善 GNOME 应用程序的用户体验
你可以参考 [官方更新日志和公告博文][13] 来探索所有的技术变化。
### 总结
GNOME 41 可能不是一个突破性的升级,但它是一个带有许多有价值的补充的重要更新。
你可以期待在下个月发布 Fedora 35 中带有它。不幸的是Ubuntu 21.10 将不包括它,但你可以在其他 Linux 发行版中等待它。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/gnome-41-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-os/
[2]: https://i2.wp.com/i.ytimg.com/vi/holOYrZquBQ/hqdefault.jpg?w=780&ssl=1
[3]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-software.png?w=1233&ssl=1
[4]: https://news.itsfoss.com/apps-for-gnome-portal/
[5]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-software-app.png?w=1284&ssl=1
[6]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-multitasking.png?w=1032&ssl=1
[7]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-power-settings.png?w=443&ssl=1
[8]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-power-options.png?w=1012&ssl=1
[9]: https://news.itsfoss.com/elementary-os-6-features/
[10]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/connections-gnome-41.png?w=1075&ssl=1
[11]: https://news.itsfoss.com/gnome-41-beta/
[12]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-web-41.png?w=1328&ssl=1
[13]: https://help.gnome.org/misc/release-notes/41.0/

View File

@ -0,0 +1,82 @@
[#]: subject: "Linux Gamers Can Finally Play Games like Apex Legends, Fortnite, Thanks to Easy Anti-Cheat Support"
[#]: via: "https://news.itsfoss.com/easy-anti-cheat-linux/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13821-1.html"
Linux 玩家终于可以玩《Apex Legends》、《Fortnite》等游戏了
======
> 如果你是一个狂热的多人游戏玩家你将能够玩到《Apex Legends》和《Fortnite》这样的热门游戏。但是你可能需要等待一段时间。
![](https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/easy-anti-cheat-linux.png?w=1200&ssl=1)
Linux 玩家们,这可是个大新闻啊!
Epic Games 为其“简易反作弊”服务增加了完整的 Linux 支持,官方提供了对 [SteamPlay][1](或 Proton和 Wine 的兼容性。
尽管我们预计这将在未来的某个时候发生,但 Steam Deck 的引入改变了 [在 Linux 上玩游戏][2] 的场景。
你可能知道Steam Deck 是由 Linux 驱动的,这就是为什么 Epic Games 有兴趣扩大对 Linux 平台的支持。
因此,可以说,如果不是 Valve 在 Steam Deck 上的努力,在 Linux 上获得“简易反作弊”支持的机会并不乐观。
### 多人游戏玩家可以考虑转到 Linux 上了
有了 [简易反作弊][3] 的支持许多流行的多人游戏如《Apex Legends》、《Fortnite》、《Tom Clancy's Division 2》、《Rust》 和其他许多游戏应该可以在 Linux 上完美地运行了。
根据 Epic Games 的公告:
> 从最新的 SDK 版本开始,开发者只需在 Epic 在线服务开发者门户点击几下,就可以通过 Wine 或 Proton 激活对 Linux 的反作弊支持。
因此,开发人员可能需要一段时间来激活对各种游戏的反作弊支持。但是,对于大多数带有简易反作弊功能的游戏来说,这应该是一个绿色信号。
少了一个 [Windows 与 Linux 双启动][4] 的理由。
《Apex Legends》 是我喜欢的多人游戏之一。而且,我不得不使用 Windows 来玩这个游戏。希望这种情况很快就会改变,在未来几周内,我可以在 Linux 上试一试!
同样,如果你几乎就要转到 Linux 了,但因为它与游戏的兼容性问题而迟疑,我想说问题已经解决了一半了!
当然,我们仍然需要对 BattleEye、其他反作弊服务和游戏客户端的官方支持。但是这是个开端。
### Steam Deck 现在是一个令人信服的游戏选择
虽然许多人不确定 Steam Deck 是否支持所有的 AAA 级游戏,但这应该会有所改善!
[Steam Deck][5] 现在应该是多人游戏玩家的一个简单选择。
### 总结
如果 Steam Deck 作为一个成功的掌上游戏机而成为了焦点,那么正如我们所知,在 Linux 上玩游戏也将发生改变。
而且,我认为 Epic Games 在其反作弊服务中加入 Linux 支持仅仅只是一个开始。
也许,我们永远都不用借助 [ProtonDB][6] 来在 Linux 上玩一个只有 Windows 支持的游戏谁知道呢但是在这之后Linux 游戏的未来似乎充满希望。
如果你是一个开发者,你可能想阅读 [该公告][7] 来获得最新的 SDK。
你对 Epic Games 将简易反作弊引入 Linux 有何看法?欢迎在下面的评论中分享你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/easy-anti-cheat-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/steam-play/
[2]: https://itsfoss.com/linux-gaming-guide/
[3]: https://www.easy.ac/en-us/
[4]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[5]: https://www.steamdeck.com/en/
[6]: https://www.protondb.com
[7]: https://dev.epicgames.com/en-US/news/epic-online-services-launches-anti-cheat-support-for-linux-mac-and-steam-deck

View File

@ -0,0 +1,80 @@
[#]: subject: "Ubuntu Touch OTA-19 Brings in Support for New Devices With Multiple Bug Fixes"
[#]: via: "https://news.itsfoss.com/ubuntu-touch-ota-19/"
[#]: author: "Rishabh Moharir https://news.itsfoss.com/author/rishabh/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu Touch OTA-19 Brings in Support for New Devices With Multiple Bug Fixes
======
Ubuntu Touch is an open-source OS for mobile devices that respects user privacy, unlike Google's Android, making it a good fit for privacy-focused users. The UBports community has released yet another update of Ubuntu Touch that is based on Ubuntu 16.04.
This release supports many new devices and contains significant updates on certain phones, along with numerous bug fixes.
Let's take a look at them.
![Source: UBports][1]
### Whats New?
#### Framework and Packages
The 16.04.7 Qt framework and packages like qml-module-qtwebview and libqt5webview5-dev have been added to the App framework. This improves application compatibility with other platforms.
#### Improvements to Halium Devices
The gyroscope and magnetic field sensors can now be accessed on the Halium 5.1 and 7.1 devices. Do note that the functionality of the compass is still under development. The same applies to the magnetic field sensor for the Halium 9 and 10 devices that now use sensorfw, thus replacing the legacy platform-API.
#### Pixel 3a
You can now completely shut down the device as intended. It no longer hangs during the shutdown process, so you should get better battery life. Additionally, the camera app no longer freezes while capturing sound during video recording.
#### Messaging App Fix
The messaging app has also received a minor update. When messages arrive, the keyboard no longer appears automatically; it shows up only when needed. This is useful if the user doesn't want to reply right away and prefers to bring up the keyboard on demand.
#### Media Hub
A bug that prevented the device from sleeping when audio was played in succession has been fixed. Another major bug that drastically reduced battery life, caused by requested wake locks that were never cleared, has also been taken care of.
### Other Improvements
Several new devices have been added to the list of supported Ubuntu Touch devices, along with fixes for Wi-Fi, audio, and the camera.
You can look at the [official release notes][2] to check the list of devices added and explore more technical details.
### Update or Installation
Ubuntu Touch users should receive the update automatically; otherwise, they can head to Updates in the System Settings to check for available updates.
Those willing to try out Ubuntu Touch for the first time can explore the [official website][3] and check if their device is supported correctly before installation.
After several minor updates, I am particularly looking forward to an Ubuntu Touch release with Ubuntu 18.04 or newer as its base.
_What do you think about this new update? Have you tried it yet?_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-touch-ota-19/
作者:[Rishabh Moharir][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/rishabh/
[b]: https://github.com/lujun9972
[1]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQyMiIgd2lkdGg9IjM0MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[2]: https://ubports.com/blog/ubports-news-1/post/ubuntu-touch-ota-19-release-3779
[3]: https://ubuntu-touch.io/get-ubuntu-touch

View File

@ -1,145 +0,0 @@
[#]: subject: (Can We Recommend Linux for Gaming in 2021?)
[#]: via: (https://news.itsfoss.com/linux-for-gaming-opinion/)
[#]: author: (Ankush Das https://news.itsfoss.com/author/ankush/)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Can We Recommend Linux for Gaming in 2021?
======
You will often hear Linux enthusiasts praise the improved gaming capabilities on Linux. Yes, we have come a long way considering the advancements made to support modern games on the Linux desktop.
Even the Lutris creator mentions in our interview that the [progress Linux has made in terms of gaming is simply incredible][1].
But, is it something to be hyped about? Can we recommend Linux to a gamer? Is Linux suitable for gaming?
In this article, I want to share a few things about gaming on a Linux system and share what I think about it.
### You Can Play Games on Linux: Yes!
If anyone's ever told you that you cannot game on Linux, **that is not true**.
You can play a variety of games on Linux without any major hiccups. And, for the most part, it is playable and totally a good experience.
In fact, we have an ultimate guide for [Gaming on Linux][2] if you do not know where to start.
### Do I Need a Specific Linux Distro to Play Games?
Not really. It depends on how convenient you want the experience to be.
For instance, if you want a Linux distribution to work well with your graphics driver and get the latest hardware support, there's something for that. Similarly, if you just want to play native Linux indie games with an integrated GPU, any Linux distro can work.
So, there are a few variables when choosing a Linux distribution for your gaming adventures.
Fret not, to help you out, we have a useful list of the [best Linux gaming distributions][3].
### Virtual Reality Games on Linux: Uh-Oh!
![][4]
I'm sure VR gaming is not something widely adopted yet. But, if you want the exciting experience on a VR headset, **choosing Linux as your preferred platform might be a bad idea**.
You do not have the necessary drivers or applications for a convenient experience on Linux. No distribution can help you solve this problem.
If you are curious, you can go through the details shed on the **state of virtual reality** in a blog post on [Boiling Steam][5] and an interesting experience with Valve's VR headset on [GamingOnLinux][6].
I've linked those blog posts for reference but long story short — avoid Linux if you want to experience VR games (feel free to experiment if you have the time though).
### Can You Play Windows Exclusive Games on Linux?
Yes and No.
You can use [Steam Play to play Windows-only games][7], **but it has its share of issues**. Not every game works.
For instance, I end up using Windows to play [Forza Horizon 4][8]. If you love car simulation or racing games, this is a masterpiece that you may not want to miss.
Maybe we will see it working through Steam Play without issues in the near future, who knows?
So, it is safe to assume that you will encounter many similar games that may not work at all. That's the bitter truth.
And, to know if the game works on Linux, head to [ProtonDB][9] and search for the game to see if it has a “**Gold**” status at the very least.
### Multiplayer Gaming With Anti-Cheat Engines: Does It Work?
![][10]
A huge chunk of gamers prefer playing multiplayer games like [Apex Legends][11], [Rainbow Six Siege][12], and [Fortnite][13].
However, some of those popular titles that rely on anti-cheat engines do not work on Linux yet. It is still a work in progress and may become possible in future Linux kernel releases — just not yet.
Do note that multiplayer games like [CS:GO][14], Dota 2, Team Fortress 2, [Valheim][15], and several more offer native Linux support and work great!
### Would I Recommend Linux for Gaming?
![][4]
Considering that you can play a lot of Windows-specific games, native indie games, and a variety of AAA games with native Linux support, I can recommend a first-time user to try gaming on Linux.
But, that comes with a **caution** — I would suggest you make a list of the games you want to play and make sure they run on Linux without any issues. Otherwise, you may end up wasting a lot of time troubleshooting with no results.
Not to forget, a big no to VR gaming on Linux, I believe.
And, if you want to explore all the latest and greatest titles, I will recommend you to stick to your Windows-powered gaming machine.
**While I should encourage more users to adopt Linux as a gaming platform, I won't ignore the practical reasons why common consumers still prefer a Windows-powered machine to game on.**
_What do you think? Do you agree with my thoughts? Feel free to share what you feel in the comments below!_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/linux-for-gaming-opinion/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/lutris-creator-interview/
[2]: https://itsfoss.com/linux-gaming-guide/
[3]: https://itsfoss.com/linux-gaming-distributions/
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzUyMScgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[5]: https://boilingsteam.com/the-state-of-virtual-reality-on-linux/
[6]: https://www.gamingonlinux.com/2020/08/my-experiences-of-valves-vr-on-linux
[7]: https://itsfoss.com/steam-play/
[8]: https://forzamotorsport.net/en-US/games/fh4
[9]: https://www.protondb.com/
[10]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9JzUyMCcgd2lkdGg9Jzc4MCcgeG1sbnM9J2h0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnJyB2ZXJzaW9uPScxLjEnLz4=
[11]: https://www.ea.com/games/apex-legends
[12]: https://www.ubisoft.com/en-us/game/rainbow-six/siege
[13]: https://www.epicgames.com/fortnite/en-US/home
[14]: https://store.steampowered.com/app/730/CounterStrike_Global_Offensive/
[15]: https://store.steampowered.com/app/892970/Valheim/

View File

@ -0,0 +1,111 @@
[#]: subject: "5 reasons to switch to Firefox right now"
[#]: via: "https://opensource.com/article/21/9/switch-to-firefox"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
5 reasons to switch to Firefox right now
======
Version 0.1 of Mozilla Firefox was released 19 years ago.
![red panda][1]
Mozilla Firefox was one of the applications that opened my eyes to open source. It wasn't by any means the tipping point, but it was part of a larger cumulative effect of several open source applications grabbing my attention, which ultimately resulted in me switching to Linux, and never looking back. Since switching to Firefox, which occurred well before I consciously changed to open source, I've been an avid Firefox user. My mobile phone was a Firefox OS phone, and it stayed that way until the project was abandoned. Interestingly, though, I didn't necessarily consider myself a Firefox _fan_. I used it then and continue to use it today because it continues to be the best browser available in many different ways. Here are five reasons you should [switch to Firefox][2] right now.
### 1\. Firefox is focused on privacy
The maintainer of Firefox is the [Mozilla foundation][3], a non-profit organization not motivated at all by your personal data. Mozilla doesn't care about what you search for, what websites you visit, or how much time you spend on the web. That's just not Mozilla's business model, but it is expressly the business model of many other popular browsers.
Even if you don't object to a browser tracking your activities for privacy reasons, you may have had the experience of making that one-time purchase of an unusual gift, only to have every site you visit try to sell you that item for the rest of your life. The Internet is big, so it sometimes seems like a good idea to restructure it based on users' interests. I admit that if all of my searches could have "open source" appended to them, it would probably provide more relevant results. But then again, I'd rather opt in to that kind of optimization rather than have it, along with many other unknowns, being decided for me, outside my control.
![Privacy in Firefox][4]
Privacy in Firefox (Seth Kenlon, CC BY-SA 4.0) 
Mozilla has this policy: [Take less. Keep it safe. No secrets.][5]
Never one to stop short, Firefox also offers account [monitoring][6], an optional service from Mozilla that alerts you if any of your online accounts become compromised through a large-scale data breach. Additionally, Mozilla offers a paid VPN service using open source Wireguard software so you can browse safely from anywhere.
### 2\. Firefox can use containers
It may be hard to believe, but there was a time when leading web browsers did not feature tabs. In the late 90s and early 00s, when you wanted to visit two web pages simultaneously, you had to open two separate browser windows. Firefox (and the Mozilla browser before it) was an early adopter of the tabbed interface.
Tabs are expected in browsers now, but through Firefox's extensions, there's been an interesting new twist on the power of the tabbed interface. Developed by Mozilla itself, the [Firefox Multi-Account Containers plugin][7] can turn each tab into an isolated "container" within your browser.
![Containerized tabs in Firefox][8]
Containerized tabs in Firefox (Seth Kenlon, CC BY-SA 4.0) 
For instance, say your employer uses Google Apps, but you don't trust Google with your personal information. You can use the Multi-Account Containers plugin to isolate your work activities so that Google touches only your professional life and has no access to any other part of your life.
You can even open the same site with two different accounts, and as a bonus, the tabs are color-coded, so it's useful whether you're looking to isolate sites or just add new visual cues to your browser.
### 3\. Firefox user interface design
As much as we humans tend to get excited about new things, there's just as much comfort to be found in something familiar and reliable. Firefox has updated its interface over the years, and it's had its fair share of innovations that are now unofficial industry standards, but overall it has remained very much the same. Its user interface retains all the standard conventions you already take for granted.
When you download a file, you're prompted for how you want Firefox to handle the file. At your option, you can open the file in an appropriate application or save it to your hard drive. You can prompt Firefox to remember your choices for the future or have it continue to prompt you.
![Firefox user interface][9]
Firefox user interface (Seth Kenlon, CC BY-SA 4.0)
When you need an application menu, you can find it in a modern-style "hamburger" menu, or else you can press the **Alt** key to show a traditional menu along the top of the Firefox window.
Everything in Firefox is familiar, whether you're a long-time user of Firefox or not because it builds on years of user interface design. Where it can innovate, it does, but where it's counter-productive to change something intuitive, it refrains.
### 4\. Developer tools in Firefox
Back in the early days of the world wide web, you could navigate to any website and view the source code. There was a high chance of learning HTML just from doing this a few times. Everything was open, transparent, obvious, and relatively straightforward.
The Internet has evolved into a powerful [cloud-based supercomputer][10], and extracting meaningful context from a website now sometimes requires more than just a text dump of its underlying markup. To ensure everyone can reverse engineer (and engineer) how a website functions, Firefox incorporates a set of powerful development tools into the browser.
![Developer tools][11]
Developer tools (Seth Kenlon, CC BY-SA 4.0) 
Although this was a Firefox-based innovation originally (by [Firebug][12] back in 2006), many browsers have a devtools feature now. Not all devtools are equal, though, and it's Firefox's developer panels that make Firefox my go-to browser for web design and UX testing.
### 5\. Firefox is open source
Most importantly, Firefox is fully open source. It's an excellent browser with nothing to hide. It's got no ulterior motive aside from keeping the web open, educating people about the Internet, and promoting open source solutions to everyday tasks.
![Firefox is open source][13]
Firefox is open source (Seth Kenlon, CC BY-SA 4.0) 
You can contribute to Firefox. You can file bugs about things that you don't like. You can see the code you run when you interface with the Internet. Firefox has taken a stand for the open web for decades. It's stayed true to its principles and arguably has forced the hand of several competitors who probably wouldn't have chosen to go open source if Firefox hadn't set the public's expectations.
Firefox is a powerful force on the modern Internet, and it's a great browser. Firefox runs on your desktop and mobile devices, so do yourself a favor and [get Firefox][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/switch-to-firefox
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/redpanda_firefox_pet_animal.jpg?itok=aSpKsyna (red panda)
[2]: http://getfirefox.org
[3]: https://foundation.mozilla.org/en/
[4]: https://opensource.com/sites/default/files/firefox-privacy.jpg
[5]: https://blog.mozilla.org/en/products/firefox/firefox-data-privacy-promise/
[6]: https://monitor.firefox.com
[7]: https://github.com/mozilla/multi-account-containers#readme
[8]: https://opensource.com/sites/default/files/firefox-container.jpg
[9]: https://opensource.com/sites/default/files/screenshot_from_2021-09-16_12-13-14.png
[10]: https://www.redhat.com/en/products/open-hybrid-cloud
[11]: https://opensource.com/sites/default/files/firefox-dev.jpg
[12]: https://getfirebug.com/
[13]: https://opensource.com/sites/default/files/firefox-open.jpg
[14]: http://getfirefox.com

View File

@ -2,7 +2,7 @@
[#]: via: (https://opensource.com/article/21/6/what-config-files)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (unigeorge)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -168,7 +168,7 @@ via: https://opensource.com/article/21/6/what-config-files
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,708 +0,0 @@
[#]: subject: "Code memory safety and efficiency by example"
[#]: via: "https://opensource.com/article/21/8/memory-programming-c"
[#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Code memory safety and efficiency by example
======
Learn more about memory safety and efficiency
![Code going into a computer.][1]
C is a high-level language with close-to-the-metal features that make it seem, at times, more like a portable assembly language than a sibling of Java or Python. Among these features is memory management, which covers an executing program's safe and efficient use of memory. This article goes into the details of memory safety and efficiency through code examples in C and a code segment from the assembly language that a modern C compiler generates.
Although the code examples are in C, the guidelines for safe and efficient memory management are the same for C++. The two languages differ in various details (e.g., C++ has object-oriented features and generics that C lacks), but these languages share the very same challenges with respect to memory management.
### Overview of memory for an executing program
For an executing program (aka _process_), memory is partitioned into three areas: The **stack**, the **heap**, and the **static area**. Here's an overview of each, with full code examples to follow.
As a backup for general-purpose CPU registers, the _stack_ provides scratchpad storage for the local variables within a code block, such as a function or a loop body. Arguments passed to a function count as local variables in this context. Consider a short example:
```
void some_func(int a, int b) {
   int n;
   ...
}
```
Storage for the arguments passed in parameters **a** and **b** and the local variable **n** would come from the stack unless the compiler could find general-purpose registers instead. The compiler favors such registers for scratchpad because CPU access to these registers is fast (one clock tick). However, these registers are few (roughly sixteen) on the standard architectures for desktop, laptop, and handheld machines.
At the implementation level, which only an assembly-language programmer would see, the stack is organized as a LIFO (Last In, First Out) list with **push** (insert) and **pop** (remove) operations. The **top** pointer can act as a base address for offsets; in this way, stack locations other than **top** become accessible. For example, the expression **top+16** points to a location sixteen bytes above the stack's **top**, and the expression **top-16** points to sixteen bytes below the **top**. Accordingly, stack locations that implement scratchpad storage are accessible through the **top** pointer. On a standard ARM or Intel architecture, the stack grows from high to low memory addresses; hence, to decrement **top** is to grow the stack for a process.
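To make that growth direction concrete, here is a minimal sketch of mine (not one of the article's listings) that prints the addresses of locals in nested calls; on a typical Intel or ARM build, the deeper call's local sits at a lower address. Because each variable's address is taken, the compiler must give it a stack slot rather than a register.

```
#include <stdio.h>

void inner(void) {
  int inner_local = 2;                    /* scratchpad for inner */
  printf("inner_local at %p\n", (void*) &inner_local);
}

void outer(void) {
  int outer_local = 1;                    /* scratchpad for outer */
  printf("outer_local at %p\n", (void*) &outer_local);
  inner();                                /* deeper call: stack grows toward lower addresses */
}

int main() {
  outer();
  return 0;
}
```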
To use the stack is to use memory effortlessly and efficiently. The compiler, rather than the programmer, writes the code that manages the stack by allocating and deallocating the required scratchpad storage; the programmer declares function arguments and local variables, leaving the implementation to the compiler. Moreover, the very same stack storage can be reused across consecutive function calls and code blocks such as loops. Well-designed modular code makes stack storage the first memory option for scratchpad, with an optimizing compiler using, whenever possible, general-purpose registers instead of the stack.
The **heap** provides storage allocated explicitly through programmer code, although the syntax for heap allocation differs across languages. In C, a successful call to the library function **malloc** (or variants such as **calloc**) allocates a specified number of bytes. (In languages such as C++ and Java, the **new** operator serves the same purpose.) Programming languages differ dramatically on how heap-allocated storage is deallocated:
* In languages such as Java, Go, Lisp, and Python, the programmer does not explicitly deallocate dynamically allocated heap storage.
For example, this Java statement allocates heap storage for a string and stores the address of this heap storage in the variable **greeting**:
```
String greeting = new String("Hello, world!");
```
Java has a garbage collector, a runtime utility that automatically deallocates heap storage that is no longer accessible to the process that allocated the storage. Java heap deallocation is thus automatic through a garbage collector. In the example above, the garbage collector would deallocate the heap storage for the string after the variable **greeting** went out of scope.
* The Rust compiler writes the heap-deallocation code. This is Rust's pioneering effort to automate heap-deallocation without relying on a garbage collector, which entails runtime complexity and overhead. Hats off to the Rust effort!
* In C (and C++), heap deallocation is a programmer task. The programmer who allocates heap storage through a call to **malloc** is then responsible for deallocating this same storage with a matching call to the library function **free**. (In C++, the **new** operator allocates heap storage, whereas the **delete** and **delete[]** operators free such storage.) Here's a C example:
```
char* greeting = malloc(14);       /* 14 heap bytes */
strcpy(greeting, "Hello, world!"); /* copy greeting into bytes */
puts(greeting);                    /* print greeting */
free(greeting);                    /* free malloced bytes */
```
C avoids the cost and complexity of a garbage collector, but only by burdening the programmer with the task of heap deallocation.
The **static area** of memory provides storage for executable code such as C functions, string literals such as "Hello, world!", and global variables:
```
int n;                       /* global variable */
int main() {                 /* function */
   char* msg = "No comment"; /* string literal */
   ...
}
```
This area is static in that its size remains fixed from the start until the end of process execution. Because the static area amounts to a fixed-sized memory footprint for a process, the rule of thumb is to keep this area as small as possible by avoiding, for example, global arrays.
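As a hedged illustration of that rule of thumb (my sketch, not part of the article's examples), the global array below adds roughly 4MB to the process's fixed memory footprint for its entire run, whereas the heap variant pays for the table only while it is actually needed:

```
#include <stdlib.h>

#define TableSize (1024 * 1024)   /* one million ints */

int global_table[TableSize];      /* static area: ~4MB held from start to finish */

void use_table() {                /* heap alternative: storage held only while needed */
  int* table = malloc(TableSize * sizeof(int));
  if (NULL == table) return;
  /* ... work with the table ... */
  free(table);                    /* footprint shrinks again */
}

int main() {
  use_table();
  return 0;
}
```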
Code examples in the following sections flesh out this overview.
### Stack storage
Imagine a program that has various tasks to perform consecutively, including processing numeric data downloaded every few minutes over a network and stored in a local file. The **stack** program below simplifies the processing (odd integer values are made even) to keep the focus on the benefits of stack storage.
```
#include <stdio.h>
#include <stdlib.h>
#define Infile   "incoming.dat"
#define Outfile  "outgoing.dat"
#define IntCount 128000  /* 128,000 */
void other_task1() { /*...*/ }
void other_task2() { /*...*/ }
void process_data(const char* infile,
          const char* outfile,
          const unsigned n) {
  int nums[n];
  FILE* input = fopen(infile, "r");
  if (NULL == input) return;         /* check the FILE*, not the filename */
  FILE* output = fopen(outfile, "w");
  if (NULL == output) {
    fclose(input);
    return;
  }
  fread(nums, n, sizeof(int), input); /* read input data */
  unsigned i;
  for (i = 0; i < n; i++) {
    if (1 == (nums[i] & 0x1))  /* odd parity? */
      nums[i]--;               /* make even */
  }
  fclose(input);               /* close input file */
  fwrite(nums, n, sizeof(int), output);
  fclose(output);
}
int main() {
  process_data(Infile, Outfile, IntCount);
  
  /** now perform other tasks **/
  other_task1(); /* automatically released stack storage available */
  other_task2(); /* ditto */
  
  return 0;
}
```
The **main** function at the bottom first calls the **process_data** function, which creates a stack-based array of a size given by argument **n** (128,000 in the current example). Accordingly, the array holds 128,000 x **sizeof(int)** bytes, which comes to 512,000 bytes on standard devices because an **int** is four bytes on these devices. Data then are read into the array (using library function **fread**), processed in a loop, and saved to the local file **outgoing.dat** (using library function **fwrite**).
When the **process_data** function returns to its caller **main**, the roughly 500KB of stack scratchpad for the **process_data** function become available for other functions in the **stack** program to use as scratchpad. In this example, **main** next calls the stub functions **other_task1** and **other_task2**. The three functions are called consecutively from **main**, which means that all three can use the same stack storage for scratchpad. Because the compiler rather than the programmer writes the stack-management code, this approach is both efficient and easy on the programmer.
In C, any variable defined inside a block (e.g., a functions or a loops body) has an **auto** storage class by default, which means that the variable is stack-based. The storage class **register** is now outdated because C compilers are aggressive, on their own, in trying to use CPU registers whenever possible. Only a variable defined inside a block may be **register**, which the compiler changes to **auto** if no CPU register is available.
Stack-based programming may be the preferred way to go, but this style does have its challenges. The **badStack** program below illustrates.
```
#include <stdio.h>
const int* get_array(const unsigned n) {
  int arr[n]; /* stack-based array */
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = 1 + 1;
  return arr;  /** ERROR **/
}
int main() {
  const unsigned n = 16;
  const int* ptr = get_array(n);
  
  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", ptr[i]);
  puts("\n");
  return 0;
}
```
The flow of control in the **badStack** program is straightforward. Function **main** calls function **get_array** with an argument of 16, which the called function then uses to create a local array of this size. The **get_array** function initializes the array and returns to **main** the arrays identifier **arr**, which is a pointer constant that holds the address of the arrays first **int** element.
The local array **arr** is accessible within the **get_array** function, of course, but this array cannot be legitimately accessed once **get_array** returns. Nonetheless, function **main** tries to print the stack-based array by using the stack address **arr**, which function **get_array** returns. Modern compilers warn about the mistake. For example, here's the warning from the GNU compiler:
```
badStack.c: In function 'get_array':
badStack.c:9:10: warning: function returns address of local variable [-Wreturn-local-addr]
8 |   return arr;  /** ERROR **/
```
The general rule is that stack-based storage should be accessed only within the code block that contains the local variables implemented with stack storage (in this case, the array pointer **arr** and the loop counter **i**). Accordingly, a function should never return a pointer to stack-based storage.
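One hedged way to repair **badStack** while staying on the stack (my sketch; the next section shows the heap-based alternative) is to have the caller own the buffer, so no stack address ever escapes the function that created it. The helper name **fill_ints** is hypothetical, not from the article.

```
#include <stdio.h>

/* Hypothetical fix: the caller supplies storage that outlives the call. */
void fill_ints(int* arr, unsigned n) {
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = 1 + 1;
}

int main() {
  const unsigned n = 16;
  int nums[n];                  /* main's own stack scratchpad */
  unsigned i;
  fill_ints(nums, n);
  for (i = 0; i < n; i++) printf("%i ", nums[i]);
  puts("\n");
  return 0;
}
```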
### Heap storage
Several code examples highlight the fine points of using heap storage in C. In the first example, heap storage is allocated, used, and then freed in line with best practice. The second example nests heap storage inside other heap storage, which complicates the deallocation operation.
```
#include <stdio.h>
#include <stdlib.h>
int* get_heap_array(unsigned n) {
  int* heap_nums = malloc(sizeof(int) * n);
  
  unsigned i;
  for (i = 0; i < n; i++)
    heap_nums[i] = i + 1;  /* initialize the array */
  
  /* stack storage for variables heap_nums and i released
     automatically when get_num_array returns */
  return heap_nums; /* return (copy of) the pointer */
}
int main() {
  unsigned n = 100, i;
  int* heap_nums = get_heap_array(n); /* save returned address */
  
  if (NULL == heap_nums) /* malloc failed */
    fprintf(stderr, "%s\n", "malloc(...) failed...");
  else {
    for (i = 0; i < n; i++) printf("%i\n", heap_nums[i]);
    free(heap_nums); /* free the heap storage */
  }
  return 0; 
}
```
The **heap** program above has two functions: **main** calls **get_heap_array** with an argument (currently 100) that specifies how many **int** elements the array should have. Because the heap allocation could fail, **main** checks whether **get_heap_array** has returned **NULL**, which signals failure. If the allocation succeeds, **main** prints the **int** values in the array—and immediately thereafter deallocates, with a call to library function **free**, the heap-allocated storage. This is best practice.
The **get_heap_array** function opens with this statement, which merits a closer look:
```
int* heap_nums = malloc(sizeof(int) * n); /* heap allocation */
```
The **malloc** library function and its variants deal with bytes; hence, the argument to **malloc** is the number of bytes required for **n** elements of type **int**. (The **sizeof(int)** is four bytes on a standard modern device.) The **malloc** function returns either the address of the first among the allocated bytes or, in case of failure, **NULL**.
In a successful call to **malloc**, the returned address is 64-bits in size on a modern desktop machine. On handhelds and earlier desktop machines, the address might be 32-bits in size or, depending on age, even smaller. The elements in the heap-allocated array are of type **int**, a four-byte signed integer. The address of these heap-allocated **int**s is stored in the local variable **heap_nums**, which is stack-based. Here's a depiction:
```
                 heap-based
 stack-based        /
     \        +----+----+   +----+
 heap-nums--->|int1|int2|...|intN|
              +----+----+   +----+
```
Once the **get_heap_array** function returns, stack storage for pointer variable **heap_nums** is reclaimed automatically—but the heap storage for the dynamic **int** array persists, which is why the **get_heap_array** function returns (a copy of) this address to **main**, which now is responsible, after printing the array's integers, for explicitly deallocating the heap storage with a call to the library function **free**:
```
free(heap_nums); /* free the heap storage */
```
The **malloc** function does not initialize heap-allocated storage, which therefore contains random values. By contrast, the **calloc** variant initializes the allocated storage to zeros. Both functions return **NULL** to signal failure.
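A small sketch of the difference (mine, not from the article): the **calloc** block below starts out zeroed, while the **malloc** block's contents are unspecified until the program stores something in them.

```
#include <stdio.h>
#include <stdlib.h>

int main() {
  unsigned n = 5, i;
  int* from_malloc = malloc(n * sizeof(int)); /* contents indeterminate */
  int* from_calloc = calloc(n, sizeof(int));  /* contents zeroed */

  if (NULL == from_malloc || NULL == from_calloc) {
    fprintf(stderr, "%s\n", "allocation failed...");
    free(from_malloc);                        /* free(NULL) is harmless */
    free(from_calloc);
    return 1;
  }

  for (i = 0; i < n; i++) printf("%i ", from_calloc[i]); /* prints: 0 0 0 0 0 */
  puts("\n");
  free(from_malloc);
  free(from_calloc);
  return 0;
}
```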
In the **heap** example, **main** returns immediately after calling **free**, and the executing program terminates, which allows the system to reclaim any allocated heap storage. Nonetheless, the programmer should develop the habit of explicitly freeing heap storage as soon as it is no longer needed.
### Nested heap allocation
The next code example is trickier. C has various library functions that return a pointer to heap storage. Here's a familiar scenario:
1\. The C program invokes a library function that returns a pointer to heap-based storage, typically an aggregate such as an array or a structure:
```
SomeStructure* ptr = lib_function(); /* returns pointer to heap storage */
```
2\. The program then uses the allocated storage.
3\. For cleanup, the issue is whether a simple call to **free** will clean up all of the heap-allocated storage that the library function allocates. For example, the **SomeStructure** instance may have fields that, in turn, point to heap-allocated storage. A particularly troublesome case would be a dynamically allocated array of structures, each of which has a field pointing to more dynamically allocated storage.
The following code example illustrates the problem and focuses on designing a library that safely provides heap-allocated storage to clients.
```
#include <stdio.h>
#include <stdlib.h>
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums;
} HeapStruct;
unsigned structId = 1;
HeapStruct* get_heap_struct(unsigned n) {
  /* Try to allocate a HeapStruct. */
  HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
  if (NULL == heap_struct) /* failure? */
    return NULL;           /* if so, return NULL */
  /* Try to allocate floating-point aggregate within HeapStruct. */
  heap_struct->heap_nums = malloc(sizeof(float) * n);
  if (NULL == heap_struct->heap_nums) {  /* failure? */
    free(heap_struct);                   /* if so, first free the HeapStruct */
    return NULL;                         /* then return NULL */
  }
  /* Success: set fields */
  heap_struct->id = structId++;
  heap_struct->len = n;
  return heap_struct; /* return pointer to allocated HeapStruct */
}
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */
}
int main() {
  const unsigned n = 100;
  HeapStruct* hs = get_heap_struct(n); /* get structure with N floats */
  /* Do some (meaningless) work for demo. */
  unsigned i;
  for (i = 0; i < n; i++) hs->heap_nums[i] = 3.14 + (float) i;
  for (i = 0; i < n; i += 10) printf("%12f\n", hs->heap_nums[i]);
  free_all(hs); /* free dynamically allocated storage */
  
  return 0;
}
```
The **nestedHeap** example above centers on a structure **HeapStruct** with a pointer field named **heap_nums**:
```
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums; /** pointer **/
} HeapStruct;
```
The function **get_heap_struct** tries to allocate heap storage for a **HeapStruct** instance, which entails allocating heap storage for a specified number of **float** variables to which the field **heap_nums** points. The result of a successful call to **get_heap_struct** can be depicted as follows, with **hs** as the pointer to the heap-allocated structure:
```
hs-->HeapStruct instance
        id
        len
        heap_nums-->N contiguous float elements
```
In the **get_heap_struct** function, the first heap allocation is straightforward:
```
HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
if (NULL == heap_struct) /* failure? */
  return NULL;           /* if so, return NULL */
```
The **sizeof(HeapStruct)** includes the bytes (four on a 32-bit machine, eight on a 64-bit machine) for the **heap_nums** field, which is a pointer to the **float** elements in a dynamically allocated array. At issue, then, is whether the **malloc** delivers the bytes for this structure or **NULL** to signal failure; if **NULL**, the **get_heap_struct** function returns **NULL** to notify the caller that the heap allocation failed.
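To see that the structure's own size covers only the pointer field, not the **float** elements it will later point to, here is a quick check of mine (exact numbers depend on pointer width and padding):

```
#include <stdio.h>

typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums;
} HeapStruct;

int main() {
  /* On a typical 64-bit build: 4 + 4 + 8 = 16 bytes, padding permitting. */
  printf("sizeof(HeapStruct) = %zu\n", sizeof(HeapStruct));
  printf("sizeof(float*)     = %zu\n", sizeof(float*));
  return 0;
}
```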
The second attempted heap allocation is more complicated because, at this step, heap storage for the **HeapStruct** has been allocated:
```
heap_struct->heap_nums = malloc(sizeof(float) * n);
if (NULL == heap_struct->heap_nums) {  /* failure? */
  free(heap_struct);                   /* if so, first free the HeapStruct */
  return NULL;                         /* and then return NULL */
}
```
The argument **n** sent to the **get_heap_struct** function indicates how many **float** elements should be in the dynamically allocated **heap_nums** array. If the required **float** elements can be allocated, then the function sets the structure's **id** and **len** fields before returning the heap address of the **HeapStruct**. If the attempted allocation fails, however, two steps are necessary to meet best practice:
1\. The storage for the **HeapStruct** must be freed to avoid memory leakage. Without the dynamic **heap_nums** array, the **HeapStruct** is presumably of no use to the client function that calls **get_heap_struct**; hence, the bytes for the **HeapStruct** instance should be explicitly deallocated so that the system can reclaim these bytes for future heap allocations.
2\. **NULL** is returned to signal failure.
If the call to the **get_heap_struct** function succeeds, then freeing the heap storage is also tricky because it involves two **free** operations in the proper order. Accordingly, the program includes a **free_all** function instead of requiring the programmer to figure out the appropriate two-step deallocation. For review, here's the **free_all** function:
```
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */
}
```
After checking that the argument **heap_struct** is not **NULL**, the function first frees the **heap_nums** array, which requires that the **heap_struct** pointer is still valid. It would be an error to release the **heap_struct** first. Once the **heap_nums** have been deallocated, the **heap_struct** can be freed as well. If **heap_struct** were freed, but **heap_nums** were not, then the **float** elements in the array would be leakage: still allocated bytes but with no possibility of access—hence, of deallocation. The leakage would persist until the **nestedHeap** program exited and the system reclaimed the leaked bytes.
A few cautionary notes on the **free** library function are in order. Recall the sample calls above:
```
free(heap_struct->heap_nums); /* first free encapsulated aggregate */
free(heap_struct);            /* then free containing structure */
```
These calls free the allocated storage—but they do _not_ set their arguments to **NULL**. (The **free** function gets a copy of an address as an argument; hence, changing the copy to **NULL** would leave the original unchanged.) For example, after a successful call to **free**, the pointer **heap_struct** still holds a heap address of some heap-allocated bytes, but using this address now would be an error because the call to **free** gives the system the right to reclaim and then reuse the allocated bytes.
Calling **free** with a **NULL** argument is pointless but harmless. Calling **free** repeatedly on a non-**NULL** address is an error with indeterminate results:
```
free(heap_struct);  /* 1st call: ok */
free(heap_struct);  /* 2nd call: ERROR */
```
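Since **free** cannot change the caller's copy of the pointer, one common defensive habit (a sketch, not something the article prescribes) is to null the pointer immediately after freeing it, so that an accidental repeat degenerates into the harmless **free(NULL)** case:

```
#include <stdlib.h>

/* Hypothetical convenience macro: free a block and null the pointer in one step. */
#define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

int main() {
  int* nums = malloc(10 * sizeof(int));
  FREE_AND_NULL(nums);  /* storage released, nums is now NULL */
  FREE_AND_NULL(nums);  /* second call is free(NULL): pointless but harmless */
  return 0;
}
```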
### Memory leakage and heap fragmentation
The phrase "memory leakage" refers to dynamically allocated heap storage that is no longer accessible. Here's a code segment for review:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
nums = malloc(sizeof(float) * 25);        /* 25 new floats */
```
Assume that the first **malloc** succeeds. The second **malloc** resets the **nums** pointer, either to **NULL** (allocation failure) or to the address of the first **float** among newly allocated twenty-five. Heap storage for the initial ten **float** elements remains allocated but is now inaccessible because the **nums** pointer either points elsewhere or is **NULL**. The result is forty bytes (**sizeof(float) * 10**) of leakage.
Before the second call to **malloc**, the initially allocated storage should be freed:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
free(nums);                               /** good **/
nums = malloc(sizeof(float) * 25);        /* no leakage */
```
Even without leakage, the heap can fragment over time, which then requires system defragmentation. For example, suppose that the two biggest heap chunks are currently of sizes 200MB and 100MB. However, the two chunks are not contiguous, and process **P** needs to allocate 250MB of contiguous heap storage. Before the allocation can be made, the system must _defragment_ the heap to provide 250MB contiguous bytes for **P**. Defragmentation is complicated and, therefore, time-consuming.
Memory leakage promotes fragmentation by creating allocated but inaccessible heap chunks. Freeing no-longer-needed heap storage is, therefore, one way that a programmer can help to reduce the need for defragmentation.
### Tools to diagnose memory leakage
Various tools are available for profiling memory efficiency and safety. My favorite is [valgrind][11]. To illustrate how the tool works for memory leaks, here's the **leaky** program:
```
#include <stdio.h>
#include <stdlib.h>
int* get_ints(unsigned n) {
  int* ptr = malloc(n * sizeof(int));
  if (ptr != NULL) {
    unsigned i;
    for (i = 0; i < n; i++) ptr[i] = i + 1;
  }
  return ptr;
}
void print_ints(int* ptr, unsigned n) {
  unsigned i;
  for (i = 0; i < n; i++) printf("%3i\n", ptr[i]);
}
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) print_ints(arr, n);
  /** heap storage not yet freed... **/
  return 0;
}
```
The function **main** calls **get_ints**, which tries to **malloc** thirty-two 4-byte **int**s from the heap and then initializes the dynamic array if the **malloc** succeeds. On success, the **main** function then calls **print_ints**. There is no call to **free** to match the call to **malloc**; hence, memory leaks.
With the **valgrind** toolbox installed, the command below checks the **leaky** program for memory leaks (**%** is the command-line prompt):
```
% valgrind --leak-check=full ./leaky
```
Below is most of the output. The number on the left, 207683, is the process identifier of the executing **leaky** program. The report provides details of where the leak occurs, in this case, from the call to **malloc** within the **get_ints** function that **main** calls.
```
==207683== HEAP SUMMARY:
==207683==   in use at exit: 128 bytes in 1 blocks
==207683==   total heap usage: 2 allocs, 1 frees, 1,152 bytes allocated
==207683== 
==207683== 128 bytes in 1 blocks are definitely lost in loss record 1 of 1
==207683==   at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==207683==   by 0x109186: get_ints (in /home/marty/gc/leaky)
==207683==   by 0x109236: main (in /home/marty/gc/leaky)
==207683== 
==207683== LEAK SUMMARY:
==207683==   definitely lost: 128 bytes in 1 blocks
==207683==   indirectly lost: 0 bytes in 0 blocks
==207683==   possibly lost: 0 bytes in 0 blocks
==207683==   still reachable: 0 bytes in 0 blocks
==207683==   suppressed: 0 bytes in 0 blocks
```
If function **main** is revised to include a call to **free** right after the one to **print_ints**, then **valgrind** gives the **leaky** program a clean bill of health:
```
==218462== All heap blocks were freed -- no leaks are possible
```
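For completeness, here is roughly what that revision looks like (my reconstruction of the fix the article describes; it drops into the **leaky** program above in place of its **main**):

```
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) {
    print_ints(arr, n);
    free(arr);   /* matching free: nothing left for valgrind to report */
  }
  return 0;
}
```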
### Static area storage
In orthodox C, a function must be defined outside all blocks. This rules out having one function defined inside the body of another, a feature that some C compilers support. My examples stick with functions defined outside all blocks. Such a function is either **static** or **extern**, with **extern** as the default.
C functions and variables with either **static** or **extern** as their storage class reside in what I've been calling the **static area** of memory because this area has a fixed size during program execution. The syntax for these two storage classes is complicated enough to merit a review. After the review, a full code example brings the syntactic details back to life. Functions or variables defined outside all blocks default to **extern**; hence, the storage class **static** must be explicit for both functions and variables:
```
/** file1.c: outside all blocks, five definitions  **/
int foo(int n) { return n * 2; }     /* extern by default */
static int bar(int n) { return n; }  /* static */
extern int baz(int n) { return -n; } /* explicitly extern */
int num1;        /* extern */
static int num2; /* static */
```
The difference between **extern** and **static** comes down to scope: an **extern** function or variable may be visible across files. By contrast, a **static** function is visible only in the file that contains the function's _definition_, and a **static** variable is visible only in the file (or a block therein) that has the variable's _definition_:
```
static int n1;    /* scope is the file */
void func() {
   static int n2; /* scope is func's body */
   ...
}
```
If a **static** variable such as **n1** above is defined outside all blocks, the variable's scope is the file in which the variable is defined. Wherever a **static** variable may be defined, storage for the variable is in the static area of memory.
An **extern** function or variable is defined outside all blocks in a given file, but the function or variable so defined then may be declared in some other file. The typical practice is to _declare_ such a function or variable in a header file, which is included wherever needed. Some short examples clarify these tricky points.
Suppose that the **extern** function **foo** is _defined_ in **file1.c**, with or without the keyword **extern**:
```
/** file1.c **/
int foo(int n) { return n * 2; } /* definition has a body {...} */
```
This function must be _declared_ with an explicit **extern** in any other file (or block therein) for the function to be visible. Here's the declaration that makes the **extern** function **foo** visible in file **file2.c**:
```
/** file2.c: make function foo visible here **/
extern int foo(int); /* declaration (no body) */
```
Recall that a function declaration does not have a body enclosed in curly braces, whereas a function definition does have such a body.
For review, header files typically contain function and variable declarations. Source-code files that require the declarations then **#include** the relevant header file(s). The **staticProg** program in the next section illustrates this approach.
The rules get trickier (sorry!) with **extern** variables. Any **extern** object—function or variable—must be _defined_ outside all blocks. Also, a variable defined outside all blocks defaults to **extern**:
```
/** outside all blocks **/
int n; /* defaults to extern */
```
However, the **extern** can be explicit in the variable's _definition_ only if the variable is initialized explicitly there:
```
/** file1.c: outside all blocks **/
int n1;             /* defaults to extern, initialized by compiler to zero */
extern int n2 = -1; /* ok, initialized explicitly */
int n3 = 9876;      /* ok, extern by default and initialized explicitly */
```
For a variable defined as **extern** in **file1.c** to be visible in another file such as **file2.c**, the variable must be _declared_ as explicitly **extern** in **file2.c** and not initialized, which would turn the declaration into a definition:
```
/** file2.c **/
extern int n1; /* declaration of n1 defined in file1.c */
```
To avoid confusion with **extern** variables, the rule of thumb is to use **extern** explicitly in a _declaration_ (required) but not in a _definition_ (optional and tricky). For functions, the **extern** is optional in a definition but needed for a declaration. The **staticProg** example in the next section brings these points together in a full program.
### The staticProg example
The **staticProg** program consists of three files: two C source files (**static1.c** and **static2.c**) together with a header file (**static.h**) that contains two declarations:
```
/** header file static.h **/
#define NumCount 100               /* macro */
extern int global_nums[NumCount];  /* array declaration */
extern void fill_array();          /* function declaration */
```
The **extern** in the two declarations, one for an array and the other for a function, underscores that the objects are _defined_ elsewhere ("externally"): the array **global_nums** is defined in file **static1.c** (without an explicit **extern**) and the function **fill_array** is defined in file **static2.c** (also without an explicit **extern**). Each source file includes the header file **static.h**.
The **static1.c** file defines the two arrays that reside in the static area of memory, **global_nums** and **more_nums**. The second array has a **static** storage class, which restricts its scope to the file (**static1.c**) in which the array is defined. As noted, **global_nums** as **extern** can be made visible in multiple files.
```
/** static1.c **/
#include <stdio.h>
#include <stdlib.h>
#include "static.h"             /* declarations */
int global_nums[NumCount];      /* definition: extern (global) aggregate */
static int more_nums[NumCount]; /* definition: scope limited to this file */
int main() {
  fill_array(); /** defined in file static2.c **/
  unsigned i;
  for (i = 0; i < NumCount; i++)
    more_nums[i] = i * -1;
  /* confirm initialization worked */
  for (i = 0; i < NumCount; i += 10)
    printf("%4i\t%4i\n", global_nums[i], more_nums[i]);
    
  return 0;  
}
```
The **static2.c** file below defines the **fill_array** function, which **main** (in the **static1.c** file) invokes; the **fill_array** function populates the **extern** array named **global_nums**, which is defined in file **static1.c**. The sole point of having two files is to underscore that an **extern** variable or function can be visible across files.
```
/** static2.c **/
#include "static.h" /** declarations **/
void fill_array() { /** definition **/
  unsigned i;
  for (i = 0; i < NumCount; i++) global_nums[i] = i + 2;
}
```
The **staticProg** program can be compiled as follows:
```
% gcc -o staticProg static1.c static2.c
```
### More details from assembly language
A modern C compiler can handle any mix of C and assembly language. When compiling a C source file, the compiler first translates the C code into assembly language. Here's the command to save the assembly language generated from the **static1.c** file above:
```
% gcc -S static1.c
```
The resulting file is **static1.s**. Here's a segment from the top, with added line numbers for readability:
```
    .file    "static1.c"          ## line  1
    .text                         ## line  2
    .comm    global_nums,400,32   ## line  3
    .local    more_nums           ## line  4
    .comm    more_nums,400,32     ## line  5
    .section    .rodata           ## line  6
.LC0:                             ## line  7
    .string    "%4i\t%4i\n"       ## line  8
    .text                         ## line  9
    .globl    main                ## line 10
    .type    main, @function      ## line 11
main:                             ## line 12
...
```
The assembly-language directives such as **.file** (line 1) begin with a period. As the name suggests, a directive guides the assembler as it translates assembly language into machine code. The **.rodata** directive (line 6) indicates that read-only objects follow, including the string constant **"%4i\t%4i\n"** (line 8), which function **main** (line 12) uses to format output. The function **main** (line 12), introduced as a label (the colon at the end makes it so), is likewise read-only.
In assembly language, labels are addresses. The label **main:** (line 12) marks the address at which the code for the **main** function begins, and the label **.LC0**: (line 7) marks the address at which the format string begins.
The definitions of the **global_nums** (line 3) and **more_nums** (line 4) arrays include two numbers: 400 is the total number of bytes in each array, and 32 is the number of bits in each of the 100 **int** elements per array. (The **.comm** directive in line 5 stands for **common name**, which can be ignored.)
The array definitions differ in that **more_nums** is marked as **.local** (line 4), which means that its scope is restricted to the containing file **static1.s**. By contrast, the **global_nums** array can be made visible across multiple files, including the translations of the **static1.c** and **static2.c** files.
Finally, the **.text** directive occurs twice (lines 2 and 9) in the assembly code segment. The term "text" suggests "read-only" but also covers read/write variables such as the elements in the two arrays. Although the assembly language shown is for an Intel architecture, Arm6 assembly would be quite similar. For both architectures, variables in the **.text** area (in this case, elements in the two arrays) are initialized automatically to zeros.
### Wrapping up
For memory-efficient and memory-safe programming in C, the guidelines are easy to state but may be hard to follow, especially when calls to poorly designed libraries are in play. The guidelines are:
* Use stack storage whenever possible, thereby encouraging the compiler to optimize with general-purpose registers for scratchpad. Stack storage represents efficient memory use and promotes clean, modular code. Never return a pointer to stack-based storage.
* Use heap storage carefully. The challenge in C (and C++) is to ensure that dynamically allocated storage is deallocated ASAP. Good programming habits and tools (such as **valgrind**) help to meet the challenge. Favor libraries that provide their own deallocation function(s), such as the **free_all** function in the **nestedHeap** code example.
* Use static storage judiciously, as this storage impacts the memory footprint of a process from start to finish. In particular, try to avoid **extern** and **static** arrays.
The C code examples are available at my website (<https://condor.depaul.edu/mkalin>).
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/memory-programming-c
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/malloc.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
[11]: https://www.valgrind.org/

View File

@ -1,143 +0,0 @@
[#]: subject: "Linux Jargon Buster: What is sudo rm -rf? Why is it Dangerous?"
[#]: via: "https://itsfoss.com/sudo-rm-rf/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux Jargon Buster: What is sudo rm -rf? Why is it Dangerous?
======
When you are new to Linux, you'll often come across advice to never run `sudo rm -rf /`. There are so many memes in the Linux world around `sudo rm -rf`.
![][1]
But it seems that there is some confusion around it. In the tutorial on [cleaning Ubuntu to make free space][2], I advised running some commands that involved sudo and rm -rf. An It's FOSS reader asked me why I was advising that, if sudo rm -rf is a dangerous Linux command that should not be run.
And thus I thought of writing this chapter of the Linux Jargon Buster series to clear up the misconceptions.
### sudo rm -rf: what does it do?
Let's learn things step by step.
The rm command is used for [removing files and directories in Linux command line][3].
```
abhishek@itsfoss:~$ rm agatha
abhishek@itsfoss:~$
```
But some files will not be removed immediately because of read-only [file permissions][4]. They have to be force deleted with the option `-f`.
```
abhishek@itsfoss:~$ rm books
rm: remove write-protected regular file 'books'? y
abhishek@itsfoss:~$ rm -f christie
abhishek@itsfoss:~$
```
However, the rm command cannot be used to delete directories (folders) directly. You have to use the recursive option `-r` with the rm command.
```
abhishek@itsfoss:~$ rm new_dir
rm: cannot remove 'new_dir': Is a directory
```
And thus, ultimately, the rm -rf command means: recursively force delete the given directory.
```
abhishek@itsfoss:~$ rm -r new_dir
rm: remove write-protected regular file 'new_dir/books'? ^C
abhishek@itsfoss:~$ rm -rf new_dir
abhishek@itsfoss:~$
```
Here's a screenshot of all the above commands:
![Example explaining rm command][5]
If you add sudo to the rm -rf command, you are deleting files with root power. That means you could delete system files owned by [root user][6].
### So, sudo rm -rf is a dangerous Linux command?
Well, any command that deletes something could be dangerous if you are not sure of what you are deleting.
Consider the **rm -rf command** as a knife. Is a knife a dangerous thing? Possibly. If you cut vegetables with the knife, it's good. If you cut your fingers with the knife, it is bad, of course.
The same goes for the rm -rf command. It is not dangerous in itself. It is used for deleting files, after all. But if you use it to delete important files unknowingly, then it is a problem.
Now coming to sudo rm -rf /.
You know that with sudo, you run a command as root, which allows you to make any changes to the system.
/ is the symbol for the root directory. /var means the var directory under root. /var/log/apt means the apt directory under log, under root.
![Linux directory hierarchy representation][7]
As per [Linux directory hierarchy][8], everything in a Linux file system starts at root. If you delete root, you are basically removing all the files of your system.
And this is why it is advised not to run the `sudo rm -rf /` command: because you'll wipe out your entire Linux system.
Please note that in some cases, you could be running a command like sudo rm -rf /var/log/apt, which could be fine. Again, you have to pay attention to what you are deleting, just as you have to pay attention to what you are cutting with a knife.
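If it helps to build that habit, here is a minimal sketch of the "look before you delete" approach; the path below is only an example taken from the paragraph above, so double-check your own target first.

```
# inspect the target first to confirm it really is what you mean to delete
ls -l /var/log/apt

# only then remove it, with root power
sudo rm -rf /var/log/apt
```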
### I play with danger: what if I run sudo rm -rf / to see what happens?
Most Linux distributions provide a failsafe protection against accidentally deleting the root directory.
```
abhishek@itsfoss:~$ sudo rm -rf /
[sudo] password for abhishek:
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe
```
I mean, it is human to make typos, and if you accidentally typed “/ var/log/apt” instead of “/var/log/apt” (a space between / and var, meaning that you are providing the / and var directories for deletion), you'll be deleting the root directory.
![Pay attention when using sudo rm -rf][9]
That's quite good. Your Linux system takes care of such accidents.
Now, what if you are hell-bent on destroying your system with sudo rm -rf /? You'll have to use `--no-preserve-root` with it.
No, please do not do that on your own. Let me show it to you.
So, I have elementary OS running in a virtual machine. I run `sudo rm -rf / --no-preserve-root` and you can see the lights going out literally in the video below (around 1 minute).
[Subscribe to our YouTube channel for more Linux videos][10]
### Clear or still confused?
Linux has an active community where most people try to help new users. Most people, because there are some evil trolls lurking around to mess with new users. They will often suggest running rm -rf / for the simplest of problems faced by beginners. I think these idiots get some sort of perverse satisfaction from such evil acts. I ban them immediately from the forums and groups I administer.
I hope this article made things clearer for you. It's possible that you still have some confusion, especially because it involves root, file permissions and other things new users might not be familiar with. If that's the case, please let me know your doubts in the comment section and I'll try to clear them up.
In the end, remember: don't drink and root. Stay safe while running your Linux system :)
![][11]
--------------------------------------------------------------------------------
via: https://itsfoss.com/sudo-rm-rf/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/04/sudo-rm-rf.gif?resize=400%2C225&ssl=1
[2]: https://itsfoss.com/free-up-space-ubuntu-linux/
[3]: https://linuxhandbook.com/remove-files-directories/
[4]: https://linuxhandbook.com/linux-file-permissions/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/rm-rf-command-example-800x487.png?resize=800%2C487&ssl=1
[6]: https://itsfoss.com/root-user-ubuntu/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/08/linux-directory-structure.png?resize=800%2C400&ssl=1
[8]: https://linuxhandbook.com/linux-directory-structure/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/08/sudo-rm-rf-example.png?resize=798%2C346&ssl=1
[10]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/08/dont-drink-and-root.jpg?resize=800%2C450&ssl=1

View File

@ -1,80 +0,0 @@
[#]: subject: "Neither Windows, nor Linux! Shrine is Gods Operating System"
[#]: via: "https://itsfoss.com/shrine-os/"
[#]: author: "John Paul https://itsfoss.com/author/john/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Neither Windows, nor Linux! Shrine is Gods Operating System
======
We've all used multiple operating systems in our lives. Some were good and some were bad. But can you say that you've ever used an operating system designed by God? Today, I'd like to introduce you to Shrine.
### What is Shrine?
![Shrine interface][1]
From that introduction, you're probably wondering what the heck is going on. Well, it all started with a guy named Terry Davis. Before we go any further, I'd better warn you that Terry suffered from schizophrenia during his life and often didn't take his medication. Because of this, he said or did things during his life that were not quite socially acceptable.
Anyway, back to the storyline. In the early 2000s, Terry released a simple operating system. Over the years, it went through several names, including J Operating System, LoseThos, and SparrowOS. He finally settled on the name [TempleOS][2]. He chose that name because this operating system would be God's temple. As such, God gave Terry the following [specifications][3] for the operating system:
* It would have 640×480 16 color graphics
* It would use “a single-voice 8-bit signed MIDI-like sample for sound”.
* It would follow the Commodore 64, i.e. “a non-networked, simple machine where programming was the goal, not just a means to an end”.
* It would only support one file system (named “Red Sea”).
* It would be limited to 100,000 lines of code to make it “easy to learn the whole thing”.
* “Ring-0-only. Everything runs in kernel mode, including user applications”
* The font would be limited to “one 8×8 fixed-width font”.
  * The user would have “full access to everything. All memory, I/O ports, instructions, and similar things must never be off limits. All functions, variables and class members will be accessible.”
* It would only support one platform, 64-bit PCs.
Terry wrote this operating system in a programming language that he called HolyC. TechRepublic called it a “modified version of C++ (“more than C, less than C++”)”. If you are interested in getting a flavor of HolyC, I would recommend [this article][4] and the HolyC entry on [RosettaCode][5].
In 2013, Terry announced on his website that TempleOS was complete. Tragically, Terry died a few years later in August of 2018 when he was hit by a train. He was homeless at the time. Over the years, many people followed Terry through his work on the operating system. Most were impressed at his ability to write an operating system in such a small package.
Now, you are probably wondering what all this talk of TempleOS has to do with Shrine. Well, as the [GitHub page][6] for Shrine states, it is “A TempleOS distro for heretics”. GitHub user [minexew][7] created Shrine to add features to TempleOS that Terry had neglected. These features include:
* 99% compatibility with TempleOS programs
* Ships with Lambda Shell, which feels a bit like a classic Unix command interpreter
  * TCP/IP stack & internet access out of the box
* Includes a package downloader
minexew is planning to add more features in the future, but hasn't announced what exactly will be included. He has plans to make a full TempleOS environment for Linux.
### Experience
It's fairly easy to get Shrine virtualized. All you need to do is install your virtualizing software of choice. (Mine is VirtualBox.) When you create a virtual machine for Shrine, make sure that it is 64-bit and has at least 512 MB of RAM.
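If you prefer the command line over the VirtualBox GUI, a VM matching those requirements might be created roughly as sketched below. This assumes `VBoxManage` is available and that the downloaded ISO is named `Shrine.iso`; both names are placeholders, so adjust them to your setup.

```
# create and register a 64-bit "Other" VM with 512 MB of RAM
VBoxManage createvm --name Shrine --ostype Other_64 --register
VBoxManage modifyvm Shrine --memory 512 --boot1 dvd

# give it a small virtual hard drive and attach the ISO via an IDE controller
VBoxManage createmedium disk --filename Shrine.vdi --size 2048
VBoxManage storagectl Shrine --name IDE --add ide
VBoxManage storageattach Shrine --storagectl IDE --port 0 --device 0 \
    --type dvddrive --medium Shrine.iso
VBoxManage storageattach Shrine --storagectl IDE --port 0 --device 1 \
    --type hdd --medium Shrine.vdi

# boot the virtual machine
VBoxManage startvm Shrine
```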
Once you boot into Shrine, you will be asked if you want to install to your (virtual) hard drive. Once that is finished (or not, if you choose), you will be offered a tour of the operating system. From there you can explore.
### Final Thoughts
TempleOS (and Shrine) is obviously not intended to be a replacement for Windows or Linux. Even though Terry referred to it as “God's temple”, I'm sure that in his more lucid moments he would have acknowledged that it was more of a hobby operating system. With that in mind, the finished product is fairly [impressive][8]. Over a twelve-year period, Terry created an operating system in a little over 100,000 lines of code, using a language that he had created himself. He also wrote his own compiler, graphics library and several games. All this while fighting his own personal demons.
--------------------------------------------------------------------------------
via: https://itsfoss.com/shrine-os/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/shrine.jpg?resize=800%2C600&ssl=1
[2]: https://templeos.org/
[3]: https://web.archive.org/web/20170508181026/http://www.templeos.org:80/Wb/Doc/Charter.html
[4]: https://harrisontotty.github.io/p/a-lang-design-analysis-of-holyc
[5]: https://rosettacode.org/wiki/Category:HolyC
[6]: https://github.com/minexew/Shrine
[7]: https://github.com/minexew
[8]: http://www.codersnotes.com/notes/a-constructive-look-at-templeos/

View File

@ -1,88 +0,0 @@
[#]: subject: "Apps for daily needs part 5: video editors"
[#]: via: "https://fedoramagazine.org/apps-for-daily-needs-part-5-video-editors/"
[#]: author: "Arman Arisman https://fedoramagazine.org/author/armanwu/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Apps for daily needs part 5: video editors
======
![][1]
Photo by [Brooke Cagle][2] on [Unsplash][3]
Video editing has become a popular activity. People need video editors for various reasons, such as work, education, or just a hobby. There are also now many platforms for sharing video on the internet. Almost all social media and chat messengers provide features for sharing videos. This article will introduce some of the open source video editors that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article [Things to do after installing Fedora 34 Workstation][4]. Here is a list of a few apps for daily needs in the video editors category.
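For reference, several of the editors covered below can also be installed straight from the terminal. A rough sketch follows; the package names reflect the Fedora repositories at the time of writing and may change, and Cinelerra is not included because it typically comes from a third-party source.

```
# install three of the editors discussed in this article
sudo dnf install kdenlive shotcut pitivi
```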
### Kdenlive
When anyone asks about an open source video editor on Linux, the answer that often comes up is Kdenlive. It is a very popular video editor among open source users. This is because its features cover general purposes well and are easy to use for someone who is not a professional.
Kdenlive supports multi-track editing, so you can combine audio, video, images, and text from multiple sources. This application also supports various video and audio formats without having to convert them first. In addition, Kdenlive provides a wide variety of effects and transitions to support your creativity in producing cool videos. Some of the features that Kdenlive provides are a titler for creating 2D titles, audio and video scopes, proxy editing, timeline preview, keyframeable effects, and many more.
![][5]
More information is available at this link: <https://kdenlive.org/en/>
* * *
### Shotcut
Shotcut has more or less the same features as Kdenlive. This application is a general-purpose video editor. It has a fairly simple interface, but with complete features to meet the various needs of your video editing work.
Shotcut has a complete set of features for a video editor, ranging from simple editing to high-level capabilities. It also supports various video, audio, and image formats. You don't need to worry about your work history, because this application has unlimited undo and redo. Shotcut also provides a variety of video and audio effects features, so you have freedom to be creative in producing your video works. Some of the features offered are audio filters, audio mixing, cross-fade audio and video dissolve transitions, a tone generator, speed change, video compositing, 3-way color wheels, track compositing/blending modes, video filters, etc.
![][6]
More information is available at this link: <https://shotcut.org/>
* * *
### Pitivi
Pitivi will be the right choice if you want a video editor that has an intuitive and clean user interface. You will feel comfortable with how it looks and will have no trouble finding the features you need. This application is classified as very easy to learn, especially if you need an application for simple editing needs. However, Pitivi still offers a variety of features, like trimming & cutting, sound mixing, keyframeable audio effects, audio waveforms, volume keyframe curves, video transitions, etc.
![][7]
More information is available at this link: <https://www.pitivi.org/>
* * *
### Cinelerra
Cinelerra is a video editor that has been in development for a long time. There are tons of features for your video work, such as a built-in frame renderer, various video effects, unlimited layers, 8K support, multi-camera support, video-audio sync, render farm, motion graphics, live preview, etc. This application may not be suitable for those who are just learning. I think it will take you a while to get used to the interface, especially if you are already familiar with other popular video editor applications. But Cinelerra will still be an interesting choice as your video editor.
![][8]
More information is available at this link: <http://cinelerra.org/>
* * *
### Conclusion
This article presented four video editor apps for your daily needs that are available on Fedora Linux. Actually, there are many other video editors that you can use in Fedora Linux. You can also use [Olive][9] (Fedora Linux repo), OpenShot (rpmfusion-free), Flowblade (rpmfusion-free), and many more. Each video editor has its own advantages. Some are better at correcting color, while others are better at a variety of transitions and effects. Some are better when it comes to how easy it is to add text. Choose the application that suits your needs. Hopefully this article can help you to choose the right video editor. If you have experience in using these applications, please share your experience in the comments.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/apps-for-daily-needs-part-5-video-editors/
作者:[Arman Arisman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/armanwu/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/08/FedoraMagz-Apps-5-video-816x345.jpg
[2]: https://unsplash.com/@brookecagle?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/meeting-on-cafe-computer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://fedoramagazine.org/things-to-do-after-installing-fedora-34-workstation/
[5]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-kdenlive-1024x576.png
[6]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-shotcut-1024x576.png
[7]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-pitivi-1024x576.png
[8]: https://fedoramagazine.org/wp-content/uploads/2021/08/video-cinelerra-1024x576.png
[9]: https://www.olivevideoeditor.org/

View File

@ -1,124 +0,0 @@
[#]: subject: "Watch commands and tasks with the Linux watch command"
[#]: via: "https://opensource.com/article/21/9/linux-watch-command"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Watch commands and tasks with the Linux watch command
======
See how the watch command can let you know when a task has been
completed or a command has been executed.
![Clock, pen, and notepad on a desk][1]
There are many times when you need to wait for something to finish, such as:
* A file download.
* Creating or extracting a [tar][2] file.
* An [Ansible][3] job.
Some of these processes have some sort of progress indication, but sometimes the process is run through a layer of abstraction, and the only way to measure the progress is through its side effects. Some of these might be:
* A file being downloaded keeps growing.
* A directory extracted from a tarball fills up with files.
* The Ansible job builds a [container][4].
You can query all of these things with commands like these:
```
$ ls -l downloaded-file
$ find . | wc -l
$ podman ps
$ docker ps
```
But running these commands over and over, even if it is with the convenience of [Bash history][5] and the **Up Arrow**, is tedious.
Another approach is to write a little Bash script to automate these commands for you:
```
while :
do
  docker ps
  sleep 2
done
```
But such scripts can also become tedious to write. You could write a little generic script and package it, so it's always available to you. Luckily, other open source developers have already been there and done that.
The result is the command `watch`.
### Installing watch
The `watch` command is part of the [`procps-ng` package][6], so if you're on Linux, you already have it installed.
On macOS, install `watch` using [MacPorts][7] or [Homebrew][8]. On Windows, use [Chocolatey][9].
### Using watch
The `watch` command periodically runs a command and shows its output. It has some text-terminal niceties, so only the latest output is on the screen.
The simplest usage is: `watch <command>`.
For example, prefixing the `docker ps` command with `watch` works like this:
```
$ watch docker ps
```
The `watch` command, and a few creative Unix command-line tricks, can generate ad-hoc dashboards. For example, to count audit events:
```
$ watch 'grep audit: /var/log/kern.log |wc -l'
```
In the last example, it is probably useful if there's a visual indication that the number of audit events changed. If change is expected, but you want something to look "different," `watch --differences` works well. It highlights any differences from the last run. This works especially well if you are grepping in multiple files, so you can easily see which one changed.
If changes are not expected, you can ask for them to be highlighted "permanently" to know which ones to investigate by using `watch --differences=permanent`. This is often more useful.
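To make that concrete, here is a small sketch of both forms. The second log path is only a placeholder to illustrate watching more than one file; substitute whatever files you actually care about.

```
# highlight whatever changed since the previous run
$ watch --differences 'grep -c audit: /var/log/kern.log /var/log/syslog'

# keep every change highlighted until you quit, so nothing gets missed
$ watch --differences=permanent 'grep -c audit: /var/log/kern.log /var/log/syslog'
```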
### Controlling frequency
Finally, sometimes the command might be resource-intensive and should not be run too frequently. The `-n` parameter controls the frequency. Watch uses two seconds by default, but `watch -n 10` might be appropriate for something more resource-intensive, like grepping for a pattern in any file in a subdirectory:
```
$ watch -n 10 'find . -type f | xargs grep suspicious-pattern'
```
### Watch a command with watch
The `watch` command is useful for many ad-hoc system administration tasks where you need to wait for some time-consuming step, without a progress bar, before moving on to the next one. Though this is not a great situation to be in, `watch` can make it slightly better, and give you time to start working on those notes for the retrospective! Download the **[cheat sheet][10]** to keep helpful syntax and options close at hand.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/linux-watch-command
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6 (Clock, pen, and notepad on a desk)
[2]: https://opensource.com/article/17/7/how-unzip-targz-file
[3]: https://opensource.com/resources/what-ansible
[4]: https://opensource.com/resources/what-docker
[5]: https://opensource.com/article/20/6/bash-history-control
[6]: https://opensource.com/article/21/8/linux-procps-ng
[7]: https://opensource.com/article/20/11/macports
[8]: https://opensource.com/article/20/6/homebrew-mac
[9]: https://opensource.com/article/20/3/chocolatey
[10]: https://opensource.com/downloads/watch-cheat-sheet

View File

@ -1,190 +0,0 @@
[#]: subject: "How to Install Kali Linux in VMware"
[#]: via: "https://itsfoss.com/install-kali-linux-vmware/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install Kali Linux in VMware
======
Kali Linux is the de facto standard of [Linux distributions used for learning and practicing hacking and penetration testing][1].
And, if you've been tinkering around with Linux distros long enough, you might have tried it out just out of curiosity.
However, no matter what you use it for, it is not a replacement for a regular full-fledged desktop Linux operating system. Hence, it is recommended (at least for beginners) to install Kali Linux using a virtual machine program like VMware.
With a virtual machine, you can use Kali Linux as a regular application in your Windows or Linux system. It's almost the same as running VLC or Skype in your system.
There are a few free virtualization tools available for you. You can [install Kali Linux on Oracle VirtualBox][2] or use VMWare Workstation.
This tutorial focuses on VMWare.
### Installing Kali Linux on VMware on Windows and Linux
Non-FOSS alert!
VMware is not open source software.
For this tutorial, I presume that you are using Windows, considering most VMware users prefer using Windows 10/11.
However, the _**tutorial is also valid for Linux except the VMWare installation on Windows part**_. You can [easily install VMWare on Ubuntu][3] and other Linux distributions.
#### Step 1: Install VMWare Workstation Player (on Windows)
If you already have VMware installed on your system, you can skip the steps to install Kali Linux.
Head to [VMware's official Workstation Player webpage][4] and then click on the “**Download Free**” button.
![][5]
Next, you get to choose the version (if you want something specific or are encountering bugs in the latest version) and then click on “**Go to Downloads**.”
![][6]
And then you get the download buttons for both Windows and Linux versions. You will have to click on the button for Windows 64-bit because that is what we need here.
![][7]
There is no support for 32-bit systems, in case you were wondering.
Finally, when you get the .exe file downloaded, launch it to start the installation process. You need to hit “Next” to get started installing VMware.
![][8]
Next, you will have to agree to the policies and conditions to continue.
![][9]
Now, you get to choose the path of your installation. Ideally, keep it at the default settings. But, if you need better keyboard response / in-screen keyboard performance in the virtual machine, you may want to enable the “**Enhanced Keyboard Driver**.”
![][10]
Proceeding to the next step, you can choose to disable checking for updates every time you start the program (can be annoying) and disable sending data to VMware as part of its user experience improvement program.
![][11]
If you want quick access using desktop and start menu shortcuts, you can check those settings or toggle them off, which I prefer.
![][12]
Now, you have to continue to start the installation.
![][13]
This may take a while, and when completed, you are greeted with another window that lets you finish the process and gives you the option to enter a license key. If you want the commercial license for your use case, you need the VMware Workstation Pro edition; otherwise, the player is free for personal use.
![][14]
Attention!
Please make sure that virtualization is enabled in your system. Recent Windows versions require that you enable the virtualization explicitly to use virtual machines.
#### Step 2: Install Kali Linux on VMware
To get started, you need to download the image file of Kali Linux. And, when it comes to Kali Linux, they offer a separate prebuilt image if you plan to use it in a virtual machine.
![][15]
Head to its [official download page][16] and download the prebuilt VMware image available.
![][17]
You can download the **.7z** file directly or utilize Torrent (which is generally faster). In either case, you can also check the file integrity with the SHA256 value provided.
Once downloaded, you need to extract the file to any path of your choice.
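On Linux, both the integrity check and the extraction can be done from a terminal. Here is a rough sketch; the archive name is an assumption based on the 2021.3 release referenced below, so use whatever filename you actually downloaded.

```
# compare this output with the SHA256 value shown on the download page
sha256sum kali-linux-2021.3-vmware-amd64.7z

# extract the archive (the 7z tool ships in the p7zip package on most distros)
7z x kali-linux-2021.3-vmware-amd64.7z
```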
![][18]
Open VMware Workstation Player and then click on “**Open a Virtual Machine**.” Now, look for the folder you extracted. And navigate through it till you find a file with the “**.vmx**” extension.
For instance: **Kali-Linux-2021.3-vmware-amd64.vmx**
![][19]
Select the .vmx file to open the virtual machine. And, it should appear right in your VMware player.
You can choose to launch the virtual machine with the default settings. Or, if you want to tweak the hardware allocated to the virtual machine, feel free to change the settings before you launch it.
![][20]
Depending on your computer hardware, you should allocate more memory and at least half of your processor cores to get smooth performance.
In this case, I have 16 Gigs of RAM and a quad-core processor. Hence, it is safe to allocate nearly 7 GB of RAM and two cores for this virtual machine.
![][21]
While you can assign more resources, doing so might affect the performance of your host operating system when you are working on a task. So, it is recommended to keep a balance between the two.
Now, save the settings and hit “**Play virtual machine**” to start Kali Linux on VMware.
When it starts loading up, you may be prompted with some tips to improve performance by tweaking some virtual machine settings.
You do not have to do that, but if you notice performance issues, you can disable side-channel mitigations (needed for enhanced security) to improve the performance of the VM.
Also, you may be prompted to download and [install VMware tools for Linux][22]; you need to do this to get a good VM experience.
Once you do that, you will be greeted with Kali Linuxs login screen.
![][23]
Considering that you launched a prebuilt VMware image, you need to enter the default login and password to proceed.
**Username**: kali
**Password:** kali
![][24]
That's it! You're done installing Kali Linux on VMware. Now, all you have to do is start exploring!
### Where to go from here?
Here are a few tips you can utilize:
* If clipboard sharing and file sharing is not working, [install VMWare tools][22] on the guest system (Kali Linux).
* If you are new to it, check out this [list of Kali Linux tools][25].
Feel free to share your thoughts if you find this tutorial helpful. Do you prefer to install Kali Linux without using a VMware image ready to go? Let me know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-kali-linux-vmware/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-hacking-penetration-testing/
[2]: https://itsfoss.com/install-kali-linux-virtualbox/
[3]: https://itsfoss.com/install-vmware-player-ubuntu-1310/
[4]: https://www.vmware.com/products/workstation-player.html
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-download.png?resize=732%2C486&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-download-1.png?resize=800%2C292&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-download-final.png?resize=800%2C212&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-setup-1.png?resize=692%2C465&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-setup-license.png?resize=629%2C443&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-setup-2.png?resize=638%2C440&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-workstation-tracking.png?resize=618%2C473&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-workstation-shortcuts.png?resize=595%2C445&ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-install.png?resize=620%2C474&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-player-installed.png?resize=589%2C441&ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-image-kali.png?resize=800%2C488&ssl=1
[16]: https://www.kali.org/get-kali/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-kali-linux-image-download.png?resize=800%2C764&ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/extract-vmware-image.png?resize=617%2C359&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-kali-linux-image-folder.png?resize=800%2C498&ssl=1
[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/virtual-machine-settings-kali.png?resize=800%2C652&ssl=1
[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/kali-vm-settings.png?resize=800%2C329&ssl=1
[22]: https://itsfoss.com/install-vmware-tools-linux/
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/kali-linux-vm-login.png?resize=800%2C540&ssl=1
[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/vmware-kali-linux.png?resize=800%2C537&ssl=1
[25]: https://itsfoss.com/best-kali-linux-tools/

View File

@ -1,129 +0,0 @@
[#]: subject: "Start using YAML now"
[#]: via: "https://opensource.com/article/21/9/intro-yaml"
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Start using YAML now
======
What is YAML, and why is it about time we started using it?
![woman on laptop sitting at the window][1]
YAML (YAML Ain't Markup Language) is a human-readable data serialization language. Its syntax is simple and human-readable. It does not contain quotation marks, opening and closing tags, or braces. It does not contain anything which might make it harder for humans to parse nesting rules. You can scan your YAML document and immediately know what's going on.
### YAML features
YAML has some super features which make it superior to other serialization formats:
* Easy to skim.
* Easy to use.
* Portable between programming languages.
* Native data structures of Agile languages.
* Consistent model to support generic tools.
* Supports one-pass processing.
* Expressive and extensible.
I will show you YAML's power further with some examples.
Can you figure out what's going on below?
```
---
# My grocery list
groceries:
  - Milk
  - Eggs
  - Bread
  - Butter
...
```
The above example contains a simple list of groceries to buy, and it's a fully-formed YAML document. In YAML, strings aren't quoted, and lists need simple hyphens and spaces. A YAML document starts with **---** and ends with **...**, but they are optional. Comments in YAML begin with a **#**.
Indentation is key in YAML. Indentation must contain spaces, not tabs. And while the number of spaces required is flexible, it's a good idea to keep them consistent.
### Basic Elements
#### Collections
YAML has two types of collections: _lists_ (for sequences) and _dictionaries_ (for mappings). Lists hold a sequence of values, each on a new line, beginning with a hyphen and a space. Dictionaries hold key-value pairs, each written as a key, a colon and a space, and a value.
For example:
```
# My List
groceries:
  - Milk
  - Eggs
  - Bread
  - Butter

# My dictionary
contact:
  name: Ayush Sharma
  email: myemail@example.com
```
Lists and dictionaries are often combined to provide more complex data structures. Lists can contain dictionaries, and dictionaries can contain lists.
#### Strings
Strings in YAML don't need quotation marks. Multi-line strings are defined using **|** or **>**. The former preserves newlines, but the latter does not.
For example:
```
my_string: |
    This is my string.
    It can contain many lines.
    Newlines are preserved.
my_string_2: >
    This is my string.
    This can also contain many lines.
    Newlines aren't preserved and all lines are folded.
```
#### Anchors
YAML can have repeatable blocks of data using node anchors. The **&** character defines a block of data that is later referenced using **\***. For example:
```
billing_address: &add1
  house: B1
  street: My Street
shipping_address: *add1
```
At this point, you know enough YAML to get started. You can play around with the online YAML parser to test yourself. If you work with YAML daily, then [this handy cheatsheet][3] will be helpful.
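If you would rather check a file locally than paste it into an online parser, a one-liner like the following also works. It assumes Python 3 with the PyYAML package installed, and a file named `grocery.yml` holding the grocery list from earlier; both names are just placeholders.

```
# parse the file and print the resulting data structure
python3 -c 'import sys, yaml; print(yaml.safe_load(open(sys.argv[1])))' grocery.yml
```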
* * *
_This article was originally published on the [author's personal blog][4] and has been adapted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/intro-yaml
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayushsharma
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: mailto:myemail@example.com
[3]: https://yaml.org/refcard.html
[4]: https://notes.ayushsharma.in/2021/08/introduction-to-yaml

View File

@ -1,235 +0,0 @@
[#]: subject: "How to Install Ubuntu Desktop on Raspberry Pi 4"
[#]: via: "https://itsfoss.com/install-ubuntu-desktop-raspberry-pi/"
[#]: author: "Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install Ubuntu Desktop on Raspberry Pi 4
======
_**Brief: This thorough tutorial shows you how to install Ubuntu Desktop on Raspberry Pi 4 device.**_
The revolutionary Raspberry Pi is the most popular single board computer. It has its very own Debian based operating system called [Raspbian][1].
There are several other [operating systems available for Raspberry Pi][2], but almost all of them are lightweight. This was appropriate for the small form factor and low-end hardware of the Pi devices.
This changes with the introduction of the Raspberry Pi 4B, which flaunts 8 GB of RAM and supports 4K displays. The aim is to use the Raspberry Pi as a regular desktop, and it succeeds in doing so to a large extent.
Before the 4B model, you could [install the Ubuntu server on Raspberry Pi][3] but the desktop version was not available. However, **Ubuntu now provides official desktop image for Pi 4 models**.
In this tutorial, I am going to show the steps for installing Ubuntu desktop on Raspberry Pi 4.
First, a quick look at the prerequisites.
### Prerequisites for running Ubuntu on Raspberry Pi 4
![][4]
Here's what you need:
1. A Linux or Windows system with active internet connection.
  2. [Raspberry Pi Imager][5]: The official open source tool from Raspberry Pi that writes the distro image to your SD card.
  3. Micro SD Card: Consider using at least 16 GB of storage for your card, although a 32 GB version is recommended.
4. A USB based Micro SD Card Reader (if your computer does not have a card reader).
5. Essential Raspberry Pi 4 accessories such as an HDMI compatible display, [Micro HDMI to Standard HDMI (A/M) Cable][6], [Power Supply (Official Adapter Recommended)][7], USB Wired/Wireless Keyboard and Mouse/Touchpad.
It is good practice to [read in detail about the Pi requirements][8] beforehand.
Now, without further delay, let me quickly walk you through the image preparation for the SD Card.
### Preparing the Ubuntu Desktop image for Raspberry Pi
Raspberry Pi provides a GUI application for writing the ISO image to the SD Card. **This tool can also download compatible operating systems like Ubuntu, Raspbian etc automatically**.
![Official tool to download and put operating system on SD card][9]
You can download this tool for Ubuntu, Windows and macOS from the official website:
[Download Raspberry Pi Imager][10]
On Ubuntu and other Linux distributions, you can also install it with Snap:
```
sudo snap install rpi-imager
```
Once installed, run the imager tool. When you see the screen below, select “CHOOSE OS”:
![Pi imager: choose the preferred operating system][11]
Under “Operating System”, select “Other general purpose OS”:
![Pi imager: other general purpose operating systems][12]
Now, select “Ubuntu”:
![Pi imager distro: Ubuntu][13]
Next, select “Ubuntu Desktop 21.04 (RPI 4/400)” as shown below:
![Pi imager distro version: Ubuntu 21.04][14]
Note
If you do not have a good, consistent internet connection, you can [download the Ubuntu for Raspberry Pi image separately from Ubuntu's website][15]. In the Imager tool, while choosing the OS, go to the bottom and select the “Use custom” option. You can also use Etcher for writing the image to the SD card.
Insert the micro SD card inside your Card reader and wait for it to mount. Select “CHOOSE STORAGE” under “Storage”:
![Pi imager choose storage \(SD card\)][16]
You should be seeing only your micro SD card storage and you'd recognize it instantly based on the size. Here, I've used a 32 GB card:
![Pi imager choose SD card][17]
Now click on “WRITE”:
![Pi imager image write][18]
I'll assume you have the contents of the SD card backed up. If it's a new card, you can just proceed:
![Pi imager image write confirmation query][19]
Since this requires [sudo][20] privileges, you must enter your password. If you run `sudo rpi-imager` from a terminal, this prompt would not appear:
![Pi imager image write authentication asking for password][21]
If your SD card is a bit old, it would take some time. But if it is a recent one with high speeds, it wouldn't take long:
![Pi imager writing image][22]
I also wouldn't recommend skipping verification. Make sure the image write was successful:
![Pi imager verifying changes][23]
Once it is over, you will get the following confirmation:
![Pi imager write successful][24]
Now, safely-remove the SD card from your system.
### Using the micro SD card with Ubuntu on Raspberry Pi
Half of the battle is won. Unlike a regular Ubuntu install, you have not created a live environment. Ubuntu is already installed on the SD card and is almost ready to use. Let's see what remains.
#### Step 1: Insert the SD card into Pi
For first-time users, it can be a bit confusing to figure out where on earth that card slot is! Not to worry. It is located below the board on the left-hand side. Here's an inverted view with a card inserted:
![Pi 4B board inverted and micro SD card inserted][25]
Keep sliding the card in this orientation slowly and gently into the slot below the board, until it goes no further. You may also hear a little clicking sound for confirmation. This means it has fit in perfectly:
![Raspberry Pi SD slot left side middle and below the pi board][26]
You might notice two little pins adjusting themselves in the slot (shown above) as you put it inside, but that's okay. Once inserted, the card will protrude a bit. That's how it is supposed to look:
![Pi SD card inserted with a little portion visible][27]
#### Step 2: Setting Up the Raspberry Pi
I do not need to go into detail here, I presume.
Ensure that the power cable connector, micro HDMI cable connector, keyboard and mouse connectors (wired/non-wired) are securely connected to the Pi board in the relevant ports.
Make sure the display and power plug are properly connected as well, before you go ahead and turn on the power socket. I wouldn't recommend plugging the adapter into a live electrical socket. Look up [electrical arcing][28].
Once you've ensured the above two steps, you can [power on the Raspberry Pi device][29].
#### Step 3: The first run of Ubuntu desktop on Raspberry Pi
Once you power on the Raspberry Pi, you'll be asked to do some basic configuration on your first run. You just have to follow the on-screen instructions.
Select your language, keyboard layout, connect to WiFi etc.
![Select language][30]
![Select keyboard layout][31]
![Select WiFi][32]
You'll be asked to select the time zone:
![Select time zone][33]
And then create the user and password:
![Enter desired username and password][34]
It will configure a couple of things and may take some time in doing so.
![Finishing Ubuntu setup][35]
![Finishing Ubuntu setup][36]
It may take some time after this; your system will reboot and you'll find yourself at the Ubuntu login screen:
![][37]
You can start enjoying Ubuntu desktop on Raspberry Pi now.
![Ubuntu desktop on Raspberry Pi][38]
### Conclusion
I noticed **a temporary anomaly**: A red flickering border on the left-hand side of my display while doing the installation. This flickering (also of different colors) was noticeable on random parts of the screen as well. But it went away after restarting and the first boot.
It was much needed for Ubuntu to start providing support for popular ARM devices like the Raspberry Pi, and I am happy to see it running on a Raspberry Pi.
I hope you find this tutorial helpful. If you have questions or suggestions, please let me know in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-ubuntu-desktop-raspberry-pi/
作者:[Avimanyu Bandyopadhyay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/avimanyu/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/tutorial-how-to-install-raspberry-pi-os-raspbian-wheezy/
[2]: https://itsfoss.com/raspberry-pi-os/
[3]: https://itsfoss.com/install-ubuntu-server-raspberry-pi/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-desktop-raspberry-pi.png?resize=800%2C450&ssl=1
[5]: https://github.com/raspberrypi/rpi-imager
[6]: https://www.raspberrypi.org/products/micro-hdmi-to-standard-hdmi-a-cable/
[7]: https://www.raspberrypi.org/products/type-c-power-supply/
[8]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/raspberry-pi-imager-tool.webp?resize=680%2C448&ssl=1
[10]: https://www.raspberrypi.org/software/
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-choose-os.webp?resize=681%2C443&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-other-general-purpose-os.webp?resize=679%2C440&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-os-ubuntu.webp?resize=677%2C440&ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-os-ubuntu-21-04.webp?resize=677%2C440&ssl=1
[15]: https://ubuntu.com/download/raspberry-pi
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-choose-storage.webp?resize=677%2C438&ssl=1
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-choose-sd-card.webp?resize=790%2C450&ssl=1
[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-image-write.webp?resize=676%2C437&ssl=1
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-image-write-confirm.webp?resize=679%2C440&ssl=1
[20]: https://itsfoss.com/add-sudo-user-ubuntu/
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-image-write-password.webp?resize=380%2C227&ssl=1
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-writing-image.webp?resize=673%2C438&ssl=1
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-verifying-changes.webp?resize=677%2C440&ssl=1
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-imager-write-successful.webp?resize=675%2C442&ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-inverted-micro-sd-card-inserted.webp?resize=800%2C572&ssl=1
[26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/raspberry-pi-sd-slot-left-side-middle-below-board.webp?resize=632%2C324&ssl=1
[27]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pi-sd-card-inserted.webp?resize=650%2C432&ssl=1
[28]: https://www.electricianatlanta.net/what-is-electrical-arcing-and-why-is-it-dangerous/
[29]: https://itsfoss.com/turn-on-raspberry-pi/
[30]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run.webp?resize=800%2C451&ssl=1
[31]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-2.webp?resize=800%2C600&ssl=1
[32]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-3.webp?resize=800%2C600&ssl=1
[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-4.webp?resize=800%2C600&ssl=1
[34]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-5.webp?resize=800%2C600&ssl=1
[35]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-6.webp?resize=800%2C600&ssl=1
[36]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-first-run-7.webp?resize=800%2C600&ssl=1
[37]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-raspberry-pi-login-screen.webp?resize=800%2C600&ssl=1
[38]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-21-04-post-setup-desktop.webp?resize=800%2C450&ssl=1

View File

@ -0,0 +1,513 @@
[#]: subject: "3 ways to test your API with Python"
[#]: via: "https://opensource.com/article/21/9/unit-test-python"
[#]: author: "Miguel Brito https://opensource.com/users/miguendes"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
3 ways to test your API with Python
======
Unit testing can be daunting, but these Python modules will make your
life much easier.
![Puzzle pieces coming together to form a computer screen][1]
In this tutorial, you'll learn how to unit test code that performs HTTP requests. In other words, you'll see the art of API unit testing in Python.
Unit tests are meant to test a single unit of behavior. In testing, a well-known rule of thumb is to isolate code that reaches external dependencies.
For instance, when testing a code that performs HTTP requests, it's recommended to replace the real call with a fake call during test time. This way, you can unit test it without performing a real HTTP request every time you run the test.
The question is, _how can you isolate the code?_
Hopefully, that's what I'm going to answer in this post! I'll not only show you how to do it but also weigh the pros and cons of three different approaches.
Requirements:
* [Python 3.8][2]
* pytest-mock
* requests
* flask
* responses
* VCR.py
### Demo app using a weather REST API
To put this problem in context, imagine that you're building a weather app. This app uses a third-party weather REST API to retrieve weather information for a particular city. One of the requirements is to generate a simple HTML page, like the image below:
![web page displaying London weather][3]
The weather in London, OpenWeatherMap. Image is the author's own.
To get the information about the weather, you must find it somewhere. Fortunately, [OpenWeatherMap][2] provides everything you need through its REST API service.
_Ok, that's cool, but how can I use it?_
You can get everything you need by sending a `GET` request to: `https://api.openweathermap.org/data/2.5/weather?q={city_name}&appid={api_key}&units=metric`. For this tutorial, I'll parameterize the city name and settle on the metric unit.
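Before writing any Python, you can sanity-check the endpoint from a terminal. The sketch below assumes your OpenWeatherMap key is exported in an `API_KEY` environment variable (a placeholder name):

```
# fetch the current weather for London in metric units
curl "https://api.openweathermap.org/data/2.5/weather?q=London&appid=${API_KEY}&units=metric"
```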
### Retrieving the data
To retrieve the weather data, use `requests`. You can create a function that receives a city name as a parameter and returns a JSON. The JSON will contain the temperature, weather description, sunset, sunrise time, and so on.
The example below illustrates such a function:
```
def find_weather_for(city: str) -> dict:
    """Queries the weather API and returns the weather data for a particular city."""
    url = API.format(city_name=city, api_key=API_KEY)
    resp = requests.get(url)
    return resp.json()
```
The URL is made up of two global variables:
```
BASE_URL = "https://api.openweathermap.org/data/2.5/weather"
API = BASE_URL + "?q={city_name}&appid={api_key}&units=metric"
```
The API returns a JSON in this format:
```
{
  "coord": {
    "lon": -0.13,
    "lat": 51.51
  },
  "weather": [
    {
      "id": 800,
      "main": "Clear",
      "description": "clear sky",
      "icon": "01d"
    }
  ],
  "base": "stations",
  "main": {
    "temp": 16.53,
    "feels_like": 15.52,
    "temp_min": 15,
    "temp_max": 17.78,
    "pressure": 1023,
    "humidity": 72
  },
  "visibility": 10000,
  "wind": {
    "speed": 2.1,
    "deg": 40
  },
  "clouds": {
    "all": 0
  },
  "dt": 1600420164,
  "sys": {
    "type": 1,
    "id": 1414,
    "country": "GB",
    "sunrise": 1600407646,
    "sunset": 1600452509
  },
  "timezone": 3600,
  "id": 2643743,
  "name": "London",
  "cod": 200
```
The data is returned as a Python dictionary when you call `resp.json()`. In order to encapsulate all the details, you can represent them as a `dataclass`. This class has a factory method that gets the dictionary and returns a `WeatherInfo` instance.
This is good because you keep the representation stable. For example, if the API changes the way it structures the JSON, you can change the logic in just one place, the `from_dict` method. Other parts of the code won't be affected. You can even get information from different sources and combine them in the `from_dict` method!
```
@dataclass
class WeatherInfo:
    temp: float
    sunset: str
    sunrise: str
    temp_min: float
    temp_max: float
    desc: str
    @classmethod
    def from_dict(cls, data: dict) -> "WeatherInfo":
        return cls(
            temp=data["main"]["temp"],
            temp_min=data["main"]["temp_min"],
            temp_max=data["main"]["temp_max"],
            desc=data["weather"][0]["main"],
            sunset=format_date(data["sys"]["sunset"]),
            sunrise=format_date(data["sys"]["sunrise"]),
        )
```
Now, you'll create a function called `retrieve_weather`. You'll use this function to call the API and return a `WeatherInfo` so you can build your HTML page.
```
def retrieve_weather(city: str) -> WeatherInfo:
    """Finds the weather for a city and returns a WeatherInfo instance."""
    data = find_weather_for(city)
    return WeatherInfo.from_dict(data)
```
Good, you have the basic building blocks for our app. Before moving forward, unit test those functions.
### 1\. Testing the API using mocks
[According to Wikipedia][4], a mock object is an object that simulates the behavior of a real object by mimicking it. In Python, you can mock any object using the `unittest.mock` lib that is part of the standard library. To test the `retrieve_weather` function, you can then mock `requests.get` and return static data.
#### pytest-mock
For this tutorial, you'll use `pytest` as your testing framework of choice. The `pytest` library is very extensible through plugins. To accomplish our mocking goals, use `pytest-mock`. This plugin abstracts a bunch of setups from `unittest.mock` and makes your testing code very concise. If you are curious, I discuss more about it in [another blog post][5].
_Ok, enough talking, show me the code._
Here's a complete test case for the `retrieve_weather` function. This test uses two fixtures: One is the `mocker` fixture provided by the `pytest-mock` plugin. The other one is ours. It's just the static data you saved from a previous request.
```
@pytest.fixture()
def fake_weather_info():
    """Fixture that returns a static weather data."""
    with open("tests/resources/weather.json") as f:
        return json.load(f)
```

```
def test_retrieve_weather_using_mocks(mocker, fake_weather_info):
    """Given a city name, test that a HTML report about the weather is generated
    correctly."""
    # Creates a fake requests response object
    fake_resp = mocker.Mock()
    # Mock the json method to return the static weather data
    fake_resp.json = mocker.Mock(return_value=fake_weather_info)
    # Mock the status code
    fake_resp.status_code = HTTPStatus.OK
    mocker.patch("weather_app.requests.get", return_value=fake_resp)
    weather_info = retrieve_weather(city="London")
    assert weather_info == WeatherInfo.from_dict(fake_weather_info)
```
If you run the test, you get the following output:
```
============================= test session starts ==============================
...[omitted]...
tests/test_weather_app.py::test_retrieve_weather_using_mocks PASSED      [100%]
============================== 1 passed in 0.20s ===============================
Process finished with exit code 0
```
Great, your tests pass! But... Life is not a bed of roses. This test has pros and cons. I'll take a look at them.
#### Pros
Well, one pro already discussed is that by mocking the API's return value, you make your tests easier. Mocking isolates the communication with the API and makes the test predictable. It will always return what you want.
#### Cons
As for cons, the problem is: what if you don't want to use `requests` anymore and decide to go with the standard library's `urllib`? Every time you change the implementation of `find_weather_for`, you will have to adapt the test. A good test doesn't change when your implementation changes. So, by mocking, you end up coupling your test with the implementation.
Also, another downside is the amount of setup you have to do before calling the function—at least three lines of code.
```
...
    # Creates a fake requests response object
    fake_resp = mocker.Mock()
    # Mock the json method to return the static weather data
    fake_resp.json = mocker.Mock(return_value=fake_weather_info)
    # Mock the status code
    fake_resp.status_code = HTTPStatus.OK
...
```
_Can I do better?_
Yes, please follow along. I'll now show how to improve it a bit.
### Using responses
Mocking `requests` using the `mocker` feature has the downside of having a long setup. A good way to avoid that is to use a library that intercepts `requests` calls and patches them. There is more than one lib for that, but the simplest to me is `responses`. Let's see how to use it to replace `mock`.
```
@responses.activate
def test_retrieve_weather_using_responses(fake_weather_info):
    """Given a city name, test that a HTML report about the weather is generated
    correctly."""
    api_uri = API.format(city_name="London", api_key=API_KEY)
    responses.add(responses.GET, api_uri, json=fake_weather_info, status=HTTPStatus.OK)
    weather_info = retrieve_weather(city="London")
    assert weather_info == WeatherInfo.from_dict(fake_weather_info)
```
Again, this function makes use of our `fake_weather_info` fixture.
Next, run the test:
```
============================= test session starts ==============================
...
tests/test_weather_app.py::test_retrieve_weather_using_responses PASSED  [100%]
============================== 1 passed in 0.19s ===============================
```
Excellent! This test passes too. But... It's still not that great.
#### Pros
The good thing about using libraries like `responses` is that you don't need to patch `requests` yourself. You save some setup by delegating the abstraction to the library. However, in case you haven't noticed, there are still problems.
#### Cons
Again, the problem is, much like `unittest.mock`, your test is coupled to the implementation. If you replace `requests`, your test breaks.
### 2\. Testing the API using an adapter
_If mocking couples my tests to the implementation, what can I do?_
Imagine the following scenario: Say that you can no longer use `requests`, and you'll have to replace it with `urllib` since it comes with Python. Not only that, you learned the lesson of not coupling test code with implementation, and you want to avoid that in the future. You want to replace `urllib` and not have to rewrite the tests.
It turns out you can abstract away the code that performs the `GET` request.
_Really? How?_
You can abstract it by using an adapter. The adapter is a design pattern used to encapsulate or wrap the interface of other classes and expose it as a new interface. This way, you can change the adapters without changing your code. For example, you can encapsulate the details about `requests` in `find_weather_for` and expose it via a function that takes only the URL.
So, this:
```
def find_weather_for(city: str) -> dict:
    """Queries the weather API and returns the weather data for a particular city."""
    url = API.format(city_name=city, api_key=API_KEY)
    resp = requests.get(url)
    return resp.json()
```
Becomes this:
```
def find_weather_for(city: str, adapter) -> dict:
    """Queries the weather API and returns the weather data for a particular city."""
    url = API.format(city_name=city, api_key=API_KEY)
    return adapter(url)
```
And the adapter becomes this:
```
def requests_adapter(url: str) -> dict:
    resp = requests.get(url)
    return resp.json()
```
Now it's time to refactor our `retrieve_weather` function:
```
def retrieve_weather(city: str, adapter=requests_adapter) -> WeatherInfo:
    """Finds the weather for a city and returns a WeatherInfo instance."""
    data = find_weather_for(city, adapter=adapter)
    return WeatherInfo.from_dict(data)
```
So, if you decide to change this implementation to one that uses `urllib`, just swap the adapters:
```
def urllib_adapter(url: str) -> dict:
    """An adapter that encapsulates urllib.urlopen"""
    with urllib.request.urlopen(url) as response:
        resp = response.read()
    return json.loads(resp)
```

```
def retrieve_weather(city: str, adapter=urllib_adapter) -> WeatherInfo:
    """Finds the weather for a city and returns a WeatherInfo instance."""
    data = find_weather_for(city, adapter=adapter)
    return WeatherInfo.from_dict(data)
```
_Ok, how about the tests?_
To test `retrieve_weather`, just create a fake adapter that is used during test time:
```
@responses.activate
def test_retrieve_weather_using_adapter(
    fake_weather_info,
):
    def fake_adapter(url: str):
        return fake_weather_info
    weather_info = retrieve_weather(city="London", adapter=fake_adapter)
    assert weather_info == WeatherInfo.from_dict(fake_weather_info)
```
If you run the test you get:
```
============================= test session starts ==============================
tests/test_weather_app.py::test_retrieve_weather_using_adapter PASSED    [100%]
============================== 1 passed in 0.22s ===============================
```
#### Pros
The pro for this approach is that you successfully decoupled your test from the implementation. You use [dependency injection][6] to inject a fake adapter at test time. Also, you can swap the adapter at any time, including at runtime. You did all of this without changing the behavior.
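As a small illustration of that last point, the adapter can be chosen per call; a sketch, assuming the `adapter` parameter shown in the refactored `retrieve_weather` above:
```
# Same function, two different transport layers picked at call time.
weather_via_requests = retrieve_weather(city="London", adapter=requests_adapter)
weather_via_urllib = retrieve_weather(city="London", adapter=urllib_adapter)
```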
#### Cons
The cons are that, since you're using a fake adapter for tests, if you introduce a bug in the adapter you employ in the implementation, your test won't catch it. For example, say that we pass a faulty parameter to `requests`, like this:
```
def requests_adapter(url: str) -> dict:
    resp = requests.get(url, headers=<some broken headers>)
    return resp.json()
```
This adapter will fail in production, and the unit tests won't catch it. But truth be told, you also have the same problem with the previous approaches. That's why you always need to go beyond unit tests and also have integration tests. That being said, consider another option.
### 3\. Testing the API using VCR.py
Now it's finally time to discuss the last option. Frankly, I only found out about it quite recently. I've been using mocks for a long time and always had some problems with them. `VCR.py` is a library that simplifies a lot of the tests that make HTTP requests.
It works by recording the HTTP interaction the first time you run the test as a flat YAML file called a _cassette_. Both the request and the response are serialized. When you run the test for the second time, `VCR.py` will intercept the call and return a response for the request made.
Now see how to test `retrieve_weather` using `VCR.py` below:
```
@vcr.use_cassette()
def test_retrieve_weather_using_vcr(fake_weather_info):
    weather_info = retrieve_weather(city="London")
    assert weather_info == WeatherInfo.from_dict(fake_weather_info)
```
_Wow, is that it? No setup? What is that `@vcr.use_cassette()`?_
Yes, that's it! There is no setup, just a decorator that tells VCR to intercept the call and save the cassette file.
_What does the cassette file look like?_
Good question. There's a bunch of things in it. This is because VCR saves every detail of the interaction.
```
interactions:
- request:
    body: null
    headers:
      Accept:
      - '*/*'
      Accept-Encoding:
      - gzip, deflate
      Connection:
      - keep-alive
      User-Agent:
      - python-requests/2.24.0
    method: GET
    uri: https://api.openweathermap.org/data/2.5/weather?q=London&appid=<YOUR API KEY HERE>&units=metric
  response:
    body:
      string: '{"coord":{"lon":-0.13,"lat":51.51},"weather":[{"id":800,"main":"Clear","description":"clearsky","icon":"01d"}],"base":"stations","main":{"temp":16.53,"feels_like":15.52,"temp_min":15,"temp_max":17.78,"pressure":1023,"humidity":72},"visibility":10000,"wind":{"speed":2.1,"deg":40},"clouds":{"all":0},"dt":1600420164,"sys":{"type":1,"id":1414,"country":"GB","sunrise":1600407646,"sunset":1600452509},"timezone":3600,"id":2643743,"name":"London","cod":200}'
    headers:
      Access-Control-Allow-Credentials:
      - 'true'
      Access-Control-Allow-Methods:
      - GET, POST
      Access-Control-Allow-Origin:
      - '*'
      Connection:
      - keep-alive
      Content-Length:
      - '454'
      Content-Type:
      - application/json; charset=utf-8
      Date:
      - Fri, 18 Sep 2020 10:53:25 GMT
      Server:
      - openresty
      X-Cache-Key:
      - /data/2.5/weather?q=london&units=metric
    status:
      code: 200
      message: OK
version: 1
```
_That's a lot!_
Indeed! The good thing is that you don't need to care much about it. `VCR.py` takes care of that for you.
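One practical note: the recorded request keeps the API key in the query string, so if you plan to commit cassettes, it is worth scrubbing it. A minimal sketch, assuming the key travels in the `appid` query parameter as in the URI above and using `VCR.py`'s `filter_query_parameters` option:
```
import vcr

from weather_app import WeatherInfo, retrieve_weather


@vcr.use_cassette(filter_query_parameters=["appid"])
def test_retrieve_weather_with_scrubbed_key(fake_weather_info):
    # The cassette is recorded without the appid value, so no secret is stored.
    weather_info = retrieve_weather(city="London")
    assert weather_info == WeatherInfo.from_dict(fake_weather_info)
```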
#### Pros
Now, for the pros, I can list at least five things:
* No setup code.
  * Tests remain isolated, so they're fast.
* Tests are deterministic.
* If you change the request, like by using incorrect headers, the test will fail.
  * It's not coupled to the implementation, so you can swap the adapters, and the test will pass. The only thing that matters is that your request is the same.
#### Cons
Again, despite the enormous benefits compared to mocking, there are still problems.
If the API provider changes the format of the data for some reason, the test will still pass. Fortunately, this is not very frequent, and API providers usually version their APIs before introducing such breaking changes. Also, unit tests are not meant to access the external API, so there isn't much to do here.
Another thing to consider is having end-to-end tests in place. These tests will call the server every time they run. As the name says, they are broader and slower tests. They cover a lot more ground than unit tests. In fact, not every project will need to have them. So, in my view, `VCR.py` is more than enough for most people's needs.
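That said, if you do decide to add one, a minimal end-to-end sketch could look like the following (assumptions: the app code lives in the `weather_app` module with a real API key configured, and the `end_to_end` marker is hypothetical and would need to be registered in your pytest configuration):
```
import pytest

from weather_app import retrieve_weather


@pytest.mark.end_to_end  # hypothetical marker so the test only runs on demand
def test_retrieve_weather_end_to_end():
    # Hits the real OpenWeatherMap API, so it is slower and needs network access.
    weather_info = retrieve_weather(city="London")
    # Assert only on fields that are always present, not on changing values.
    assert weather_info.temp is not None
    assert weather_info.sunrise
```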
### Conclusion
This is it. I hope you've learned something useful today. Testing API client applications can be a bit daunting. Yet, when armed with the right tools and knowledge, you can tame the beast.
You can find the full app on [my GitHub][8].
* * *
_This article was originally published on the [author's personal blog][9] and has been adapted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/unit-test-python
作者:[Miguel Brito][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/miguendes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
[2]: https://miguendes.me/how-i-set-up-my-python-workspace
[3]: https://opensource.com/sites/default/files/sbzkkiywh.jpeg
[4]: https://en.wikipedia.org/wiki/Mock_object
[5]: https://miguendes.me/7-pytest-plugins-you-must-definitely-use
[6]: https://stackoverflow.com/questions/130794/what-is-dependency-injection
[7]: https://api.openweathermap.org/data/2.5/weather?q=London\&appid=\
[8]: https://github.com/miguendes/tutorials/tree/master/testing_http
[9]: https://miguendes.me/3-ways-to-test-api-client-applications-in-python

View File

@ -0,0 +1,116 @@
[#]: subject: "Pensela: An Open-Source Tool Tailored for Screen Annotations"
[#]: via: "https://itsfoss.com/pensela/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Pensela: An Open-Source Tool Tailored for Screen Annotations
======
_**Brief:** Pensela is an interesting screen annotation tool available cross-platform. Let us take a closer look at it._
You may have come across [several screenshot tools][1] available for Linux. However, a dedicated screen annotation tool along with the ability to take screenshots? And, with cross-platform support?
Well, that sounds even better!
### Pensela: A Useful Screen Annotation Tool
![][2]
While there are many tools to beautify your screenshots, and screenshot tools like Flameshot also offer annotations, Pensela lets you focus on annotations first.
It focuses on offering several annotation options while giving you the ability to take full-size screenshots.
Here, I shall highlight some of its features along with my experience using it.
### Features of Pensela
**Note:** Pensela is a fairly new project on [GitHub][3] with no recent updates to it. If you like what you see, I encourage you to help the project or fork it to add the necessary improvements.
Given that it is a new project with an uncertain future, the feature set it describes is impressive.
Here's what you can expect:
* Cross-platform support (Windows, macOS, and Linux)
  * Drawing shapes (circle, square, triangle, and more)
* Signs for yes or no (or correct or wrong)
* Arrow object
* Double-sided arrow
* Ability to change the color of the objects added
* Undo/Redo option
* Add custom text
* Adjust the placement of text/objects
  * Toggle the annotation tool on or off to use the active window
* Text highlighter
* Screenshot button to take the full-screen picture
* Option for clearing all the drawings
### Using Pensela as Screen Annotation Tool
The moment you launch the tool, your active window becomes unresponsive because the focus shifts to Pensela's annotation layer.
You get the option to toggle it using the visibility button (icon with an eye). If you disable it, you can interact with the active windows and your computer, but you cannot add annotations.
![][4]
When you enable it, the annotations should start working, and the existing ones will be visible.
This should come in handy if you are streaming/screencasting so that you can use the annotations live and toggle them off when needed.
In the same section, you can select the drag button (the one with two double-sided arrows), which lets you move the annotations you have already created; turn it off again when you are done.
You can add a piece of text by clicking on “T” and then tweak it to set its color and placement. The tool gives you the freedom to customize the color of every available object.
The undo/redo feature works like a charm without limits, which is a good thing.
The ability to hide all the annotations with one click and bring them back after finishing other work should come in handy.
![][5]
**Some downsides of Pensela as of now:**
Unfortunately, it does not let you take a screenshot of a specific region on your screen. It only takes a full-screen screenshot, and any annotations you work on need to be full-screen specific for the best results.
Of course, you can manually crop/resize the screenshot later, but that is a limitation I have come across.
Also, you cannot adjust the position of the annotation bar. So, it could be an inconvenience if you want to add an annotation on the top side of your screen.
And, there is no advanced customization option to tweak or change the behavior of how the tools work, how the screenshot is taken, etc.
### Installing Pensela in Linux
You get an AppImage file and a deb file available from its [GitHub releases section][6].
Using an AppImage file should come in handy regardless of your Linux distribution, but feel free to try other options mentioned on its GitHub page.
You should also find it in [AUR][7] on an Arch-based Linux distro.
[Pensela][3]
What do you think about Pensela as an annotation tool? Do you know of any similar annotation tools? Feel free to let me know your thoughts in the comments down below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/pensela/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/take-screenshot-linux/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pensela-screenshot.png?resize=800%2C442&ssl=1
[3]: https://github.com/weiameili/Pensela
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/pensela-visibility.png?resize=575%2C186&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/pensela-folder-screenshot.png?resize=800%2C285&ssl=1
[6]: https://github.com/weiameili/Pensela/releases/tag/v1.1.3
[7]: https://itsfoss.com/aur-arch-linux/

View File

@ -0,0 +1,284 @@
[#]: subject: "Add storage with LVM"
[#]: via: "https://opensource.com/article/21/9/add-storage-lvm"
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Add storage with LVM
======
LVM enables extreme flexibility in how you configure your storage.
![Storage units side by side][1]
Logical Volume Manager (LVM) allows for a layer of abstraction between the operating system and the hardware. Normally, your OS looks for disks (`/dev/sda`, `/dev/sdb`, and so on) and partitions within those disks (`/dev/sda1`, `/dev/sdb1`, and so on).
In LVM, a virtual layer is created between the operating system and the disks. Instead of one drive holding some number of partitions, LVM creates a unified storage pool (called a _Volume Group_) that spans any number of physical drives (called _Physical Volumes_). Using the storage available in a Volume Group, LVM provides what appear to be disks and partitions to your OS.
And the operating system is completely unaware that it's being "tricked."
![Drive space][2]
Opensource.com, [CC BY-SA 4.0][3]
Because the LVM creates volume groups and logical volumes virtually, it makes it easy to resize or move them, or create new volumes, even while the system is running. Additionally, LVM provides features that are not present otherwise, like creating live snapshots of logical volumes, without unmounting the disk first.
A volume group in an LVM is a named virtual container that groups together the underlying physical disks. It acts as a pool from which logical volumes of different sizes can be created. Logical volumes contain the actual file system and can span multiple disks, and don't need to be physically contiguous.
### Features
* Partition names normally have system designations like `/dev/sda1`. LVM volumes have normal human-understandable names, like `home` or `media`.
* The total size of partitions is limited by the size of the underlying physical disk. In LVM, volumes can span multiple disks, and are only limited by the total size of all physical disks in the LVM.
* Partitions can normally only be resized, moved, or deleted when the disk is not in use and is unmounted. LVM volumes can be manipulated while the system is running.
* Partitions can only be expanded by allocating them free space adjacent to the partition. LVM volumes can take free space from anywhere.
* Expanding a partition involves moving the data around to make free space, which is time-consuming and could lead to data loss during a power outage. LVM volumes can take free space from anywhere in the volume group, even on another disk.
  * Because it's so easy to create volumes in an LVM, it encourages creating different volumes, like creating separate volumes to test features or to try different operating systems. With partitions, this process would be time-consuming and error-prone.
  * Snapshots can only be created with LVM. They allow you to create a point-in-time image of the current logical volume, even while the system is running. This is great for backups.
### Test setup
As a demonstration, assume your system has the following drive configuration:
```
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
`-xvda1 202:1    0   8G  0 part /
xvdb    202:16   0   1G  0 disk
xvdc    202:32   0   1G  0 disk
xvdd    202:48   0   2G  0 disk
xvde    202:64   0   5G  0 disk
xvdf    202:80   0   8G  0 disk
```
#### Step 1. Initialize disks to use with LVM
Run `pvcreate /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf`. The output should be:
```
Physical volume "/dev/xvdb" successfully created
Physical volume "/dev/xvdc" successfully created
Physical volume "/dev/xvdd" successfully created
Physical volume "/dev/xvde" successfully created
Physical volume "/dev/xvdf" successfully created
```
See the result using `pvs` or `pvdisplay`:
```
"/dev/xvde" is a new physical volume of "5.00 GiB"
--- NEW Physical volume ---
PV Name               /dev/xvde
VG Name
PV Size               5.00 GiB
Allocatable           NO
PE Size               0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               728JtI-ffZD-h2dZ-JKnV-8IOf-YKdS-8srJtn
"/dev/xvdb" is a new physical volume of "1.00 GiB"
--- NEW Physical volume ---
PV Name               /dev/xvdb
VG Name
PV Size               1.00 GiB
Allocatable           NO
PE Size               0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               zk1phS-7uXc-PjBP-5Pv9-dtAV-zKe6-8OCRkZ
"/dev/xvdd" is a new physical volume of "2.00 GiB"
--- NEW Physical volume ---
PV Name               /dev/xvdd
VG Name
PV Size               2.00 GiB
Allocatable           NO
PE Size               0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               R0I139-Ipca-KFra-2IZX-o9xJ-IW49-T22fPc
"/dev/xvdc" is a new physical volume of "1.00 GiB"
--- NEW Physical volume ---
PV Name               /dev/xvdc
VG Name
PV Size               1.00 GiB
Allocatable           NO
PE Size               0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               FDzcVS-sq22-2b13-cYRj-dXHf-QLjS-22Meae
"/dev/xvdf" is a new physical volume of "8.00 GiB"
--- NEW Physical volume ---
PV Name               /dev/xvdf
VG Name
PV Size               8.00 GiB
Allocatable           NO
PE Size               0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               TRVSH9-Bo5D-JHHb-g0NX-8IoS-GG6T-YV4d0p
```
#### Step 2. Create the volume group
Run `vgcreate myvg /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf`. See the results with `vgs` or `vgdisplay`:
```
--- Volume group ---
VG Name               myvg
System ID
Format                lvm2
Metadata Areas        5
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                5
Act PV                5
VG Size               16.98 GiB
PE Size               4.00 MiB
Total PE              4347
Alloc PE / Size       0 / 0
Free  PE / Size       4347 / 16.98 GiB
VG UUID               ewrrWp-Tonj-LeFa-4Ogi-BIJJ-vztN-yrepkh
```
#### Step 3: Create logical volumes
Run the following commands:
```
lvcreate myvg --name media --size 4G
lvcreate myvg --name home --size 4G
```
Verify the results using `lvs` or `lvdisplay`:
```
--- Logical volume ---
LV Path                /dev/myvg/media
LV Name                media
VG Name                myvg
LV UUID                LOBga3-pUNX-ZnxM-GliZ-mABH-xsdF-3VBXFT
LV Write Access        read/write
LV Creation host, time ip-10-0-5-236, 2017-02-03 05:29:15 +0000
LV Status              available
# open                 0
LV Size                4.00 GiB
Current LE             1024
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           252:0
--- Logical volume ---
LV Path                /dev/myvg/home
LV Name                home
VG Name                myvg
LV UUID                Hc06sl-vtss-DuS0-jfqj-oNce-qKf6-e5qHhK
LV Write Access        read/write
LV Creation host, time ip-10-0-5-236, 2017-02-03 05:29:40 +0000
LV Status              available
# open                 0
LV Size                4.00 GiB
Current LE             1024
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           252:1
```
#### Step 4: Create the file system
Create the file system using:
```
mkfs.ext3 /dev/myvg/media
mkfs.ext3 /dev/myvg/home
```
Mount it:
```
mount /dev/myvg/media /media
mount /dev/myvg/home /home
```
See your full setup using `lsblk`:
```
NAME         MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda         202:0    0   8G  0 disk
`-xvda1      202:1    0   8G  0 part /
xvdb         202:16   0   1G  0 disk
xvdc         202:32   0   1G  0 disk
xvdd         202:48   0   2G  0 disk
xvde         202:64   0   5G  0 disk
`-myvg-media 252:0    0   4G  0 lvm  /media
xvdf         202:80   0   8G  0 disk
`-myvg-home  252:1    0   4G  0 lvm  /home
```
#### Step 5: Extending the LVM
Add a new disk at `/dev/xvdg`. To extend the `home` volume, run the following commands:
```
pvcreate /dev/xvdg
vgextend myvg /dev/xvdg
lvextend -l +100%FREE /dev/myvg/home
resize2fs /dev/myvg/home
```
Run `df -h` and you should see your new size reflected.
And that's it!
LVM enables extreme flexibility in how you configure your storage. Try it out, and have fun with LVM!
* * *
_This article was originally published on the [author's personal blog][4] and has been adapted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/add-storage-lvm
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayushsharma
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-storage.png?itok=95-zvHYl (Storage units side by side)
[2]: https://opensource.com/sites/default/files/lvm.png (Drive space)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://notes.ayushsharma.in/2017/02/working-with-logical-volume-manager-lvm

View File

@ -0,0 +1,149 @@
[#]: subject: "Install AnyDesk on Ubuntu Linux [GUI and Terminal Methods]"
[#]: via: "https://itsfoss.com/install-anydesk-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install AnyDesk on Ubuntu Linux [GUI and Terminal Methods]
======
_**Brief: This beginners tutorial discusses both GUI and terminal methods of installing AnyDesk on Ubuntu-based Linux distributions.**_
[AnyDesk][1] is a popular remote desktop software available for Linux, Windows, BSD, macOS and mobile platforms.
With this tool, you can remotely access another computer using AnyDesk or let someone else remotely access your system. Not everyone can access it just because two devices use AnyDesk. You have to accept the incoming connection and/or provide a password for a secure connection.
This is helpful in providing tech support to friends, family, colleagues or even customers.
In this tutorial, I'll show you both graphical and command line ways of installing AnyDesk on Ubuntu. You can use either method based on your preference. Both methods will install the same AnyDesk version on your Ubuntu system.
The same methods should be applicable to Debian and other Debian- and Ubuntu-based distributions such as Linux Mint, Linux Lite, etc.
Non-FOSS alert!
AnyDesk is not open source software. It is covered here because it is available on Linux and the article's focus is on Linux.
### Method 1: Install AnyDesk on Ubuntu using terminal
[Open the terminal application][2] on your system. You'll need a tool like wget to [download files in the terminal][3]. For that, use the following command:
```
sudo apt update
sudo apt install wget
```
The next step now is to download the GPG key of the AnyDesk repository and add it to your system's trusted keys. This way, your system will trust the software coming from this [external repository][4].
```
wget -qO - https://keys.anydesk.com/repos/DEB-GPG-KEY | sudo apt-key add -
```
You may ignore the deprecation warning about the apt-key command for now. The next step is to add the AnyDesk repository to your system's repository sources:
```
echo "deb http://deb.anydesk.com/ all main" | sudo tee /etc/apt/sources.list.d/anydesk-stable.list
```
Update the package cache so that your system learns about the availability of new applications through the newly added repository.
```
sudo apt update
```
And now, you can install AnyDesk:
```
sudo apt install anydesk
```
Once that is done, you can start AnyDesk from the system menu or from the terminal itself:
```
anydesk
```
You can enjoy AnyDesk now.
![AnyDesk running in Ubuntu][5]
### Method 2: Install AnyDesk on Ubuntu graphically
If you are not comfortable with the command line, no worries. You can also install AnyDesk without going into the terminal.
You can download AnyDesk for Ubuntu from the official AnyDesk website:
[Download AnyDesk for Linux][6]
You'll see a Download Now button. Click on it.
![Download AnyDesk][7]
When you click on the download button, it gives you options for various Linux distributions. Select the one for Ubuntu:
![Download the appropriate file][8]
It will download the DEB file of AnyDesk application. [Installing deb file][9] is easy. Either double click on it or right click and open with Software Install.
![Right click on deb file and open with software center][10]
Software Center application will be opened and you can install it from there.
![Installing AnyDesk in Ubuntu software center][11]
Once installed, search for it in the system menu and start from there.
![AnyDesk installed in Ubuntu][12]
That's it. Not too hard, is it?
I am not going to show the steps for using AnyDesk. I think you already have some idea about that. If not, refer to [this article][13], please.
#### Troubleshooting tip
When I tried to run AnyDesk from the system menu, it didn't start. So, I started it from the terminal and it showed me this error:
```
$ anydesk
anydesk: error while loading shared libraries: libpangox-1.0.so.0: cannot open shared object file: No such file or directory
```
If you see the [error while loading shared libraries][14] message, you can install the package it is complaining about. Here's what I did in my case:
```
sudo apt install libpangox-1.0-0
```
That solved the issue for me and I hope it fixes for you as well.
If you have any questions related to this topic, please let me know in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-anydesk-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://anydesk.com/en
[2]: https://itsfoss.com/open-terminal-ubuntu/
[3]: https://itsfoss.com/download-files-from-linux-terminal/
[4]: https://itsfoss.com/adding-external-repositories-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/anydesk-running-in-ubuntu.png?resize=800%2C418&ssl=1
[6]: https://anydesk.com/en/downloads/linux
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/any-desk-ubuntu-download.webp?resize=800%2C312&ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/any-desk-ubuntu-download-1.webp?resize=800%2C427&ssl=1
[9]: https://itsfoss.com/install-deb-files-ubuntu/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/install-anaydesk-ubuntu.png?resize=800%2C403&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/installing-anydesk-in-ubuntu-software-center.png?resize=781%2C405&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/anydesk-installed-in-ubuntu.png?resize=759%2C196&ssl=1
[13]: https://support.anydesk.com/Access
[14]: https://itsfoss.com/solve-open-shared-object-file-quick-tip/

View File

@ -0,0 +1,320 @@
[#]: subject: "Install PowerShell on Fedora Linux"
[#]: via: "https://fedoramagazine.org/install-powershell-on-fedora-linux/"
[#]: author: "TheEvilSkeletonOzymandias42 https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install PowerShell on Fedora Linux
======
![][1]
Photos by [NOAA][2] and [Cedric Fox][3] on [Unsplash][4]
PowerShell (also written pwsh) is a powerful open source command-line and object-oriented shell developed and maintained by Microsoft. It is syntactically verbose and intuitive for the user. This article is a guide on how to install PowerShell on the host and inside a Podman or Toolbox container.
### Table of contents
* [Why use PowerShell][5]
* [Demonstration][6]
* [Comparison between Bash and PowerShell][7]
* [Install PowerShell][8]
* [Install PowerShell on a host using the package manager][9]
* [Method 1: Microsoft repositories][10]
* [Method 2: RPM file][11]
* [Install via container][12]
* [Method 1: Podman container][13]
* [Method 2: Fedora Linux Toolbox container][14]
### Why use PowerShell
PowerShell, as the name suggests, is _power_ful. The syntax is verbose and semantically clear to the end user. For those that don't want to write long commands all the time, most commands are aliased. The aliases can be viewed with _Get-Alias_ or [here][15].
The most important difference between PowerShell and traditional shells, however, is its output pipeline. While normal shells output strings or character streams, PowerShell outputs objects. This has far reaching implications for how command pipelines work and comes with quite a few advantages.
#### Demonstration
The following examples illustrate the verbosity and simplicity. Lines that start with the pound symbol (**#**) are comments. Lines that start with **PS >** are commands, **PS >** being the prompt:
```
# Return all files greater than 50MB in the current directory.
## Longest form
PS > Get-Childitem | Where-Object Length -gt 50MB
## Shortest form (with use of aliases)
PS > gci | ? Length -gt 50MB
## Output looks like this
Directory: /home/Ozymandias42/Downloads
Mode LastWriteTime Length Name
---- ------------- ------ ----
----- 20/08/2020 13:55 2000683008 40MB-file.img
# In order: get VMs, get snapshots, only select the last 3 and remove selected list:
PS > Get-VM VM-1 | Get-Snapshot | Select-Object -Last 3 | Remove-Snapshot
```
What this shows quite well is that input-output reformatting with tools like _cut_, _sed_, _awk_ or similar, which Bash scripts often need, is usually not necessary in PowerShell. The reason for this is that PowerShell works fundamentally different than traditional POSIX shells such as Bash, Zsh, or other shells like Fish. The commands of traditional shells are output as strings whereas in PowerShell they are output as objects.
#### Comparison between Bash and PowerShell
The following example illustrates the advantages of the object-output in PowerShell in contrast to the traditional string-output in Bash. Suppose you want a script that outputs all processes that occupy 200MB or more in RAM. With Bash, this might look something like this:
```
$ ps -eO rss | awk -F' ' \
'{ if($2 >= (1024*200)) { \
printf("%s\t%s\t%s\n",$1,$2,$6);} \
}'
PID RSS COMMAND
A B C
[...]
```
The first obvious difference is readability or, more specifically, semantic clarity. Neither _ps_ nor _awk_ are self-descriptive. _ps_ shows the **p**rocess **s**tatus and _awk_ is a text processing tool and language whose letters are the initials of its developers' last names, **A**ho, **W**einberger, **K**ernighan (see [Wikipedia][16]). Before contrasting it with PowerShell however, examine the script:
* _ps -e_ outputs all running processes;
* _-O rss_ outputs the default output of _ps_ plus the amount of kilobytes each process uses, the _rss_ field; this output looks like this:
```
  PID   RSS S TTY          TIME COMMAND
    1 13776 S ?        00:00:01 /usr/lib/systemd/systemd
```
* | pipe operator uses the output of the command on the left side as input for the command on the right side.
  * _awk -F' '_ declares “space” as the input field separator. So going with the above example, PID is the first, RSS the second and so on.
  * _{ if($2 >= (1024*200)_ is the beginning of the actual AWK-script. It checks whether field 2 ([RSS][17]) contains a number larger than or equal to 1024*200KB (204800KB, or 200MB);
* _{ printf(“%s\t%s\t%s\n”,$1,$2,$6);} }_ continues the script. If the previous part evaluates to true, this outputs the first, second and sixth field ([PID][18], [RSS][17] and COMMAND fields respectively).
With this in mind, step back and look at what was required for this script to be written and for it to work:
* The input command _ps_ had to have the field we wanted to filter against in its output. This was not the case by default and required us to use the _-O_ flag with the _rss_ field as argument.
* We had to treat the output of _ps_ as a list of input fields, requiring us to know their order and structure. Or in other words, we had to at least _know_ that _RSS_ would be the second field. Meaning we had to know how the output of _ps_ would look beforehand.
  * We then had to know what unit the data we were filtering against was in as well as what unit the processing tool would work in. Meaning we had to know that the _RSS_ field uses kilobytes and that _awk_ does too. Otherwise we would not have been able to write the expression _($2 >= 1024*200)_
Now, contrast the above with the PowerShell equivalent:
```
# Longest form
PS > Get-Process | Where-Object WorkingSet -ge 200MB
# Shortest form (with use of aliases)
PS > gps | ? ws -ge 200MB
NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
A B C D E F G
[...]
```
The first thing to notice is that we have perfect semantic clarity. The commands are perfectly self-descriptive in what they do.
Furthermore there is no requirement for input-output reformatting, nor is there concern about the unit used by the input command. The reason for this is that PowerShell does not output strings, but objects.
To understand this, think about the following. In Bash, the output of a command is equal to what it prints out in the terminal. In PowerShell, what is printed on the terminal is not equal to the information that is actually available. This is because the output-printing system in PowerShell also works with objects. Every command in PowerShell marks some of the properties of its output objects as printable and others not. However, it always includes all properties, whereas Bash only includes what it actually prints. One can think of it like JSON objects. Where output in Bash would be separated into “fields” by a delimiter such as a space or tab, it becomes an easily addressable object property in PowerShell, with the only requirement being that one has to know its name, like _WorkingSet_ in the above example.
To see all available properties of a command's output objects and their types, one can simply do something like:
```
PS > Get-Process | Get-Member
```
### Install PowerShell
PowerShell is available in several package formats, including RPM used by Fedora Linux. This article shows how to install PowerShell on Fedora Linux using various methods.
I recommend installing it natively, but I will also show how to do it in a container, using both the official Microsoft PowerShell container and a Fedora Linux 30 toolbox container. The advantages of the container method are that it's guaranteed to work, since all dependencies are bundled in it, and that it is isolated from the host. Regardless, I recommend doing it natively, despite the official docs only explicitly stating Fedora Linux releases 28 to 30 as being supported.
**Note:** Supported means guaranteed to work. It does not mean that other releases are incompatible. This means that, while not guaranteed, releases higher than 30 should still work. They did in fact work in our tests.
It is more difficult to set up PowerShell and run it in a container than to run it directly on a host. It takes more time to install and you will not be able to run host commands directly.
#### Install PowerShell on a host using the package manager
##### Method 1: Microsoft repositories
Installation is as straightforward as can be, and the procedure doesn't differ from that of any other software installed through third-party repositories.
It can be split into four general steps:
  1. Adding the new repository's GPG key
2. Adding repository to DNF repository list
3. Refreshing DNF cache to include available packages from the new repository
4. Installing new packages
PowerShell is then launched with the command _pwsh_.
```
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
$ sudo dnf makecache
$ sudo dnf install powershell
$ pwsh
```
To remove the repository and packages, run the following.
```
$ sudo rm /etc/yum.repos.d/microsoft.repo
$ sudo dnf remove powershell
```
##### Method 2: RPM file
This method is not meaningfully different from the first method. In fact it adds the GPG key and the repository implicitly when installing the RPM file. This is because the RPM file contains the link to both in its metadata.
First, get the _.rpm_ file for the version you want from the [PowerShell Core GitHub repository][19]. See the readme.md “Get PowerShell” table for links.
Second, enter the following:
```
$ sudo dnf install powershell-<version>.rhel.7.<architecture>.rpm
```
Substitute _<version>_ and _<architecture>_ with the version and architecture you want to use respectively, for example [powershell-7.1.3-1.rhel.7.x86_64.rpm][20].
Alternatively you could even run it with the link instead, skipping the need to download it first.
```
$ sudo dnf install https://github.com/PowerShell/PowerShell/releases/download/v<version>/powershell-<version>.rhel.7.<architecture>.rpm
```
To remove PowerShell, run the following.
```
$ sudo dnf remove powershell
```
#### Install via container
##### Method 1: Podman container
Podman is an [Open Container Initiative][21] (OCI) compliant drop-in replacement for Docker.
Microsoft provides a [PowerShell Docker container][22]. The following example will use that container with Podman.
For more information about Podman, visit [Podman.io][23]. Fedora Magazine has a [tag][24] dedicated to Podman.
To use PowerShell in Podman, run the following script:
```
$ podman run \
-it \
--privileged \
--rm \
--name powershell \
--env-host \
--net=host --pid=host --ipc=host \
--volume $HOME:$HOME \
--volume /:/var/host \
mcr.microsoft.com/powershell \
/usr/bin/pwsh -WorkingDirectory $(pwd)
```
This script creates a Podman container for PowerShell and immediately attaches to it. It also mounts the _/home_ and the host's root directories into the container so they're available there. However, the host's root directory is available in _/var/host_.
Unfortunately, you can only indirectly run host commands while inside the container. As a workaround, run _chroot /var/host_ to chroot to the root and then run host commands.
To break the command down, everything is mandatory unless specified:
* -it creates a persistent environment that does not kick you out when you enter it;
  * --privileged gives extended privileges to the container (optional);
  * --rm removes the container when you exit;
  * --name powershell sets the name of the container to _powershell_;
  * --env-host sets all host environment variables to the container's variables (optional);
  * --volume $HOME:$HOME mounts the user directory;
  * --volume /:/var/host mounts the root directory to _/var/host_ (optional);
  * --net=host --pid=host --ipc=host runs the process in the host's namespaces instead of a separate set of namespaces for the contained process;
  * mcr.microsoft.com/powershell enters the container;
* /usr/bin/pwsh -WorkingDirectory $(pwd) enters the container in the current directory (optional).
Optional but very convenient: alias _pwsh_ with the script to easily access the Podman container by typing _pwsh_.
To remove the PowerShell image, run the following.
```
$ podman rmi mcr.microsoft.com/powershell
```
##### Method 2: Fedora Linux Toolbox container
Toolbox is an elegant solution to set up persistent environments without affecting the host system as a whole. It acts as a wrapper around Podman and takes care of supplying a lot of the flags demonstrated in the previous method. For this reason, Toolbox is a lot easier to use than Podman. It was designed to work for development and debugging. With Toolbox, you can run any command the same as you would directly on the Fedora Workstation host (including _dnf_).
The installation procedure is similar to the installation on the host methods, with the only difference being that those steps are done inside a container. Make sure you have the _toolbox_ package installed.
Preparing and entering the Fedora 34 Toolbox container is a two step process:
1. Creating the Fedora 34 Toolbox container
2. Running the Fedora 34 Toolbox container
```
$ toolbox create --image registry.fedoraproject.org/f34/fedora-toolbox
$ toolbox enter --container fedora-toolbox
```
Then, follow the instructions at [Method 1: Microsoft repositories][10].
Optional but very convenient: alias _pwsh_ with _toolbox run --container fedora-toolbox pwsh_ to easily access the Toolbox container by typing _pwsh_.
To remove the Toolbox container, make certain you have stopped the Toolbox session by entering _exit_ and then run the following:
```
$ podman kill fedora-toolbox
$ toolbox rm fedora-toolbox
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/install-powershell-on-fedora-linux/
作者:[TheEvilSkeletonOzymandias42][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/05/powershell-816x345.jpg
[2]: https://unsplash.com/@noaa?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/@thecedfox?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://unsplash.com/s/photos/shell?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[5]: tmp.c7U2gcu9Hl#why-use-powershell
[6]: tmp.c7U2gcu9Hl#demonstration
[7]: tmp.c7U2gcu9Hl#comparison-between-bash-and-powershell
[8]: tmp.c7U2gcu9Hl#install-powershell
[9]: tmp.c7U2gcu9Hl#install-on-host-via-package-manager
[10]: tmp.c7U2gcu9Hl#method-1-microsoft-repositories
[11]: tmp.c7U2gcu9Hl#method-2-rpm-file
[12]: tmp.c7U2gcu9Hl#install-via-container
[13]: tmp.c7U2gcu9Hl#method-1-podman-container
[14]: tmp.c7U2gcu9Hl#method-2-fedora-toolbox-container
[15]: https://ilovepowershell.com/2011/11/03/list-of-top-powershell-alias/
[16]: https://en.wikipedia.org/wiki/AWK
[17]: https://en.wikipedia.org/wiki/Resident_set_size
[18]: https://en.wikipedia.org/wiki/Process_identifier
[19]: https://github.com/PowerShell/PowerShell
[20]: https://github.com/PowerShell/PowerShell/releases/download/v7.1.3/powershell-7.1.3-1.rhel.7.x86_64.rpm
[21]: https://opencontainers.org/
[22]: https://hub.docker.com/_/microsoft-powershell
[23]: https://podman.io/
[24]: https://fedoramagazine.org/tag/podman/

View File

@ -0,0 +1,115 @@
[#]: subject: "My favorite LibreOffice productivity tips"
[#]: via: "https://opensource.com/article/21/9/libreoffice-tips"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
My favorite LibreOffice productivity tips
======
Here are some LibreOffice keyboard shortcuts and formatting tips that
might save you valuable time.
![woman on laptop sitting at the window][1]
LibreOffice is my productivity application of choice. It's one of the most potent reasons for recommending Linux distributions to educators and students, whether PK-12 or higher education. Now that the school year is upon us, I thought I would recommend some LibreOffice shortcuts and tips that might save you valuable time.
### Work faster with keyboard shortcuts
I use a lot of keyboard shortcuts. Here are the most common shortcuts that apply to all LibreOffice applications.
* **Ctrl**+**N**—Create a new document
* **Ctrl**+**O**—Open a document
* **Ctrl**+**S**—Save a document
* **Ctrl**+**Shift**+**S**—Save as
* **Ctrl**+**P**—Print a document
Here are some shortcut keys just for LibreOffice Writer:
* **Home**—Takes you to the beginning of the current line.
* **End**—Takes you to the end of a line.
* **Ctrl**+**Home**—Takes the cursor to the start of the document
* **Ctrl**+**End**—Takes the cursor to the end of the document
* **Ctrl**+**A**—Select All
* **Ctrl**+**D**—Double Underline
* **Ctrl**+**E**—Centered
* **Ctrl**+**H**—Find and Replace
* **Ctrl**+**L**—Align Left
* **Ctrl**+**R**—Align Right
Function keys have value too:
* **F2**—Opens the formula bar
* **F3**—Completes auto-text
* **F5**—Opens the navigator
* **F7**—Opens spelling and grammar
* **F11**—Opens styles and formatting
* **Shift**+**F11**—Creates a new style
### Document formats
There are lots of document formats out there, and LibreOffice supports a good number of them. By default, LibreOffice saves documents to the Open Document Format, an open source standard that stores stylesheets and data in a ZIP container labeled as ODT for text documents, ODS for spreadsheets, and ODP for presentations. It's a flexible format and is maintained by the LibreOffice community as well as the Document Foundation.
The Open Document Format is on by default, so you don't need to do anything to get LibreOffice to use it.
Another open specification for documents is Microsoft's [Office Open XML format][2]. It's an ISO standard and is well supported by all the major office solutions.
If you work with folks using Microsoft Office (which itself is not open source, but it does use the open OOXML format), then they're definitely used to DOCX, XLSX, and PPTX formats and probably can't open ODT, ODS, or ODP files. You can avoid a lot of confusion by setting LibreOffice to save to OOXML by default.
To set OOXML as your preferred format: 
1. Click on the **Tools** menu and select **Options** at the bottom of the menu.
2. In the **Options** window, click on the **Load/Save** category in the left panel and select **General**.
![LibreOffice settings panel][3]
(Don Watkins, [CC BY-SA 4.0][4])
3. Navigate to the **Default File Format and ODF Settings** section.
  4. Choose _Text document_ for the **Document type** and choose _Open XML (Transitional) (*.docx)_ for the **Always save as** drop-down list.
5. Click **Apply** and then **OK**. 
6. Deselect the **Warn when not saving in ODF or default format** to avoid confirmation dialogue boxes when saving.
![LibreOffice save formats][5]
(Don Watkins, [CC BY-SA 4.0][4])
Repeat the same process for XLSX and PPTX documents by following the same logic.
### Free your office
The LibreOffice project is managed by a thriving community of users and developers, in tandem with the Document Foundation. This includes an Engineering Steering Committee, a Board of Directors, independent developers and designers and translators, and more. These teams are always open to your contribution, so if you're eager to participate in a great open source project, don't hesitate to [get involved][6].
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/libreoffice-tips
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://www.iso.org/standard/71691.html
[3]: https://opensource.com/sites/default/files/uploads/libreoffice-panel.jpg (LibreOffice settings panel)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/libreoffice-save-format.jpg (LibreOffice save formats)
[6]: https://www.libreoffice.org/community/get-involved/

View File

@ -0,0 +1,189 @@
[#]: subject: "Build your website with Jekyll"
[#]: via: "https://opensource.com/article/21/9/build-website-jekyll"
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Build your website with Jekyll
======
Jekyll is an open source static site generator. You can write your
content in Markdown, use HTML/CSS for structure and presentation, and
Jekyll compiles it all into static HTML.
![Person using a laptop][1]
Static website generators and JAMStack have taken off in recent years. And with good reason. With only static HTML, CSS, and JavaScript to serve, there is no need for complex backends. Not having backends means better security, lower operational overhead, and cheaper hosting. A win-win-win!
In this article, I'm going to talk about Jekyll. As of this writing, [my personal website uses Jekyll][2]. Jekyll uses a Ruby engine to convert articles written in Markdown to generate HTML. [Sass][3] allows merging complex CSS rules into flat files. [Liquid][4] allows some programmatic control over otherwise static content.
### Install Jekyll
The [Jekyll website][5] has installation instructions for Linux, MacOS, and Windows. After installation, the [Quickstart guide][6] will set up a basic Hello-World project.
Now visit `http://localhost:4000` in your browser. You should see your default "awesome" blog.
![Default "awesome" blog][7]
Screenshot by Ayush Sharma, [CC BY-SA 4.0][8]
### Directory structure
The default site contains the following files and folders:
* `_posts`: Your blog entries.
* `_site`: The final compiled static website.
* `about.markdown`: Content for the about page.
* `index.markdown`: Content for the home page.
* `404.html`: Content for the 404 page.
* `_config.yml`: Site-wide configuration for Jekyll.
### Creating new blog entries
Creating posts is simple. All you need to do is create a new file under `_posts` with the proper format and extension, and you're all set.
A valid file name is `2021-08-29-welcome-to-jekyll.markdown`. A post file must contain what Jekyll calls the YAML Front Matter. It's a special section at the beginning of the file with the metadata. If you look at the default post, you'll see the following:
```
---
layout: post
title: "Welcome to Jekyll!"
date:  2021-08-29 11:28:12 +0530
categories: jekyll update
---
```
Jekyll uses the above metadata, and you can also define custom `key: value` pairs. If you need some inspiration, [have a look at my website's front matter][9]. Aside from the front matter, you can [use in-built Jekyll variables][10] to customize your website.
Let's create a new post. Create `2021-08-29-ayushsharma.markdown` in the `_posts` folder. Add the following content:
```
---
layout: post
title:  "Check out ayushsharma.in!"
date:   2021-08-29 12:00:00 +0530
categories: mycategory
---
This is my first post.
# This is a heading.
## This is another heading.
This is a [link](http://notes.ayushsharma.in)
This is my category:
```
If the `jekyll serve` command is still running, refresh the page, and you'll see the new entry below.
![New blog entry][11]
Screenshot by Ayush Sharma, [CC BY-SA 4.0][8]
Congrats on creating your first article! The process may seem simple, but there's a lot you can do with Jekyll. Using simple markdown, you can generate an archive of posts, syntax highlighting for code snippets, and separate pages for posts in one category.
### Drafts
If you're not ready to publish your content yet, you can create a new `_drafts` folder. Markdown files in this folder are only rendered by passing the `--drafts` argument.
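For example (the draft filename here is just a placeholder):
```
mkdir -p _drafts

# a draft is a normal post file, but without the date prefix in its name
touch _drafts/my-upcoming-post.markdown

# render drafts alongside published posts
jekyll serve --drafts
```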
### Layouts and Includes
Look at the front matter of the two articles in our `_posts` folder, and you'll see `layout: post` in both. The `_layouts` folder contains all the layouts. You won't find them in your source code because Jekyll loads them from the default theme. The default source code used by Jekyll is [here][12]. If you follow the link, you'll see that the `post` layout uses the [`default` layout][13]. The default layout contains the code `{{ content }}`, which is where content is injected. The layout files also contain `include` directives. These load files from the [`_includes` folder][14] and allow composing a page using different components.
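To make that concrete, a stripped-down layout might look something like this (a simplified sketch, not the actual minima source):
```
<!DOCTYPE html>
<html>
  {% include head.html %}
  <body>
    {% include header.html %}
    <main>
      {{ content }}
    </main>
    {% include footer.html %}
  </body>
</html>
```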
Overall, this is how layouts work—you define them in the front matter and inject your content within them. Includes provide other sections of the page to compose a whole page. This is a standard web-design technique—defining header, footer, aside, and content elements and then injecting content within them. This is the real power of static site generators—full programmatic control over assembling your website with final compilation into static HTML.
### Pages
Not all content on your website will be an article or a blog post. You'll need about pages, contact pages, project pages, or portfolio pages. This is where Pages come in. They work exactly like Posts do, meaning they're markdown files with front matter. But they don't go in the `_posts` directory. They either stay in your project root or in folders of their own. For Layouts and Includes, you can use the same ones as you do for your Posts or create new ones. Jekyll is very flexible and you can be as creative as you want! Your default blog already has `index.markdown` and `about.markdown`. Feel free to customize them as you wish.
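For example, a hypothetical `projects.markdown` in the project root could look like this (the `page` layout ships with the default theme):
```
---
layout: page
title: Projects
permalink: /projects/
---

A few things I have been working on.
```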
### Data files
Data files live in the `_data` directory, and can be `.yml`, `.json`, or `.csv`. For example, a `_data/members.yml` file may contain:
```
- name: A
  github: a@a
- name: B
  github: b@b
- name: C
  github: c@c
```
Jekyll reads these during site generation. You can access them using `site.data.members`.
```
<ul>
{% for member in site.data.members %}
  <li>
    <a href="https://github.com/{{ member.github }}">
      {{ member.name }}
    </a>
  </li>
{% endfor %}
</ul>
```
### Permalinks
Your `_config.yml` file defines the format of your permalinks. You can [use a variety of default variables][16] to assemble your own custom permalink.
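For example, one possible pattern in `_config.yml` built from those variables:
```
# _config.yml
# :categories, :year, :month, :day and :title are built-in placeholders
permalink: /:categories/:year/:month/:day/:title/
```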
### Building your final website
The command `jekyll serve` is great for local testing. But once you're done with local testing, you'll want to build the final artifact to publish. The command `jekyll build --source source_dir --destination destination_dir` builds your website into the destination folder (`_site` by default). Note that this folder is cleaned up before every build, so don't place important things in there. Once you have the content, you can host it on a static hosting service of your choosing.
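As a rough sketch, a production build (using Bundler, as the default project does; the destination path is just an example) might look like this:
```
# build the final static site into _site/
JEKYLL_ENV=production bundle exec jekyll build

# or build into a custom destination
bundle exec jekyll build --destination /tmp/my-site
```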
You should now have a decent overall grasp of what Jekyll is capable of and what the main bits and pieces do. If you're looking for inspiration, the official [JAMStack website has some amazing examples][17].
![Example Jekyll sites from JAMStack][18]
Screenshot by Ayush Sharma, [CC BY-SA 4.0][8]
Happy coding :)
* * *
_This article was originally published on the [author's personal blog][19] and has been adapted with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/build-website-jekyll
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayushsharma
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/laptop_screen_desk_work_chat_text.png?itok=UXqIDRDD (Person using a laptop)
[2]: https://gitlab.com/ayush-sharma/ayushsharma-in
[3]: https://sass-lang.com/
[4]: https://shopify.github.io/liquid/
[5]: https://jekyllrb.com/docs/installation/
[6]: https://jekyllrb.com/docs/
[7]: https://opensource.com/sites/default/files/uploads/2016-08-15-introduction-to-jekyll-welcome-to-jekyll.png (Default "awesome" blog)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://gitlab.com/ayush-sharma/ayushsharma-in/-/blob/2.0/_posts/2021-07-15-the-evolution-of-ayushsharma-in.md
[10]: https://jekyllrb.com/docs/variables/
[11]: https://opensource.com/sites/default/files/uploads/2016-08-15-introduction-to-jekyll-new-article.png (New blog entry)
[12]: https://github.com/jekyll/minima/blob/master/_layouts/post.html
[13]: https://github.com/jekyll/minima/blob/master/_layouts/default.html#L12
[14]: https://github.com/jekyll/minima/tree/master/_includes
[16]: https://jekyllrb.com/docs/permalinks/
[17]: https://jamstack.org/examples/
[18]: https://opensource.com/sites/default/files/uploads/2016-08-15-introduction-to-jekyll-jamstack-examples.png (Example Jekyll sites from JAMStack)
[19]: https://notes.ayushsharma.in/2021/08/introduction-to-jekyll

View File

@ -0,0 +1,67 @@
[#]: subject: "Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!"
[#]: via: "https://fedoramagazine.org/fedora-linux-earns-recognition-from-the-digital-public-goods-alliance-as-a-dpg/"
[#]: author: "Justin W. FloryAlberto Rodriguez SanchezMatthew Miller https://fedoramagazine.org/author/jflory7/https://fedoramagazine.org/author/bt0dotninja/https://fedoramagazine.org/author/mattdm/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!
======
![][1]
In the Fedora Project community, [we look at open source][2] as not only code that can change how we interact with computers, but also as a way for us to positively influence and shape the future. The more hands that help shape a project, the more ideas, viewpoints, and experiences the project represents — that's truly what the spirit of open source is built from.
But it's not just the global contributors to the Fedora Project who feel this way. August 2021 saw Fedora Linux recognized as a digital public good by the [Digital Public Goods Alliance (DPGA)][3], a significant achievement and a testament to the openness and inclusivity of the project.
We know that digital technologies can save lives, improve the well-being of billions, and contribute to a more sustainable future. We also know that in tackling those challenges, Open Source is uniquely positioned in the world of digital solutions by inherently welcoming different ideas and perspectives critical to lasting success.
But, we also know that many regions and countries around the world do not have access to those technologies. Open Source technologies can be the difference between achieving the [Sustainable Development Goals][4] (SDGs) by 2030 or missing the targets. Projects like Fedora Linux, which [represent much more than code itself][2], are the game-changers we need. Already, individuals, organizations, governments, and Open Source communities, including the Fedora Projects own, are working to make sure the potential of Open Source is realized and equipped to take on the monumental challenges being faced.
The Digital Public Goods Alliance is a multi-stakeholder initiative, endorsed by the United Nations Secretary-General. It works to accelerate the attainment of the SDGs in low- and middle-income countries by facilitating the discovery, development, use of, and investment in digital public goods (DPGs). DPGs are Open Source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, and do no harm. This definition, drawn from the UN Secretary-General's [2020 Roadmap for Digital Cooperation][5], serves as the foundation of the DPG Registry, an online repository for DPGs.
The DPG Registry was created to help increase the likelihood of discovery, and therefore use of, DPGs. Today, we are excited to share that Fedora Linux was added to the [DPG Registry][6]! Recognition as a DPG increases the visibility, support for, and prominence of open projects that have the potential to tackle global challenges. To become a digital public good, all projects are required to meet the [DPG Standard][7] to ensure they truly encapsulate Open Source principles. 
As an Open Source leader, Fedora Linux can make achieving the SDGs a reality through its role as a convener of many Open Source “upstream” communities. In addition to providing a fully-featured desktop, server, cloud, and container operating system, it also acts as a platform where different Open Source software and work come together. Fedora Linux by default only ships its releases with purely Open Source software packages and components. While third-party repositories are available for use with proprietary packages or closed components, Fedora Linux is a complete offering with some of the greatest innovations that Open Source has to offer. Collectively this means Fedora Linux can act as a gateway, empowering the creation of more and better solutions to better tackle the challenges they are trying to address.
The DPG designation also aligns with Fedora's fundamental foundations:
* **Freedom**: Fedora Linux was built as Free and Open Source Software from the beginning. Fedora Linux only ships and distributes Free Software from its default repositories. Fedora Linux already uses widely-accepted Open Source licenses.
* **Friends**: Fedora has an international community of hundreds spread across six continents. The Fedora Community is strong and well-positioned to scale as the upstream distribution of the world's most widely used enterprise flavor of Linux.
* **Features**: Fedora consistently delivers on innovation and features in Open Source. Fedora Linux 34 was a record-breaking release, with 63 new approved Changes in the last release.
* **First**: Fedora leverages its unique position and resources in the Free Software world to deliver on innovation. New ideas and features are tried out in the Fedora Community to discover what works and what doesn't. We have many stories of both.
![][8]
For us, recognition as a digital public good brings honor and is a great moment for us, as a community, to reaffirm our commitment to contribute and grow the Open Source ecosystem.
This is a proud moment for each Fedora Community member because we are making a difference. Our work matters and has value in creating an equitable world; this is a fantastic and important feeling.
If you have an interest in learning more about the Digital Public Goods Alliance please reach out to [hello@digitalpublicgoods.net][9].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-linux-earns-recognition-from-the-digital-public-goods-alliance-as-a-dpg/
作者:[Justin W. FloryAlberto Rodriguez SanchezMatthew Miller][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jflory7/https://fedoramagazine.org/author/bt0dotninja/https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/09/DPG_recognition-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/project/
[3]: https://digitalpublicgoods.net/frequently-asked-questions/
[4]: https://sdgs.un.org/goals
[5]: https://www.un.org/en/content/digital-cooperation-roadmap/
[6]: http://digitalpublicgoods.net/registry/
[7]: http://digitalpublicgoods.net/standard/
[8]: https://lh6.googleusercontent.com/lzxUQ45O79-kK_LHsokEChsfMCyAz4fpTx1zEaj6sN_-IiJp5AVqpsISdcxvc8gFCU-HBv43lylwkqjItSm1X1rG_sl9is1ou9QbIUpJTGyzr4fQKWm_QujF55Uyi-hRrta1M9qB=s0
[9]: mailto:hello@digitalpublicgoods.net

View File

@ -0,0 +1,233 @@
[#]: subject: "JingPad Review: A Real Linux Tab for True Linux Fans"
[#]: via: "https://itsfoss.com/jingpad-a1-review/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
JingPad Review: A Real Linux Tab for True Linux Fans
======
If you follow Linux news enthusiastically, you might have come across a new name recently: [JingOS][1].
It is a new Linux distribution designed for touch devices. It can also be used as a regular desktop operating system on non-touch devices.
![JingOS][2]
JingOS is not the primary product of Beijing-based JingLing tech. They have created this OS for their flagship product, the JingPad, an ARM-based Linux tablet.
This review focuses on two aspects:
* The hardware side of JingPad tablet
* The software side of JingOS
So that you know
People at Jing sent me the JingPad for free to review. I can keep the device forever. However, this does not mean the review is biased in their favor to show only the positives. I am sharing my experience with the device and its operating system.
### JingPad: The first impression
![JingPad A1][3]
To be frank with you, I was expecting a mediocre, generic tablet preinstalled with a Linux OS. I was wrong.
The moment I unboxed the device and held it in my hands, I got the feeling that I was looking at a premium device. The black-colored tablet has a smooth finish. The back is glossy, and though I usually prefer matte, I liked the shiny body. I learned later that they used Corning Gorilla Glass for the back.
The hardware specification of the gadget is more than just decent. Take a look:
* 11-inch AMOLED screen with a 4:3 aspect ratio
* 2K resolution with 266 PPI and 350 nits of brightness
* Unisoc Tiger T7510 ARM Chip
* 8000mAh battery
* Weighs around 550g so not too heavy
* Around 7 mm in thickness
* 16 MP back camera and 8 MP front camera
* Dual band WiFi and Bluetooth 5.0
* 8 GB RAM
* 256 GB UMCP storage which can be expanded by MicroSD of 512 GB in size
JingPad also has a companion stylus and keyboard. The review unit did not include the stylus, but I did get the detachable keyboard. More on the keyboard later.
Note
JingOS is in the alpha stage of development. A lot of things do not work as expected or promised at this moment. It should eventually get better in the later stages of development.
### JingOS: Experience the user interface
JingOS will immediately remind you of iOS or Deepin OS, whichever you prefer. The interface is clean, and the 2K display makes things look pretty.
There is a row of application icons and a dock at the bottom. Swiping up from the bottom brings up an activity area displaying all the running applications. From here, you can drag an application upward to close it. Touching the trash icon will close all the running applications. To minimize an application, you have to swipe from right to left.
![JingOS interface][4]
Swiping down from the top left brings the notification center. Doing the same on the top right corner lets you access the Settings menu.
The theming option is limited to light and dark themes but hey, you are getting the dark theme at least. Changing the theme requires a restart, which is an annoyance.
![JingOS provides a dark mode][5]
From what I see and experience, JingOS uses Ubuntu as its base. For the interface, it uses KDE Plasma and probably some parts of GNOME underneath. The footprints of KDE Plasma are visible in places. For example, look at this screenshot notification.
![JingOS is built on top of KDE Plasma][6]
Since it is based on Ubuntu, you can use apt-get commands in the terminal. The terminal (KDE's Konsole) can be used with both the physical keyboard and the on-screen keyboard. You also have the option to split the terminal window.
![Terminal][7]
Something that bothered me with JingOS is that it only works in the landscape mode. I could not find a way to use it in the portrait mode. This creates an issue if you are trying to log into a website in the browser because the on-screen keyboard takes half of the screen and you cannot see the form fields properly.
![Works only in landscape mode][8]
There is also no tap-to-wake feature. Face unlock is missing as well. There is a fingerprint reader on the power button, but it does not work for now. JingOS will add this feature in the [next couple of months through a software update][9].
Overall, JingOS is pretty to look at and, for the most part, pleasant to use.
### Keyboard
The review device I got came with the companion keyboard. The keyboard is made for the JingPad. It is magnetic and the tab sticks to it. It is detected automatically, no surprises there. It doubles up as a cover to give the device front and back protection.
![JingPad with keyboard][10]
You can tilt the device at certain angles to use it comfortably as a laptop.
![JingPad with keyboard placed at angle][11]
**At the time of writing this review, the keyboard support is in initial stages for JingOS. This is why it does not work as it should.**
There are several function keys on the keyboard for controlling the volume, brightness, media playback, screen lock etc.
Most function keys work, except the one for showing the desktop. The super key, which has the JingPad logo, does not work either. I was expecting it to bring up the home screen from the running applications view, but that did not work.
The worst part is that when the keyboard is attached, it does not show a mouse pointer on the screen. That means going into a hybrid use mode where you touch the application icons to open them and then use the keyboard for typing. This must be improved in the future.
Coming to the typing, the keys are a bit tiny but not minuscule. I think that's expected from a keyboard that has to match the 11-inch screen.
The added weight of the keyboard makes the device heavier than usual, but I guess you'll have to make a compromise between ease of use and the increased weight.
Another thing I noticed is that when you put the lid down in keyboard mode, the screen is not locked automatically. I expected behavior similar to closing the lid of a laptop.
### Battery life, charging and performance
JingPad comes with an 8000 mAh battery and claims up to 10 hours of battery life. They also claim that their 18W fast charger will charge the device completely in 3 hours.
My review device did not come with the fast charger because it was still under manufacturing when they shipped the device. I got a travel adapter instead and it took slightly more than 5 hours to charge the device.
I tested and found that if you keep the device on standby, the battery lasts around 42 hours. If you start using it continuously, it goes for 6 hours max. Now, this depends on what kind of work you are doing on the system. A few tweaks like reducing the screen brightness or refresh rate and disabling connectivity could give it some extra battery life.
### Camera and sound
![][12]
There are two cameras here. 16 MP back camera for taking pictures (well, why not) and 8 MP front camera for video calls and online meetings.
Camera performance is okay. The default camera app does not come with AI features like smartphones do, but that's fine. You are not going to use it primarily as a camera, after all.
The front camera is good enough for video calls. The front camera sits on the top edge, so when you are using the tablet in landscape mode, it may seem that you are not looking directly into the camera. But people could still see and hear you well, so that should not be an issue.
Speaking of hearing, JingPad has four speakers, two on each side. They provide decent sound for casual YouTube videos and streaming content for a single person. No complaints in this department.
### Applications
JingPad comes pre-installed with some of its own applications. These include a file manager, camera app, voice recorder, gallery app, and music and video applications. Many of these applications are open source, and you can find the code on [their GitHub repository][13].
![The music app from JingOS][14]
JingOS has an app store of its own where it provides a handful of applications that work on the touch devices.
![JingOS App Store][15]
The app store does not offer many applications at the moment, but _**you can use the apt package manager**_ to get additional software (from JingOS's repositories).
My biggest complaint is not about the lack of applications. It's about the versions of some of the offered applications. I downloaded Mozilla Firefox from the app store and it installed version 82. The current Firefox version is 92 at the time of writing this review. I tried updating the system, even on the command line, to see if it would get updates, but there were none.
Because of this outdated version, I could not [play Netflix on Firefox][16]. There is no dedicated Netflix app so this was the only way to test it.
![Issue with outdated Firefox][17]
There are plans to add Android apps support as well. I cannot say how well it will work but something is better than nothing.
### Using JingPad for coding
I am not a programmer, not anymore at least. But since JingPad's target customers include young coders who are frequently on the move, I tried using it for some simple coding.
There is no IDE available from the app store, but VS Code was available to install from the APT repository as well as in DEB file format. I installed it and opened it, but it never ran like it was supposed to.
So, I installed Gedit and wrote some sample bash scripts. Not much of coding but with the attached keyboard, it was not too bad.
It would have been a lot better if copy-paste worked, but unfortunately, I was not able to select and copy code from random websites like a true programmer of the 21st century. I hope this gets fixed in future updates.
### Android compatibility and other operating systems
I know what you are thinking. The tab looks good hardware wise. It is designed to run a Linux distribution based on Ubuntu. So can I use Ubuntu or some other distributions?
JingOS team says that you are free to install any other operating system on it. It uses ARM architecture so that is something to keep in mind while replacing the OS.
You will also be able to install JingOS back. I have yet to experiment with this part. I'll update the review when I do.
As per the [roadmap of JingOS][18], there are plans for adding Android compatibility. This means you should be able to install Android or Android-based ROMs/distributions. As per the JingOS team, they will be developing the solution in-house instead of using tools like Anbox. That will be interesting to see.
Here's a demo video of an Android app running on JingPad under JingOS:
### Conclusion
JingOS is in the alpha stage of development at the moment. Most of the issues I encountered in this review should be addressed in future OTA updates, as their roadmap suggests. The final stable version of JingPad should be available by March 2022.
JingPad as a device comes on the pricey side but it also gives you a high-end gadget. 2K AMOLED display with Gorilla Glass, 8 GB RAM, 256 GB UMCP storage and other stuff you get only in high-end devices. The sound from the speakers is decent.
The magnetic tab cover and the detachable keyboards are also of premium quality. These things matter because you are paying a good amount of money for it.
The [JingPad with the Pencil][19] (stylus) and the keyboard costs $899. It comes with free international shipping and 1-year warranty. In many countries, there will also be additional custom duty levied on top of this price.
That seems like a lot of money, right? If you compare it to the price of an iPad Air with the same specifications (256 GB storage, WiFi + Cellular), keyboard, and Pencil, the Apple device's price reaches $1300 in the USA.
It is natural to compare JingPad with PineTab, another ARM-based Linux tablet. But PineTab is not a high-end gadget. It has modest specifications and is geared towards DIY tinkerers. JingPad, on the other hand, targets regular users, not just DIY geeks.
Altogether, JingPad is aiming to give you a true Linux tablet but in the premium range. You get what you are paying for. A premium device for a premium pricing, with the freedom to run Linux on it.
_**But at this stage, JingOS has a lot of pending work to make JingPad a consumer level Linux tablet.**_ You should wait for the final device unless you really want your hands on it right away.
I plan to make a video review of the device where you can see it in action. I am not very good at doing video reviews, so this will take some time. You may leave your comments on what you would like to see in the video review, and I'll try to cover it.
_Meanwhile, you can follow the updates on JingPad and JingOS development on their [Telegram channel][20] or [Discord][21]. You may also watch the demos on [their YouTube channel][22]._
--------------------------------------------------------------------------------
via: https://itsfoss.com/jingpad-a1-review/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://en.jingos.com/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/jing-os-interaface.webp?resize=768%2C512&ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingpad.jpeg?resize=800%2C529&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingos-interface.webp?resize=800%2C584&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingos-theme-change.webp?resize=800%2C584&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingos-plasma-notification.webp?resize=800%2C584&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingpad-terminal.webp?resize=800%2C584&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingpad-landscape.webp?resize=800%2C584&ssl=1
[9]: https://forum-cdn.jingos.com/uploads/default/original/1X/4c6ef800ef62f0315852fde4f2c32958b32d93a9.png
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingpad-with-keyboard.webp?resize=800%2C600&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingpad-keyboard-angle.webp?resize=800%2C600&ssl=1
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingpad-camera.png?resize=787%2C552&ssl=1
[13]: https://github.com/JingOS-team/JingOS
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingos-music-app.webp?resize=800%2C584&ssl=1
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/jingos-app-store.webp?resize=800%2C584&ssl=1
[16]: https://itsfoss.com/netflix-firefox-linux/
[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/JingPad-Netflix-Issue.webp?resize=800%2C584&ssl=1
[18]: https://forum.jingos.com/t/feature-roadmap-of-jingos-arm-on-the-jingpad-a1/1708
[19]: https://www.indiegogo.com/projects/jingpad-world-s-first-consumer-level-linux-tablet#/
[20]: https://t.me/JingOS_Linux
[21]: https://discord.gg/uuWc8qKM
[22]: https://www.youtube.com/channel/UCRbaVa2v845SEtRadSlhWmA

View File

@ -0,0 +1,80 @@
[#]: subject: "An open source alternative to Microsoft Exchange"
[#]: via: "https://opensource.com/article/21/9/open-source-groupware-grommunio"
[#]: author: "Markus Feilner https://opensource.com/users/mfeilner"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
An open source alternative to Microsoft Exchange
======
Open source users now have a robust and fully functional choice for
groupware.
![Working on a team, busy worklife][1]
Microsoft Exchange has for many years been nearly unavoidable as a platform for groupware environments. Late in 2020, however, an Austrian open source software developer introduced [grommunio][2], a groupware server and client with a look and feel familiar to Exchange and Outlook users.
The grommunio project functions well as a drop-in replacement for Exchange. The developers connect components to the platform the same way Microsoft does, and they support RPC (Remote Procedure Call) with the HTTP protocol. According to the developers, grommunio also includes numerous interfaces of common groupware such as IMAP, POP3, SMTP, EAS (Exchange ActiveSync), EWS (Exchange Web Services), CalDAV, and CardDAV. With such broad support, grommunio integrates smoothly into existing infrastructures.
Users will notice little difference among Outlook, Android, and iOS clients. Of course, as open source software, it supports other clients, too. Outlook and smartphones communicate with grommunio just as they do with a Microsoft Exchange server, thanks to their integrated, native Exchange protocols. An everyday enterprise user can continue to use their existing clients with the grommunio server quietly running in the background.
### More than just mail
In addition to mail functions, a calendaring system is available in the grommunio interface. Appointments can be created by clicking directly in the calendar display or in a new tab. It's intuitive and just what you'd expect from a modern tool. Users can create, manage, and share calendars as well as address books. Private contacts or common contacts are possible, and you can share everything with colleagues.
Task management shows a list of tasks on the left in a drop-down menu, and they can have both one owner and multiple collaborators. You can assign deadlines, categories, attachments, and other attributes to each task. In the same way, notes can be managed and shared with other team members.
### Chat, video conferences, and file sync
In addition to all the standard features of modern groupware, grommunio also offers chat, video conferencing, and file synchronization. It does this with full integration on a large scale for the enterprise, with extraordinarily high performance. It's an easy choice for promoters of open source and a powerful option for sysadmins. Because grommunio aims to integrate rather than reinvent, all components are standard open source tools.
![Screenshot of grommunio meeting space][3]
Jitsi integration for advanced video conferences (Markus Feilner, [CC BY-SA 4.0][4])
Behind the meeting function in grommunio is [Jitsi][5], smoothly integrated into the grommunio UI with a familiar user interface. The chat feature, fully integrated and centrally managed, is based on [Mattermost][6].
![Screenshot of grommunio's town square for chat][7]
Mattermost for chat (Markus Feilner, [CC BY-SA 4.0][4])
[ownCloud][8], which promises enterprise-level file sharing and synchronization, starts after a click on the Files button.
![Screenshot of grommunio file sharing space][9]
ownCloud for file synchronization and exchange (Markus Feilner, [CC BY-SA 4.0][4])
The grommunio project has a powerful administrative interface, including roles, domain and organization management, predictive monitoring, and a self-service portal. Shell-based wizards guide admins through installation and migration of data from Microsoft Exchange. The development team is constantly working for better integration and more centralization for management, and with that comes a better workflow for admins.
![Screenshot of grommunio dashboards][10]
grommunio's admin interface (Markus Feilner, [CC BY-SA 4.0][4])
### Explore grommunio
The grommunio project has lofty goals, but its developers have put in the work, and it shows. A German hosting service specializing in tax consultants—a sector where German data protection laws are especially tough—recently announced that grommunio is available to their customers. The grommunio project gets a lot right: a clean combination of existing, successful concepts working together to enable open, secure, and privacy-compliant communication.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/open-source-groupware-grommunio
作者:[Markus Feilner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mfeilner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife)
[2]: https://grommunio.com/en/
[3]: https://opensource.com/sites/default/files/uploads/jitsi_0.png (grommunio meeting space)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/20/5/open-source-video-conferencing
[6]: https://opensource.com/education/16/3/mattermost-open-source-chat
[7]: https://opensource.com/sites/default/files/uploads/mattermost.png (grommunio's town square for chat)
[8]: https://owncloud.com/
[9]: https://opensource.com/sites/default/files/uploads/owncloud_0.png (Owncloud for file synchronization and exchange)
[10]: https://opensource.com/sites/default/files/uploads/grommunio_interface_0.png (Screenshot of grommunio dashboards)

View File

@ -0,0 +1,402 @@
[#]: subject: "PowerShell on Linux? A primer on Object-Shells"
[#]: via: "https://fedoramagazine.org/powershell-on-linux-a-primer-on-object-shells/"
[#]: author: "TheEvilSkeletonOzymandias42 https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
PowerShell on Linux? A primer on Object-Shells
======
![][1]
Photos by [NOAA][2] and [Cedric Fox][3] on [Unsplash][4]
In the previous post, [Install PowerShell on Fedora Linux][5], we went through different ways to install PowerShell on Fedora Linux and explained the basics of PowerShell. This post gives you an overview of PowerShell and a comparison to POSIX-compliant shells.
### Table of contents
* [Differences at first glance — Usability][6]
* [Speed and efficiency][7]
* [Aliases][8]
* [Custom aliases][9]
* [Differences between POSIX Shells — Char-stream vs. Object-stream][10]
* [To filter for something][11]
* [Output formatting][12]
* [Field separators, column-counting and sorting][13]
* [Getting rid of fields and formatting a nice table][14]
* [How its done in PowerShell][15]
* [Remote Administration with PowerShell — PowerShell-Sessions on Linux!?][16]
* [Background][17]
* [What this is good for][18]
* [Conclusion][19]
### Differences at first glance — Usability
One of the very first differences to take note of when using PowerShell for the first time is semantic clarity.
Most commands in traditional POSIX shells, like the Bourne Again Shell (BASH), are heavily abbreviated and often require memorizing.
Commands like _awk_, _ps_, _top_ or even _ls_ do not communicate what they do with their name. Only when one already _does_ know what they do, do the names start to make sense. Once I know that _ls_ **lists** files the abbreviation makes sense.
In PowerShell on the other hand, commands are perfectly self-descriptive. They accomplish this by following a strict naming convention.
Commands in PowerShell are called “cmdlets” (pronounced commandlets). These always follow the scheme of Verb-Noun.
One example: To **get** all files or child-items in a directory I tell PowerShell like this:
```
PS > Get-ChildItem
Directory: /home/Ozymandias42
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 14/04/2021 08:11 Folder1
d---- 13/04/2021 11:55 Folder2
```
**An Aside:**
The cmdlet name is Get-Child_Item_ not _Item**s**_. This is in acknowledgement of [Set-theory][20]. Each of the standard cmdlets returns a list or a set of results. The number of items in a set —mathematicians call this the set's [cardinality][21]— can be 0, 1 or any arbitrary natural number, meaning the set can be empty, contain exactly one result or many results. The reason for this, and why I stress this here, is because the standard cmdlets _also_ implicitly implement a ForEach-Loop for any results they return. More about this later.
#### Speed and efficiency
##### Aliases
You might have noticed that standard cmdlet names are long and can therefore be time-consuming to type when writing scripts. However, many cmdlets are aliased and don't depend on case, which mitigates this problem.
Let's write a script with unaliased cmdlets as an example:
```
PS > Get-Process | ForEach-Object {Write-Host $_.Name -ForegroundColor Cyan}
```
This lists the names of running processes in cyan. As you can see, many characters are in upper case and the cmdlet names are relatively long. Let's shorten them and replace the upper-case letters with lower case to make the script easier to type:
```
PS > gps | foreach {write-host $_.name -foregroundcolor cyan}
```
This is the same script but with greatly simplified input.
To see the full list of aliased cmdlets, type _Get-Alias_.
##### Custom aliases
Just like any other shell, PowerShell also lets you set your own aliases by using the _Set-Alias_ cmdlet. Let's alias _Write-Host_ to something simpler so we can make the same script even easier to type:
```
PS > Set-Alias -Name wh -Value Write-Host
```
Here, we aliased _wh_ to _Write-Host_ to make it easier to type. When setting aliases, _-Name_ indicates what you want the alias to be and _-Value_ indicates what you want to alias it to.
Let's see how it looks now:
```
PS > gps | foreach {wh $_.name -foregroundcolor cyan}
```
You can see that we already made the script easier to type. If we wanted, we could also alias _ForEach-Object_ to _fe_, but you get the gist.
If you want to see the properties of an alias, you can type _Get-Alias_. Let's check the properties of the alias _wh_ using the _Get-Alias_ cmdlet:
```
PS > Get-Alias wh
CommandType Name Version Source
----------- ---- ------- ------
Alias wh -> Write-Host
```
##### Autocompletion and suggestions
PowerShell suggests cmdlets or flags when you press the Tab key twice, by default. If there is nothing to suggest, PowerShell automatically completes to the cmdlet.
### Differences between POSIX Shells — Char-stream vs. Object-stream
Any scripting will eventually string commands together via the pipe (|), and that is when you notice a few key differences.
In bash what is moved from one command to the next through a pipe is just a string of characters. However, in PowerShell this is not the case.
In PowerShell, every cmdlet is aware of data structures and objects. For example, a structure like this:
```
{
firstAuthor=Ozy,
secondAuthor=Skelly
}
```
This data is kept as-is even if a command, used alone, would have presented this data as follows:
```
AuthorNr. AuthorName
1 Ozy
2 Skelly
```
In bash, on the other hand, that formatted output would need to be created by parsing with helper tools like _awk_ or _cut_ first, to be usable with a different command.
PowerShell does not require this parsing since the underlying structure is sent when using a pipe rather than the formatted output shown without. So the command _authorObject | doThingsWithSingleAuthor firstAuthor_ is possible.
The following examples shall further illustrate this.
**Beware:** This will get fairly technical and verbose. Skip if satisfied already.
A few of the most often used constructs to illustrate the advantage of PowerShell over bash, when using pipes, are to:
* filter for something
* format output
* sort output
When implementing these in bash there are a few things that will re-occur time and time again.
The following sections will exemplarise these constructs and their variants in bash and contrast them with their PowerShell equivalents.
#### To filter for something
Let's say you want to see all processes matching the name _ssh-agent_.
In human thinking terms you know what you want.
1. Get all processes
2. Filter for all processes that match our criteria
3. Print those processes
To apply this in bash we could do it in two ways.
The first one, which most people who are comfortable with bash might use, is this one:
```
$ ps -p $(pgrep ssh-agent)
```
At first glance this is straightforward. _ps_ gets all processes and the _-p_ flag tells it to filter for a given list of pids.
What the veteran bash user might forget here, however, is that while this reads that way, it is not actually run as such. There's a tiny but important little thing called the order of evaluation.
_$()_ denotes a subshell. A subshell is run, or evaluated, first. This means the list of pids to filter against is produced first, and the result is then returned in place of the subshell for the waiting outer command _ps_ to use.
This means it is written as:
1. Print processes
2. Filter Processes
but evaluated the other way around. It also implicitly combines the original steps 2. and 3.
A less often used variant that more closely matches the human thought pattern and evaluation order is:
```
$ pgrep ssh-agent | xargs ps
```
The second one still combines two steps, the steps 1. and 2. but follows the evaluation logic a human would think of.
The reason this variant is less used is that ominous _xargs_ command. What this basically does is to append all lines of output from the previous command as a single long line of arguments to the command followed by it. In this case _ps_.
This is necessary because pgrep produces output like this:
```
$ pgrep bash
14514
15308
```
When used in conjunction with a subshell, _ps_ might not care about this, but when using pipes to approximate the human evaluation order, this becomes a problem.
What _xargs_ does, is to reduce the following construct to a single command:
```
$ for i in $(pgrep ssh-agent); do ps $i ; done
```
Okay. Now we have talked a LOT about evaluation order and how to do it in bash in different ways with different evaluation orders of the three basic steps we outlined.
So with this much preparation, how does PowerShell handle it?
```
PS > Get-Process | Where-Object Name -Match ssh-agent
```
Completely self-descriptive and follows the evaluation order of the steps we outlined perfectly. Also do take note of the absence of _xargs_ or any explicit for-loop.
As mentioned in our aside a few hundred words back, the standard cmdlets all implement ForEach internally and apply it implicitly when piped input arrives in list form.
#### Output formatting
This is where PowerShell really shines. Consider a simple example to see how it's done in bash first. Say we want to list all files in a directory sorted by size from the biggest to the smallest, presented as a table with filename, size, and creation date. Also, let's say we have some files with long filenames in there and want to make sure we get the full filename no matter how big our terminal is.
##### Field separators, column-counting and sorting
Now the first obvious step is to run _ls_ with the _-l_ flag to get a list with not just the filenames but the creation date and the file sizes we need to sort against too.
We will get a more verbose output than we need. Like this one:
```
$ ls -l
total 148692
-rwxr-xr-x 1 root root 51984 May 16 2020 [
-rwxr-xr-x 1 root root 283728 May 7 18:13 appdata2solv
lrwxrwxrwx 1 root root 6 May 16 2020 apropos -> whatis
-rwxr-xr-x 1 root root 35608 May 16 2020 arch
-rwxr-xr-x 1 root root 14784 May 16 2020 asn1Coding
-rwxr-xr-x 1 root root 18928 May 16 2020 asn1Decoding
[not needed] [not needed]
```
What is apparent is that to get the kind of output we want, we have to get rid of the fields marked _[not needed]_ in the example above, but that's not the only thing needing work. We also need to sort the output so that the biggest file is first in the list, meaning a reverse sort.
This, of course, can be done in multiple ways but it only shows again, how convoluted bash scripts can get.
We can either sort with the _ls_ tool directly, by using the _-r_ flag for reverse sort and the _--sort=size_ flag to sort by size, or we can pipe the whole thing to _sort_ and supply that with the _-n_ flag for numeric sort and the _-k 5_ flag to sort by the fifth column.
Wait! **Fifth**? Yes. Because this, too, we would have to know: _sort_, by default, uses spaces as field separators, meaning that in the tabular output of _ls -l_ the number representing the size is the fifth field.
##### Getting rid of fields and formatting a nice table
To get rid of the remaining fields, we once again have multiple options. The most straightforward option, and most likely to be known, is probably _cut_. This is one of the few UNIX commands that is self-descriptive, even if it's just because of the natural brevity of its associated verb. So we pipe our results, up to now, into _cut_ and tell it to only output the columns we want and how they are separated from each other.
_cut -f5- -d" "_ will output from the fifth field to the end. This will get rid of the first columns.
```
283728 May 7 18:13 appdata2solv
51984 May 16 2020 [
35608 May 16 2020 arch
14784 May 16 2020 asn1Coding
6 May 16 2020 apropos -> whatis
```
This is still far from how we wanted it. First of all, the filename is in the last column, and the filesize is in the human-unfriendly format of blocks instead of KB, MB, GB, and so on. Of course we could fix that too in various ways at various points in our already long pipeline.
All of this makes it clear that transforming the output of traditional UNIX commands is quite complicated and can often be done at multiple points in the pipeline.
##### How its done in PowerShell
```
PS > Get-ChildItem |
     Sort-Object Length -Descending |
     Format-Table -AutoSize Name,
         @{Name="Size"; Expression={[math]::Round($_.Length/1MB,2).ToString()+" MB"}},
         CreationTime
# Reformatted over multiple lines for better readability;
# trailing pipes and commas let PowerShell continue the statement on the next line.
```
The only actual output transformation being done here is the conversion and rounding of bytes to megabytes for better human readability. This also is one of the only real weaknesses of PowerShell, that it lacks a _simple_ mechanism to get human readable filesizes.
That part aside, it's clear that Format-Table allows you to simply list the wanted columns by name, in the order you want them.
This works because of the aforementioned object-nature of piped data-streams in PowerShell. There is no need to cut apart strings by delimiters.
#### Remote Administration with PowerShell — PowerShell-Sessions on Linux!?
#### Background
Remote administration via PowerShell on Windows has traditionally always been done via Windows Remoting, using the WinRM protocol.
With the release of Windows 10, Microsoft has also offered a Windows native OpenSSH Server and Client.
Using the SSH Server alone on Windows provides the user a CMD prompt unless the default system Shell is changed via a registry key.
A more elegant option is to make use of the Subsystem facility in _sshd_config_. This makes it possible to configure arbitrary binaries as remote-callable subsystems instead of the globally configured default shell.
By default, there is usually one already there: the sftp subsystem.
To make PowerShell available as a subsystem, one simply needs to add it like so:
```
Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile
```
This works, with the correct paths of course, on _all_ operating systems PowerShell Core is available for. So that means Windows, Linux, and macOS.
#### What this is good for
It is now possible to open a PowerShell (Remote) Session to a properly configured SSH-enabled Server by doing this:
```
PS > Enter-PSSession `
     -HostName <target-HostName-or-IP> `
     -UserName <targetUser> `
     -KeyFilePath <path-to-id_rsa-file> `
     ...
     <-SSHTransport>
```
What this does is to register and enter an interactive PSSession with the Remote-Host. By itself this has no functional difference from a normal SSH-session. It does, however, allow for things like running scripts from a local host on remote machines via other cmdlets that utilise the same subsystem.
One such example is the _Invoke-Command_ cmdlet. This becomes especially useful, given that _Invoke-Command_ has the _-AsJob_ flag.
What this enables is running local scripts as batchjobs on multiple remote servers while using the local Job-manager to get feedback about when the jobs have finished on the remote machines.
While it is possible to run local scripts via ssh on remote hosts, it is not as straightforward to view their progress, and it gets outright hacky to run local scripts remotely. We refrain from giving examples here, for brevity's sake.
With PowerShell, however, this can be as easy as this:
```
$listOfRemoteHosts | ForEach-Object {
    Invoke-Command `
        -HostName $_ `
        -FilePath /home/Ozymandias42/Script2Run-Remotely.ps1 `
        -AsJob
}
```
Overview of the running tasks is available by doing this:
```
PS > Get-Job
Id Name PSJobTypeName State HasMoreData Location Command
-- ---- ------------- ----- ----------- -------- -------
1 Job1 BackgroundJob Running True localhost Microsoft.PowerShe…
```
Jobs can then be attached to again, should they require manual intervention, by doing _Receive-Job <JobName-or-JobNumber>_.
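A minimal sketch of that, using the job Id from the listing above:
```
PS > Receive-Job -Id 1 -Keep          # show the output collected so far, keep it in the job
PS > Wait-Job -Id 1 | Receive-Job     # block until the job finishes, then collect its output
```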
### Conclusion
In conclusion, PowerShell applies a fundamentally different philosophy behind its syntax in comparison to standard POSIX shells like bash. Of course, for bash, that is historically rooted in the limitations of the original UNIX. PowerShell provides better semantic clarity with its cmdlets and outputs, which means better understandability for humans and makes it easier to use and learn. PowerShell also provides aliases for cmdlets whose unaliased names would be too long. The main difference is that PowerShell is object-oriented, which eliminates most input-output parsing. This allows PowerShell scripts to be more concise.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/powershell-on-linux-a-primer-on-object-shells/
作者:[TheEvilSkeletonOzymandias42][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/09/powershell_2-816x345.jpg
[2]: https://unsplash.com/@noaa?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/@thecedfox?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://unsplash.com/s/photos/shell?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[5]: https://fedoramagazine.org/install-powershell-on-fedora-linux
[6]: #differences-at-first-glance--usability
[7]: #speed-and-efficiency
[8]: #aliases
[9]: #custom-aliases
[10]: #differences-between-posix-shells--char-stream-vs-object-stream
[11]: #to-filter-for-something
[12]: #output-formatting
[13]: #field-operators-collumn-counting-and-sorting
[14]: #getting-rid-of-fields-and-formatting-a-nice-table
[15]: #how-its-done-in-powershell
[16]: #remote-administration-with-powershell--powershell-sessions-on-linux
[17]: #background
[18]: #what-this-is-good-for
[19]: #conclusion
[20]: https://en.wikipedia.org/wiki/Set_(mathematics)
[21]: https://en.wikipedia.org/wiki/Set_(mathematics)#Cardinality

View File

@ -0,0 +1,63 @@
[#]: subject: "6 open source tools for orchestral composers"
[#]: via: "https://opensource.com/article/21/9/open-source-orchestral-composers"
[#]: author: "Pete Savage https://opensource.com/users/psav"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
6 open source tools for orchestral composers
======
Think it's impossible to compose orchestral tracks with just open source
software? Think again.
![Sheet music with geometry graphic][1]
As an avid amateur musician, I've worked with many different software programs to create both simple and complex pieces. As my projects have grown in scope, I've used composition software ranging from basic engraving to MIDI-compatible notation to playback of multi-instrument works. Composers have their choice of proprietary software, but I wanted to prove that, regardless of the need, there is an open source tool that will more than satisfy them.
### Music engraving programs
When my needs were simple and my projects few, I used the excellent resource [Lilypond][2], part of the GNU project, for engraving my music score. Lilypond is a markup language used to create sheet music. What looks like a mass of letters and numbers on the screen becomes a beautiful music score that can be exported as a PDF to share with all your musical acquaintances. For creating small snippets of a score, Lilypond performs excellently.
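To give a flavor of the markup, a minimal fragment (not taken from any particular score) looks something like this:
```
\version "2.22.0"
\relative c' {
  \time 4/4
  c4 d e f | g2 g |
}
```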
Using a text markup language might be a tolerable experience for a software engineer, but waiting to save and run the renderer before seeing the result of your edit can be frustrating. [Frescobaldi][3] is an effective solution to this problem, allowing you to work in a text editor on the left and see a live preview updating on the right. For small scores, this works well. For larger scores, however, the render time can make for a painful experience. Though Frescobaldi has a built-in MIDI-style player, hooking it up to play something requires both knowledge of [JACK][4] (an audio connection API) and a user interface such as [qSynth][5]. For me, Frescobaldi is best for projects when I already know what the score looks like. It's not a composing tool; it's an engraving tool.
### Music notation programs
A few months ago, I started creating a songbook for my former band. For this project, I needed to add chord diagrams, guitar tablature, and multiple staves, so I moved over to [Denemo][6]. Denemo is a fabulously configurable tool that uses LilyPond as its rendering backend. The key benefit to Denemo is the ability to enter notes on a stave. The stave you enter notes on might not look exactly like the score will appear on rendering—in fact, it almost certainly won't. However, in most cases, it's far easier to enter the notes directly on a stave than to write them in a text markup language.
Denemo served me well when creating my songbook, but I had greater ambitions. When I started composing a few piano and small ensemble pieces, I could have handled these in Denemo, but I decided to try [MuseScore][7] to compare the programs. Though MuseScore doesn't use a text-based markup language like Lilypond, it has many other benefits over the LilyPond-based offerings, such as single-note dynamics and rendering out to WAV or MP3.
In my latest project, I took a piano concept I wrote for a fictional role-playing game (RPG) and turned it into a full orchestral version. MuseScore was fantastic for this. The program definitely became part of my composing process, and it would have been much more difficult for me to arrange 18 instruments in LilyPond than in MuseScore. I was also able to hear single-note dynamics, such as a single violin note moving from silence to loud and back. I do not know of any editors for Lilypond that allow for this.
#### Piano Concept
#### Orchestral Concept
### Going beyond the score
My next task will be to take the MIDI from this project and code it into a Digital Audio Workstation (DAW), such as [Ardour][8]. The difference between the audio output from MuseScore and something created with a DAW is that a DAW allows much more than single-note dynamics. Expression, volume, and other parameters can be adjusted in time, allowing for a more realistic sound, assuming the instrument can handle it. I'm currently working on packaging up [sFizz][9] for Fedora. sFizz is an SFZ instrument VST plugin that can be used in an open source DAW and has fantastic support for the different expressions I'd like to use in my piece.
The ultimate aim of this project is to show that open source tooling can be used to create an orchestral track that sounds authentic. Think it's impossible to make realistic-sounding orchestral tracks just with open source software? That's for next time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/open-source-orchestral-composers
作者:[Pete Savage][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/psav
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/sheet_music_graphic.jpg?itok=t-uXNbzE (Sheet music with geometry graphic)
[2]: https://lilypond.org/
[3]: https://frescobaldi.org/
[4]: https://jackaudio.org/
[5]: https://qsynth.sourceforge.io/
[6]: http://www.denemo.org/
[7]: https://musescore.org/en
[8]: https://ardour.org/
[9]: https://sfz.tools/sfizz/

View File

@ -0,0 +1,666 @@
[#]: subject: "Code memory safety and efficiency by example"
[#]: via: "https://opensource.com/article/21/8/memory-programming-c"
[#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
实例讲解代码之内存安全与效率
======
了解有关内存安全和效率的更多信息。
![Code going into a computer.][1]
C 是一种高级语言同时具有“接近金属”LCTT 译注:即“接近人类思维方式”的反义词)的特性,这使得它有时看起来更像是一种可移植的汇编语言,而不是 Java 或 Python 的兄弟语言。内存管理作为上述特性之一,涵盖了正在执行的程序对内存的安全和高效使用。本文通过 C 语言代码示例,以及现代 C 语言编译器生成的汇编语言代码段,详细介绍了内存安全性和效率。
尽管代码示例是用 C 语言编写的,但安全高效的内存管理指南对于 C++ 是同样适用的。这两种语言在很多细节上有所不同例如C++ 具有 C 所缺乏的面向对象特性和泛型),但在内存管理方面面临的挑战是一样的。
### 执行中程序的内存概述
对于正在执行的程序(又名 _<ruby>进程<rt>process</rt></ruby>_),内存被划分为三个区域:**<ruby>栈<rt>stack</rt></ruby>**、**<ruby>堆<rt>heap</rt></ruby>** 和 **<ruby>静态区<rt>static area</rt></ruby>**。下文会给出每个区域的概述,以及完整的代码示例。
作为通用 CPU 寄存器的替补_栈_ 为代码块(例如函数或循环体)中的局部变量提供暂存器存储。传递给函数的参数在此上下文中也视作局部变量。看一下下面这个简短的示例:
```
void some_func(int a, int b) {
   int n;
   ...
}
```
通过 **a****b** 传递的参数以及局部变量 **n** 的存储会在栈中,除非编译器可以找到通用寄存器。编译器倾向于优先将通用寄存器用作暂存器,因为 CPU 对这些寄存器的访问速度很快(一个时钟周期)。然而,这些寄存器在台式机、笔记本电脑和手持机器的标准架构上很少(大约 16 个)。
在只有汇编语言程序员才能看到的实施层面,栈被组织为具有 **push**(插入)和 **pop**(删除)操作的 LIFO后进先出列表。 **top** 指针可以作为偏移的基地址;这样,除了 **top** 之外的栈位置也变得可访问了。例如,表达式 **top+16** 指向堆栈的 **top** 指针上方 16 个字节的位置,表达式 **top-16** 指向 **top** 指针下方 16 个字节的位置。因此,可以通过 **top** 指针访问实现了暂存器存储的栈的位置。在标准的 ARM 或 Intel 架构中,栈从高内存地址增长到低内存地址;因此,减小某进程的 **top** 就是增大其栈规模。
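下面是一个最小的演示程序(示意代码,非原文示例):它打印不同调用深度下局部变量的地址。在典型的 Intel 或 ARM 平台上,通常可以观察到调用越深、打印出的地址越低,这与“栈从高地址向低地址增长”的说法一致;具体数值取决于编译器和平台,仅供观察参考。
```
#include <stdio.h>
void depth2() {
  int local2;
  printf("depth2 local: %p\n", (void*) &local2); /* 最深的栈帧 */
}
void depth1() {
  int local1;
  printf("depth1 local: %p\n", (void*) &local1);
  depth2();
}
int main() {
  int local0;
  printf("main   local: %p\n", (void*) &local0); /* 最浅的栈帧 */
  depth1();
  return 0;
}
```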
使用栈结构就意味着轻松高效地使用内存。编译器(而非程序员)会编写管理栈的代码,管理过程通过分配和释放所需的暂存器存储来实现;程序员声明函数参数和局部变量,将实现过程交给编译器。此外,完全相同的栈存储可以在连续的函数调用和代码块(如循环)中重复使用。精心设计的模块化代码会将栈存储作为暂存器的首选内存选项,同时优化编译器要尽可能使用通用寄存器而不是栈。
**堆** 提供的存储是通过程序员代码显式分配的,堆分配的语法因语言而异。在 C 中,成功调用库函数 **malloc**(或其变体 **calloc** 等)会分配指定数量的字节(在 C++ 和 Java 等语言中,**new** 运算符具有相同的用途)。编程语言在如何释放堆分配的存储方面有着巨大的差异:
* 在 Java、Go、Lisp 和 Python 等语言中,程序员不会显式释放动态分配的堆存储。
例如,下面这个 Java 语句为一个字符串分配了堆存储,并将这个堆存储的地址存储在变量 **greeting** 中:
```
String greeting = new String("Hello, world!");
```
Java 有一个垃圾回收器它是一个运行时实用程序如果进程无法再访问自己分配的堆存储回收器可以使其自动释放。因此Java 堆释放是通过垃圾收集器自动进行的。在上面的示例中,垃圾收集器将在变量 **greeting** 超出作用域后,释放字符串的堆存储。
* Rust 编译器会编写堆释放代码。这是 Rust 的开创性努力:在不依赖垃圾回收器(以及它所带来的运行时复杂性和开销)的情况下,使堆释放实现自动化。向 Rust 的努力致敬!
* 在 C和 C++)中,堆释放是程序员的任务。程序员调用 **malloc** 分配堆存储,然后负责相应地调用库函数 **free** 来释放该存储空间(在 C++ 中,**new** 运算符分配堆存储,而 **delete****delete[]** 运算符释放此类存储)。下面是一个 C 语言代码示例:
```
char* greeting = malloc(14);       /* 14 heap bytes */
strcpy(greeting, "Hello, world!"); /* copy greeting into bytes */
puts(greeting);                    /* print greeting */
free(greeting);                    /* free malloced bytes */
```
C 语言避免了垃圾回收器的成本和复杂性,但也不过是让程序员承担了堆释放的任务。
内存的 **静态区** 为可执行代码(例如 C 语言函数)、字符串文字(例如 “Hello, world!”)和全局变量提供存储空间:
```
int n;                       /* global variable */
int main() {                 /* function */
   char* msg = "No comment"; /* string literal */
   ...
}
```
该区域是静态的,因为它的大小从进程执行开始到结束都固定不变。由于静态区相当于进程固定大小的内存占用,因此经验法则是通过避免使用全局数组等方法来使该区域尽可能小。
下文会结合代码示例对本节概述展开进一步讲解。
### 栈存储
想象一个有各种连续执行的任务的程序,任务包括了处理每隔几分钟通过网络下载并存储在本地文件中的数字数据。下面的 **stack** 程序简化了处理流程(仅是将奇数整数值转换为偶数),而将重点放在栈存储的好处上。
```
#include <stdio.h>
#include <stdlib.h>
#define Infile   "incoming.dat"
#define Outfile  "outgoing.dat"
#define IntCount 128000  /* 128,000 */
void other_task1() { /*...*/ }
void other_task2() { /*...*/ }
void process_data(const char* infile,
          const char* outfile,
          const unsigned n) {
  int nums[n];
  FILE* input = fopen(infile, "r");
  if (NULL == input) return;
  FILE* output = fopen(outfile, "w");
  if (NULL == output) {
    fclose(input);
    return;
  }
  fread(nums, n, sizeof(int), input); /* read input data */
  unsigned i;
  for (i = 0; i < n; i++) {
    if (1 == (nums[i] & 0x1))  /* odd parity? */
      nums[i]--;               /* make even */
  }
  fclose(input);               /* close input file */
  fwrite(nums, n, sizeof(int), output);
  fclose(output);
}
int main() {
  process_data(Infile, Outfile, IntCount);
  
  /** now perform other tasks **/
  other_task1(); /* automatically released stack storage available */
  other_task2(); /* ditto */
  
  return 0;
}
```
底部的 **main** 函数首先调用 **process_data** 函数,该函数会创建一个基于栈的数组,其大小由参数 **n** 给定(当前示例中为 128,000。因此该数组占用 128,000 x **sizeof(int)** 个字节,在标准设备上达到了 512,000 字节(**int** 在这些设备上是四个字节)。然后数据会被读入数组(使用库函数 **fread**),循环处理,并保存到本地文件 **outgoing.dat**(使用库函数**fwrite**)。
**process_data** 函数返回到其调用者 **main** 函数时,**process_data** 函数的大约 500KB即上文的 512,000 字节)栈暂存器就可以供 **stack** 程序中的其他函数用作暂存器。在此示例中,**main** 函数接下来调用存根函数 **other_task1****other_task2**。这三个函数在 **main** 中依次调用,这意味着所有三个函数都可以使用相同的栈存储作为暂存器。因为编写栈管理代码的是编译器而不是程序员,所以这种方法对程序员来说既高效又容易。
在 C 语言中,在块(例如函数或循环体)内定义的任何变量默认都有一个 **auto** 存储类,这意味着该变量是基于栈的。存储类 **register** 现在已经过时了,因为 C 编译器会主动尝试尽可能使用 CPU 寄存器。只有在块内定义的变量可能是 **register**,如果没有可用的 CPU 寄存器,编译器会将其更改为 **auto**。基于栈的编程可能是不错的首选方式,但这种风格确实有一些挑战性。下面的 **badStack** 程序说明了这点。
```
#include <stdio.h>
const int* get_array(const unsigned n) {
  int arr[n]; /* stack-based array */
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = 1 + 1;
  return arr;  /** ERROR **/
}
int main() {
  const unsigned n = 16;
  const int* ptr = get_array(n);
  
  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", ptr[i]);
  puts("\n");
  return 0;
}
```
**badStack** 程序中的控制流程很简单。**main** 函数使用 16LCTT译注原文为 128应为作者笔误作为参数调用函数 **get_array**,然后被调用函数会使用传入参数来创建对应大小的本地数组。**get_array** 函数会初始化数组,并把数组标识符 **arr** 返回给 **main****arr** 是一个指针常量,保存数组第一个 **int** 元素的地址。
当然,本地数组 **arr** 可以在 **get_array** 函数中访问,但是一旦 **get_array** 返回,就不能合法访问该数组。尽管如此,**main** 函数会尝试使用函数 **get_array** 返回的堆栈地址 **arr** 来打印基于栈的数组。现代编译器会警告错误。例如,下面是来自 GNU 编译器的警告:
```
badStack.c: In function 'get_array':
badStack.c:9:10: warning: function returns address of local variable [-Wreturn-local-addr]
return arr;  /** ERROR **/
```
一般规则是,如果使用栈存储实现局部变量,应该仅在该变量所在的代码块内,访问这块基于栈的存储(在本例中,数组指针 **arr** 和循环计数器 **i** 均为这样的局部变量)。因此,函数永远不应该返回指向基于栈存储的指针。
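一种常见的安全替代方案(示意代码,非原文示例)是由调用者提供数组,被调用函数只负责填充:数组的栈存储属于调用者,生存期覆盖整个使用过程,因此不存在返回栈地址的问题;基于堆的版本则在下一节介绍。
```
#include <stdio.h>
void fill_ints(int* arr, unsigned n) { /* 示意用的辅助函数 */
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = i + 1;
}
int main() {
  const unsigned n = 16;
  int nums[n];        /* 栈存储属于 main生存期与使用期一致 */
  fill_ints(nums, n); /* 被调用函数只负责填充 */
  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", nums[i]);
  puts("");
  return 0;
}
```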
### 堆存储
接下来使用若干代码示例凸显在 C 语言中使用堆存储的优点。在第一个示例中,使用了最优方案分配、使用和释放堆存储。第二个示例(在下一节中)将堆存储嵌套在了其他堆存储中,这会使其释放操作变得复杂。
```
#include <stdio.h>
#include <stdlib.h>
int* get_heap_array(unsigned n) {
  int* heap_nums = malloc(sizeof(int) * n); 
  
  unsigned i;
  for (i = 0; i < n; i++)
    heap_nums[i] = i + 1;  /* initialize the array */
  
  /* stack storage for variables heap_nums and i released
     automatically when get_num_array returns */
  return heap_nums; /* return (copy of) the pointer */
}
int main() {
  unsigned n = 100, i;
  int* heap_nums = get_heap_array(n); /* save returned address */
  
  if (NULL == heap_nums) /* malloc failed */
    fprintf(stderr, "%s\n", "malloc(...) failed...");
  else {
    for (i = 0; i < n; i++) printf("%i\n", heap_nums[i]);
    free(heap_nums); /* free the heap storage */
  }
  return 0; 
}
```
上面的 **heap** 程序有两个函数: **main** 函数使用参数(示例中为 100调用 **get_heap_array** 函数,参数用来指定数组应该有多少个 **int** 元素。因为堆分配可能会失败,**main** 函数会检查 **get_heap_array** 是否返回了 **NULL**;如果是,则表示失败。如果分配成功,**main** 将打印数组中的 **int** 值,然后立即调用库函数 **free** 来对堆存储解除分配。这就是最优的方案。
**get_heap_array** 函数以下列语句开头,该语句值得仔细研究一下:
```
int* heap_nums = malloc(sizeof(int) * n); /* heap allocation */
```
**malloc** 库函数及其变体函数针对字节进行操作;因此,**malloc** 的参数是 **n** 个 **int** 类型元素所需的字节数(**sizeof(int)** 在标准现代设备上是四个字节)。**malloc** 函数返回所分配字节段的首地址,如果失败则返回 **NULL**
如果成功调用 **malloc**,在现代台式机上其返回的地址大小为 64 位。在手持设备和早些时候的台式机上,该地址的大小可能是 32 位,或者甚至更小,具体取决于其年代。堆分配数组中的元素是 **int** 类型,这是一个四字节的有符号整数。这些堆分配的 **int** 的地址存储在基于栈的局部变量 **heap_nums** 中。可以参考下图:
```
                 heap-based
 stack-based        /
     \        +----+----+   +----+
 heap_nums--->|int1|int2|...|intN|
              +----+----+   +----+
```
一旦 **get_heap_array** 函数返回,指针变量 **heap_nums** 的栈存储将自动回收——但动态 **int** 数组的堆存储仍然存在,这就是 **get_heap_array** 函数返回这个地址(的副本)给 **main** 函数的原因:它现在负责在打印数组的整数后,通过调用库函数 **free** 显式释放堆存储:
```
free(heap_nums); /* free the heap storage */
```
**malloc** 函数不会初始化堆分配的存储空间,因此里面是随机值。相比之下,其变体函数 **calloc** 会将分配的存储初始化为零。这两个函数都返回 **NULL** 来表示分配失败。
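下面的小例子(示意代码,非原文示例)对比了这两个函数:`calloc(n, size)` 返回的内存已经全部清零,而 `malloc` 返回的内存内容不确定,读取之前必须先初始化。
```
#include <stdio.h>
#include <stdlib.h>
int main() {
  int* a = malloc(5 * sizeof(int)); /* 内容不确定,读取前须先初始化 */
  int* b = calloc(5, sizeof(int));  /* 所有字节初始化为零 */
  if (NULL == a || NULL == b) {     /* 任一分配失败都要清理并退出 */
    free(a);                        /* free(NULL) 无害 */
    free(b);
    return 1;
  }
  unsigned i;
  for (i = 0; i < 5; i++) printf("%i ", b[i]); /* 打印五个 0 */
  puts("");
  free(a);                          /* 释放两块堆存储 */
  free(b);
  return 0;
}
```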
**heap** 示例中,**main** 函数在调用 **free** 后会立即返回,正在执行的程序会终止,这会让系统回收所有已分配的堆存储。尽管如此,程序员应该养成在不再需要时立即显式释放堆存储的习惯。
### 嵌套堆分配
下一个代码示例会更棘手一些。C 语言有很多返回指向堆存储的指针的库函数。下面是一个常见的使用情景:
1\. C 程序调用一个库函数,该函数返回一个指向基于堆的存储的指针,而指向的存储通常是一个聚合体,如数组或结构体:
```
SomeStructure* ptr = lib_function(); /* returns pointer to heap storage */
```
2\. 然后程序使用所分配的存储。
3\. 对于清理而言,问题是对 **free** 的简单调用是否会清理库函数分配的所有堆分配存储。例如,**SomeStructure** 实例可能有指向堆分配存储的字段。一个特别麻烦的情况是动态分配的结构体数组,每个结构体有一个指向又一层动态分配的存储的字段。下面的代码示例说明了这个问题,并重点关注了如何设计一个可以安全地为客户端提供堆分配存储的库。
```
#include <stdio.h>
#include <stdlib.h>
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums;
} HeapStruct;
unsigned structId = 1;
HeapStruct* get_heap_struct(unsigned n) {
  /* Try to allocate a HeapStruct. */
  HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
  if (NULL == heap_struct) /* failure? */
    return NULL;           /* if so, return NULL */
  /* Try to allocate floating-point aggregate within HeapStruct. */
  heap_struct->heap_nums = malloc(sizeof(float) * n);
  if (NULL == heap_struct->heap_nums) {  /* failure? */
    free(heap_struct);                   /* if so, first free the HeapStruct */
    return NULL;                         /* then return NULL */
  }
  /* Success: set fields */
  heap_struct->id = structId++;
  heap_struct->len = n;
  return heap_struct; /* return pointer to allocated HeapStruct */
}
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */  
}
int main() {
  const unsigned n = 100;
  HeapStruct* hs = get_heap_struct(n); /* get structure with N floats */
  if (NULL == hs) return 1;            /* allocation failed: nothing to do */
  /* Do some (meaningless) work for demo. */
  unsigned i;
  for (i = 0; i < n; i++) hs->heap_nums[i] = 3.14 + (float) i;
  for (i = 0; i < n; i += 10) printf("%12f\n", hs->heap_nums[i]);
  free_all(hs); /* free dynamically allocated storage */
  
  return 0;
}
```
上面的 **nestedHeap** 程序示例以结构体 **HeapStruct** 为中心,结构体中又有名为 **heap_nums** 的指针字段:
```
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums; /** pointer **/
} HeapStruct;
```
函数 **get_heap_struct** 尝试为 **HeapStruct** 实例分配堆存储,这需要为字段 **heap_nums** 指向的若干个 **float** 变量分配堆存储。如果成功调用 **get_heap_struct** 函数,并将指向堆分配结构体的指针以 **hs** 命名,其结果可以描述如下:
```
hs-->HeapStruct instance
        id
        len
        heap_nums-->N contiguous float elements
```
**get_heap_struct** 函数中,第一个堆分配过程很简单:
```
HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
if (NULL == heap_struct) /* failure? */
  return NULL;           /* if so, return NULL */
```
**sizeof(HeapStruct)** 包括了 **heap_nums** 字段的字节数32 位机器上为 464 位机器上为 8**heap_nums** 字段则是指向动态分配数组中的 **float** 元素的指针。那么,问题关键在于 **malloc** 为这个结构体传送了字节空间还是表示失败的 **NULL**;如果是 **NULL****get_heap_struct** 函数就也返回 **NULL** 以通知调用者堆分配失败。
第二步尝试堆分配的过程更复杂,因为在这一步,**HeapStruct** 的堆存储已经分配好了:
```
heap_struct->heap_nums = malloc(sizeof(float) * n);
if (NULL == heap_struct->heap_nums) {  /* failure? */
  free(heap_struct);                   /* if so, first free the HeapStruct */
  return NULL;                         /* and then return NULL */
}
```
传递给 **get_heap_struct** 函数的参数 **n** 指明动态分配的 **heap_nums** 数组中应该有多少个 **float** 元素。如果可以分配所需的若干个 **float** 元素,则该函数在返回 **HeapStruct** 的堆地址之前会设置结构的 **id****len** 字段。 但是,如果尝试分配失败,则需要两个步骤来实现最优方案:
1\. 必须释放 **HeapStruct** 的存储以避免内存泄漏。对于调用 **get_heap_struct** 的客户端函数而言,没有动态 **heap_nums** 数组的 **HeapStruct** 可能就是没用的;因此,**HeapStruct** 实例的字节空间应该显式释放,以便系统可以回收这些空间用于未来的堆分配。
2\. 返回 **NULL** 以标识失败。
如果成功调用 **get_heap_struct** 函数,那么释放堆存储也很棘手,因为它涉及要以正确顺序进行的两次 **free** 操作。因此,该程序设计了一个 **free_all** 函数,而不是要求程序员再去手动实现两步释放操作。回顾一下,**free_all** 函数是这样的:
```
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */  
}
```
检查完参数 **heap_struct** 不是 **NULL** 值后,函数首先释放 **heap_nums** 数组,这步要求 **heap_struct** 指针此时仍然是有效的。先释放 **heap_struct** 的做法是错误的。一旦 **heap_nums** 被释放,**heap_struct** 就可以释放了。如果 **heap_struct** 被释放,但 **heap_nums** 没有被释放,那么数组中的 **float** 元素就会泄漏:仍然分配了字节空间,但无法被访问到——因此一定要记得释放 **heap_nums**。存储泄漏将一直持续,直到 **nestedHeap** 程序退出,系统回收泄漏的字节时为止。
关于 **free** 库函数的注意事项就是要有顺序。回想一下上面的调用示例:
```
free(heap_struct->heap_nums); /* first free encapsulated aggregate */
free(heap_struct);            /* then free containing structure */
```
这些调用释放了分配的存储空间——但它们并 _不会_ 将作为参数传入的指针设置为 **NULL****free** 函数获取的只是地址的副本;因此,即使把副本改为 **NULL**,也不会影响调用方手中的原指针)。例如,在成功调用 **free** 之后,指针 **heap_struct** 仍然持有一些堆分配字节的堆地址,但是现在使用这个地址将会产生错误,因为对 **free** 的调用使得系统有权回收然后重用这些分配过的字节。
使用 **NULL** 参数调用 **free** 没有意义,但也没有什么坏处。而在非 **NULL** 的地址上重复调用 **free** 会导致不确定结果的错误:
```
free(heap_struct);  /* 1st call: ok */
free(heap_struct);  /* 2nd call: ERROR */
```
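一个常见的防御性习惯(示意做法,非原文示例)是在 **free** 之后立刻把指针置为 **NULL**:这样即使后面不小心再次调用 **free**,也只是无害的 `free(NULL)`,而误用该指针的代码也更容易在调试时暴露出来:
```
float* nums = malloc(sizeof(float) * 10);
/* ... 使用 nums ... */
free(nums);  /* 释放堆存储 */
nums = NULL; /* 之后再调用 free(nums) 只是无害的 free(NULL) */
```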
### 内存泄漏和堆碎片化
“内存泄漏”是指动态分配的堆存储变得不再可访问。看一下相关的代码段:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
nums = malloc(sizeof(float) * 25);        /* 25 new floats */
```
假如第一个 **malloc** 成功,第二个 **malloc** 会再将 **nums** 指针重置为 **NULL**(分配失败情况下)或是新分配的 25 个 **float** 中第一个的地址。最初分配的 10 个 **float** 元素的堆存储仍然处于被分配状态,但此时已无法再对其访问,因为 **nums** 指针要么指向别处,要么是 **NULL**。结果就是造成了 40 个字节 (**sizeof(float) * 10**) 的泄漏。
在第二次调用 **malloc** 之前,应该释放最初分配的存储空间:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
free(nums);                               /** good **/
nums = malloc(sizeof(float) * 25);        /* no leakage */
```
即使没有泄漏,堆也会随着时间的推移而碎片化,需要对系统进行碎片整理。例如,假设两个最大的堆块当前的大小分别为 200MB 和 100MB。然而这两个堆块并不连续进程 **P** 此时又需要分配 250MB 的连续堆存储。在进行分配之前,系统可能要对堆进行 _碎片整理_ 以给 **P** 提供 250MB 连续存储空间。碎片整理很复杂,因此也很耗时。
内存泄漏会创建处于已分配状态但不可访问的堆块,从而会加速碎片化。因此,释放不再需要的堆存储是程序员帮助减少碎片整理需求的一种方式。
### 诊断内存泄漏的工具
有很多工具可用于分析内存效率和安全性,其中我最喜欢的是 [valgrind][11]。为了说明该工具如何处理内存泄漏,这里给出 **leaky** 示例程序:
```
#include <stdio.h>
#include <stdlib.h>
int* get_ints(unsigned n) {
  int* ptr = malloc(n * sizeof(int));
  if (ptr != NULL) {
    unsigned i;
    for (i = 0; i < n; i++) ptr[i] = i + 1;
  }
  return ptr;
}
void print_ints(int* ptr, unsigned n) {
  unsigned i;
  for (i = 0; i < n; i++) printf("%3i\n", ptr[i]);
}
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) print_ints(arr, n);
  /** heap storage not yet freed... **/
  return 0;
}
```
**main** 函数调用了 **get_ints** 函数,后者会试着从堆中 **malloc** 32 个 4 字节的 **int**,然后初始化动态数组(如果 **malloc** 成功)。分配并初始化成功后,**main** 函数会调用 **print_ints** 函数。程序中并没有调用 **free** 来对应 **malloc** 操作;因此,内存泄漏了。
如果安装了 **valgrind** 工具箱,下面的命令会检查 **leaky** 程序是否存在内存泄漏(**%** 是命令行提示符):
```
% valgrind --leak-check=full ./leaky
```
绝大部分输出都在下面给出了。左边的数字 207683 是正在执行的 **leaky** 程序的进程标识符。这份报告给出了泄漏发生位置的详细信息,本例中位置是在 **main** 函数所调用的 **get_ints** 函数中对 **malloc** 的调用处。
```
==207683== HEAP SUMMARY:
==207683==   in use at exit: 128 bytes in 1 blocks
==207683==   total heap usage: 2 allocs, 1 frees, 1,152 bytes allocated
==207683== 
==207683== 128 bytes in 1 blocks are definitely lost in loss record 1 of 1
==207683==   at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==207683==   by 0x109186: get_ints (in /home/marty/gc/leaky)
==207683==   by 0x109236: main (in /home/marty/gc/leaky)
==207683== 
==207683== LEAK SUMMARY:
==207683==   definitely lost: 128 bytes in 1 blocks
==207683==   indirectly lost: 0 bytes in 0 blocks
==207683==   possibly lost: 0 bytes in 0 blocks
==207683==   still reachable: 0 bytes in 0 blocks
==207683==   suppressed: 0 bytes in 0 blocks
```
如果把 **main** 函数改成在对 **print_ints** 的调用之后,再加上一个对 **free** 的调用,**valgrind** 就会对 **leaky** 程序给出一个干净的内存健康清单:
```
==218462== All heap blocks were freed -- no leaks are possible
```
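修改后的 **main** 函数大致如下(示意代码,原文并未给出修改后的版本):
```
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) {
    print_ints(arr, n);
    free(arr); /* 与 get_ints 中的 malloc 相对应 */
  }
  return 0;
}
```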
### 静态区存储
在正统的 C 语言中,函数必须在所有块之外定义,这就杜绝了在一个函数体内定义另一个函数的可能(尽管有些 C 编译器以非标准扩展的形式支持这种嵌套定义)。我举的例子都是在所有块之外定义的函数。这样的函数要么是 **static** 即静态的,要么是 **extern** 即外部的,其中 **extern** 是默认值。
C 语言中,以 **static****extern** 修饰的函数和变量驻留在内存中所谓的 **静态区** 中,因为在程序执行期间该区域大小是固定不变的。这两个存储类型的语法非常复杂,我们应该回顾一下。在回顾之后,会有一个完整的代码示例来生动展示语法细节。在所有块之外定义的函数或变量默认为 **extern**;因此,函数和变量要想存储类型为 **static** ,必须显式指定:
```
/** file1.c: outside all blocks, five definitions  **/
int foo(int n) { return n * 2; }     /* extern by default */
static int bar(int n) { return n; }  /* static */
extern int baz(int n) { return -n; } /* explicitly extern */
int num1;        /* extern */
static int num2; /* static */
```
**extern** 和 **static** 的区别在于作用域:**extern** 修饰的函数或变量可以实现跨文件可见(需要声明)。相比之下,**static** 修饰的函数仅在 _定义_ 该函数的文件中可见,而 **static** 修饰的变量仅在 _定义_ 该变量的文件(或文件中的块)中可见:
```
static int n1;    /* scope is the file */
void func() {
   static int n2; /* scope is func's body */
   ...
}
```
如果在所有块之外定义了 **static** 变量,例如上面的 **n1**,该变量的作用域就是定义变量的文件。无论在何处定义 **static** 变量,变量的存储都在内存的静态区中。
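下面的小例子(示意代码,非原文示例)展示了块内 **static** 变量的行为:它的作用域限于函数体,但存储位于静态区,因此取值会在多次调用之间保留。
```
#include <stdio.h>
unsigned call_count() {      /* 示意用的计数函数 */
  static unsigned count = 0; /* 存储在静态区,只初始化一次 */
  return ++count;
}
int main() {
  printf("%u\n", call_count()); /* 打印 1 */
  printf("%u\n", call_count()); /* 打印 2 */
  printf("%u\n", call_count()); /* 打印 3 */
  return 0;
}
```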
**extern** 函数或变量在给定文件中的所有块之外定义,但这样定义的函数或变量也可以在其他文件中声明。典型的做法是在头文件中 _声明_ 这样的函数或变量,只要需要就可以包含进来。下面这些简短的例子阐述了这些棘手的问题。
假设 **extern** 函数 **foo****file1.c**_定义_,有无关键字 **extern** 效果都一样:
```
/** file1.c **/
int foo(int n) { return n * 2; } /* definition has a body {...} */
```
必须在其他文件(或其中的块)中使用显式的 **extern** _声明_ 此函数才能使其可见。以下是使 **extern** 函数 **foo** 在文件 **file2.c** 中可见的声明语句:
```
/** file2.c: make function foo visible here **/
extern int foo(int); /* declaration (no body) */
```
回想一下,函数声明没有用大括号括起来的主体,而函数定义会有这样的主体。
为了便于查看,函数和变量声明通常会放在头文件中。准备好需要声明的源代码文件,然后就可以 **#include** 相关的头文件。下一节中的 **staticProg** 程序演示了这种方法。
至于 **extern** 的变量,规则就变得更棘手了(很抱歉增加了难度!)。任何 **extern** 的对象——无论函数或变量——必须 _定义_ 在所有块之外。此外,在所有块之外定义的变量默认为 **extern**
```
/** outside all blocks **/
int n; /* defaults to extern */
```
但是,只有在变量的 _定义_ 中显式初始化变量时,**extern** 才能在变量的 _定义_ 中显式修饰LCTT译注换言之如果下列代码中的 `int n1;` 行前加上 **extern**,该行就由 _定义_ 变成了 _声明_
```
/** file1.c: outside all blocks **/
int n1;             /* defaults to extern, initialized by compiler to zero */
extern int n2 = -1; /* ok, initialized explicitly */
int n3 = 9876;      /* ok, extern by default and initialized explicitly */
```
要使在 **file1.c** 中定义为 **extern** 的变量在另一个文件(例如 **file2.c**)中可见,该变量必须在 **file2.c** 中显式 _声明_**extern** 并且不能初始化(初始化会将声明转换为定义):
```
/** file2.c **/
extern int n1; /* declaration of n1 defined in file1.c */
```
为了避免与 **extern** 变量混淆,经验是在 _声明_ 中显式使用 **extern**(必须),但不要在 _定义_ 中使用(非必须且棘手)。对于函数,**extern** 在定义中是可选使用的,但在声明中是必须使用的。下一节中的 **staticProg** 示例会把这些点整合到一个完整的程序中。
### staticProg 示例
**staticProg** 程序由三个文件组成:两个 C 语言源文件(**static1.c** 和 **static2.c**)以及一个头文件(**static.h**),头文件中包含两个声明:
```
/** header file static.h **/
#define NumCount 100               /* macro */
extern int global_nums[NumCount];  /* array declaration */
extern void fill_array();          /* function declaration */
```
两个声明中的 **extern**一个用于数组另一个用于函数强调对象在别处“外部”_定义_数组 **global_nums** 在文件 **static1.c** 中定义(没有显式的 **extern**),函数 **fill_array** 在文件 **static2.c** 中定义(也没有显式的 **extern**)。每个源文件都包含了头文件 **static.h**。**static1.c** 文件定义了两个驻留在内存静态区域中的数组(**global_nums** 和 **more_nums**)。第二个数组有 **static** 修饰,这将其作用域限制为定义数组的文件 (**static1.c**)。如前所述, **extern** 修饰的 **global_nums** 则可以实现在多个文件中可见。
```
/** static1.c **/
#include <stdio.h>
#include <stdlib.h>
#include "static.h"             /* declarations */
int global_nums[NumCount];      /* definition: extern (global) aggregate */
static int more_nums[NumCount]; /* definition: scope limited to this file */
int main() {
  fill_array(); /** defined in file static2.c **/
  unsigned i;
  for (i = 0; i < NumCount; i++)
    more_nums[i] = i * -1;
  /* confirm initialization worked */
  for (i = 0; i < NumCount; i += 10) 
    printf("%4i\t%4i\n", global_nums[i], more_nums[i]);
    
  return 0;  
}
```
下面的 **static2.c** 文件中定义了 **fill_array** 函数,该函数由 **main**(在 **static1.c** 文件中)调用;**fill_array** 函数会给名为 **global_nums****extern** 数组中的元素赋值,该数组在文件 **static1.c** 中定义。使用两个文件的唯一目的是凸显 **extern** 变量或函数能够跨文件可见。
```
/** static2.c **/
#include "static.h" /** declarations **/
void fill_array() { /** definition **/
  unsigned i;
  for (i = 0; i < NumCount; i++) global_nums[i] = i + 2;
}
```
**staticProg** 程序可以如下编译:
```
% gcc -o staticProg static1.c static2.c
```
### 从汇编语言看更多细节
现代 C 编译器能够处理 C 和汇编语言的任意组合。编译 C 源文件时,编译器首先将 C 代码翻译成汇编语言。这是对从上文 **static1.c** 文件生成的汇编语言进行保存的命令:
```
% gcc -S static1.c
```
生成的文件就是 **static1.s**。这是文件顶部的一段代码,额外添加了行号以提高可读性:
```
    .file    "static1.c"          ## line  1
    .text                         ## line  2
    .comm    global_nums,400,32   ## line  3
    .local    more_nums           ## line  4
    .comm    more_nums,400,32     ## line  5
    .section    .rodata           ## line  6
.LC0:                             ## line  7
    .string    "%4i\t%4i\n"       ## line  8
    .text                         ## line  9
    .globl    main                ## line 10
    .type    main, @function      ## line 11
main:                             ## line 12
...
```
诸如 **.file**(第 1 行)之类的汇编语言指令以句点开头。顾名思义,指令会指导汇编程序将汇编语言翻译成机器代码。**.rodata** 指令(第 6 行)表示后面是只读对象,包括字符串常量 **"%4i\t%4i\n"**(第 8 行),**main** 函数(第 12 行)会使用此字符串常量来实现格式化输出。作为标签引入(通过末尾的冒号实现)的 **main** 函数(第 12 行),同样也是只读的。
在汇编语言中,标签就是地址。标签 **main:**(第 12 行)标记了 **main** 函数代码开始的地址,标签 **.LC0**:(第 7 行)标记了格式化字符串开头所在的地址。
**global_nums**(第 3 行)和 **more_nums**(第 4 行数组的定义包含了两个数字400 是每个数组中的总字节数32 是每个数组(含 100 个 **int** 元素)中每个元素的比特数。(第 5 行中的 **.comm** 指令表示 **common name**,可以忽略。)
两个数组定义的不同之处在于 **more_nums** 被标记为 **.local**(第 4 行),这意味着其作用域仅限于其所在文件 **static1.s**。相比之下,**global_nums** 数组就能在多个文件中实现可见,包括由 **static1.c****static2.c** 文件翻译成的汇编文件。
最后,**.text** 指令在汇编代码段中出现了两次(第 2 行和第 9 行。术语“text”表示“只读”但也会涵盖一些读/写变量,例如两个数组中的元素。尽管本文展示的汇编语言是针对 Intel 架构的,但 Arm6 汇编也非常相似。对于这两种架构,**.text** 区域中的变量(本例中为两个数组中的元素)会自动初始化为零。
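这一点可以用一个小实验(示意代码,非原文示例)来验证:静态区中的数组会被自动清零,而栈上的数组在显式初始化之前内容是不确定的,读取前必须先赋值。
```
#include <stdio.h>
int static_nums[4];  /* 静态区:自动初始化为零 */
int main() {
  int stack_nums[4]; /* 栈:内容不确定,读取前应先赋值 */
  printf("%i %i\n", static_nums[0], static_nums[3]); /* 总是打印 0 0 */
  stack_nums[0] = 7; /* 先写后读 */
  printf("%i\n", stack_nums[0]);
  return 0;
}
```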
### 总结
C 语言中的内存高效和内存安全编程准则很容易说明,但可能会很难遵循,尤其是在调用设计不佳的库的时候。准则如下:
* 尽可能使用栈存储,进而鼓励编译器将通用寄存器用作暂存器,实现优化。栈存储代表了高效的内存使用并促进了代码的整洁和模块化。永远不要返回指向基于栈的存储的指针。
* 小心使用堆存储。C和 C++)中的重难点是确保动态分配的存储尽快解除分配。良好的编程习惯和工具(如 **valgrind**)有助于攻关这些重难点。优先选用自身提供释放函数的库,例如 **nestedHeap** 代码示例中的 **free_all** 释放函数。
* 谨慎使用静态存储,因为这种存储会自始至终地影响进程的内存占用。特别是尽量避免使用 **extern** 和 **static** 数组。
本文 C 语言代码示例可在我的网站 (<https://condor.depaul.edu/mkalin>) 上找到。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/memory-programming-c
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/malloc.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
[11]: https://www.valgrind.org/

View File

@ -0,0 +1,78 @@
[#]: subject: "Neither Windows, nor Linux! Shrine is Gods Operating System"
[#]: via: "https://itsfoss.com/shrine-os/"
[#]: author: "John Paul https://itsfoss.com/author/john/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
不是 Windows也不是 LinuxShrine 才是 “神之操作系统”
======
在生活中,我们都曾使用过多种操作系统。有些好,有些坏。但你能说你使用过由“神”设计的操作系统吗?今天,我想向你介绍 Shrine圣殿
### 什么是 Shrine
![Shrine 界面][1]
从介绍里,你可能想知道这到底是怎么回事。嗯,这一切都始于一个叫 Terry Davis 的人。在我们进一步介绍之前我最好提醒你Terry 在生前患有精神分裂症,而且经常不吃药。正因为如此,他在生活中说过或做过一些不被社会接受的事情。
总之,让我们回到故事的主线。在 21 世纪初Terry 发布了一个简单的操作系统。多年来,它不停地换了几个名字,有 J Operating System、LoseThos 和 SparrowOS 等等。他最终确定了 [TempleOS][2] 这个名字。他选择这个名字(神庙系统)是因为这个操作系统将成为神的圣殿。因此,神给 Terry 的操作系统规定了以下 [规格][3]
![video](https://youtu.be/LtlyeDAJR7A)
* 它将有 640×480 的 16 色图形
* 它将使用“单声道 8 位带符号的类似 MIDI 的声音采样”
* 它将追随 Commodore 64即“一个非网络化的简单机器编程是目标而不仅仅是达到目的的手段”
* 它将只支持一个文件系统(名为 “Red Sea”
* 它将被限制在 10 万行代码内,以使它 “整体易于学习”。
* “只支持 Ring-0 级,一切都在内核模式下运行,包括用户应用程序”
* 字体将被限制为 “一种 8×8 等宽字体”
* “对一切都可以完全访问。所有的内存、I/O 端口、指令和类似的东西都绝无限制。所有的函数、变量和类成员都是可访问的”
* 它将只支持一个平台,即 64 位 PC
Terry 用一种他称之为 HolyC神圣 C 语言的编程语言编写了这个操作系统。TechRepublic 称其为一种 “C++ 的修改版(‘比 C 多,比 C++ 少’)”。如果你有兴趣了解 HolyC我推荐[这篇文章][4] 和 [RosettaCode][5] 上的 HolyC 条目。
2013 年Terry 在他的网站上宣布TempleOS 已经完成。不幸的是,几年后的 2018 年 8 月Terry 被火车撞死了。当时他无家可归。多年来,许多人通过他在该操作系统上的工作关注着他。大多数人对他在如此小的体积中编写操作系统的能力印象深刻。
现在,你可能想知道这些关于 TempleOS 的讨论与 Shrine 有什么关系。好吧,正如 Shrine 的 [GitHub 页面][6] 所说,它是 “一个为异教徒设计的 TempleOS 发行版”。GitHub 用户 [minexew][7] 创建了 Shrine为 TempleOS 添加 Terry 忽略的功能。这些功能包括:
* 与 TempleOS 程序 99% 的兼容性
* 带有 Lambda Shell感觉有点像经典的 Unix 命令解释器
* TCP/IP 协议栈和开机即可上网
* 包括一个软件包下载器
minexew 正计划在未来增加更多的功能,但还没有宣布具体会包括什么。他有计划为 Linux 制作一个完整的 TempleOS 环境。
### 体验
让 Shrine 在虚拟机中运行是相当容易的。你所需要做的就是安装你选择的虚拟化软件(我用的是 VirtualBox。当你为 Shrine 创建一个虚拟机时,确保它是 64 位的,并且至少有 512MB 的内存。
一旦你启动到 Shrine会询问你是否要安装到你的虚拟硬盘上。一旦安装完成你也可以选择不安装你会看到一个该操作系统的导览你可以由此探索。
### 总结
TempleOS和 Shrine显然不是为了取代 Windows 或 Linux。即使 Terry 把它称为 “神之圣殿”,我相信在他比较清醒的时候,他也会承认这更像是一个业余操作系统。考虑到这一点,已完成的产品相当 [令人印象深刻][8]。在 12 年的时间里Terry 用他自己创造的语言创造了一个稍稍超过 10 万行代码的操作系统。他还编写了自己的编译器、图形库和几个游戏。所有这些都是在与他自己的个人心魔作斗争的时候进行的。
--------------------------------------------------------------------------------
via: https://itsfoss.com/shrine-os/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/shrine.jpg?resize=800%2C600&ssl=1
[2]: https://templeos.org/
[3]: https://web.archive.org/web/20170508181026/http://www.templeos.org:80/Wb/Doc/Charter.html
[4]: https://harrisontotty.github.io/p/a-lang-design-analysis-of-holyc
[5]: https://rosettacode.org/wiki/Category:HolyC
[6]: https://github.com/minexew/Shrine
[7]: https://github.com/minexew
[8]: http://www.codersnotes.com/notes/a-constructive-look-at-templeos/

View File

@ -0,0 +1,141 @@
[#]: subject: "Use Vagrant to test your scripts on different operating systems"
[#]: via: "https://opensource.com/article/21/9/test-vagrant"
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 Vagrant 在不同的操作系统上测试你的脚本
======
Vagrant 可以帮助你在你的电脑上运行其他操作系统,这意味着你可以构建、测试、做古怪的事情而不毁坏你的系统。
![Business woman on laptop sitting in front of window][1]
我使用 Vagrant 已经很长时间了。我使用几种 DevOps 工具在一个系统上安装它们可能会变得很复杂。Vagrant 让你在不破坏系统的情况下做一些很酷的事情,因为你根本不需要在生产系统上做实验。
如果你熟悉 [VirtualBox][2] 或 [GNOME Boxes][3],那么学习 Vagrant 很容易。Vagrant 有一个简单而干净的界面用于处理虚拟机。一个名为 `Vagrantfile` 的配置文件,允许你定制你的虚拟机(称为 _Vagrant boxes_)。一个简单的命令行界面让你启动、停止、暂停或销毁你的 box。
考虑一下这个简单的例子。
假设你想写 Ansible 或 shell 脚本,在一个新的服务器上安装 Nginx。你不能在自己的系统上这样做因为你可能没有运行想测试的操作系统或者缺少所需的依赖项。启动新的云服务器进行测试可能会很费时和昂贵。这就是 Vagrant 派上用处的地方。你可以用它来启动一个虚拟机,用你的脚本来配置它,并证明一切按预期工作。然后,你可以删除这个 box重新配置它并重新运行你的脚本来验证它。你可以多次重复这个过程直到你确信你的脚本在所有条件下都能工作。你可以将你的 Vagrant 文件提交到 Git 仓库,以确保你的团队正在测试完全相同的环境,因为他们将使用完全相同的测试机。不再有“但它在我的机器上运行良好”这事了。
### 开始使用
首先,[在你的系统上安装 Vagrant][4],然后创建一个新的文件夹进行实验。在这个新文件夹中,创建一个名为 `Vagrantfile` 的新文件,内容如下:
```
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/hirsute64"
end
```
你也可以运行 `vagrant init ubuntu/hirsute64`,它将为你生成一个新的 Vagrant 文件。现在运行 `vagrant up`。这个命令将从 Vagrant 仓库中下载 `ubuntu/hirsute64` 镜像。
```
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/hirsute64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/hirsute64' version '20210820.0.0' is up to date...
==> default: Setting the name of the VM: a_default_1630204214778_76885
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Connection reset. Retrying...
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
```
此时,如果你打开你的 Vagrant 后端(如 VirtualBox 或 virt-manager你会看到你的 box 在那里。接下来,运行 `vagrant ssh` 登录到 box。如果你能看到 Vagrant 的提示,那么你就进入了!
```
~ vagrant ssh
Welcome to Ubuntu 21.04 (GNU/Linux 5.11.0-31-generic x86_64)
* Documentation: <https://help.ubuntu.com>
* Management: <https://landscape.canonical.com>
* Support: <https://ubuntu.com/advantage>
System information as of Sun Aug 29 02:33:51 UTC 2021
System load: 0.01 Processes: 110
Usage of /: 4.1% of 38.71GB Users logged in: 0
Memory usage: 17% IPv4 address for enp0s3: 10.0.2.15
Swap usage: 0% IPv4 address for enp0s8: 192.168.1.20
0 updates can be applied immediately.
vagrant@ubuntu-hirsute:~$
```
Vagrant 使用“基础 box”来启动你的本地机器。在我们的例子中Vagrant 从 [Hashicorp 的 Vagrant 目录][5] 下载 `ubuntu/hirsute64` 镜像,并把它交给 VirtualBox 来创建实际的 box。
### 共享文件夹
Vagrant 会把你的当前文件夹映射到 Vagrant box 中的 `/vagrant`,让你的系统和 box 之间保持文件同步。只要把网站的文件根目录指向 `/vagrant`,就可以很方便地测试 Nginx 网站:你在 IDE 里修改文件box 里的 Nginx 就会直接提供修改后的内容。
### Vagrant 命令
有几个 Vagrant 命令,你可以用它们来控制你的 box。
其中一些重要的命令是:
* `vagrant up`:启动一个 box。
* `vagrant status`:显示当前 box 的状态。
* `vagrant suspend`:暂停当前的 box。
* `vagrant resume`:恢复当前的 box。
* `vagrant halt`:关闭当前的 box。
* `vagrant destroy`:销毁当前的 box。通过运行此命令你将失去存储在 box 上的任何数据。
* `vagrant snapshot`:对当前的 box 进行快照。
### 试试 Vagrant
Vagrant 是一个使用 DevOps 原则进行虚拟机管理的经过时间考验的工具。配置你的测试机,与你的团队分享配置,并在一个可预测和可重复的环境中测试你的项目。如果你正在开发软件,那么通过使用 Vagrant 进行测试,你将为你的用户提供良好的服务。如果你不开发软件,但你喜欢尝试新版本的操作系统,那么没有比这更简单的方法了。今天就试试 Vagrant 吧!
* * *
_这篇文章最初发表在[作者的个人博客][6]上经许可后被改编。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/test-vagrant
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayushsharma
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating)
[2]: https://opensource.com/article/21/6/try-linux-virtualbox
[3]: https://opensource.com/article/19/5/getting-started-gnome-boxes-virtualization
[4]: https://www.vagrantup.com/docs/installation
[5]: https://app.vagrantup.com/boxes/search
[6]: https://notes.ayushsharma.in/2021/08/introduction-to-vagrant