Merge pull request #26 from LCTT/master

update 0926
SamMa 2021-09-26 09:26:15 +08:00 committed by GitHub
commit 5321fbc6e5
GPG Key ID: 4AEE18F83AFDEB23
13 changed files with 1739 additions and 1158 deletions

View File

@@ -3,16 +3,18 @@
 [#]: author: "Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/"
 [#]: collector: "lujun9972"
 [#]: translator: "wxy"
-[#]: reviewer: " "
-[#]: publisher: " "
-[#]: url: " "
+[#]: reviewer: "turbokernel"
+[#]: publisher: "wxy"
+[#]: url: "https://linux.cn/article-13817-1.html"

 如何在树莓派 4 上安装 Ubuntu 桌面系统
 ======

-> 这个详尽的教程告诉你如何在树莓派 4 设备上安装 Ubuntu 桌面。
+> 本教程将详细告诉你在树莓派 4 设备上如何安装 Ubuntu 桌面。
+
+![](https://img.linux.net.cn/data/attachment/album/202109/25/084015z4cfiiy8e1ezmmz0.jpg)

-革命性的<ruby>树莓派<rt>Raspberry Pi</rt></ruby>是最受欢迎的单板计算机。它有自己的基于 Debian 的操作系统,叫做 <ruby>[树莓派操作系统][1]<rt>Raspberry Pi OS</rt></ruby>(原名 Raspbian)
+革命性的<ruby>树莓派<rt>Raspberry Pi</rt></ruby>是最受欢迎的单板计算机。它拥有基于 Debian 的操作系统,叫做 <ruby>[树莓派操作系统][1]<rt>Raspberry Pi OS</rt></ruby>(原名 Raspbian)

 还有其他几个 [可用于树莓派的操作系统][2],但几乎所有的都是轻量级的,适合于树莓派设备的小尺寸和低端硬件。
@@ -22,23 +24,23 @@
 在本教程中,我将展示在树莓派 4 上安装 Ubuntu 桌面的步骤。

-首先,快速了解一下先决条件。
+首先,快速了解一下运行要求。

-### 在树莓派 4 上运行 Ubuntu 的先决条件
+### 在树莓派 4 上运行 Ubuntu 的要求

 ![][4]

 以下是你需要的东西:

-1. 一个能用的互联网连接的 Linux 或 Windows 系统。
+1. 一个能够联网的 Linux 或 Windows 系统。
 2. [树莓派镜像工具][5] :树莓派的官方开源工具,可以在你的 SD 卡上写入发行版镜像。
-3. 微型 SD 卡:可以使用至少 16GB 的存储卡,尽管建议使用 32GB 的版本。
+3. Micro SD 卡:最低使用 16GB 的存储卡,推荐使用 32GB 的版本。
 4. 一个基于 USB 的 Micro SD 卡读卡器(如果你的电脑没有读卡器)。
-5. 必要的树莓派 4 配件,如 HDMI 兼容显示器、[Micro HDMI 连接到标准 HDMI(A/M)接口的电缆][6]、[电源(建议使用官方适配器)][7]、USB 的有线/无线键盘和鼠标/触摸板。
+5. 树莓派 4 必备配件,如 HDMI 兼容显示器、[Micro HDMI 连接到标准 HDMI(A/M)接口的电缆][6]、[电源(建议使用官方适配器)][7]、USB 的有线/无线键盘和鼠标/触摸板。

-事先 [详细阅读树莓派的要求][8] 是很好的做法。
+最好能够提前 [详细阅读树莓派的要求][8]。

-现在,不再拖延了,让我快速带领你完成 SD 卡的镜像准备。
+现在,闲话少叙,让我快速带领你完成 SD 卡的镜像准备。

 ### 为树莓派准备 Ubuntu 桌面镜像
@@ -46,17 +48,17 @@
 ![下载并将操作系统放入 SD 卡的官方工具][9]

-你可以从官方网站上下载用于 Ubuntu、Windows 和 macOS 的这个工具:
+你可以从官方网站上下载这个工具的 Ubuntu、Windows 和 macOS 版本:

 - [下载树莓派镜像工具][10]

-在 Ubuntu 和其他 Linux 发行版上,你也可以用 Snap 安装它:
+在 Ubuntu 和其他 Linux 发行版上,你也可以使用 Snap 安装它:

 ```
 sudo snap install rpi-imager
 ```

-安装完毕后,运行该工具。当你看到下面的屏幕时,选择 “<ruby>选择操作系统<rt>CHOOSE OS</rt></ruby>”:
+安装完毕后,运行该工具。当你看到下面的界面时,选择 “<ruby>选择操作系统<rt>CHOOSE OS</rt></ruby>”:

 ![镜像工具:选择首选操作系统][11]
@@ -74,13 +76,13 @@ sudo snap install rpi-imager
 > **注意:**
 >
-> 如果你没有一个好的、稳定的网络连接,你可以 [从 Ubuntu 的网站上单独下载 Ubuntu 的树莓派镜像][15]。在镜像工具中,在选择操作系统时,底部选择“<ruby>使用自定义<rt>Use custom</rt></ruby>”选项。你也可以使用 Etcher 将镜像写入到 SD 卡上。
+> 如果你没有一个稳定的网络连接,你可以 [从 Ubuntu 的网站上单独下载 Ubuntu 的树莓派镜像][15]。在镜像工具中,在选择操作系统时,底部选择“<ruby>使用自定义<rt>Use custom</rt></ruby>”选项。你也可以使用 Etcher 将镜像写入到 SD 卡上。

-将微型 SD 卡插入读卡器中,等待它挂载。选择“<ruby>存储设备<rt>Storage</rt></ruby>”下的 “<ruby>选择存储设备<rt>CHOOSE STORAGE</rt></ruby>”:
+将 Micro SD 卡插入读卡器中,等待它挂载。选择“<ruby>存储设备<rt>Storage</rt></ruby>”下的 “<ruby>选择存储设备<rt>CHOOSE STORAGE</rt></ruby>”:

 ![镜像工具:选择存储设备(SD 卡)][16]

-你应该只看到你的微型 SD 卡的存储空间,你会根据大小立即识别它。这里,我使用的是 32GB 的卡:
+你应该可以根据存储空间大小,识别你的 Micro SD 卡。这里,我使用的是 32GB 的卡:

 ![镜像工具:选择 SD 卡][17]
@@ -88,59 +90,59 @@ sudo snap install rpi-imager
 ![镜像工具:镜像写入][18]

-我假设你已经备份了 SD 卡上的内容。如果是一张新卡,你可以直接进行:
+如果你已经备份了 SD 卡上的内容或是一张新卡,你可以直接进行:

 ![镜像工具:镜像写入确认][19]

-由于这是一个 [sudo][20] 的权限,你必须输入密码。如果你从终端运行 `sudo rpi-imager`,就不会出现这种情况:
+由于这需要 [sudo][20] 的权限,你必须输入密码。如果你从终端运行 `sudo rpi-imager`,就不会出现这种情况:

 ![镜像工具:镜像写入授权需要密码][21]

-如果你的 SD 卡有点旧,这将需要一些时间。但是,如果它是一个新的高速的 SD 卡,就不会花很长时间:
+如果你的 SD 卡有点旧,这将需要一些时间。如果它是一个新的高速 SD 卡,就无需很长时间:

 ![镜像工具:写入镜像][22]

-我也不建议跳过验证。确保镜像写入成功。
+为确保镜像写入成功,我不建议跳过验证。

 ![镜像工具:验证写入][23]

-一旦结束,你会得到以下确认:
+写入结束后,会有以下确认提示:

 ![镜像工具:写入成功][24]

 现在,从你的系统中安全移除 SD 卡。

-###在树莓派上使用载有 Ubuntu 的微型 SD 卡
+### 在树莓派上使用装有 Ubuntu 的 MicroSD 卡

-战斗的一半已经胜利了。与常规的 Ubuntu 安装不同,你并没有创建一个临场环境。Ubuntu 已经安装在 SD 卡上了,而且几乎已经可以使用了。让我们来看看这里还剩下什么。
+已经成功了一半了。与常规的 Ubuntu 安装不同,无需创建一个临场安装环境。Ubuntu 已经安装在 SD 卡上了,而且几乎可以直接使用了。让我们来看看这里还剩下什么。

 #### 第 1 步:将 SD 卡插入树莓派中

 对于第一次使用的用户来说,有时会有点困惑,不知道那个卡槽到底在哪里?不用担心。它位于电路板背面的左手边。下面是一个插入卡后的倒置视图。

-![树莓派 4B 板倒置,插入微型 SD 卡][25]
+![树莓派 4B 板倒置,插入 Micro SD 卡][25]

-照这个方向将卡慢慢插入板子下面的插槽,轻轻地插,直到它不再往前移动。你可能还会听到一点咔嚓声来确认。这意味着它刚刚完美地插进去了。
+照这个方向将卡慢慢插入板子下面的卡槽,轻轻地插,直到它不再往前移动。你可能还会听到一点咔嚓声来确认。这意味着它已经完美地插入了。

 ![树莓派 SD 插槽在板子背面的左侧][26]

-当你把它插进去的时候,你可能会注意到有两个小针脚在插槽中调整了自己的位置(如上图所示),但这没关系。一旦插入,卡看起来会有一点突出。这就是它应该有的样子。
+当你把它插进去的时候,你可能会注意到在插槽中有两个小针脚调整了自己的位置(如上图所示),但这没关系。一旦插入,卡看起来会有一点突出。这就是它应该有的样子。

 ![树莓派 SD 卡插入时有一小部分可见][27]

 #### 第 2 步:设置树莓派

-我想,我不需要在这里详细介绍。
+无需在这里详细介绍。

 确保电源线接头、微型 HDMI 线接头、键盘和鼠标接头(有线/无线)都牢固地连接到树莓派板的相关端口。

-确保显示器和电源插头也已正确连接,然后再去打开电源插座。我不建议把适配器插到带电的插座上。主要一下 [电弧][28]。
+确保显示器和电源插头也已正确连接,然后再去打开电源插座。我不建议把适配器插到带电的插座上。参考 [电弧][28]。

-一旦你确保了以上两个步骤,你就可以 [打开树莓派设备的电源][29]。
+确认了以上两个步骤后,你就可以 [打开树莓派设备的电源][29]。

 #### 第 3 步:在树莓派上 Ubuntu 桌面的首次运行

-一旦你打开树莓派的电源,你会被要求在第一次运行时进行一些基本配置。你只需按照屏幕上的指示操作即可。
+当你打开树莓派的电源,你需要在初次运行时进行一些基本配置。你只需按照屏幕上的指示操作即可。

 选择你的语言、键盘布局、连接到 WiFi 等:
@@ -150,7 +152,7 @@ sudo snap install rpi-imager
 ![选择 WiFi][32]

-你会被要求选择时区:
+可以根据需求选择时区:

 ![选择时区][33]
@@ -158,17 +160,17 @@ sudo snap install rpi-imager
 ![输入所需的用户名和密码][34]

-它将配置一些东西,可能需要一些时间来完成。
+之后的步骤将配置一些东西,这个过程需要一些时间。

 ![完成 Ubuntu 设置][35]

 ![完成 Ubuntu 设置][36]

-这之后可能需要一些时间,你的系统会重新启动,你会发现自己处于 Ubuntu 的登录界面:
+系统会重新启动之前需要一些时间,最终,你将会来到 Ubuntu 的登录界面:

 ![Ubuntu 的登录界面][37]

-现在可以开始享受树莓派上的 Ubuntu 桌面了:
+现在,你可以开始享受树莓派上的 Ubuntu 桌面了:

 ![树莓派上的 Ubuntu 桌面][38]
@@ -176,9 +178,9 @@ sudo snap install rpi-imager
 我注意到**一个暂时的异常情况**。在进行安装时,我的显示器左侧有一个红色的闪烁边界。这种闪烁(也有不同的颜色)在屏幕的随机部分也能注意到。但在重启和第一次启动后,它就消失了。

-我非常需要 Ubuntu 开始为树莓派等流行的 ARM 设备提供支持,我很高兴看到它在树莓派上运行。
+很高兴能够看到它在树莓派上运行,我非常需要 Ubuntu 开始为树莓派等流行的 ARM 设备提供支持。

-希望你觉得这个教程对你有帮助。如果你有问题或建议,请在评论中告诉我。
+希望这个教程对你有帮助。如果你有问题或建议,请在评论中告诉我。

 --------------------------------------------------------------------------------
@@ -187,7 +189,7 @@ via: https://itsfoss.com/install-ubuntu-desktop-raspberry-pi/
 作者:[Avimanyu Bandyopadhyay][a]
 选题:[lujun9972][b]
 译者:[wxy](https://github.com/wxy)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[turbokernel](https://github.com/turbokernel)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@@ -0,0 +1,164 @@
[#]: subject: "GNOME 41 Released: The Most Popular Linux Desktop Environment Gets Better"
[#]: via: "https://news.itsfoss.com/gnome-41-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13816-1.html"
GNOME 41 发布:最受欢迎的 Linux 桌面环境的精细打磨
======
> GNOME 41 是一次有价值的升级,它带来了新的应用程序、功能和细微的视觉改进。
![](https://img.linux.net.cn/data/attachment/album/202109/24/130703iznp8p53dbd1kktz.jpg)
现在 GNOME 41 稳定版终于发布了。
虽然 GNOME 40 带来了不少激进的改变,让许多用户不得不去适应新的工作流程,但 GNOME 41 似乎避免了这个问题。
在 GNOME 41 中没有明显的工作流程变化,但增加了新的功能,并做了全面的改进。
GNOME 41 的测试版已经发布了一段时间了。而且,为了发现它值得关注的地方,我们在发布前就使用 [GNOME OS][1] 试用了它的稳定版。
### GNOME 41 有什么新功能?
GNOME 41 并没有给你带来任何新的视觉感受,但是一些有用的改进可以帮助你改善或控制工作流程。
此外,还升级了一些 GNOME 应用程序。
让我提一下 GNOME 41 的主要亮点。
#### GNOME 41 软件的上下文板块
![][3]
每个版本中,用户都期待着对 GNOME “<ruby>软件<rt>Software</rt></ruby>”的改进。
虽然他们一直在朝着正确的方向改进它但它需要一次视觉上的重新打造。而且这一次GNOME 41 带来了急需的 UI 更新。
软件商店的描述性更强了,看起来应该对新用户有吸引力。它使用表情符号/创意图标来对应用程序进行分类,使软件中心变得更时尚。
就像 [Apps for GNOME][4] 门户网站一样,“软件”的应用程序屏幕包括了更多的细节,以尽可能地告知用户,而不需要参考项目页面或其网站。
![][5]
换句话说,这些添加到应用程序页面的上下文板块,提供了有关设备支持、安全/许可、年龄等级、下载的大小、项目等信息。
你还可以为某些应用程序(如 GIMP选择可用的附加组件以便一次都安装上。这样你就可以节省寻找附加组件和单独安装它们的时间了。
事实证明,GNOME 41 “软件”比以前更加好用了。
#### 新的多任务选项
![][6]
GNOME 41 打造了新的多任务设置以帮助你改善工作流程。
你可以通过切换热角来快速打开“<ruby>活动概览<rt>Activities Overview</rt></ruby>”。还添加了一个拖动窗口到边缘时调整其大小的能力。
根据你的需求,你可以设置一个固定的可用工作空间的数量,当然也可以保持动态数量。
除此以外,你还可以调整这些功能:
* 多显示器工作区
* 应用程序切换行为
当你有多个显示器时,你可以选择将工作空间限制在一个屏幕上,或在连接的显示器上连续显示。
而且,当你在切换应用程序并浏览它们时,你可以自定义只在同一工作区或在所有工作区预览应用程序。
#### 节电设置
![][7]
在 GNOME 41 中,现在有一个有效节省电力的性能调整。这对于笔记本用户手动调整其性能,或者当一个应用程序要求切换模式以节省电力时,是非常有用的。
![][8]
#### GNOME “日历”的改进
GNOME “<ruby>日历<rt>Calendar</rt></ruby>”现在可以打开 ICS 文件及导入活动。
#### 触摸板手势
无缝的工作流程的体验:可以利用三指垂直向上/向下滑动的动作来获得“活动概览”,以及利用三指水平向右/向左滑动的动作在工作空间之间导航。
很高兴看到他们把重点放在改善使用触摸板的工作流程上,这类似于 [elementary OS 6 的功能][9]。
#### GNOME 连接应用
![][10]
添加了一个新的“<ruby>连接<rt>Connections</rt></ruby>”应用程序,可以连接到远程计算机,不管是什么平台。
我看到这个应用程序仍然是一个 alpha 版本,但也许随着接下来的几次更新,就会看到这个应用程序的完成版本。
我还没有试过它是否可以工作,但也许值得再写一篇简短的文章来告诉你如何使用它。
#### SIP/VoIP 支持
在 [GNOME 41 测试版][11] 中,我发现了对 SIP/VoIP 的支持。
如果你是一个商业用户或者经常打国际电话,你现在可以直接从 GNOME 41 的拨号盘上拨打 VoIP 电话了。
不幸的是,在使用带有 GNOME 41 稳定版的 GNOME OS 时,我无法找到包含的“<ruby>通话<rt>Calls</rt></ruby>”应用程序。所以,我无法截图给你看。
#### GNOME Web / Epiphany 的改进
![][12]
GNOME Web即 Epiphany 浏览器)最近进行了很多很棒的改进。
在 GNOME 41 中Epiphany 浏览器现在利用 AdGuard 的脚本来阻止 YouTube 广告。别忘了,它还增加了对 Epiphany canary 构建版的支持。
#### 其他改进
在底层,有一些细微但重要的变化带来了更好、更快的用户体验。
例如,你可能会注意到,在应用程序/窗口的标题区域,图标更加醒目。这是为了提高清晰度和增强外观。
同样地GNOME 应用程序和功能也有许多改进,你在使用它们时可能会发现:
* GNOME “<ruby>地图<rt>Map</rt></ruby>”现在以一种用户友好的方式显示平均海平面。
* Nautilus 文件管理器进行了改进,支持有密码保护的压缩文件,并能够让你切换启用/禁用自动清理垃圾的功能
* “<ruby>音乐<rt>Music</rt></ruby>”应用程序的用户界面进行了更新
* GNOME 文本编辑器有了更多功能
* GTK 更新至 4.4.0
  * 增加 libadwaita,以潜在地改善 GNOME 应用程序的用户体验
你可以参考 [官方更新日志和公告博文][13] 来探索所有的技术变化。
### 总结
GNOME 41 可能不是一个突破性的升级,但它是一个带有许多有价值的补充的重要更新。
你可以期待下个月发布的 Fedora 35 中会带有它。不幸的是,Ubuntu 21.10 将不包括它,但你可以在其他 Linux 发行版中等待它。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/gnome-41-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-os/
[2]: https://i2.wp.com/i.ytimg.com/vi/holOYrZquBQ/hqdefault.jpg?w=780&ssl=1
[3]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-software.png?w=1233&ssl=1
[4]: https://news.itsfoss.com/apps-for-gnome-portal/
[5]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-software-app.png?w=1284&ssl=1
[6]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-multitasking.png?w=1032&ssl=1
[7]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-power-settings.png?w=443&ssl=1
[8]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-power-options.png?w=1012&ssl=1
[9]: https://news.itsfoss.com/elementary-os-6-features/
[10]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/connections-gnome-41.png?w=1075&ssl=1
[11]: https://news.itsfoss.com/gnome-41-beta/
[12]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-web-41.png?w=1328&ssl=1
[13]: https://help.gnome.org/misc/release-notes/41.0/

View File

@@ -0,0 +1,82 @@
[#]: subject: "Linux Gamers Can Finally Play Games like Apex Legends, Fortnite, Thanks to Easy Anti-Cheat Support"
[#]: via: "https://news.itsfoss.com/easy-anti-cheat-linux/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13821-1.html"
Linux 玩家终于可以玩《Apex Legends》、《Fortnite》等游戏了
======
> 如果你是一个狂热的多人游戏玩家你将能够玩到《Apex Legends》和《Fortnite》这样的热门游戏。但是你可能需要等待一段时间。
![](https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/easy-anti-cheat-linux.png?w=1200&ssl=1)
Linux 玩家们,这可是个大新闻啊!
Epic Games 为其“简易反作弊”服务增加了完整的 Linux 支持,官方提供了对 [SteamPlay][1](或 Proton)和 Wine 的兼容支持。
尽管我们预计这将在未来的某个时候发生,但 Steam Deck 的引入改变了 [在 Linux 上玩游戏][2] 的场景。
你可能知道Steam Deck 是由 Linux 驱动的,这就是为什么 Epic Games 有兴趣扩大对 Linux 平台的支持。
因此,可以说,如果不是 Valve 在 Steam Deck 上的努力,在 Linux 上获得“简易反作弊”支持的机会并不乐观。
### 多人游戏玩家可以考虑转到 Linux 上了
有了 [简易反作弊][3] 的支持许多流行的多人游戏如《Apex Legends》、《Fortnite》、《Tom Clancy's Division 2》、《Rust》 和其他许多游戏应该可以在 Linux 上完美地运行了。
根据 Epic Games 的公告:
> 从最新的 SDK 版本开始,开发者只需在 Epic 在线服务开发者门户点击几下,就可以通过 Wine 或 Proton 激活对 Linux 的反作弊支持。
因此,开发人员可能需要一段时间来激活对各种游戏的反作弊支持。但是,对于大多数带有简易反作弊功能的游戏来说,这应该是一个绿色信号。
少了一个 [Windows 与 Linux 双启动][4] 的理由。
《Apex Legends》 是我喜欢的多人游戏之一。而且,我不得不使用 Windows 来玩这个游戏。希望这种情况很快就会改变,在未来几周内,我可以在 Linux 上试一试!
同样,如果你几乎就要转到 Linux 了,但因为它与游戏的兼容性问题而迟疑,我想说问题已经解决了一半了!
当然,我们仍然需要对 BattleEye、其他反作弊服务和游戏客户端的官方支持。但是这是个开端。
### Steam Deck 现在是一个令人信服的游戏选择
虽然许多人不确定 Steam Deck 是否支持所有的 AAA 级游戏,但这应该会有所改善!
[Steam Deck][5] 现在应该是多人游戏玩家的一个简单选择。
### 总结
如果 Steam Deck 作为一个成功的掌上游戏机而成为了焦点,那么正如我们所知,在 Linux 上玩游戏也将发生改变。
而且,我认为 Epic Games 在其反作弊服务中加入 Linux 支持仅仅只是一个开始。
也许,我们永远都不用借助 [ProtonDB][6] 来在 Linux 上玩一个只有 Windows 支持的游戏谁知道呢但是在这之后Linux 游戏的未来似乎充满希望。
如果你是一个开发者,你可能想阅读 [该公告][7] 来获得最新的 SDK。
你对 Epic Games 将简易反作弊引入 Linux 有何看法?欢迎在下面的评论中分享你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/easy-anti-cheat-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/steam-play/
[2]: https://itsfoss.com/linux-gaming-guide/
[3]: https://www.easy.ac/en-us/
[4]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
[5]: https://www.steamdeck.com/en/
[6]: https://www.protondb.com
[7]: https://dev.epicgames.com/en-US/news/epic-online-services-launches-anti-cheat-support-for-linux-mac-and-steam-deck

View File

@@ -1,172 +0,0 @@
[#]: subject: "GNOME 41 Released: The Most Popular Linux Desktop Environment Gets Better"
[#]: via: "https://news.itsfoss.com/gnome-41-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
GNOME 41 Released: The Most Popular Linux Desktop Environment Gets Better
======
GNOME 41 stable release is ultimately here.
While GNOME 40 was a radical change forcing many users to adapt to a new workflow, GNOME 41 seems to settle the dust.
With GNOME 41, there are no significant workflow changes but new **feature additions** and **improvements** across the board.
GNOME 41 beta has been out there for a while. And, we tried the stable release right before the release date using [GNOME OS][1] to highlight what you can expect with it.
### GNOME 41 Features: Whats New?
![][2]
GNOME 41 does not give you any new visual treats but useful improvements to help you improve the workflow or take control of it.
There are some GNOME app upgrades that come along with it.
Let me mention the key highlights of GNOME 41.
#### GNOME 41 Software Context Tiles
![][3]
Every release, users look forward to the improvements made to the GNOME Software Center.
While they have been improving it in the right direction, it needed a visual overhaul. And, this time, GNOME 41 comes with a much-needed UI refresh.
The software store is more descriptive and should look appealing to new users. Using emojis/creative icons to categorize applications makes the software center pop.
Like the [Apps for GNOME][4] portal, the application screens on the Software center include more details to inform the user as much as possible without needing to refer to the project page or the web.
![][5]
In other words, these are the context tiles added to an app page that provides information about device support, safety/permissions, age rating, download size, the project, and more.
You also get to choose the available add-ons for a particular app like GIMP to install in one go. So, you save time from finding add-ons and installing them individually.
GNOME 41 Software should prove to be much more helpful than it ever was.
#### New Multitasking Options
![][6]
To help you improve the workflow, GNOME 41 comes baked in with new multitasking tweaks.
You get to toggle the hot corner to quickly open the Activities Overview. The ability to resize windows upon dragging them to the edges has also been added.
If you want, you can set a fixed number of workspaces available or keep it dynamic to adapt as you require.
In addition to these, you also get features to tweak:
* Multi-monitor workspaces
* Application switching behaviour
When you have multiple displays, you can choose to keep the workspaces restricted to a single screen or continue over connected displays.
And, when you head to switch applications and navigate through them, you can customize to preview the applications only in the same workspace or from all workspaces.
#### Power Saving Settings
![][7]
A helpful performance tweak to save power is now available in GNOME 41. This is incredibly useful for laptop users to tune their performance manually or if an app requests switching the mode to save power.
![][8]
#### GNOME Calendar Improvements
GNOME Calendar now can open ICS files along with the ability to import the events.
#### Touchpad Gestures
The workflow experience should be seamless when you utilize three-finger vertical swipe up/down actions to get the activity overview, and three-finger horizontal swipe right/left actions to navigate between workspaces.
It is good to see the focus on improving the workflow using the touchpad, similar to [elementary OS 6 features][9].
#### GNOME Connections App
![][10]
A new “Connections” app has been added to connect to remote computers no matter the platform.
I still see the application as an alpha build, but maybe with the following few updates, you should get the finished version of the application.
I havent tried if it works, but it might be worth another brief article to show you how to use it.
#### SIP/VoIP Support
With [GNOME 41 beta release][11], we spotted the inclusion of SIP/VoIP support.
If you are a business user or prefer international calls, you can now make VoIP calls directly from the dialpad in GNOME 41.
Unfortunately, I couldnt find the “Calls” app included when using GNOME OS with GNOME 41 stable release. So, I couldnt grab a screenshot of how it looks.
#### GNOME Web / Epiphany Improvements
![][12]
GNOME Web or Epiphany browser has been receiving a lot of good improvements lately.
With GNOME 41, the epiphany browser now utilizes AdGuards script to block YouTube advertisements. Not to forget, the support for epiphany canary builds has been added as well.
#### Other Improvements
Under the hood, several subtle but essential changes result in a better and faster user experience.
For instance, you may notice the icons more prominent in the header areas of application/windows. This is to improve clarity and enhance the look.
Similarly, there are numerous improvements to GNOME apps and functionalities that you might bump into when you use them:
* GNOME Map now shows the mean sea levels in a user-friendly way
* Improvements to Nautilus file manager to support password-protected zip files and the ability to toggle to let you enable/disable automatic trash cleaning
* Music app getting a UI refresh
* GNOME Text Editor gaining more features
* GTK updated to 4.4.0
  * Addition of libadwaita to potentially improve the user experience with GNOME apps
You can refer to the [official changelog and the announcement blog post][13] to explore all the technical changes.
### Wrapping Up
GNOME 41 may not be an experience-breaking upgrade, but it is a significant update with many valuable additions.
You can expect it with Fedora 35, which should release next month. Unfortunately, Ubuntu 21.10 will not include it, but you can wait it out for other Linux distributions.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/gnome-41-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/gnome-os/
[2]: https://i2.wp.com/i.ytimg.com/vi/holOYrZquBQ/hqdefault.jpg?w=780&ssl=1
[3]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-software.png?w=1233&ssl=1
[4]: https://news.itsfoss.com/apps-for-gnome-portal/
[5]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-software-app.png?w=1284&ssl=1
[6]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-multitasking.png?w=1032&ssl=1
[7]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-power-settings.png?w=443&ssl=1
[8]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-41-power-options.png?w=1012&ssl=1
[9]: https://news.itsfoss.com/elementary-os-6-features/
[10]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/connections-gnome-41.png?w=1075&ssl=1
[11]: https://news.itsfoss.com/gnome-41-beta/
[12]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/09/gnome-web-41.png?w=1328&ssl=1
[13]: https://help.gnome.org/misc/release-notes/41.0/

View File

@@ -1,708 +0,0 @@
[#]: subject: "Code memory safety and efficiency by example"
[#]: via: "https://opensource.com/article/21/8/memory-programming-c"
[#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Code memory safety and efficiency by example
======
Learn more about memory safety and efficiency
![Code going into a computer.][1]
C is a high-level language with close-to-the-metal features that make it seem, at times, more like a portable assembly language than a sibling of Java or Python. Among these features is memory management, which covers an executing program's safe and efficient use of memory. This article goes into the details of memory safety and efficiency through code examples in C and a code segment from the assembly language that a modern C compiler generates.
Although the code examples are in C, the guidelines for safe and efficient memory management are the same for C++. The two languages differ in various details (e.g., C++ has object-oriented features and generics that C lacks), but these languages share the very same challenges with respect to memory management.
### Overview of memory for an executing program
For an executing program (aka _process_), memory is partitioned into three areas: The **stack**, the **heap**, and the **static area**. Here's an overview of each, with full code examples to follow.
As a backup for general-purpose CPU registers, the _stack_ provides scratchpad storage for the local variables within a code block, such as a function or a loop body. Arguments passed to a function count as local variables in this context. Consider a short example:
```
void some_func(int a, int b) {
   int n;
   ...
}
```
Storage for the arguments passed in parameters **a** and **b** and the local variable **n** would come from the stack unless the compiler could find general-purpose registers instead. The compiler favors such registers for scratchpad because CPU access to these registers is fast (one clock tick). However, these registers are few (roughly sixteen) on the standard architectures for desktop, laptop, and handheld machines.
At the implementation level, which only an assembly-language programmer would see, the stack is organized as a LIFO (Last In, First Out) list with **push** (insert) and **pop** (remove) operations. The **top** pointer can act as a base address for offsets; in this way, stack locations other than **top** become accessible. For example, the expression **top+16** points to a location sixteen bytes above the stack's **top**, and the expression **top-16** points to sixteen bytes below the **top**. Accordingly, stack locations that implement scratchpad storage are accessible through the **top** pointer. On a standard ARM or Intel architecture, the stack grows from high to low memory addresses; hence, to decrement **top** is to grow the stack for a process.
To use the stack is to use memory effortlessly and efficiently. The compiler, rather than the programmer, writes the code that manages the stack by allocating and deallocating the required scratchpad storage; the programmer declares function arguments and local variables, leaving the implementation to the compiler. Moreover, the very same stack storage can be reused across consecutive function calls and code blocks such as loops. Well-designed modular code makes stack storage the first memory option for scratchpad, with an optimizing compiler using, whenever possible, general-purpose registers instead of the stack.
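To see the stack in action, here is a minimal sketch (illustrative only, not one of the article's programs) that prints the address of a local variable at three call depths; on a typical x86-64 or ARM build, the printed addresses decrease as the calls nest, reflecting the downward growth described above, though the exact values are implementation-specific:

```
#include <stdio.h>

void inner(void) {
  int n = 2;                             /* scratchpad local in inner */
  printf("inner n at %p\n", (void*) &n);
}

void outer(void) {
  int n = 1;                             /* scratchpad local in outer */
  printf("outer n at %p\n", (void*) &n);
  inner();                               /* a deeper call: typically a lower address */
}

int main() {
  int n = 0;                             /* scratchpad local in main */
  printf("main  n at %p\n", (void*) &n);
  outer();
  return 0;
}
```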
The **heap** provides storage allocated explicitly through programmer code, although the syntax for heap allocation differs across languages. In C, a successful call to the library function **malloc** (or variants such as **calloc**) allocates a specified number of bytes. (In languages such as C++ and Java, the **new** operator serves the same purpose.) Programming languages differ dramatically on how heap-allocated storage is deallocated:
* In languages such as Java, Go, Lisp, and Python, the programmer does not explicitly deallocate dynamically allocated heap storage.
For example, this Java statement allocates heap storage for a string and stores the address of this heap storage in the variable **greeting**:
```
String greeting = new String("Hello, world!");
```
Java has a garbage collector, a runtime utility that automatically deallocates heap storage that is no longer accessible to the process that allocated the storage. Java heap deallocation is thus automatic through a garbage collector. In the example above, the garbage collector would deallocate the heap storage for the string after the variable **greeting** went out of scope.
* The Rust compiler writes the heap-deallocation code. This is Rust's pioneering effort to automate heap-deallocation without relying on a garbage collector, which entails runtime complexity and overhead. Hats off to the Rust effort!
* In C (and C++), heap deallocation is a programmer task. The programmer who allocates heap storage through a call to **malloc** is then responsible for deallocating this same storage with a matching call to the library function **free**. (In C++, the **new** operator allocates heap storage, whereas the **delete** and **delete[]** operators free such storage.) Here's a C example:
```
char* greeting = malloc(14);       /* 14 heap bytes */
strcpy(greeting, "Hello, world!"); /* copy greeting into bytes */
puts(greeting);                    /* print greeting */
free(greeting);                    /* free malloced bytes */
```
C avoids the cost and complexity of a garbage collector, but only by burdening the programmer with the task of heap deallocation.
The **static area** of memory provides storage for executable code such as C functions, string literals such as "Hello, world!", and global variables:
```
int n;                       /* global variable */
int main() {                 /* function */
   char* msg = "No comment"; /* string literal */
   ...
}
```
This area is static in that its size remains fixed from the start until the end of process execution. Because the static area amounts to a fixed-sized memory footprint for a process, the rule of thumb is to keep this area as small as possible by avoiding, for example, global arrays.
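As a small illustration of that rule of thumb (a sketch with an arbitrary table size, not code from the later examples), a fixed global array can often be replaced by heap storage that is allocated only when needed and then freed, which keeps the static-area footprint small:

```
#include <stdio.h>
#include <stdlib.h>

/* Instead of a large global array in the static area, for example
   int big_table[100000];, allocate the table on the heap when needed. */
int main() {
  size_t count = 100000;                     /* arbitrary size for the sketch */
  int* table = malloc(count * sizeof(int));  /* heap storage, not static area */
  if (NULL == table) return 1;               /* allocation can fail */

  table[0] = 42;                             /* use the table... */
  printf("%d\n", table[0]);

  free(table);                               /* release as soon as done */
  return 0;
}
```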
Code examples in the following sections flesh out this overview.
### Stack storage
Imagine a program that has various tasks to perform consecutively, including processing numeric data downloaded every few minutes over a network and stored in a local file. The **stack** program below simplifies the processing (odd integer values are made even) to keep the focus on the benefits of stack storage.
```
#include <stdio.h>
#include <stdlib.h>
#define Infile   "incoming.dat"
#define Outfile  "outgoing.dat"
#define IntCount 128000  /* 128,000 */
void other_task1() { /*...*/ }
void other_task2() { /*...*/ }
void process_data(const char* infile,
          const char* outfile,
          const unsigned n) {
  int nums[n];
  FILE* input = fopen(infile, "r");
  if (NULL == input) return;         /* could not open input file? */
  FILE* output = fopen(outfile, "w");
  if (NULL == output) {
    fclose(input);
    return;
  }
  fread(nums, n, sizeof(int), input); /* read input data */
  unsigned i;
  for (i = 0; i < n; i++) {
    if (1 == (nums[i] & 0x1))  /* odd parity? */
      nums[i]--;               /* make even */
  }
  fclose(input);               /* close input file */
  fwrite(nums, n, sizeof(int), output);
  fclose(output);
}
int main() {
  process_data(Infile, Outfile, IntCount);
  
  /** now perform other tasks **/
  other_task1(); /* automatically released stack storage available */
  other_task2(); /* ditto */
  
  return 0;
}
```
The **main** function at the bottom first calls the **process_data** function, which creates a stack-based array of a size given by argument **n** (128,000 in the current example). Accordingly, the array holds 128,000 x **sizeof(int)** bytes, which comes to 512,000 bytes on standard devices because an **int** is four bytes on these devices. Data then are read into the array (using library function **fread**), processed in a loop, and saved to the local file **outgoing.dat** (using library function **fwrite**).
When the **process_data** function returns to its caller **main**, the roughly 500KB of stack scratchpad for the **process_data** function become available for other functions in the **stack** program to use as scratchpad. In this example, **main** next calls the stub functions **other_task1** and **other_task2**. The three functions are called consecutively from **main**, which means that all three can use the same stack storage for scratchpad. Because the compiler rather than the programmer writes the stack-management code, this approach is both efficient and easy on the programmer.
In C, any variable defined inside a block (e.g., a function's or a loop's body) has an **auto** storage class by default, which means that the variable is stack-based. The storage class **register** is now outdated because C compilers are aggressive, on their own, in trying to use CPU registers whenever possible. Only a variable defined inside a block may be **register**, which the compiler changes to **auto** if no CPU register is available.

Stack-based programming may be the preferred way to go, but this style does have its challenges. The **badStack** program below illustrates.
```
#include <stdio.h>
const int* get_array(const unsigned n) {
  int arr[n]; /* stack-based array */
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = 1 + 1;
  return arr;  /** ERROR **/
}
int main() {
  const unsigned n = 16;
  const int* ptr = get_array(n);
  
  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", ptr[i]);
  puts("\n");
  return 0;
}
```
The flow of control in the **badStack** program is straightforward. Function **main** calls function **get_array** with an argument of 16, which the called function then uses to create a local array of this size. The **get_array** function initializes the array and returns to **main** the array's identifier **arr**, which is a pointer constant that holds the address of the array's first **int** element.
The local array **arr** is accessible within the **get_array** function, of course, but this array cannot be legitimately accessed once **get_array** returns. Nonetheless, function **main** tries to print the stack-based array by using the stack address **arr**, which function **get_array** returns. Modern compilers warn about the mistake. For example, here's the warning from the GNU compiler:
```
badStack.c: In function 'get_array':
badStack.c:9:10: warning: function returns address of local variable [-Wreturn-local-addr]
8 |   return arr;  /** ERROR **/
```
The general rule is that stack-based storage should be accessed only within the code block that contains the local variables implemented with stack storage (in this case, the array pointer **arr** and the loop counter **i**). Accordingly, a function should never return a pointer to stack-based storage.
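One safe alternative, shown in the following sketch (the function name fill_ints is arbitrary, not part of the article's programs), is to let the caller own the storage and pass it to the function, so that no pointer into a dead stack frame is ever returned:

```
#include <stdio.h>

/* The caller owns the array; the callee only fills it in. */
void fill_ints(int* arr, unsigned n) {
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = i + 1;
}

int main() {
  const unsigned n = 16;
  int arr[n];             /* stack storage that lives in main's frame */

  fill_ints(arr, n);      /* no pointer to a dead frame is returned */

  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", arr[i]);
  puts("");
  return 0;
}
```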
### Heap storage
Several code examples highlight the fine points of using heap storage in C. In the first example, heap storage is allocated, used, and then freed in line with best practice. The second example nests heap storage inside other heap storage, which complicates the deallocation operation.
```
#include <stdio.h>
#include <stdlib.h>
int* get_heap_array(unsigned n) {
  int* heap_nums = malloc(sizeof(int) * n); 
  
  unsigned i;
  for (i = 0; i < n; i++)
    heap_nums[i] = i + 1;  /* initialize the array */
  
  /* stack storage for variables heap_nums and i released
     automatically when get_heap_array returns */
  return heap_nums; /* return (copy of) the pointer */
}
int main() {
  unsigned n = 100, i;
  int* heap_nums = get_heap_array(n); /* save returned address */
  
  if (NULL == heap_nums) /* malloc failed */
    fprintf(stderr, "%s\n", "malloc(...) failed...");
  else {
    for (i = 0; i < n; i++) printf("%i\n", heap_nums[i]);
    free(heap_nums); /* free the heap storage */
  }
  return 0; 
}
```
The **heap** program above has two functions: **main** calls **get_heap_array** with an argument (currently 100) that specifies how many **int** elements the array should have. Because the heap allocation could fail, **main** checks whether **get_heap_array** has returned **NULL**, which signals failure. If the allocation succeeds, **main** prints the **int** values in the array—and immediately thereafter deallocates, with a call to library function **free**, the heap-allocated storage. This is best practice.
The **get_heap_array** function opens with this statement, which merits a closer look:
```
int* heap_nums = malloc(sizeof(int) * n); /* heap allocation */
```
The **malloc** library function and its variants deal with bytes; hence, the argument to **malloc** is the number of bytes required for **n** elements of type **int**. (The **sizeof(int)** is four bytes on a standard modern device.) The **malloc** function returns either the address of the first among the allocated bytes or, in case of failure, **NULL**.
In a successful call to **malloc**, the returned address is 64-bits in size on a modern desktop machine. On handhelds and earlier desktop machines, the address might be 32-bits in size or, depending on age, even smaller. The elements in the heap-allocated array are of type **int**, a four-byte signed integer. The address of these heap-allocated **int**s is stored in the local variable **heap_nums**, which is stack-based. Here's a depiction:
```
                 heap-based
 stack-based        /
     \        +----+----+   +----+
 heap_nums--->|int1|int2|...|intN|
              +----+----+   +----+
```
Once the **get_heap_array** function returns, stack storage for pointer variable **heap_nums** is reclaimed automatically—but the heap storage for the dynamic **int** array persists, which is why the **get_heap_array** function returns (a copy of) this address to **main**, which now is responsible, after printing the array's integers, for explicitly deallocating the heap storage with a call to the library function **free**:
```
free(heap_nums); /* free the heap storage */
```
The **malloc** function does not initialize heap-allocated storage, which therefore contains random values. By contrast, the **calloc** variant initializes the allocated storage to zeros. Both functions return **NULL** to signal failure.
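Here is a minimal sketch of the difference (the sizes are arbitrary): the **malloc**-ed elements must be initialized before they are read, whereas the **calloc**-ed elements start out as zeros:

```
#include <stdio.h>
#include <stdlib.h>

int main() {
  int* a = malloc(4 * sizeof(int));  /* four ints, bytes are indeterminate */
  int* b = calloc(4, sizeof(int));   /* four ints, bytes are zeroed */

  if (NULL == a || NULL == b) {      /* both return NULL on failure */
    free(a);                         /* free(NULL) is harmless */
    free(b);
    return 1;
  }

  a[0] = 7;                          /* initialize before reading */
  printf("%d %d\n", a[0], b[0]);     /* prints: 7 0 */

  free(a);
  free(b);
  return 0;
}
```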
In the **heap** example, **main** returns immediately after calling **free**, and the executing program terminates, which allows the system to reclaim any allocated heap storage. Nonetheless, the programmer should develop the habit of explicitly freeing heap storage as soon as it is no longer needed.
### Nested heap allocation
The next code example is trickier. C has various library functions that return a pointer to heap storage. Here's a familiar scenario:
1. The C program invokes a library function that returns a pointer to heap-based storage, typically an aggregate such as an array or a structure:

```
SomeStructure* ptr = lib_function(); /* returns pointer to heap storage */
```

2. The program then uses the allocated storage.

3. For cleanup, the issue is whether a simple call to **free** will clean up all of the heap-allocated storage that the library function allocates. For example, the **SomeStructure** instance may have fields that, in turn, point to heap-allocated storage. A particularly troublesome case would be a dynamically allocated array of structures, each of which has a field pointing to more dynamically allocated storage.

The following code example illustrates the problem and focuses on designing a library that safely provides heap-allocated storage to clients.
```
#include <stdio.h>
#include <stdlib.h>
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums;
} HeapStruct;
unsigned structId = 1;
HeapStruct* get_heap_struct(unsigned n) {
  /* Try to allocate a HeapStruct. */
  HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
  if (NULL == heap_struct) /* failure? */
    return NULL;           /* if so, return NULL */
  /* Try to allocate floating-point aggregate within HeapStruct. */
  heap_struct->heap_nums = malloc(sizeof(float) * n);
  if (NULL == heap_struct->heap_nums) {  /* failure? */
    free(heap_struct);                   /* if so, first free the HeapStruct */
    return NULL;                         /* then return NULL */
  }
  /* Success: set fields */
  heap_struct->id = structId++;
  heap_struct->len = n;
  return heap_struct; /* return pointer to allocated HeapStruct */
}
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */  
}
int main() {
  const unsigned n = 100;
  HeapStruct* hs = get_heap_struct(n); /* get structure with N floats */
  /* Do some (meaningless) work for demo. */
  unsigned i;
  for (i = 0; i < n; i++) hs->heap_nums[i] = 3.14 + (float) i;
  for (i = 0; i < n; i += 10) printf("%12f\n", hs->heap_nums[i]);
  free_all(hs); /* free dynamically allocated storage */
  
  return 0;
}
```
The **nestedHeap** example above centers on a structure **HeapStruct** with a pointer field named **heap_nums**:
```
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums; /** pointer **/
} HeapStruct;
```
The function **get_heap_struct** tries to allocate heap storage for a **HeapStruct** instance, which entails allocating heap storage for a specified number of **float** variables to which the field **heap_nums** points. The result of a successful call to **get_heap_struct** can be depicted as follows, with **hs** as the pointer to the heap-allocated structure:
```
hs-->HeapStruct instance
        id
        len
        heap_nums-->N contiguous float elements
```
In the **get_heap_struct** function, the first heap allocation is straightforward:
```
HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
if (NULL == heap_struct) /* failure? */
  return NULL;           /* if so, return NULL */
```
The **sizeof(HeapStruct)** includes the bytes (four on a 32-bit machine, eight on a 64-bit machine) for the **heap_nums** field, which is a pointer to the **float** elements in a dynamically allocated array. At issue, then, is whether the **malloc** delivers the bytes for this structure or **NULL** to signal failure; if **NULL**, the **get_heap_struct** function returns **NULL** to notify the caller that the heap allocation failed.
The second attempted heap allocation is more complicated because, at this step, heap storage for the **HeapStruct** has been allocated:
```
heap_struct->heap_nums = malloc(sizeof(float) * n);
if (NULL == heap_struct->heap_nums) {  /* failure? */
  free(heap_struct);                   /* if so, first free the HeapStruct */
  return NULL;                         /* and then return NULL */
}
```
The argument **n** sent to the **get_heap_struct** function indicates how many **float** elements should be in the dynamically allocated **heap_nums** array. If the required **float** elements can be allocated, then the function sets the structure's **id** and **len** fields before returning the heap address of the **HeapStruct**. If the attempted allocation fails, however, two steps are necessary to meet best practice:
1\. The storage for the **HeapStruct** must be freed to avoid memory leakage. Without the dynamic **heap_nums** array, the **HeapStruct** is presumably of no use to the client function that calls **get_heap_struct**; hence, the bytes for the **HeapStruct** instance should be explicitly deallocated so that the system can reclaim these bytes for future heap allocations.
2\. **NULL** is returned to signal failure.
If the call to the **get_heap_struct** function succeeds, then freeing the heap storage is also tricky because it involves two **free** operations in the proper order. Accordingly, the program includes a **free_all** function instead of requiring the programmer to figure out the appropriate two-step deallocation. For review, here's the **free_all** function:
```
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */  
}
```
After checking that the argument **heap_struct** is not **NULL**, the function first frees the **heap_nums** array, which requires that the **heap_struct** pointer is still valid. It would be an error to release the **heap_struct** first. Once the **heap_nums** have been deallocated, the **heap_struct** can be freed as well. If **heap_struct** were freed, but **heap_nums** were not, then the **float** elements in the array would be leakage: still allocated bytes but with no possibility of access—hence, of deallocation. The leakage would persist until the **nestedHeap** program exited and the system reclaimed the leaked bytes.
A few cautionary notes on the **free** library function are in order. Recall the sample calls above:
```
free(heap_struct->heap_nums); /* first free encapsulated aggregate */
free(heap_struct);            /* then free containing structure */
```
These calls free the allocated storage—but they do _not_ set their arguments to **NULL**. (The **free** function gets a copy of an address as an argument; hence, changing the copy to **NULL** would leave the original unchanged.) For example, after a successful call to **free**, the pointer **heap_struct** still holds a heap address of some heap-allocated bytes, but using this address now would be an error because the call to **free** gives the system the right to reclaim and then reuse the allocated bytes.
Calling **free** with a **NULL** argument is pointless but harmless. Calling **free** repeatedly on a non-**NULL** address is an error with indeterminate results:
```
free(heap_struct);  /* 1st call: ok */
free(heap_struct);  /* 2nd call: ERROR */
```
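A defensive habit that follows from these cautions (a sketch of my own, not something the article prescribes) is to set the pointer to **NULL** immediately after freeing it, so that an accidental second call hits the harmless NULL case instead of becoming a double free:

```
#include <stdlib.h>

int main() {
  int* nums = malloc(10 * sizeof(int));
  if (NULL == nums) return 1;

  /* ...use nums... */

  free(nums);    /* give the heap bytes back */
  nums = NULL;   /* clear the local copy of the address */

  free(nums);    /* free(NULL): pointless but harmless, not a double free */
  return 0;
}
```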
### Memory leakage and heap fragmentation
The phrase "memory leakage" refers to dynamically allocated heap storage that is no longer accessible. Here's a code segment for review:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
nums = malloc(sizeof(float) * 25);        /* 25 new floats */
```
Assume that the first **malloc** succeeds. The second **malloc** resets the **nums** pointer, either to **NULL** (allocation failure) or to the address of the first **float** among newly allocated twenty-five. Heap storage for the initial ten **float** elements remains allocated but is now inaccessible because the **nums** pointer either points elsewhere or is **NULL**. The result is forty bytes (**sizeof(float) * 10**) of leakage.
Before the second call to **malloc**, the initially allocated storage should be freed:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
free(nums);                               /** good **/
nums = malloc(sizeof(float) * 25);        /* no leakage */
```
Even without leakage, the heap can fragment over time, which then requires system defragmentation. For example, suppose that the two biggest heap chunks are currently of sizes 200MB and 100MB. However, the two chunks are not contiguous, and process **P** needs to allocate 250MB of contiguous heap storage. Before the allocation can be made, the system must _defragment_ the heap to provide 250MB contiguous bytes for **P**. Defragmentation is complicated and, therefore, time-consuming.
Memory leakage promotes fragmentation by creating allocated but inaccessible heap chunks. Freeing no-longer-needed heap storage is, therefore, one way that a programmer can help to reduce the need for defragmentation.
### Tools to diagnose memory leakage
Various tools are available for profiling memory efficiency and safety. My favorite is [valgrind][11]. To illustrate how the tool works for memory leaks, here's the **leaky** program:
```
#include <stdio.h>
#include <stdlib.h>
int* get_ints(unsigned n) {
  int* ptr = malloc(n * sizeof(int));
  if (ptr != NULL) {
    unsigned i;
    for (i = 0; i < n; i++) ptr[i] = i + 1;
  }
  return ptr;
}
void print_ints(int* ptr, unsigned n) {
  unsigned i;
  for (i = 0; i < n; i++) printf("%3i\n", ptr[i]);
}
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) print_ints(arr, n);
  /** heap storage not yet freed... **/
  return 0;
}
```
The function **main** calls **get_ints**, which tries to **malloc** thirty-two 4-byte **int**s from the heap and then initializes the dynamic array if the **malloc** succeeds. On success, the **main** function then calls **print_ints**. There is no call to **free** to match the call to **malloc**; hence, memory leaks.
With the **valgrind** toolbox installed, the command below checks the **leaky** program for memory leaks (**%** is the command-line prompt):
```
% valgrind --leak-check=full ./leaky
```
Below is most of the output. The number on the left, 207683, is the process identifier of the executing **leaky** program. The report provides details of where the leak occurs, in this case, from the call to **malloc** within the **get_ints** function that **main** calls.
```
==207683== HEAP SUMMARY:
==207683==   in use at exit: 128 bytes in 1 blocks
==207683==   total heap usage: 2 allocs, 1 frees, 1,152 bytes allocated
==207683== 
==207683== 128 bytes in 1 blocks are definitely lost in loss record 1 of 1
==207683==   at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==207683==   by 0x109186: get_ints (in /home/marty/gc/leaky)
==207683==   by 0x109236: main (in /home/marty/gc/leaky)
==207683== 
==207683== LEAK SUMMARY:
==207683==   definitely lost: 128 bytes in 1 blocks
==207683==   indirectly lost: 0 bytes in 0 blocks
==207683==   possibly lost: 0 bytes in 0 blocks
==207683==   still reachable: 0 bytes in 0 blocks
==207683==   suppressed: 0 bytes in 0 blocks
```
If function **main** is revised to include a call to **free** right after the one to **print_ints**, then **valgrind** gives the **leaky** program a clean bill of health:
```
==218462== All heap blocks were freed -- no leaks are possible
```
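The revised **main** could look like the following sketch (only **main** changes; the rest of the **leaky** program stays as shown above): the **free** call right after **print_ints** is what removes the leak.

```
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) {
    print_ints(arr, n);
    free(arr);   /* matching free for the malloc inside get_ints */
  }
  return 0;
}
```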
### Static area storage
In orthodox C, a function must be defined outside all blocks. This rules out having one function defined inside the body of another, a feature that some C compilers support. My examples stick with functions defined outside all blocks. Such a function is either **static** or **extern**, with **extern** as the default.
C functions and variables with either **static** or **extern** as their storage class reside in what I've been calling the **static area** of memory because this area has a fixed size during program execution. The syntax for these two storage classes is complicated enough to merit a review. After the review, a full code example brings the syntactic details back to life. Functions or variables defined outside all blocks default to **extern**; hence, the storage class **static** must be explicit for both functions and variables:
```
/** file1.c: outside all blocks, five definitions  **/
int foo(int n) { return n * 2; }     /* extern by default */
static int bar(int n) { return n; }  /* static */
extern int baz(int n) { return -n; } /* explicitly extern */
int num1;        /* extern */
static int num2; /* static */
```
The difference between **extern** and **static** comes down to scope: an **extern** function or variable may be visible across files. By contrast, a **static** function is visible only in the file that contains the function's _definition_, and a **static** variable is visible only in the file (or a block therein) that has the variable's _definition_:
```
static int n1;    /* scope is the file */
void func() {
   static int n2; /* scope is func's body */
   ...
}
```
If a **static** variable such as **n1** above is defined outside all blocks, the variable's scope is the file in which the variable is defined. Wherever a **static** variable may be defined, storage for the variable is in the static area of memory.
An **extern** function or variable is defined outside all blocks in a given file, but the function or variable so defined then may be declared in some other file. The typical practice is to _declare_ such a function or variable in a header file, which is included wherever needed. Some short examples clarify these tricky points.
Suppose that the **extern** function **foo** is _defined_ in **file1.c**, with or without the keyword **extern**:
```
/** file1.c **/
int foo(int n) { return n * 2; } /* definition has a body {...} */
```
This function must be _declared_ with an explicit **extern** in any other file (or block therein) for the function to be visible. Here's the declaration that makes the **extern** function **foo** visible in file **file2.c**:
```
/** file2.c: make function foo visible here **/
extern int foo(int); /* declaration (no body) */
```
Recall that a function declaration does not have a body enclosed in curly braces, whereas a function definition does have such a body.
For review, header files typically contain function and variable declarations. Source-code files that require the declarations then **#include** the relevant header file(s). The **staticProg** program in the next section illustrates this approach.
The rules get trickier (sorry!) with **extern** variables. Any **extern** object—function or variable—must be _defined_ outside all blocks. Also, a variable defined outside all blocks defaults to **extern**:
```
/** outside all blocks **/
int n; /* defaults to extern */
```
However, the **extern** can be explicit in the variable's _definition_ only if the variable is initialized explicitly there:
```
/** file1.c: outside all blocks **/
int n1;             /* defaults to extern, initialized by compiler to zero */
extern int n2 = -1; /* ok, initialized explicitly */
int n3 = 9876;      /* ok, extern by default and initialized explicitly */
```
For a variable defined as **extern** in **file1.c** to be visible in another file such as **file2.c**, the variable must be _declared_ as explicitly **extern** in **file2.c** and not initialized, which would turn the declaration into a definition:
```
/** file2.c **/
extern int n1; /* declaration of n1 defined in file1.c */
```
To avoid confusion with **extern** variables, the rule of thumb is to use **extern** explicitly in a _declaration_ (required) but not in a _definition_ (optional and tricky). For functions, the **extern** is optional in a definition but needed for a declaration. The **staticProg** example in the next section brings these points together in a full program.
### The staticProg example
The **staticProg** program consists of three files: two C source files (**static1.c** and **static2.c**) together with a header file (**static.h**) that contains two declarations:
```
/** header file static.h **/
#define NumCount 100               /* macro */
extern int global_nums[NumCount];  /* array declaration */
extern void fill_array();          /* function declaration */
```
The **extern** in the two declarations, one for an array and the other for a function, underscores that the objects are _defined_ elsewhere ("externally"): the array **global_nums** is defined in file **static1.c** (without an explicit **extern**) and the function **fill_array** is defined in file **static2.c** (also without an explicit **extern**). Each source file includes the header file **static.h**.

The **static1.c** file defines the two arrays that reside in the static area of memory, **global_nums** and **more_nums**. The second array has a **static** storage class, which restricts its scope to the file (**static1.c**) in which the array is defined. As noted, **global_nums** as **extern** can be made visible in multiple files.
```
/** static1.c **/
#include <stdio.h>
#include <stdlib.h>
#include "static.h"             /* declarations */
int global_nums[NumCount];      /* definition: extern (global) aggregate */
static int more_nums[NumCount]; /* definition: scope limited to this file */
int main() {
  fill_array(); /** defined in file static2.c **/
  unsigned i;
  for (i = 0; i < NumCount; i++)
    more_nums[i] = i * -1;
  /* confirm initialization worked */
  for (i = 0; i < NumCount; i += 10) 
    printf("%4i\t%4i\n", global_nums[i], more_nums[i]);
    
  return 0;  
}
```
The **static2.c** file below defines the **fill_array** function, which **main** (in the **static1.c** file) invokes; the **fill_array** function populates the **extern** array named **global_nums**, which is defined in file **static1.c**. The sole point of having two files is to underscore that an **extern** variable or function can be visible across files.
```
/** static2.c **/
#include "static.h" /** declarations **/
void fill_array() { /** definition **/
  unsigned i;
  for (i = 0; i < NumCount; i++) global_nums[i] = i + 2;
}
```
The **staticProg** program can be compiled as follows:
```
% gcc -o staticProg static1.c static2.c
```
### More details from assembly language
A modern C compiler can handle any mix of C and assembly language. When compiling a C source file, the compiler first translates the C code into assembly language. Here's the command to save the assembly language generated from the **static1.c** file above:
```
% gcc -S static1.c
```
The resulting file is **static1.s**. Here's a segment from the top, with added line numbers for readability:
```
    .file    "static1.c"          ## line  1
    .text                         ## line  2
    .comm    global_nums,400,32   ## line  3
    .local    more_nums           ## line  4
    .comm    more_nums,400,32     ## line  5
    .section    .rodata           ## line  6
.LC0:                             ## line  7
    .string    "%4i\t%4i\n"       ## line  8
    .text                         ## line  9
    .globl    main                ## line 10
    .type    main, @function      ## line 11
main:                             ## line 12
...
```
The assembly-language directives such as **.file** (line 1) begin with a period. As the name suggests, a directive guides the assembler as it translates assembly language into machine code. The **.rodata** directive (line 6) indicates that read-only objects follow, including the string constant **"%4i\t%4i\n"** (line 8), which function **main** (line 12) uses to format output. The function **main** (line 12), introduced as a label (the colon at the end makes it so), is likewise read-only.
In assembly language, labels are addresses. The label **main:** (line 12) marks the address at which the code for the **main** function begins, and the label **.LC0**: (line 7) marks the address at which the format string begins.
The definitions of the **global_nums** (line 3) and **more_nums** (line 4) arrays include two numbers: 400 is the total number of bytes in each array, and 32 is the number of bits in each of the 100 **int** elements per array. (The **.comm** directive in line 5 stands for **common name**, which can be ignored.)
The array definitions differ in that **more_nums** is marked as **.local** (line 4), which means that its scope is restricted to the containing file **static1.s**. By contrast, the **global_nums** array can be made visible across multiple files, including the translations of the **static1.c** and **static2.c** files.
Finally, the **.text** directive occurs twice (lines 2 and 9) in the assembly code segment. The term "text" suggests "read-only" but also covers read/write variables such as the elements in the two arrays. Although the assembly language shown is for an Intel architecture, Arm6 assembly would be quite similar. For both architectures, variables in the **.text** area (in this case, elements in the two arrays) are initialized automatically to zeros.
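As a quick check of that zero-initialization claim, here is a minimal sketch (not part of the article's **staticProg** example): both an **extern** (global) array and a file-scoped **static** array start out as zeros, with no explicit initializer.
```
/* zeroCheck.c: minimal sketch showing that static-area arrays are zeroed before main runs */
#include <stdio.h>
int check_global[4];        /* extern (global) aggregate in the static area */
static int check_static[4]; /* static array, scope limited to this file */
int main() {
  printf("%i %i\n", check_global[0], check_static[3]); /* prints: 0 0 */
  return 0;
}
```
Compiling and running the sketch (for example, `% gcc -o zeroCheck zeroCheck.c && ./zeroCheck`) should print two zeros on either architecture.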
### Wrapping up
For memory-efficient and memory-safe programming in C, the guidelines are easy to state but may be hard to follow, especially when calls to poorly designed libraries are in play. The guidelines are:
* Use stack storage whenever possible, thereby encouraging the compiler to optimize with general-purpose registers for scratchpad. Stack storage represents efficient memory use and promotes clean, modular code. Never return a pointer to stack-based storage.
* Use heap storage carefully. The challenge in C (and C++) is to ensure that dynamically allocated storage is deallocated ASAP. Good programming habits and tools (such as **valgrind**) help to meet the challenge. Favor libraries that provide their own deallocation function(s), such as the **free_all** function in the **nestedHeap** code example.
* Use static storage judiciously, as this storage impacts the memory footprint of a process from start to finish. In particular, try to avoid **extern** and **static** arrays.
The C code examples are available at my website (<https://condor.depaul.edu/mkalin>).
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/memory-programming-c
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/malloc.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
[11]: https://www.valgrind.org/

View File

@ -1,80 +0,0 @@
[#]: subject: "Neither Windows, nor Linux! Shrine is Gods Operating System"
[#]: via: "https://itsfoss.com/shrine-os/"
[#]: author: "John Paul https://itsfoss.com/author/john/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Neither Windows, nor Linux! Shrine is God's Operating System
======
We've all used multiple operating systems in our lives. Some were good and some were bad. But can you say that you've ever used an operating system designed by God? Today, I'd like to introduce you to Shrine.
### What is Shrine?
![Shrine interface][1]
From that introduction, you're probably wondering what the heck is going on. Well, it all started with a guy named Terry Davis. Before we go any further, I'd better warn you that Terry suffered from schizophrenia during his life and often didn't take his medication. Because of this, he said or did things during his life that were not quite socially acceptable.
Anyway, back to the story line. In the early 2000s, Terry released a simple operating system. Over the years, it went through several names, including J Operating System, LoseThos, and SparrowOS. He finally settled on the name [TempleOS][2]. He chose that name because this operating system would be God's temple. As such, God gave Terry the following [specifications][3] for the operating system:
* It would have 640×480 16 color graphics
* It would use “a single-voice 8-bit signed MIDI-like sample for sound”.
* It would follow the Commodore 64, i.e. “a non-networked, simple machine where programming was the goal, not just a means to an end”.
* It would only support one file system (named “Red Sea”).
* It would be limited to 100,000 lines of code to make it “easy to learn the whole thing”.
* “Ring-0-only. Everything runs in kernel mode, including user applications”
* The font would be limited to “one 8×8 fixed-width font”.
* The user would have “full access to everything. All memory, I/O ports, instructions, and similar things must never be off limits. All functions, variables and class members will be accessible.”
* It would only support one platform, 64-bit PCs.
Terry wrote this operating system in a programming language that he called HolyC. TechRepublic called it a “modified version of C++ (“more than C, less than C++”)”. If you are interested in getting a flavor of HolyC, I would recommend [this article][4] and the HolyC entry on [RosettaCode][5].
In 2013, Terry announced on his website that TempleOS was complete. Tragically, Terry died a few years later in August of 2018 when he was hit by a train. He was homeless at the time. Over the years, many people followed Terry through his work on the operating system. Most were impressed at his ability to write an operating system in such a small package.
Now, you are probably wondering what all this talk of TempleOS has to do with Shrine. Well, as the [GitHub page][6] for Shrine states, it is “A TempleOS distro for heretics”. GitHub user [minexew][7] created Shrine to add features to TempleOS that Terry had neglected. These features include:
* 99% compatibility with TempleOS programs
* Ships with Lambda Shell, which feels a bit like a classic Unix command interpreter
* TCP/IP stack &amp; internet access out of the box
* Includes a package downloader
minexew is planning to add more features in the future, but hasn't announced what exactly will be included. He has plans to make a full TempleOS environment for Linux.
### Experience
It's fairly easy to get Shrine virtualized. All you need to do is install your virtualizing software of choice. (Mine is VirtualBox.) When you create a virtual machine for Shrine, make sure that it is 64-bit and has at least 512 MB of RAM.
Once you boot into Shrine, you will be asked if you want to install to your (virtual) hard drive. Once that is finished (or not, if you choose), you will be offered a tour of the operating system. From there you can explore.
### Final Thoughts
TempleOS (and Shrine) is obviously not intended to be a replacement for Windows or Linux. Even though Terry referred to it as “God's temple”, I'm sure that in his more lucid moments he would have acknowledged that it was more of a hobby operating system. With that in mind, the finished product is fairly [impressive][8]. Over a twelve-year period, Terry created an operating system in a little over 100,000 lines of code, using a language that he had created himself. He also wrote his own compiler, graphics library, and several games. All this while fighting his own personal demons.
--------------------------------------------------------------------------------
via: https://itsfoss.com/shrine-os/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/shrine.jpg?resize=800%2C600&ssl=1
[2]: https://templeos.org/
[3]: https://web.archive.org/web/20170508181026/http://www.templeos.org:80/Wb/Doc/Charter.html
[4]: https://harrisontotty.github.io/p/a-lang-design-analysis-of-holyc
[5]: https://rosettacode.org/wiki/Category:HolyC
[6]: https://github.com/minexew/Shrine
[7]: https://github.com/minexew
[8]: http://www.codersnotes.com/notes/a-constructive-look-at-templeos/

View File

@ -1,157 +0,0 @@
[#]: subject: "Run containers on your Mac with Lima"
[#]: via: "https://opensource.com/article/21/9/run-containers-mac-lima"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Run containers on your Mac with Lima
======
Lima can help overcome the challenges of running containers on a Mac.
![Containers for shipping overseas][1]
Running containers on your Mac can be a challenge. After all, containers are based on Linux-specific technologies like cgroups and namespaces.
Luckily, macOS has a built-in hypervisor, allowing virtual machines (VMs) on the Mac. The hypervisor is a low-level kernel feature, not a user-facing one.
Enter `hyperkit`, an [open source project][2] that will run VMs using the macOS hypervisor. The `hyperkit` tool is designed to be a "minimalist" VM runner. Unlike, say, VirtualBox, it does not come with fancy UI features to manage VMs.
You can grab `hyperkit` and a minimalist Linux distribution running a container manager, and plumb all the pieces together yourself. This would be a lot of moving parts, and sounds like a lot of work. Especially if you want to make the network connections a bit more seamless by using `vpnkit`, an open source project to create a VM's network that feels more like part of the host's network.
### Lima
There is no reason to go to all that effort, when [the `lima` project][3] has figured out the details. One of the easiest ways to get `lima` running is with [Homebrew][4]. You can install `lima` with this command:
```
$ brew install lima
```
After installation, which might take a while, it is time to begin having some fun. In order to let `lima` know you are ready for some fun, you need to start it. Here's the command:
```
$ limactl start
```
If this is your first time, you will be asked if you like the defaults or whether you want to change any of them. The defaults are pretty safe, but I like to live on the wild side. This is why I jump into an editor and make the following modifications from:
```
 - location: "~"
    # CAUTION: `writable` SHOULD be false for the home directory.
    # Setting `writable` to true is possible but untested and dangerous.
    writable: false
```
to:
```
  - location: "~"
    # I *also* like to live dangerously -- Austin Powers
    writable: true
```
As it says in the comment, this can be dangerous. Many existing workflows, sadly, depend on this mounting to be read-write.
By default, `lima` runs `containerd` to manage containers. The `containerd` manager is also a pretty frill-less one. While it is not uncommon to use a wrapper daemon, like `dockerd`, to add those nice-to-have ergonomics, there is another way.
### The nerdctl tool
The `nerdctl` tool is a drop-in replacement for the Docker client which puts those features in the client, not the server. The `lima` tool allows running `nerdctl` without installing it locally, directly from inside the VM.
Putting it all together, it is time to run a container! This container will run an HTTP server. You can create the files on your Mac:
```
$ ls
index.html
$ cat index.html
hello
```
Now, mount and forward the ports:
```
$ lima nerdctl run --rm -it -p 8000:8000 -v $(pwd):/html --entrypoint bash python
root@9486145449ab:/#
```
Inside the container, run a simple web server:
```
$ lima nerdctl run --rm -it -p 8000:8000 -v $(pwd):/html --entrypoint bash python
root@9486145449ab:/# cd /html/
root@9486145449ab:/html# python -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
From a different terminal, you can check that everything looks good:
```
$ curl localhost:8000
hello
```
Back on the container, there is a log message documenting the HTTP client's connection:
```
10.4.0.1 - - [09/Sep/2021 14:59:08] "GET / HTTP/1.1" 200 -
```
One file is not enough, so it's time to make things a bit better. **CTRL-C** the server, and add another file:
```
^C
Keyboard interrupt received, exiting.
root@9486145449ab:/html# echo goodbye > foo.html
root@9486145449ab:/html# python -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
Check that you can see the new file:
```
$ curl localhost:8000/foo.html
goodbye
```
### Wrap up
To recap, installing `lima` takes a while, but after you are done, you can do the following:
* Run containers.
* Mount arbitrary sub-directories of your home directory into containers.
* Edit files in those directories.
* Run network servers that appear to Mac programs like they are running on localhost.
All with `lima nerdctl`.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/run-containers-mac-lima
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_2015-2-osdc-lead.png?itok=kAfHrBoy (Containers for shipping overseas)
[2]: https://www.docker.com/blog/docker-unikernels-open-source/
[3]: https://github.com/lima-vm/lima
[4]: https://brew.sh/

View File

@ -0,0 +1,67 @@
[#]: subject: "Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!"
[#]: via: "https://fedoramagazine.org/fedora-linux-earns-recognition-from-the-digital-public-goods-alliance-as-a-dpg/"
[#]: author: "Justin W. FloryAlberto Rodriguez SanchezMatthew Miller https://fedoramagazine.org/author/jflory7/https://fedoramagazine.org/author/bt0dotninja/https://fedoramagazine.org/author/mattdm/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!
======
![][1]
In the Fedora Project community, [we look at open source][2] as not only code that can change how we interact with computers, but also as a way for us to positively influence and shape the future. The more hands that help shape a project, the more ideas, viewpoints and experiences the project represents — that's truly what the spirit of open source is built from.
But its not just the global contributors to the Fedora Project who feel this way. August 2021 saw Fedora Linux recognized as a digital public good by the [Digital Public Goods Alliance (DPGA)][3], a significant achievement and a testament to the openness and inclusivity of the project.
We know that digital technologies can save lives, improve the well-being of billions, and contribute to a more sustainable future. We also know that in tackling those challenges, Open Source is uniquely positioned in the world of digital solutions by inherently welcoming different ideas and perspectives critical to lasting success.
But, we also know that many regions and countries around the world do not have access to those technologies. Open Source technologies can be the difference between achieving the [Sustainable Development Goals][4] (SDGs) by 2030 or missing the targets. Projects like Fedora Linux, which [represent much more than code itself][2], are the game-changers we need. Already, individuals, organizations, governments, and Open Source communities, including the Fedora Projects own, are working to make sure the potential of Open Source is realized and equipped to take on the monumental challenges being faced.
The Digital Public Goods Alliance is a multi-stakeholder initiative, endorsed by the United Nations Secretary-General. It works to accelerate the attainment of the SDGs in low- and middle-income countries by facilitating the discovery, development, use of, and investment in digital public goods (DPGs). DPGs are Open Source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, and do no harm. This definition, drawn from the UN Secretary-General's [2020 Roadmap for Digital Cooperation][5], serves as the foundation of the DPG Registry, an online repository for DPGs.
The DPG Registry was created to help increase the likelihood of discovery, and therefore use of, DPGs. Today, we are excited to share that Fedora Linux was added to the [DPG Registry][6]! Recognition as a DPG increases the visibility, support for, and prominence of open projects that have the potential to tackle global challenges. To become a digital public good, all projects are required to meet the [DPG Standard][7] to ensure they truly encapsulate Open Source principles. 
As an Open Source leader, Fedora Linux can make achieving the SDGs a reality through its role as a convener of many Open Source “upstream” communities. In addition to providing a fully-featured desktop, server, cloud, and container operating system, it also acts as a platform where different Open Source software and work come together. Fedora Linux by default only ships its releases with purely Open Source software packages and components. While third-party repositories are available for use with proprietary packages or closed components, Fedora Linux is a complete offering with some of the greatest innovations that Open Source has to offer. Collectively this means Fedora Linux can act as a gateway, empowering the creation of more and better solutions to better tackle the challenges they are trying to address.
The DPG designation also aligns with Fedora's fundamental foundations:
* **Freedom**: Fedora Linux was built as Free and Open Source Software from the beginning. Fedora Linux only ships and distributes Free Software from its default repositories. Fedora Linux already uses widely-accepted Open Source licenses.
  * **Friends**: Fedora has an international community of hundreds spread across six continents. The Fedora Community is strong and well-positioned to scale as the upstream distribution of the world's most-widely used enterprise flavor of Linux.
* **Features**: Fedora consistently delivers on innovation and features in Open Source. Fedora Linux 34 was a record-breaking release, with 63 new approved Changes in the last release.
  * **First**: Fedora leverages its unique position and resources in the Free Software world to deliver on innovation. New ideas and features are tried out in the Fedora Community to discover what works, and what doesn't. We have many stories of both.
![][8]
For us, recognition as a digital public good brings honor and is a great moment for us, as a community, to reaffirm our commitment to contribute and grow the Open Source ecosystem.
This is a proud moment for each Fedora Community member because we are making a difference. Our work matters and has value in creating an equitable world; this is a fantastic and important feeling.
If you have an interest in learning more about the Digital Public Goods Alliance please reach out to [hello@digitalpublicgoods.net][9].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/fedora-linux-earns-recognition-from-the-digital-public-goods-alliance-as-a-dpg/
作者:[Justin W. FloryAlberto Rodriguez SanchezMatthew Miller][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jflory7/https://fedoramagazine.org/author/bt0dotninja/https://fedoramagazine.org/author/mattdm/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/09/DPG_recognition-816x345.jpg
[2]: https://docs.fedoraproject.org/en-US/project/
[3]: https://digitalpublicgoods.net/frequently-asked-questions/
[4]: https://sdgs.un.org/goals
[5]: https://www.un.org/en/content/digital-cooperation-roadmap/
[6]: http://digitalpublicgoods.net/registry/
[7]: http://digitalpublicgoods.net/standard/
[8]: https://lh6.googleusercontent.com/lzxUQ45O79-kK_LHsokEChsfMCyAz4fpTx1zEaj6sN_-IiJp5AVqpsISdcxvc8gFCU-HBv43lylwkqjItSm1X1rG_sl9is1ou9QbIUpJTGyzr4fQKWm_QujF55Uyi-hRrta1M9qB=s0
[9]: mailto:hello@digitalpublicgoods.net

View File

@ -0,0 +1,80 @@
[#]: subject: "An open source alternative to Microsoft Exchange"
[#]: via: "https://opensource.com/article/21/9/open-source-groupware-grommunio"
[#]: author: "Markus Feilner https://opensource.com/users/mfeilner"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
An open source alternative to Microsoft Exchange
======
Open source users now have a robust and fully functional choice for
groupware.
![Working on a team, busy worklife][1]
Microsoft Exchange has for many years been nearly unavoidable as a platform for groupware environments. Late in 2020, however, an Austrian open source software developer introduced [grommunio][2], a groupware server and client with a look and feel familiar to Exchange and Outlook users.
The grommunio project functions well as a drop-in replacement for Exchange. The developers connect components to the platform the same way Microsoft does, and they support RPC (Remote Procedure Call) with the HTTP protocol. According to the developers, grommunio also includes numerous interfaces of common groupware such as IMAP, POP3, SMTP, EAS (Exchange ActiveSync), EWS (Exchange Web Services), CalDAV, and CardDAV. With such broad support, grommunio integrates smoothly into existing infrastructures.
Users will notice little difference among Outlook, Android, and iOS clients. Of course, as open source software, it supports other clients, too. Outlook and smartphones communicate with grommunio just as they do with a Microsoft Exchange server, thanks to their integrated, native Exchange protocols. An everyday enterprise user can continue to use their existing clients with the grommunio server quietly running in the background.
### More than just mail
In addition to mail functions, a calendaring system is available in the grommunio interface. Appointments can be created by clicking directly in the calendar display or in a new tab. It's intuitive and just what you'd expect from a modern tool. Users can create, manage, and share calendars as well as address books. Private contacts or common contacts are possible, and you can share everything with colleagues.
Task management shows a list of tasks on the left in a drop-down menu, and they can have both one owner and multiple collaborators. You can assign deadlines, categories, attachments, and other attributes to each task. In the same way, notes can be managed and shared with other team members.
### Chat, video conferences, and file sync
In addition to all the standard features of modern groupware, grommunio also offers chat, video conferencing, and file synchronization. It does this with full integration on a large scale for the enterprise, with extraordinarily high performance. It's an easy choice for promoters of open source and a powerful option for sysadmins. Because grommunio aims to integrate rather than reinvent, all components are standard open source tools.
![Screenshot of grommunio meeting space][3]
Jitsi integration for advanced video conferences (Markus Feilner, [CC BY-SA 4.0][4])
Behind the meeting function in grommunio is [Jitsi][5], smoothly integrated into the grommunio UI with a familiar user interface. The chat feature, fully integrated and centrally managed, is based on [Mattermost][6].
![Screenshot of grommunio's town square for chat][7]
Mattermost for chat (Markus Feilner, [CC BY-SA 4.0][4])
[ownCloud][8], which promises enterprise-level file sharing and synchronization, starts after a click on the Files button.
![Screenshot of grommunio file sharing space][9]
ownCloud for file synchronization and exchange (Markus Feilner, [CC BY-SA 4.0][4])
The grommunio project has a powerful administrative interface, including roles, domain and organization management, predictive monitoring, and a self-service portal. Shell-based wizards guide admins through installation and migration of data from Microsoft Exchange. The development team is constantly working for better integration and more centralization for management, and with that comes a better workflow for admins.
![Screenshot of grommunio dashboards][10]
grommunio's admin interface (Markus Feilner, [CC BY-SA 4.0][4])
### Explore grommunio
The grommunio project has lofty goals, but its developers have put in the work, and it shows. A German hosting service specializing in tax consultants—a sector where German data protection laws are especially tough—recently announced that grommunio is available to their customers. The grommunio project gets a lot right: a clean combination of existing, successful concepts working together to enable open, secure, and privacy-compliant communication.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/open-source-groupware-grommunio
作者:[Markus Feilner][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mfeilner
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_dev_email_chat_video_work_wfm_desk_520.png?itok=6YtME4Hj (Working on a team, busy worklife)
[2]: https://grommunio.com/en/
[3]: https://opensource.com/sites/default/files/uploads/jitsi_0.png (grommunio meeting space)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/article/20/5/open-source-video-conferencing
[6]: https://opensource.com/education/16/3/mattermost-open-source-chat
[7]: https://opensource.com/sites/default/files/uploads/mattermost.png (grommunio's town square for chat)
[8]: https://owncloud.com/
[9]: https://opensource.com/sites/default/files/uploads/owncloud_0.png (Owncloud for file synchronization and exchange)
[10]: https://opensource.com/sites/default/files/uploads/grommunio_interface_0.png (Screenshot of grommunio dashboards)

View File

@ -0,0 +1,402 @@
[#]: subject: "PowerShell on Linux? A primer on Object-Shells"
[#]: via: "https://fedoramagazine.org/powershell-on-linux-a-primer-on-object-shells/"
[#]: author: "TheEvilSkeletonOzymandias42 https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
PowerShell on Linux? A primer on Object-Shells
======
![][1]
Photos by [NOAA][2] and [Cedric Fox][3] on [Unsplash][4]
In the previous post, [Install PowerShell on Fedora Linux][5], we went through different ways to install PowerShell on Fedora Linux and explained the basics of PowerShell. This post gives you an overview of PowerShell and a comparison to POSIX-compliant shells.
### Table of contents
* [Differences at first glance — Usability][6]
* [Speed and efficiency][7]
* [Aliases][8]
* [Custom aliases][9]
* [Differences between POSIX Shells — Char-stream vs. Object-stream][10]
* [To filter for something][11]
* [Output formatting][12]
* [Field separators, column-counting and sorting][13]
* [Getting rid of fields and formatting a nice table][14]
  * [How it's done in PowerShell][15]
* [Remote Administration with PowerShell — PowerShell-Sessions on Linux!?][16]
* [Background][17]
* [What this is good for][18]
* [Conclusion][19]
### Differences at first glance — Usability
One of the very first differences to take note of when using PowerShell for the first time is semantic clarity.
Most commands in traditional POSIX shells, like the Bourne Again Shell (BASH), are heavily abbreviated and often require memorizing.
Commands like _awk_, _ps_, _top_ or even _ls_ do not communicate what they do with their name. Only when one already _does_ know what they do, do the names start to make sense. Once I know that _ls_ **lists** files the abbreviation makes sense.
In PowerShell on the other hand, commands are perfectly self-descriptive. They accomplish this by following a strict naming convention.
Commands in PowerShell are called “cmdlets” (pronounced commandlets). These always follow the scheme of Verb-Noun.
One example: To **get** all files or child-items in a directory I tell PowerShell like this:
```
PS > Get-ChildItem
Directory: /home/Ozymandias42
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 14/04/2021 08:11 Folder1
d---- 13/04/2021 11:55 Folder2
```
**An Aside:**
The cmdlet name is Get-Child_Item_ not _Item**s**_. This is in acknowledgement of [Set-theory][20]. Each of the standard cmdlets returns a list or a set of results. The number of items in a set —mathematicians call this the set's [cardinality][21]— can be 0, 1 or any arbitrary natural number, meaning the set can be empty, contain exactly one result, or contain many results. The reason for this, and why I stress it here, is that the standard cmdlets _also_ implicitly implement a ForEach loop for any results they return. More about this later.
#### Speed and efficiency
##### Aliases
You might have noticed that standard cmdlets are long and can therefore be time-consuming when writing scripts. However, many cmdlets are aliased and don't depend on case, which mitigates this problem.
Let's write a script with unaliased cmdlets as an example:
```
PS > Get-Process | ForEach-Object {Write-Host $_.Name -ForegroundColor Cyan}
```
This lists the names of running processes in cyan. As you can see, many characters are in upper case and the cmdlet names are relatively long. Let's shorten them and replace the upper-case letters to make the script easier to type:
```
PS > gps | foreach {write-host $_.name -foregroundcolor cyan}
```
This is the same script but with greatly simplified input.
To see the full list of aliased cmdlets, type _Get-Alias_.
##### Custom aliases
Just like any other shell, PowerShell also lets you set your own aliases by using the _Set-Alias_ cmdlet. Let's alias _Write-Host_ to something simpler so we can make the same script even easier to type:
```
PS > Set-Alias -Name wh -Value Write-Host
```
Here, we aliased _wh_ to _Write-Host_ to make it quicker to type. When setting aliases, _-Name_ indicates what you want the alias to be and _-Value_ indicates what you want it to alias.
Let's see how it looks now:
```
PS > gps | foreach {wh $_.name -foregroundcolor cyan}
```
You can see that we already made the script easier to type. If we wanted, we could also alias _ForEach-Object_ to _fe_, but you get the gist.
If you want to see the properties of an alias, you can type _Get-Alias_. Let's check the properties of the alias _wh_ using the _Get-Alias_ cmdlet:
```
PS > Get-Alias wh
CommandType Name Version Source
----------- ---- ------- ------
Alias wh -> Write-Host
```
##### Autocompletion and suggestions
PowerShell suggests cmdlets or flags when you press the Tab key twice, by default. If there is nothing to suggest, PowerShell automatically completes to the cmdlet.
### Differences between POSIX Shells — Char-stream vs. Object-stream
Any scripting will eventually string commands together via pipe | and soon come to notice a few key differences.
In bash what is moved from one command to the next through a pipe is just a string of characters. However, in PowerShell this is not the case.
In PowerShell, every cmdlet is aware of data structures and objects. For example, a structure like this:
```
{
firstAuthor=Ozy,
secondAuthor=Skelly
}
```
This data is kept as-is even if a command, used alone, would have presented this data as follows:
```
AuthorNr. AuthorName
1 Ozy
2 Skelly
```
In bash, on the other hand, that formatted output would need to be created by parsing with helper tools like _awk_ or _cut_ first, to be usable with a different command.
PowerShell does not require this parsing since the underlying structure is sent when using a pipe rather than the formatted output shown without. So the command _authorObject | doThingsWithSingleAuthor firstAuthor_ is possible.
The following examples shall further illustrate this.
**Beware:** This will get fairly technical and verbose. Skip if satisfied already.
A few of the most often used constructs to illustrate the advantage of PowerShell over bash, when using pipes, are to:
* filter for something
* format output
* sort output
When implementing these in bash there are a few things that will re-occur time and time again.
The following sections will exemplify these constructs and their variants in bash and contrast them with their PowerShell equivalents.
#### To filter for something
Let's say you want to see all processes matching the name _ssh-agent_.
In human thinking terms you know what you want.
1. Get all processes
2. Filter for all processes that match our criteria
3. Print those processes
To apply this in bash we could do it in two ways.
The first one, which most people who are comfortable with bash might use is this one:
```
$ ps -p $(pgrep ssh-agent)
```
At first glance this is straightforward. _ps_ gets all processes and the _-p_ flag tells it to filter for a given list of pids.
What the veteran bash user might forget here, however, is that this might read this way but is not actually run as such. There's a tiny but important little thing called the order of evaluation.
_$()_ is a subshell. A subshell is run, or evaluated, first. This means the list of pids to filter against is produced first, and the result is then returned in place of the subshell for the waiting outer command _ps_ to use.
This means it is written as:
1. Print processes
2. Filter Processes
but evaluated the other way around. It also implicitly combines the original steps 2. and 3.
A less often used variant that more closely matches the human thought pattern and evaluation order is:
```
$ pgrep ssh-agent | xargs ps
```
The second one still combines two steps, the steps 1. and 2. but follows the evaluation logic a human would think of.
The reason this variant is less used is that ominous _xargs_ command. What this basically does is to append all lines of output from the previous command as a single long line of arguments to the command that follows it. In this case, _ps_.
This is necessary because pgrep produces output like this:
```
$ pgrep bash
14514
15308
```
When used in conjunction with a subshell, _ps_ might not care about this, but when using pipes to approximate the human evaluation order, this becomes a problem.
What _xargs_ does, is to reduce the following construct to a single command:
```
$ for i in $(pgrep ssh-agent); do ps $i ; done
```
Okay. Now we have talked a LOT about evaluation order and how to do it in bash in different ways with different evaluation orders of the three basic steps we outlined.
So with this much preparation, how does PowerShell handle it?
```
PS > Get-Process | Where-Object Name -Match ssh-agent
```
Completely self-descriptive and follows the evaluation order of the steps we outlined perfectly. Also do take note of the absence of _xargs_ or any explicit for-loop.
As mentioned in our aside a few hundred words back, the standard cmdlets all implement ForEach internally and apply it implicitly when the piped input arrives in list form.
#### Output formatting
This is where PowerShell really shines. Consider a simple example to see how it's done in bash first. Say we want to list all files in a directory sorted by size from the biggest to the smallest and listed as a table with filename, size, and creation date. Also let's say we have some files with long filenames in there and want to make sure we get the full filename no matter how wide our terminal is.
##### Field separators, column-counting and sorting
Now the first obvious step is to run _ls_ with the _-l_ flag to get a list with not just the filenames but the creation date and the file sizes we need to sort against too.
We will get a more verbose output than we need. Like this one:
```
$ ls -l
total 148692
-rwxr-xr-x 1 root root 51984 May 16 2020 [
-rwxr-xr-x 1 root root 283728 May 7 18:13 appdata2solv
lrwxrwxrwx 1 root root 6 May 16 2020 apropos -> whatis
-rwxr-xr-x 1 root root 35608 May 16 2020 arch
-rwxr-xr-x 1 root root 14784 May 16 2020 asn1Coding
-rwxr-xr-x 1 root root 18928 May 16 2020 asn1Decoding
[not needed] [not needed]
```
What is apparent is that, to get the kind of output we want, we have to get rid of the fields marked _[not needed]_ in the above example, but that's not the only thing needing work. We also need to sort the output so that the biggest file is the first in the list, meaning a reverse sort…
This, of course, can be done in multiple ways, but it only shows again how convoluted bash scripts can get.
We can either sort with the _ls_ tool directly, by using the _-r_ flag for reverse sort and the _--sort=size_ flag to sort by size, or we can pipe the whole thing to _sort_ and supply that with the _-n_ flag for numeric sort and the _-k 5_ flag to sort by the fifth column.
Wait! **Fifth**? Yes. Because this, too, we would have to know: _sort_, by default, uses spaces as field separators, meaning that in the tabular output of _ls -l_, the number representing the size is in the 5th field.
##### Getting rid of fields and formatting a nice table
To get rid of the remaining fields, we once again have multiple options. The most straightforward option, and most likely to be known, is probably _cut_. This is one of the few UNIX commands that is self-descriptive, even if it's just because of the natural brevity of its associated verb. So we pipe our results, up to now, into _cut_ and tell it to only output the columns we want and how they are separated from each other.
_cut -f5- -d" "_ will output from the fifth field to the end. This will get rid of the first four columns.
```
283728 May 7 18:13 appdata2solv
51984 May 16 2020 [
35608 May 16 2020 arch
14784 May 16 2020 asn1Coding
6 May 16 2020 apropos -> whatis
```
This is still far from how we wanted it. First of all, the filename is in the last column, and the filesize is in the human-unfriendly format of blocks instead of KB, MB, GB, and so on. Of course, we could fix that too in various ways at various points in our already long pipeline.
All of this makes it clear that transforming the output of traditional UNIX commands is quite complicated and can often be done at multiple points in the pipeline.
##### How it's done in PowerShell
```
PS > Get-ChildItem
| Sort-Object Length -Descending
| Format-Table -AutoSize
Name,
@{Name="Size"; Expression=
{[math]::Round($_.Length/1MB,2).toString()+" MB"}
},
CreationTime
#Reformatted over multiple lines for better readability.
```
The only actual output transformation being done here is the conversion and rounding of bytes to megabytes for better human readability. This is also one of the few real weaknesses of PowerShell: it lacks a _simple_ mechanism to get human-readable filesizes.
That part aside, it's clear that Format-Table allows you to simply list the wanted columns by name, in the order you want them.
This works because of the aforementioned object-nature of piped data-streams in PowerShell. There is no need to cut apart strings by delimiters.
#### Remote Administration with PowerShell — PowerShell-Sessions on Linux!?
#### Background
Remote administration via PowerShell on Windows has traditionally always been done via Windows Remoting, using the WinRM protocol.
With the release of Windows 10, Microsoft has also offered a Windows native OpenSSH Server and Client.
Using the SSH Server alone on Windows provides the user a CMD prompt unless the default system Shell is changed via a registry key.
A more elegant option is to make use of the Subsystem facility in _sshd_config_. This makes it possible to configure arbitrary binaries as remote-callable subsystems instead of the globally configured default shell.
By default there is usually one already there. The sftp subsystem.
To make PowerShell available as Subsystem one simply needs to add it like so:
```
Subsystem powershell /usr/bin/pwsh -sshs --noprofile --nologo
```
This works —with the correct paths of course— on _all_ OS PowerShell Core is available for. So that means Windows, Linux, and macOS.
#### What this is good for
It is now possible to open a PowerShell (Remote) Session to a properly configured SSH-enabled Server by doing this:
```
PS > Enter-PSSession
-HostName <target-HostName-or-IP>
-User <targetUser>
-IdentityFilePath <path-to-id_rsa-file>
...
<-SSHTransport>
```
What this does is to register and enter an interactive PSSession with the Remote-Host. By itself this has no functional difference from a normal SSH-session. It does, however, allow for things like running scripts from a local host on remote machines via other cmdlets that utilise the same subsystem.
One such example is the _Invoke-Command_ cmdlet. This becomes especially useful, given that _Invoke-Command_ has the _-AsJob_ flag.
What this enables is running local scripts as batchjobs on multiple remote servers while using the local Job-manager to get feedback about when the jobs have finished on the remote machines.
While it is possible to run local scripts via ssh on remote hosts, it is not as straightforward to view their progress, and it gets outright hacky to run local scripts remotely. We refrain from giving examples here, for brevity's sake.
With PowerShell, however, this can be as easy as this:
```
$listOfRemoteHosts | Invoke-Command
-HostName $_
-FilePath /home/Ozymandias42/Script2Run-Remotely.ps1
-AsJob
```
Overview of the running tasks is available by doing this:
```
PS > Get-Job
Id Name PSJobTypeName State HasMoreData Location Command
-- ---- ------------- ----- ----------- -------- -------
1 Job1 BackgroundJob Running True localhost Microsoft.PowerShe…
```
Jobs can then be attached to again, should they require manual intervention, by doing _Receive-Job <JobName-or-JobNumber>_.
### Conclusion
In conclusion, PowerShell applies a fundamentally different philosophy in its syntax in comparison to standard POSIX shells like bash. Of course, for bash, it's historically rooted in the limitations of the original UNIX. PowerShell provides better semantic clarity with its cmdlets and output, which means better understandability for humans and hence makes it easier to use and learn. PowerShell also provides aliases for cases where the unaliased cmdlets are too long. The main difference is that PowerShell is object-oriented, which eliminates input/output parsing. This allows PowerShell scripts to be more concise.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/powershell-on-linux-a-primer-on-object-shells/
作者:[TheEvilSkeletonOzymandias42][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/theevilskeleton/https://fedoramagazine.org/author/ozymandias42/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2021/09/powershell_2-816x345.jpg
[2]: https://unsplash.com/@noaa?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/@thecedfox?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://unsplash.com/s/photos/shell?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[5]: https://fedoramagazine.org/install-powershell-on-fedora-linux
[6]: tmp.YtC5jLcRsL#differences-at-first-glance--usability
[7]: tmp.YtC5jLcRsL#speed-and-efficiency
[8]: tmp.YtC5jLcRsL#aliases
[9]: tmp.YtC5jLcRsL#custom-aliases
[10]: tmp.YtC5jLcRsL#differences-between-posix-shells--char-stream-vs-object-stream
[11]: tmp.YtC5jLcRsL#to-filter-for-something
[12]: tmp.YtC5jLcRsL#output-formatting
[13]: tmp.YtC5jLcRsL#field-operators-collumn-counting-and-sorting
[14]: tmp.YtC5jLcRsL#getting-rid-of-fields-and-formatting-a-nice-table
[15]: tmp.YtC5jLcRsL#how-its-done-in-powershell
[16]: tmp.YtC5jLcRsL#remote-administration-with-powershell--powershell-sessions-on-linux
[17]: tmp.YtC5jLcRsL#background
[18]: tmp.YtC5jLcRsL#what-this-is-good-for
[19]: tmp.YtC5jLcRsL#conclusion
[20]: https://en.wikipedia.org/wiki/Set_(mathematics)
[21]: https://en.wikipedia.org/wiki/Set_(mathematics)#Cardinality

View File

@ -0,0 +1,666 @@
[#]: subject: "Code memory safety and efficiency by example"
[#]: via: "https://opensource.com/article/21/8/memory-programming-c"
[#]: author: "Marty Kalin https://opensource.com/users/mkalindepauledu"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
实例讲解代码之内存安全与效率
======
了解有关内存安全和效率的更多信息。
![Code going into a computer.][1]
C 是一种高级语言同时具有“接近金属”LCTT 译注:即“接近人类思维方式”的反义词)的特性,这使得它有时看起来更像是一种可移植的汇编语言,而不是 Java 或 Python 的兄弟语言。内存管理作为上述特性之一,涵盖了正在执行的程序对内存的安全和高效使用。本文通过 C 语言代码示例,以及现代 C 语言编译器生成的汇编语言代码段,详细介绍了内存安全性和效率。
尽管代码示例是用 C 语言编写的,但安全高效的内存管理指南对于 C++ 是同样适用的。这两种语言在很多细节上有所不同例如C++ 具有 C 所缺乏的面向对象特性和泛型),但在内存管理方面面临的挑战是一样的。
### 执行中程序的内存概述
对于正在执行的程序(又名 _<ruby>进程<rt>process</rt></ruby>_),内存被划分为三个区域:**<ruby><rt>stack</rt></ruby>**、**<ruby><rt>heap</rt></ruby>** 和 **<ruby>静态区<rt>static area</rt></ruby>**。下文会给出每个区域的概述,以及完整的代码示例。
作为通用 CPU 寄存器的替补_栈_ 为代码块(例如函数或循环体)中的局部变量提供暂存器存储。传递给函数的参数在此上下文中也视作局部变量。看一下下面这个简短的示例:
```
void some_func(int a, int b) {
   int n;
   ...
}
```
通过 **a****b** 传递的参数以及局部变量 **n** 的存储会在栈中,除非编译器可以找到通用寄存器。编译器倾向于优先将通用寄存器用作暂存器,因为 CPU 对这些寄存器的访问速度很快(一个时钟周期)。然而,这些寄存器在台式机、笔记本电脑和手持机器的标准架构上很少(大约 16 个)。
在只有汇编语言程序员才能看到的实施层面,栈被组织为具有 **push**(插入)和 **pop**(删除)操作的 LIFO后进先出列表。 **top** 指针可以作为偏移的基地址;这样,除了 **top** 之外的栈位置也变得可访问了。例如,表达式 **top+16** 指向堆栈的 **top** 指针上方 16 个字节的位置,表达式 **top-16** 指向 **top** 指针下方 16 个字节的位置。因此,可以通过 **top** 指针访问实现了暂存器存储的栈的位置。在标准的 ARM 或 Intel 架构中,栈从高内存地址增长到低内存地址;因此,减小某进程的 **top** 就是增大其栈规模。
使用栈结构就意味着轻松高效地使用内存。编译器(而非程序员)会编写管理栈的代码,管理过程通过分配和释放所需的暂存器存储来实现;程序员声明函数参数和局部变量,将实现过程交给编译器。此外,完全相同的栈存储可以在连续的函数调用和代码块(如循环)中重复使用。精心设计的模块化代码会将栈存储作为暂存器的首选内存选项,同时优化编译器要尽可能使用通用寄存器而不是栈。
**堆** 提供的存储是通过程序员代码显式分配的,堆分配的语法因语言而异。在 C 中,成功调用库函数 **malloc**(或其变体 **calloc** 等)会分配指定数量的字节(在 C++ 和 Java 等语言中,**new** 运算符具有相同的用途)。编程语言在如何释放堆分配的存储方面有着巨大的差异:
* 在 Java、Go、Lisp 和 Python 等语言中,程序员不会显式释放动态分配的堆存储。
例如,下面这个 Java 语句为一个字符串分配了堆存储,并将这个堆存储的地址存储在变量 **greeting** 中:
```
String greeting = new String("Hello, world!");
```
Java 有一个垃圾回收器它是一个运行时实用程序如果进程无法再访问自己分配的堆存储回收器可以使其自动释放。因此Java 堆释放是通过垃圾收集器自动进行的。在上面的示例中,垃圾收集器将在变量 **greeting** 超出作用域后,释放字符串的堆存储。
* Rust 编译器会编写堆释放代码。这是 Rust 在不依赖垃圾回收器的情况下,使堆释放实现自动化的开创性努力,但这也会带来运行时复杂性和开销。向 Rust 的努力致敬!
* 在 C和 C++)中,堆释放是程序员的任务。程序员调用 **malloc** 分配堆存储,然后负责相应地调用库函数 **free** 来释放该存储空间(在 C++ 中,**new** 运算符分配堆存储,而 **delete****delete[]** 运算符释放此类存储)。下面是一个 C 语言代码示例:
```
char* greeting = malloc(14);       /* 14 heap bytes */
strcpy(greeting, "Hello, world!"); /* copy greeting into bytes */
puts(greeting);                    /* print greeting */
free(greeting);                    /* free malloced bytes */
```
C 语言避免了垃圾回收器的成本和复杂性,但也不过是让程序员承担了堆释放的任务。
内存的 **静态区** 为可执行代码(例如 C 语言函数、字符串文字例如“Hello, world!”)和全局变量提供存储空间:
```
int n;                       /* global variable */
int main() {                 /* function */
   char* msg = "No comment"; /* string literal */
   ...
}
```
该区域是静态的,因为它的大小从进程执行开始到结束都固定不变。由于静态区相当于进程固定大小的内存占用,因此经验法则是通过避免使用全局数组等方法来使该区域尽可能小。
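下面给出一个最小示意(非原文代码):与其让一个大的全局数组从头到尾占用静态区,不如在需要时进行堆分配、用完立即释放。
```
/* 示意代码(非原文内容):用按需分配的堆数组代替全局数组,缩小静态区占用 */
#include <stdlib.h>
/* 不推荐:下面这个全局数组会让静态区固定多占约 400KB */
/* int big_table[100000]; */
void use_table(unsigned n) {
  int* table = malloc(sizeof(int) * n); /* 需要时才分配堆存储 */
  if (NULL == table) return;            /* 分配失败则直接返回 */
  /* ...使用 table... */
  free(table);                          /* 用完立即释放 */
}
int main() {
  use_table(100000);
  return 0;
}
```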
下文会结合代码示例对本节概述展开进一步讲解。
### 栈存储
想象一个有各种连续执行的任务的程序,任务包括了处理每隔几分钟通过网络下载并存储在本地文件中的数字数据。下面的 **stack** 程序简化了处理流程(仅是将奇数整数值转换为偶数),而将重点放在栈存储的好处上。
```
#include <stdio.h>
#include <stdlib.h>
#define Infile   "incoming.dat"
#define Outfile  "outgoing.dat"
#define IntCount 128000  /* 128,000 */
void other_task1() { /*...*/ }
void other_task2() { /*...*/ }
void process_data(const char* infile,
          const char* outfile,
          const unsigned n) {
  int nums[n];
  FILE* input = fopen(infile, "r");
  if (NULL == input) return;
  FILE* output = fopen(outfile, "w");
  if (NULL == output) {
    fclose(input);
    return;
  }
  fread(nums, n, sizeof(int), input); /* read input data */
  unsigned i;
  for (i = 0; i < n; i++) {
    if (1 == (nums[i] & 0x1))  /* odd parity? */
      nums[i]--;               /* make even */
  }
  fclose(input);               /* close input file */
  fwrite(nums, n, sizeof(int), output);
  fclose(output);
}
int main() {
  process_data(Infile, Outfile, IntCount);
  
  /** now perform other tasks **/
  other_task1(); /* automatically released stack storage available */
  other_task2(); /* ditto */
  
  return 0;
}
```
底部的 **main** 函数首先调用 **process_data** 函数,该函数会创建一个基于栈的数组,其大小由参数 **n** 给定(当前示例中为 128,000。因此该数组占用 128,000 x **sizeof(int)** 个字节,在标准设备上达到了 512,000 字节(**int** 在这些设备上是四个字节)。然后数据会被读入数组(使用库函数 **fread**),循环处理,并保存到本地文件 **outgoing.dat**(使用库函数**fwrite**)。
**process_data** 函数返回到其调用者 **main** 函数时,**process_data** 函数的大约 500KB 栈暂存器可供 **stack** 程序中的其他函数用作暂存器。在此示例中,**main** 函数接下来调用存根函数 **other_task1** 和 **other_task2**。这三个函数在 **main** 中依次调用,这意味着所有三个函数都可以使用相同的栈存储作为暂存器。因为编写栈管理代码的是编译器而不是程序员,所以这种方法对程序员来说既高效又容易。
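下面是一个最小示意(非原文代码,输出依编译器和平台而定):连续调用的函数通常会复用同一段栈空间,打印局部变量的地址可以粗略观察到这一点。
```
/* 示意代码(非原文内容):观察连续函数调用对同一段栈暂存器的复用 */
#include <stdio.h>
void other_task1() { int scratch; printf("task1 的局部变量地址:%p\n", (void*) &scratch); }
void other_task2() { int scratch; printf("task2 的局部变量地址:%p\n", (void*) &scratch); }
int main() {
  other_task1(); /* 返回后,其栈暂存器即可被复用 */
  other_task2(); /* 通常会打印与上一行相同的地址 */
  return 0;
}
```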
在 C 语言中,在块(例如函数或循环体)内定义的任何变量默认都有一个 **auto** 存储类,这意味着该变量是基于栈的。存储类 **register** 现在已经过时了,因为 C 编译器会主动尝试尽可能使用 CPU 寄存器。只有在块内定义的变量可能是 **register**,如果没有可用的 CPU 寄存器,编译器会将其更改为 **auto**。基于栈的编程可能是不错的首选方式,但这种风格确实有一些挑战性。下面的 **badStack** 程序说明了这点。
```
#include <stdio.h>
const int* get_array(const unsigned n) {
  int arr[n]; /* stack-based array */
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = i + 1;
  return arr;  /** ERROR **/
}
int main() {
  const unsigned n = 16;
  const int* ptr = get_array(n);
  
  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", ptr[i]);
  puts("\n");
  return 0;
}
```
**badStack** 程序中的控制流程很简单。**main** 函数使用 16LCTT译注原文为 128应为作者笔误作为参数调用函数 **get_array**,然后被调用函数会使用传入参数来创建对应大小的本地数组。**get_array** 函数会初始化该数组,并把数组标识符 **arr** 返回给 **main****arr** 是一个指针常量,保存着数组第一个 **int** 元素的地址。
当然,本地数组 **arr** 可以在 **get_array** 函数中访问,但是一旦 **get_array** 返回,就不能合法访问该数组。尽管如此,**main** 函数会尝试使用函数 **get_array** 返回的堆栈地址 **arr** 来打印基于栈的数组。现代编译器会警告错误。例如,下面是来自 GNU 编译器的警告:
```
badStack.c: In function 'get_array':
badStack.c:9:10: warning: function returns address of local variable [-Wreturn-local-addr]
return arr;  /** ERROR **/
```
一般规则是,如果使用栈存储实现局部变量,应该仅在该变量所在的代码块内,访问这块基于栈的存储(在本例中,数组指针 **arr** 和循环计数器 **i** 均为这样的局部变量)。因此,函数永远不应该返回指向基于栈存储的指针。
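针对这条规则,下面给出 **badStack** 的一种最小修正示意(非原文代码):由调用者提供数组存储,被调用函数只负责填充,因此不需要返回任何栈地址。
```
/* 示意代码(非原文内容):数组由调用者在自己的栈帧中定义,生命周期覆盖整个使用过程 */
#include <stdio.h>
void fill_ints(int* arr, unsigned n) { /* arr 指向调用者提供的存储 */
  unsigned i;
  for (i = 0; i < n; i++) arr[i] = i + 1;
}
int main() {
  const unsigned n = 16;
  int arr[n];        /* 栈存储位于 main 的栈帧中 */
  fill_ints(arr, n);
  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", arr[i]);
  puts("");
  return 0;
}
```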
### 堆存储
接下来使用若干代码示例凸显在 C 语言中使用堆存储的优点。在第一个示例中,使用了最优方案分配、使用和释放堆存储。第二个示例(在下一节中)将堆存储嵌套在了其他堆存储中,这会使其释放操作变得复杂。
```
#include <stdio.h>
#include <stdlib.h>
int* get_heap_array(unsigned n) {
  int* heap_nums = malloc(sizeof(int) * n); 
  
  unsigned i;
  for (i = 0; i < n; i++)
    heap_nums[i] = i + 1;  /* initialize the array */
  
  /* stack storage for variables heap_nums and i released
     automatically when get_num_array returns */
  return heap_nums; /* return (copy of) the pointer */
}
int main() {
  unsigned n = 100, i;
  int* heap_nums = get_heap_array(n); /* save returned address */
  
  if (NULL == heap_nums) /* malloc failed */
    fprintf(stderr, "%s\n", "malloc(...) failed...");
  else {
    for (i = 0; i < n; i++) printf("%i\n", heap_nums[i]);
    free(heap_nums); /* free the heap storage */
  }
  return 0; 
}
```
上面的 **heap** 程序有两个函数: **main** 函数使用参数(示例中为 100调用 **get_heap_array** 函数,参数用来指定数组应该有多少个 **int** 元素。因为堆分配可能会失败,**main** 函数会检查 **get_heap_array** 是否返回了 **NULL**;如果是,则表示失败。如果分配成功,**main** 将打印数组中的 **int** 值,然后立即调用库函数 **free** 来对堆存储解除分配。这就是最优的方案。
**get_heap_array** 函数以下列语句开头,该语句值得仔细研究一下:
```
int* heap_nums = malloc(sizeof(int) * n); /* heap allocation */
```
**malloc** 库函数及其变体函数针对字节进行操作;因此,**malloc** 的参数是 **n****int** 类型元素所需的字节数(**sizeof(int)** 在标准现代设备上是四个字节)。**malloc** 函数返回所分配字节段的首地址,如果失败则返回 **NULL**
如果成功调用 **malloc**,在现代台式机上其返回的地址大小为 64 位。在手持设备和早些时候的台式机上,该地址的大小可能是 32 位,或者甚至更小,具体取决于其年代。堆分配数组中的元素是 **int** 类型,这是一个四字节的有符号整数。这些堆分配的 **int** 的地址存储在基于栈的局部变量 **heap_nums** 中。可以参考下图:
```
                 heap-based
 stack-based        /
     \        +----+----+   +----+
 heap-nums--->|int1|int2|...|intN|
              +----+----+   +----+
```
一旦 **get_heap_array** 函数返回,指针变量 **heap_nums** 的栈存储将自动回收——但动态 **int** 数组的堆存储仍然存在,这就是 **get_heap_array** 函数返回这个地址(的副本)给 **main** 函数的原因:它现在负责在打印数组的整数后,通过调用库函数 **free** 显式释放堆存储:
```
free(heap_nums); /* free the heap storage */
```
**malloc** 函数不会初始化堆分配的存储空间,因此里面是随机值。相比之下,其变体函数 **calloc** 会将分配的存储初始化为零。这两个函数都返回 **NULL** 来表示分配失败。
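下面用一个最小示意(非原文代码)对比这一点:**calloc** 返回的存储已经全部清零,可以直接读取。
```
/* 示意代码(非原文内容)calloc 分配并清零 n 个 int */
#include <stdio.h>
#include <stdlib.h>
int main() {
  const unsigned n = 8;
  int* nums = calloc(n, sizeof(int)); /* 相当于 malloc 之后再把每个字节置 0 */
  if (NULL == nums) return 1;         /* 分配失败 */
  unsigned i;
  for (i = 0; i < n; i++) printf("%i ", nums[i]); /* 打印 8 个 0 */
  puts("");
  free(nums);                         /* 用完立即释放 */
  return 0;
}
```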
**heap** 示例中,**main** 函数在调用 **free** 后会立即返回,正在执行的程序会终止,这会让系统回收所有已分配的堆存储。尽管如此,程序员应该养成在不再需要时立即显式释放堆存储的习惯。
### 嵌套堆分配
下一个代码示例会更棘手一些。C 语言有很多返回指向堆存储的指针的库函数。下面是一个常见的使用情景:
1\. C 程序调用一个库函数,该函数返回一个指向基于堆的存储的指针,而指向的存储通常是一个聚合体,如数组或结构体:
```
SomeStructure* ptr = lib_function(); /* returns pointer to heap storage */
```
2\. 然后程序使用所分配的存储。
3\. 对于清理而言,问题是对 **free** 的简单调用是否会清理库函数分配的所有堆分配存储。例如,**SomeStructure** 实例可能有指向堆分配存储的字段。一个特别麻烦的情况是动态分配的结构体数组,每个结构体有一个指向又一层动态分配的存储的字段。下面的代码示例说明了这个问题,并重点关注了如何设计一个可以安全地为客户端提供堆分配存储的库。
```
#include <stdio.h>
#include <stdlib.h>
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums;
} HeapStruct;
unsigned structId = 1;
HeapStruct* get_heap_struct(unsigned n) {
  /* Try to allocate a HeapStruct. */
  HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
  if (NULL == heap_struct) /* failure? */
    return NULL;           /* if so, return NULL */
  /* Try to allocate floating-point aggregate within HeapStruct. */
  heap_struct->heap_nums = malloc(sizeof(float) * n);
  if (NULL == heap_struct->heap_nums) {  /* failure? */
    free(heap_struct);                   /* if so, first free the HeapStruct */
    return NULL;                         /* then return NULL */
  }
  /* Success: set fields */
  heap_struct->id = structId++;
  heap_struct->len = n;
  return heap_struct; /* return pointer to allocated HeapStruct */
}
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */  
}
int main() {
  const unsigned n = 100;
  HeapStruct* hs = get_heap_struct(n); /* get structure with N floats */
  /* Do some (meaningless) work for demo. */
  unsigned i;
  for (i = 0; i < n; i++) hs->heap_nums[i] = 3.14 + (float) i;
  for (i = 0; i < n; i += 10) printf("%12f\n", hs->heap_nums[i]);
  free_all(hs); /* free dynamically allocated storage */
  
  return 0;
}
```
上面的 **nestedHeap** 程序示例以结构体 **HeapStruct** 为中心,结构体中又有名为 **heap_nums** 的指针字段:
```
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums; /** pointer **/
} HeapStruct;
```
函数 **get_heap_struct** 尝试为 **HeapStruct** 实例分配堆存储,这需要为字段 **heap_nums** 指向的若干个 **float** 变量分配堆存储。如果成功调用 **get_heap_struct** 函数,并将指向堆分配结构体的指针以 **hs** 命名,其结果可以描述如下:
```
hs-->HeapStruct instance
        id
        len
        heap_nums-->N contiguous float elements
```
**get_heap_struct** 函数中,第一个堆分配过程很简单:
```
HeapStruct* heap_struct = malloc(sizeof(HeapStruct));
if (NULL == heap_struct) /* failure? */
  return NULL;           /* if so, return NULL */
```
**sizeof(HeapStruct)** 包括了 **heap_nums** 字段的字节数(在 32 位机器上为 4 字节、在 64 位机器上为 8 字节),**heap_nums** 字段则是指向动态分配数组中的 **float** 元素的指针。那么,问题关键在于 **malloc** 为这个结构体传送了字节空间还是表示失败的 **NULL**;如果是 **NULL****get_heap_struct** 函数就也返回 **NULL** 以通知调用者堆分配失败。
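可以用一个最小示意(非原文代码)验证这一点:**sizeof(HeapStruct)** 只统计结构体本身的字节数,指针字段按指针大小计入,并不包含它所指向的堆数组。
```
/* 示意代码(非原文内容):结构体大小只包含指针本身,不含其指向的数组 */
#include <stdio.h>
typedef struct {
  unsigned id;
  unsigned len;
  float*   heap_nums;
} HeapStruct;
int main() {
  /* 在常见的 64 位平台上通常打印 16 和 8具体数值依平台而定 */
  printf("sizeof(HeapStruct) = %zu\n", sizeof(HeapStruct));
  printf("sizeof(float*)     = %zu\n", sizeof(float*));
  return 0;
}
```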
第二步尝试堆分配的过程更复杂,因为在这一步,**HeapStruct** 的堆存储已经分配好了:
```
heap_struct->heap_nums = malloc(sizeof(float) * n);
if (NULL == heap_struct->heap_nums) {  /* failure? */
  free(heap_struct);                   /* if so, first free the HeapStruct */
  return NULL;                         /* and then return NULL */
}
```
传递给 **get_heap_struct** 函数的参数 **n** 指明动态分配的 **heap_nums** 数组中应该有多少个 **float** 元素。如果可以分配所需的若干个 **float** 元素,则该函数在返回 **HeapStruct** 的堆地址之前会设置结构体的 **id****len** 字段。但是,如果分配尝试失败,则需要以下两个步骤来妥善处理:
1\. 必须释放 **HeapStruct** 的存储以避免内存泄漏。对于调用 **get_heap_struct** 的客户端函数而言,没有动态 **heap_nums** 数组的 **HeapStruct** 可能就是没用的;因此,**HeapStruct** 实例的字节空间应该显式释放,以便系统可以回收这些空间用于未来的堆分配。
2\. 返回 **NULL** 以标识失败。
如果成功调用 **get_heap_struct** 函数,那么释放堆存储也很棘手,因为它涉及要以正确顺序进行的两次 **free** 操作。因此,该程序设计了一个 **free_all** 函数,而不是要求程序员再去手动实现两步释放操作。回顾一下,**free_all** 函数是这样的:
```
void free_all(HeapStruct* heap_struct) {
  if (NULL == heap_struct) /* NULL pointer? */
    return;                /* if so, do nothing */
  
  free(heap_struct->heap_nums); /* first free encapsulated aggregate */
  free(heap_struct);            /* then free containing structure */  
}
```
检查完参数 **heap_struct** 不是 **NULL** 值后,函数首先释放 **heap_nums** 数组,这步要求 **heap_struct** 指针此时仍然是有效的。先释放 **heap_struct** 的做法是错误的。一旦 **heap_nums** 被释放,**heap_struct** 就可以释放了。如果 **heap_struct** 被释放,但 **heap_nums** 没有被释放,那么数组中的 **float** 元素就会泄漏:仍然分配了字节空间,但无法被访问到——因此一定要记得释放 **heap_nums**。存储泄漏将一直持续,直到 **nestedHeap** 程序退出,系统回收泄漏的字节时为止。
关于 **free** 库函数,这里有必要再补充说明一下。回想一下上面的调用示例:
```
free(heap_struct->heap_nums); /* first free encapsulated aggregate */
free(heap_struct);            /* then free containing structure */
```
这些调用释放了分配的存储空间,但它们并 _不会_ 将各自的指针实参设置为 **NULL****free** 函数得到的只是地址的一个副本;修改这个副本并不会改变原来的实参值)。例如,在成功调用 **free** 之后,指针 **heap_struct** 仍然保存着那些堆分配字节的地址,但此时再使用这个地址就是错误的,因为对 **free** 的调用让系统有权回收并重用这些已分配过的字节。
使用 **NULL** 参数调用 **free** 没有意义,但也没有什么坏处。而在非 **NULL** 的地址上重复调用 **free** 会导致不确定结果的错误:
```
free(heap_struct);  /* 1st call: ok */
free(heap_struct);  /* 2nd call: ERROR */
```
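一个常见的防御性写法(并非原文代码)是在调用 **free** 之后立即把指针置为 **NULL**,这样即使后面不小心再次调用 **free**,也只是对 **NULL** 的无害调用:
```
#include <stdlib.h>

int main() {
  int* p = malloc(sizeof(int) * 4);
  /* ... 使用 p ... */
  free(p);     /* 第一次释放:正常 */
  p = NULL;    /* 释放后立即置空 */
  free(p);     /* 此时只是对 NULL 调用 free没有意义但无害 */
  return 0;
}
```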
### 内存泄漏和堆碎片化
“内存泄漏”是指动态分配的堆存储变得不再可访问。看一下相关的代码段:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
nums = malloc(sizeof(float) * 25);        /* 25 new floats */
```
假如第一个 **malloc** 成功,第二个 **malloc** 会把 **nums** 指针重新指向新分配的 25 个 **float** 中第一个元素的地址(如果分配失败,则为 **NULL**)。最初分配的 10 个 **float** 元素的堆存储仍然处于被分配状态,但此时已无法再访问,因为 **nums** 指针要么指向了别处,要么是 **NULL**。结果就是造成了 40 个字节(**sizeof(float) * 10**)的泄漏。
在第二次调用 **malloc** 之前,应该释放最初分配的存储空间:
```
float* nums = malloc(sizeof(float) * 10); /* 10 floats */
nums[0] = 3.14f;                          /* and so on */
free(nums);                               /** good **/
nums = malloc(sizeof(float) * 25);        /* no leakage */
```
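另外,如果本意只是调整同一块动态数组的大小,也可以考虑标准库的 **realloc** 函数,它会保留原有内容,并在内部处理旧存储的释放或复用。下面是一个简单的示意(非原文代码):
```
#include <stdlib.h>

int main() {
  float* nums = malloc(sizeof(float) * 10);           /* 10 floats */
  if (NULL == nums) return 1;
  nums[0] = 3.14f;                                    /* and so on */

  float* bigger = realloc(nums, sizeof(float) * 25);  /* 扩大到 25 个 float */
  if (NULL == bigger) { /* 失败时原存储仍然有效,需要自行释放 */
    free(nums);
    return 1;
  }
  nums = bigger;        /* 成功后改用新地址,不会产生泄漏 */

  free(nums);
  return 0;
}
```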
即使没有泄漏,堆也会随着时间的推移而碎片化,需要对系统进行碎片整理。例如,假设两个最大的堆块当前的大小分别为 200MB 和 100MB然而这两个堆块并不连续而进程 **P** 此时又需要分配 250MB 的连续堆存储。在进行分配之前,系统可能要对堆进行 _碎片整理_ 以给 **P** 提供 250MB 连续存储空间。碎片整理很复杂,因此也很耗时。
内存泄漏会创建处于已分配状态但不可访问的堆块,从而会加速碎片化。因此,释放不再需要的堆存储是程序员帮助减少碎片整理需求的一种方式。
### 诊断内存泄漏的工具
有很多工具可用于分析内存效率和安全性,其中我最喜欢的是 [valgrind][11]。为了说明该工具如何处理内存泄漏,这里给出 **leaky** 示例程序:
```
#include <stdio.h>
#include <stdlib.h>
int* get_ints(unsigned n) {
  int* ptr = malloc(n * sizeof(int));
  if (ptr != NULL) {
    unsigned i;
    for (i = 0; i < n; i++) ptr[i] = i + 1;
  }
  return ptr;
}
void print_ints(int* ptr, unsigned n) {
  unsigned i;
  for (i = 0; i < n; i++) printf("%3i\n", ptr[i]);
}
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) print_ints(arr, n);
  /** heap storage not yet freed... **/
  return 0;
}
```
**main** 函数调用了 **get_ints** 函数,后者会试着从堆中 **malloc** 32 个 4 字节的 **int**,然后初始化动态数组(如果 **malloc** 成功)。初始化成功后,**main** 函数会调用 **print_ints** 函数。程序中并没有调用 **free** 来对应 **malloc** 操作;因此,内存泄漏了。
如果安装了 **valgrind** 工具箱,下面的命令会检查 **leaky** 程序是否存在内存泄漏(**%** 是命令行提示符):
```
% valgrind --leak-check=full ./leaky
```
绝大部分输出都在下面给出了。左边的数字 207683 是正在执行的 **leaky** 程序的进程标识符。这份报告给出了泄漏发生位置的详细信息,本例中位置是在 **main** 函数所调用的 **get_ints** 函数中对 **malloc** 的调用处。
```
==207683== HEAP SUMMARY:
==207683==   in use at exit: 128 bytes in 1 blocks
==207683==   total heap usage: 2 allocs, 1 frees, 1,152 bytes allocated
==207683== 
==207683== 128 bytes in 1 blocks are definitely lost in loss record 1 of 1
==207683==   at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==207683==   by 0x109186: get_ints (in /home/marty/gc/leaky)
==207683==   by 0x109236: main (in /home/marty/gc/leaky)
==207683== 
==207683== LEAK SUMMARY:
==207683==   definitely lost: 128 bytes in 1 blocks
==207683==   indirectly lost: 0 bytes in 0 blocks
==207683==   possibly lost: 0 bytes in 0 blocks
==207683==   still reachable: 0 bytes in 0 blocks
==207683==   suppressed: 0 bytes in 0 blocks
```
如果把 **main** 函数改成在对 **print_ints** 的调用之后,再加上一个对 **free** 的调用,**valgrind** 就会对 **leaky** 程序给出一个干净的内存健康清单:
```
==218462== All heap blocks were freed -- no leaks are possible
```
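作为对照,下面给出一种修改后的 **main** 函数写法(示意),即在 **print_ints** 调用之后释放 **get_ints** 分配的堆存储:
```
int main() {
  const unsigned n = 32;
  int* arr = get_ints(n);
  if (arr != NULL) {
    print_ints(arr, n);
    free(arr); /* 与 get_ints 中的 malloc 配对,消除泄漏 */
  }
  return 0;
}
```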
### 静态区存储
在正统的 C 语言中,函数必须在所有块之外定义,这就杜绝了在一个函数体内定义另一个函数的可能(尽管有些 C 编译器以扩展方式支持这种嵌套函数)。我举的例子都是在所有块之外定义的函数。这样的函数要么是 **static** 即静态的,要么是 **extern** 即外部的,其中 **extern** 是默认值。
C 语言中,以 **static****extern** 修饰的函数和变量驻留在内存中所谓的 **静态区** 中,因为在程序执行期间该区域大小是固定不变的。这两个存储类型的语法非常复杂,我们应该回顾一下。在回顾之后,会有一个完整的代码示例来生动展示语法细节。在所有块之外定义的函数或变量默认为 **extern**;因此,函数和变量要想存储类型为 **static** ,必须显式指定:
```
/** file1.c: outside all blocks, five definitions  **/
int foo(int n) { return n * 2; }     /* extern by default */
static int bar(int n) { return n; }  /* static */
extern int baz(int n) { return -n; } /* explicitly extern */
int num1;        /* extern */
static int num2; /* static */
```
**extern** 和 **static** 的区别在于作用域:**extern** 修饰的函数或变量可以实现跨文件可见(需要声明)。相比之下,**static** 修饰的函数仅在 _定义_ 该函数的文件中可见,而 **static** 修饰的变量仅在 _定义_ 该变量的文件(或文件中的块)中可见:
```
static int n1;    /* scope is the file */
void func() {
   static int n2; /* scope is func's body */
   ...
}
```
如果在所有块之外定义了 **static** 变量,例如上面的 **n1**,该变量的作用域就是定义变量的文件。无论在何处定义 **static** 变量,变量的存储都在内存的静态区中。
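顺带一提,定义在函数体内的 **static** 变量虽然作用域仅限于该函数,但它的存储同样位于静态区,因此会在多次调用之间保留取值。下面是一个简短的示意程序(非原文代码):
```
#include <stdio.h>

void counter() {
  static int calls = 0; /* 存储在静态区,只在程序启动时初始化一次 */
  calls++;
  printf("counter called %d time(s)\n", calls);
}

int main() {
  counter(); /* 输出 1 */
  counter(); /* 输出 2 */
  counter(); /* 输出 3 */
  return 0;
}
```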
**extern** 函数或变量在给定文件中的所有块之外定义,但这样定义的函数或变量也可以在其他文件中声明。典型的做法是在头文件中 _声明_ 这样的函数或变量,只要需要就可以包含进来。下面这些简短的例子阐述了这些棘手的问题。
假设 **extern** 函数 **foo** 是在 **file1.c**_定义_ 的,有无关键字 **extern** 效果都一样:
```
/** file1.c **/
int foo(int n) { return n * 2; } /* definition has a body {...} */
```
必须在其他文件(或其中的块)中使用显式的 **extern** _声明_ 此函数才能使其可见。以下是使 **extern** 函数 **foo** 在文件 **file2.c** 中可见的声明语句:
```
/** file2.c: make function foo visible here **/
extern int foo(int); /* declaration (no body) */
```
回想一下,函数声明没有用大括号括起来的主体,而函数定义会有这样的主体。
为了便于查看,函数和变量声明通常会放在头文件中。准备好需要声明的源代码文件,然后就可以 **#include** 相关的头文件。下一节中的 **staticProg** 程序演示了这种方法。
至于 **extern** 的变量,规则就变得更棘手了(很抱歉增加了难度!)。任何 **extern** 的对象——无论函数或变量——必须 _定义_ 在所有块之外。此外,在所有块之外定义的变量默认为 **extern**
```
/** outside all blocks **/
int n; /* defaults to extern */
```
但是,只有当变量在其 _定义_ 中被显式初始化时,才可以在这个 _定义_ 上显式使用 **extern** 修饰LCTT 译注:换言之,如果下列代码中的 `int n1;` 行前加上 **extern**,该行就由 _定义_ 变成了 _声明_
```
/** file1.c: outside all blocks **/
int n1;             /* defaults to extern, initialized by compiler to zero */
extern int n2 = -1; /* ok, initialized explicitly */
int n3 = 9876;      /* ok, extern by default and initialized explicitly */
```
要使在 **file1.c** 中定义为 **extern** 的变量在另一个文件(例如 **file2.c**)中可见,该变量必须在 **file2.c** 中显式 _声明_**extern** 并且不能初始化(初始化会将声明转换为定义):
```
/** file2.c **/
extern int n1; /* declaration of n1 defined in file1.c */
```
为了避免与 **extern** 变量混淆,经验是在 _声明_ 中显式使用 **extern**(必须),但不要在 _定义_ 中使用(非必须且棘手)。对于函数,**extern** 在定义中是可选使用的,但在声明中是必须使用的。下一节中的 **staticProg** 示例会把这些点整合到一个完整的程序中。
### staticProg 示例
**staticProg** 程序由三个文件组成:两个 C 语言源文件(**static1.c** 和 **static2.c**)以及一个头文件(**static.h**),头文件中包含两个声明:
```
/** header file static.h **/
#define NumCount 100               /* macro */
extern int global_nums[NumCount];  /* array declaration */
extern void fill_array();          /* function declaration */
```
两个声明中的 **extern**一个用于数组另一个用于函数强调对象在别处“外部”_定义_数组 **global_nums** 在文件 **static1.c** 中定义(没有显式的 **extern**),函数 **fill_array** 在文件 **static2.c** 中定义(也没有显式的 **extern**)。每个源文件都包含了头文件 **static.h**。**static1.c** 文件定义了两个驻留在内存静态区域中的数组(**global_nums** 和 **more_nums**)。第二个数组有 **static** 修饰,这将其作用域限制为定义数组的文件 (**static1.c**)。如前所述, **extern** 修饰的 **global_nums** 则可以实现在多个文件中可见。
```
/** static1.c **/
#include <stdio.h>
#include <stdlib.h>
#include "static.h"             /* declarations */
int global_nums[NumCount];      /* definition: extern (global) aggregate */
static int more_nums[NumCount]; /* definition: scope limited to this file */
int main() {
  fill_array(); /** defined in file static2.c **/
  unsigned i;
  for (i = 0; i < NumCount; i++)
    more_nums[i] = i * -1;
  /* confirm initialization worked */
  for (i = 0; i < NumCount; i += 10) 
    printf("%4i\t%4i\n", global_nums[i], more_nums[i]);
    
  return 0;  
}
```
下面的 **static2.c** 文件中定义了 **fill_array** 函数,该函数由 **main**(在 **static1.c** 文件中)调用;**fill_array** 函数会给名为 **global_nums****extern** 数组中的元素赋值,该数组在文件 **static1.c** 中定义。使用两个文件的唯一目的是凸显 **extern** 变量或函数能够跨文件可见。
```
/** static2.c **/
#include "static.h" /** declarations **/
void fill_array() { /** definition **/
  unsigned i;
  for (i = 0; i < NumCount; i++) global_nums[i] = i + 2;
}
```
**staticProg** 程序可以如下编译:
```
% gcc -o staticProg static1.c static2.c
```
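编译成功后即可运行。由于 **fill_array** 将 **global_nums[i]** 设为 `i + 2`**main** 将 **more_nums[i]** 设为 `i * -1`,并每隔 10 个元素打印一行,因此输出大致如下(示意,此处假设可执行文件名为 `staticProg`
```
% ./staticProg
   2       0
  12     -10
  22     -20
  32     -30
  42     -40
  52     -50
  62     -60
  72     -70
  82     -80
  92     -90
```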
### 从汇编语言看更多细节
现代 C 编译器能够处理 C 和汇编语言的任意组合。编译 C 源文件时,编译器首先将 C 代码翻译成汇编语言。下面的命令可以把从上文 **static1.c** 文件生成的汇编语言保存下来:
```
% gcc -S static1.c
```
生成的文件就是 **static1.s**。这是文件顶部的一段代码,额外添加了行号以提高可读性:
```
    .file    "static1.c"          ## line  1
    .text                         ## line  2
    .comm    global_nums,400,32   ## line  3
    .local    more_nums           ## line  4
    .comm    more_nums,400,32     ## line  5
    .section    .rodata           ## line  6
.LC0:                             ## line  7
    .string    "%4i\t%4i\n"       ## line  8
    .text                         ## line  9
    .globl    main                ## line 10
    .type    main, @function      ## line 11
main:                             ## line 12
...
```
诸如 **.file**(第 1 行)之类的汇编语言指令以句点开头。顾名思义,指令会指导汇编程序将汇编语言翻译成机器代码。**.rodata** 指令(第 6 行)表示后面是只读对象,包括字符串常量 **"%4i\t%4i\n"**(第 8 行),**main** 函数(第 12 行)会使用此字符串常量来实现格式化输出。作为标签引入(通过末尾的冒号实现)的 **main** 函数(第 12 行),同样也是只读的。
在汇编语言中,标签就是地址。标签 **main:**(第 12 行)标记了 **main** 函数代码开始的地址,标签 **.LC0**:(第 7 行)标记了格式化字符串开头所在的地址。
**global_nums**(第 3 行)和 **more_nums**(第 4 行)数组的定义包含了两个数字400 是每个数组中的总字节数32 是每个数组(含 100 个 **int** 元素)中每个元素的比特数。(第 5 行中的 **.comm** 指令表示 **common name**,可以忽略。)
两个数组定义的不同之处在于 **more_nums** 被标记为 **.local**(第 4 行),这意味着其作用域仅限于其所在文件 **static1.s**。相比之下,**global_nums** 数组就能在多个文件中实现可见,包括由 **static1.c****static2.c** 文件翻译成的汇编文件。
最后,**.text** 指令在汇编代码段中出现了两次(第 2 行和第 9 行。术语“text”表示“只读”但也会涵盖一些读/写变量,例如两个数组中的元素。尽管本文展示的汇编语言是针对 Intel 架构的,但 Arm6 汇编也非常相似。对于这两种架构,**.text** 区域中的变量(本例中为两个数组中的元素)会自动初始化为零。
### 总结
C 语言中的内存高效和内存安全编程准则很容易说明,但可能会很难遵循,尤其是在调用设计不佳的库的时候。准则如下:
* 尽可能使用栈存储,进而鼓励编译器将通用寄存器用作暂存器,实现优化。栈存储代表了高效的内存使用并促进了代码的整洁和模块化。永远不要返回指向基于栈的存储的指针。
  * 小心使用堆存储。C和 C++)中的重难点是确保动态分配的存储尽快解除分配。良好的编程习惯和工具(如 **valgrind**)有助于攻克这些重难点。优先选用自身提供释放函数的库,例如 **nestedHeap** 代码示例中的 **free_all** 释放函数。
* 谨慎使用静态存储,因为这种存储会自始至终地影响进程的内存占用。特别是尽量避免使用 **extern** 和 **static** 数组。
本文 C 语言代码示例可在我的网站 (<https://condor.depaul.edu/mkalin>) 上找到。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/memory-programming-c
作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/malloc.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
[11]: https://www.valgrind.org/

View File

@ -0,0 +1,78 @@
[#]: subject: "Neither Windows, nor Linux! Shrine is Gods Operating System"
[#]: via: "https://itsfoss.com/shrine-os/"
[#]: author: "John Paul https://itsfoss.com/author/john/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
不是 Windows也不是 LinuxShrine 才是 “神之操作系统”
======
在生活中,我们都曾使用过多种操作系统。有些好,有些坏。但你能说你使用过由“神”设计的操作系统吗?今天,我想向你介绍 Shrine圣殿
### 什么是 Shrine
![Shrine 界面][1]
从介绍里,你可能想知道这到底是怎么回事。嗯,这一切都始于一个叫 Terry Davis 的人。在我们进一步介绍之前我最好提醒你Terry 在生前患有精神分裂症,而且经常不吃药。正因为如此,他在生活中说过或做过一些不被社会接受的事情。
总之,让我们回到故事的主线。在 21 世纪初Terry 发布了一个简单的操作系统。多年来,它不停地换了几个名字,有 J Operating System、LoseThos 和 SparrowOS 等等。他最终确定了 [TempleOS][2] 这个名字。他选择这个名字(神庙系统)是因为这个操作系统将成为神的圣殿。因此,神给 Terry 的操作系统规定了以下 [规格][3]
![video](https://youtu.be/LtlyeDAJR7A)
* 它将有 640×480 的 16 色图形
* 它将使用“单声道 8 位带符号的类似 MIDI 的声音采样”
* 它将追随 Commodore 64即“一个非网络化的简单机器编程是目标而不仅仅是达到目的的手段”
  * 它将只支持一个文件系统(名为 “Red Sea”
* 它将被限制在 10 万行代码内,以使它 “整体易于学习”。
  * “只支持 Ring-0 级,一切都在内核模式下运行,包括用户应用程序”
* 字体将被限制为 “一种 8×8 等宽字体”
* “对一切都可以完全访问。所有的内存、I/O 端口、指令和类似的东西都绝无限制。所有的函数、变量和类成员都是可访问的”
* 它将只支持一个平台,即 64 位 PC
Terry 用一种他称之为 HolyC神圣 C 语言的编程语言编写了这个操作系统。TechRepublic 称其为一种 “C++ 的修改版(‘比 C 多,比 C++ 少’)”。如果你有兴趣了解 HolyC我推荐[这篇文章][4] 和 [RosettaCode][5] 上的 HolyC 条目。
2013 年Terry 在他的网站上宣布TempleOS 已经完成。不幸的是,几年后的 2018 年 8 月Terry 被火车撞死了。当时他无家可归。多年来,许多人通过他在该操作系统上的工作关注着他。大多数人对他在如此小的体积中编写操作系统的能力印象深刻。
现在,你可能想知道这些关于 TempleOS 的讨论与 Shrine 有什么关系。好吧,正如 Shrine 的 [GitHub 页面][6] 所说,它是 “一个为异教徒设计的 TempleOS 发行版”。GitHub 用户 [minexew][7] 创建了 Shrine为 TempleOS 添加 Terry 忽略的功能。这些功能包括:
* 与 TempleOS 程序 99% 的兼容性
* 带有 Lambda Shell感觉有点像经典的 Unix 命令解释器
* TCP/IP 协议栈和开机即可上网
* 包括一个软件包下载器
minexew 正计划在未来增加更多的功能,但还没有宣布具体会包括什么。他有计划为 Linux 制作一个完整的 TempleOS 环境。
### 体验
让 Shrine 在虚拟机中运行是相当容易的。你所需要做的就是安装你选择的虚拟化软件(我的是 VirtualBox。当你为 Shrine 创建一个虚拟机时,确保它是 64 位的,并且至少有 512MB 的内存。
一旦你启动到 Shrine它会询问你是否要安装到你的虚拟硬盘上。安装完成后你也可以选择不安装你会看到一个该操作系统的导览你可以由此探索。
### 总结
TempleOS和 Shrine显然不是为了取代 Windows 或 Linux。即使 Terry 把它称为 “神之圣殿”,我相信在他比较清醒的时候,他也会承认这更像是一个业余的操作系统。考虑到这一点,最终完成的成果相当 [令人印象深刻][8]。在 12 年的时间里Terry 用他自己创造的语言写出了一个略微超过 10 万行代码的操作系统。他还编写了自己的编译器、图形库和几个游戏。所有这些都是在与他自己的心魔作斗争的同时完成的。
--------------------------------------------------------------------------------
via: https://itsfoss.com/shrine-os/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/shrine.jpg?resize=800%2C600&ssl=1
[2]: https://templeos.org/
[3]: https://web.archive.org/web/20170508181026/http://www.templeos.org:80/Wb/Doc/Charter.html
[4]: https://harrisontotty.github.io/p/a-lang-design-analysis-of-holyc
[5]: https://rosettacode.org/wiki/Category:HolyC
[6]: https://github.com/minexew/Shrine
[7]: https://github.com/minexew
[8]: http://www.codersnotes.com/notes/a-constructive-look-at-templeos/

View File

@ -0,0 +1,157 @@
[#]: subject: "Run containers on your Mac with Lima"
[#]: via: "https://opensource.com/article/21/9/run-containers-mac-lima"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
用 Lima 在你的 Mac 上运行容器
======
Lima 可以帮助克服在 Mac 上运行容器的挑战。
![Containers for shipping overseas][1]
在你的 Mac 上运行容器可能是一个挑战。毕竟,容器是基于 Linux 特有的技术,如 cgroups 和命名空间。
幸运的是macOS 拥有一个内置的虚拟机监控程序,允许在 Mac 上运行虚拟机VM。虚拟机监控程序是一个底层的内核功能而不是一个面向用户的功能。
`hyperkit` 是一个可以使用 macOS 虚拟机监控程序来运行虚拟机的 [开源项目][2]。`hyperkit` 被设计成一个“最小化”的虚拟机运行器。与 VirtualBox 不同,它没有用来管理虚拟机的花哨 UI 功能。
你可以获取 `hyperkit`,再配上一个运行着容器管理器的极简 Linux 发行版,然后把所有部分组合在一起。但这会涉及很多组件,听起来工作量不小;特别是当你还想用 `vpnkit`(一个用于创建让虚拟机网络感觉更像主机网络一部分的开源项目)让网络连接更加无缝的时候。
### Lima
既然 [`lima` 项目][3] 已经解决了这些细节问题,就没有理由再费这番功夫了。让 `lima` 运行起来的最简单方法之一是使用 [Homebrew][4]。你可以用这个命令安装 `lima`
```
$ brew install lima
```
安装可能需要一些时间,装好之后就可以开始享受乐趣了。为了让 `lima` 知道你已经准备好了,你需要启动它。下面是命令:
```
$ limactl start
```
如果这是你第一次运行,它会询问你是直接使用默认值,还是要修改其中的某一项。默认值相当安全,但我喜欢大胆一点。这就是为什么我会打开编辑器,把下面这段配置:
```
- location: "~"
# CAUTION: `writable` SHOULD be false for the home directory.
# Setting `writable` to true is possible but untested and dangerous.
writable: false
```
变成:
```
- location: "~"
# I *also* like to live dangerously -- Austin Powers
writable: true
```
正如注释中所说,这样做可能有风险。遗憾的是,许多现有的工作流程都依赖于挂载是可读写的。
默认情况下,`lima` 运行 `containerd` 来管理容器。`containerd` 本身也是一个相当简洁的管理器。虽然通常会使用 `dockerd` 这样的包装守护进程来补充那些易用性功能,但还有另一种方法。
### nerdctl 工具
`nerdctl` 工具是 Docker 客户端的直接替代品,它把这些易用性功能放在客户端,而不是服务器上。`lima` 允许你直接从虚拟机内部运行 `nerdctl`,而不需要在本地安装。
做完这些后,可以运行一个容器了!这个容器将运行一个 HTTP 服务器。你可以在你的 Mac 上创建这些文件:
```
$ ls
index.html
$ cat index.html
hello
```
现在,挂载并转发端口:
```
$ lima nerdctl run --rm -it -p 8000:8000 -v $(pwd):/html --entrypoint bash python
root@9486145449ab:/#
```
在容器内,运行一个简单的 Web 服务器:
```
$ lima nerdctl run --rm -it -p 8000:8000 -v $(pwd):/html --entrypoint bash python
root@9486145449ab:/# cd /html/
root@9486145449ab:/html# python -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
在另一个终端,你可以检查一切看起来都很好:
```
$ curl localhost:8000
hello
```
回到容器上,有一条记录 HTTP 客户端连接的日志信息:
```
10.4.0.1 - - [09/Sep/2021 14:59:08] "GET / HTTP/1.1" 200 -
```
只有一个文件还不够有趣,所以再加点东西。在服务器上按下 **CTRL-C**,然后添加另一个文件:
```
^C
Keyboard interrupt received, exiting.
root@9486145449ab:/html# echo goodbye > foo.html
root@9486145449ab:/html# python -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
检查你是否能看到新的文件:
```
$ curl localhost:8000/foo.html
goodbye
```
### 总结
总结一下,安装 `lima` 需要一些时间,但完成后,你可以做以下事情:
* 运行容器。
* 将你的主目录中的任意子目录挂载到容器中。
* 编辑这些目录中的文件。
* 运行网络服务器,在 Mac 程序看来,它们是在 localhost 上运行的。
这些都是通过 `lima nerdctl` 实现的。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/9/run-containers-mac-lima
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_2015-2-osdc-lead.png?itok=kAfHrBoy (Containers for shipping overseas)
[2]: https://www.docker.com/blog/docker-unikernels-open-source/
[3]: https://github.com/lima-vm/lima
[4]: https://brew.sh/