Merge branch 'LCTT:master' into master

alfred_hong 2022-09-19 11:51:56 +08:00 committed by GitHub
commit 39d842a8cb
29 changed files with 2370 additions and 957 deletions


@ -0,0 +1,107 @@
[#]: subject: (5 useful Moodle plugins to engage students)
[#]: via: (https://opensource.com/article/21/3/moodle-plugins)
[#]: author: (Sergey Zarubin https://opensource.com/users/sergey-zarubin)
[#]: collector: (lujun9972)
[#]: translator: (MareDevi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-15042-1.html)
5 款可以吸引学生的有用的 Moodle 插件
======
> 使用插件来赋予你的在线学习平台新的功能来激励学生。
![](https://img.linux.net.cn/data/attachment/album/202209/18/165423pkiq74kwzokqzoq7.jpg)
无论在哪里,优秀的在线学习平台对于教育都非常重要。教师们需要一种途径来开办课堂,学生们需要一个友好的用户界面来促进学习,而管理者也需要一种方法来监控教育系统的有效性。
Moodle 是一个开源的软件包,允许你创建一个带有互动在线课程的私人网站。它可以帮助人们进行虚拟的在线聚会,互相教授和学习,并在此过程中保持井井有条。
Moodle 的独特之处在于它的高可用性,而利用第三方解决方案还可以显著提高这种可用性。如果你访问 [Moodle 插件目录][2],你将会找到超过 1,700 种由开源社区开发的插件。
面对如此多的选择,为你的学员挑选出最好的插件可能是一个挑战。为了帮助你开始,这里是我挑选出来的五大插件,你可以将其添加到你的在线学习平台。
### Level up!
![Level up Moodle 插件][3]
> **[Level up! 官网](https://levelup.plus/)**
激励和吸引学习者是教育工作者最困难的任务之一。[Level up! 插件][4] 允许你将学习体验游戏化,将积分分配给完成任务的学生,并显示进度和等级提升。这会鼓励你的学生在健康的氛围中竞争,并成为一个很好的学习者。
另外,你可以完全控制学生所获得的积分,并且他们可以在达到一定等级的时候解锁内容。所有的这些功能都是免费提供的。如果你考虑付费,你可以购买一些额外的功能,如个人奖励和团队排行榜。
### BigBlueButton
![BigBlueButton Moodle 插件][5]
> **[BigBlueButton 官网](https://bigbluebutton.org/)**
[BigBlueButton][6] 可能是最知名的 Moodle 插件。这个开源的视频会议解决方案使得教育者能够让学生远程参与实时在线课程和小组协作活动。它提供了一些重要的功能,例如:实时屏幕共享、音视频通话、聊天、表情和分组讨论室。这款插件还可以让你记录你的直播课程。
BigBlueButton 让你能够在任何课程中创建多个活动链接、限制你的学生在你加入之前加入会话、创建自定义欢迎消息、管理你的录音等等。总而言之BigBlueButton 拥有你教授和参与在线课程所需要的一切。
### ONLYOFFICE
![ONLYOFFICE Moodle 插件][7]
> **[ONLYOFFICE 官网](https://www.onlyoffice.com/)**
[ONLYOFFICE 插件][8] 允许学习者和教育者在他们的浏览器中直接创建和编辑文本文档、电子表格和演示文档。无需安装任何额外的应用程序,他们就可以处理附在课程中的 .docx、.xlsx、.pptx、.txt 和 .csv 文件;打开 .pdf 文件进行查看;并应用复杂格式和对象,包括自动形状、表格、图表、方程式等等。
此外ONLYOFFICE 使得实时共同编辑文件成为可能,这意味着几个用户可以同时在同一个文件上工作。不同的权限(完全访问、评论、审查、只读和填表)使你更容易灵活地管理对文档的访问。
### Global Chat
![Global Chat Moodle 插件][9]
> **[Global Chat 官网](https://moodle.org/plugins/block_gchat)**
[Global Chat 插件][10] 允许教育者和学习者通过 Moodle 进行实时交流。该插件提供了你课程中所有用户的列表,当你点击一个用户的名字时,它会在页面底部打开一个聊天窗口,以便你们进行交流。
有了这个易于使用的工具,你不需要打开一个单独的窗口来开始在线对话。你可以在网页之间转换,而你的对话将始终保持开放。
### Custom certificate
![Custom certificate Moodle 插件][11]
> **[Custom certificate 官网](https://moodle.org/plugins/mod_customcert)**
另一个吸引学生的有效方法是提供证书作为完成课程的奖励。颁发结业证书的承诺有助于保持学生的进度和对培训的承诺。
[Custom certificate 插件][12] 允许你在你的网页浏览器中生成完全可定制的 PDF 证书。重要的是,该插件与 GDPR 要求兼容,而且证书有独特的验证码,所以你可以用它们进行真实认证。
### 更多丰富的 Moodle 插件
这些是我最喜欢的五个 Moodle 插件。你可以通过在 Moodle.org 上 [注册一个账户][13] 来试用它们,或者你可以托管你自己的 Moodle 实例(或者与你的系统管理员或 IT 人员商量,为你设置一个托管环境)。
如果这些插件不符合你的学习目标,可以看看其他可用的插件。如果你找到一个好的插件,请留下评论并告诉大家。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/moodle-plugins
作者:[Sergey Zarubin][a]
选题:[lujun9972][b]
译者:[MareDevi](https://github.com/MareDevi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sergey-zarubin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (阅读书籍的人和数字拷贝)
[2]: https://moodle.org/plugins/
[3]: https://opensource.com/sites/default/files/uploads/gamification.png (Level up Moodle 插件)
[4]: https://moodle.org/plugins/block_xp
[5]: https://opensource.com/sites/default/files/uploads/bigbluebutton.png (BigBlueButton Moodle 插件)
[6]: https://moodle.org/plugins/mod_bigbluebuttonbn
[7]: https://opensource.com/sites/default/files/uploads/onlyoffice_editors.png (ONLYOFFICE Moodle 插件)
[8]: https://github.com/logicexpertise/moodle-mod_onlyoffice
[9]: https://opensource.com/sites/default/files/uploads/global_chat.png (Global Chat Moodle 插件)
[10]: https://moodle.org/plugins/block_gchat
[11]: https://opensource.com/sites/default/files/uploads/certificate.png (Custom certificate Moodle 插件)
[12]: https://moodle.org/plugins/mod_customcert
[13]: https://moodle.com/getstarted/


@ -3,33 +3,30 @@
[#]: author: "Alan Smithee https://opensource.com/users/alansmithee"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15045-1.html"
在家庭实验室中规划 OTA 更新需要了解的 3 件事
规划 OTA 更新需要了解的 3 件事
======
在开始编写应用之前,为手机、物联网设备和边缘计算定义无线更新计划。
![Why and how to handle exceptions in Python Flask][1]
> 在开始编写应用之前,为手机、物联网设备和边缘计算定义无线更新计划。
图片来自 Unsplash.comCC0 协议
过去对系统的更新相对简单。当开发人员需要修改他们已经分发给公众的东西时,会发布一个更新程序供人们运行。用户将运行更新程序,允许用新文件替换旧文件并添加新文件。然而,即使有了这些“相对简单”的更新,也有一个问题。当用户安装好的系统处于意外状态时会发生什么?升级中断时会发生什么?当各种设备都在线时,这些问题同样重要,有时需要重要的安全更新。今天的许多更新都是通过无线、<ruby>空中下载技术<rt>over-the-air</rt></ruby>OTA的方式提供的连接不良、信号突然丢失或断电的可能性可能会对应该是次要更新的内容造成灾难性的影响。这些是你在计划提供 OTA 更新时需要考虑的三大策略。
过去对系统的更新相对简单。当开发人员需要修改他们已经分发给公众的东西时,会发布一个更新程序供人们运行。用户将运行更新程序,允许用新文件替换旧文件并添加新文件。然而,即使有了这些“相对简单”的更新,也有一个问题。当用户的安装处于意外状态时会发生什么?升级中断时会发生什么?当各种设备都在线时,这些问题同样重要,有时需要重要的安全更新。今天的许多更新都是通过无线、空中下载技术 (OTA) 的方式提供的,连接不良、信号突然丢失或断电的可能性可能会对应该是次要更新的内容造成灾难性的影响。这些是你在计划提供 OTA 更新时需要考虑的三大策略。
### 1、验证
### 1. 验证
TCP 协议内置了很多验证功能,因此当你 [向设备发送数据包][2] 时通常可以确信每个数据包都已完好无损地收到。但是TCP 无法报告它不知道的错误,因此由你来验证以下内容:
TCP 协议内置了很多验证功能,因此当你[向设备发送数据包][2]时通常可以确信每个数据包都已完好无损地收到。但是TCP 无法报告它不知道的错误,因此由你来验证以下内容:
* 你是否已发送更新所需的所有文件?设备无法接收最初未发送的内容。
* 你是否已发送更新所需的所有文件?设备无法接收没有发送的内容。
* 收到的文件和你发送的文件一样吗?至少,检查 SHA 校验和以验证文件完整性(可参考下面的示例代码)。
* 如果可能,请使用[数字签名][3]确保文件来自受信任的来源。
* 如果可能,请使用 [数字签名][3] 确保文件来自受信任的来源。
* 在允许更新开始之前,你必须验证设备能够应用更新。在提交更新之前检查权限和电池状态,并确保你的更新过程覆盖任何意外的用户事件,例如计划的重新启动或休眠。
* 最后,你必须验证声称已成功完成的更新是否已实际完成。在将更新正式标记为系统已完成之前,请检查目标设备上的文件位置和完整性。
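下面是一个最小化的 Python 示意片段,用来说明上面提到的 SHA 校验和比对思路(其中的函数名和参数均为假设的示例,并非原文作者的实现;期望的校验和应来自你信任的更新清单):
```
import hashlib

def sha256_of(path, chunk_size=65536):
    """分块读取文件并计算 SHA-256避免一次性载入大文件。"""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update_file(path, expected_sha256):
    """将实际校验和与更新清单中给出的期望值进行比较。"""
    return sha256_of(path) == expected_sha256.lower()
```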
### 2. 回退和故障状态
### 2回退和故障状态
更新的最坏情况是设备处于损坏状态,以至于它甚至不能继续中止的更新。在这种情况下,更新程序文件存在于目标设备上,但该过程已被中断。这可能会使设备处于未知状态,其中一些文件已被更新版本替换,而其他文件尚未被触及。在最坏的情况下,已更新的文件与尚未更新的文件不兼容,因此设备无法按预期运行。
更新的最坏情况是设备处于损坏状态,以至于它甚至不能继续中止的更新。在这种情况下,更新程序文件存在于目标设备上,但该过程已被中断。这可能会使设备处于未知状态,其中一些文件已被更新版本替换,而其他文件尚未被替换。在最坏的情况下,已更新的文件与尚未更新的文件不兼容,因此设备无法按预期运行。
有一些策略可以解决这个问题。初始更新步骤可能是安装专用于完成更新的特殊引导镜像或环境,并在系统上设置“标志”以确认更新正在进行中。这样可以确保即使设备在更新过程中突然断电,更新过程也会在下次启动时重新启动。仅在验证更新后才删除表示更新成功的标志。
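下面用一小段 Python 代码来示意这种“更新进行中”标志的做法(路径和函数名都是假设的示例,仅用于说明思路):
```
from pathlib import Path

# 示例路径:标志应放在掉电后依然存在的持久存储上
IN_PROGRESS_FLAG = Path("/var/lib/updater/update-in-progress")

def begin_update():
    IN_PROGRESS_FLAG.parent.mkdir(parents=True, exist_ok=True)
    IN_PROGRESS_FLAG.touch()      # 进入更新流程前先设置标志

def finish_update():
    IN_PROGRESS_FLAG.unlink()     # 只有在验证更新成功后才移除标志

def resume_if_interrupted(apply_update):
    # 每次启动时检查:若标志仍在,说明上次更新被打断,需要重新执行
    if IN_PROGRESS_FLAG.exists():
        apply_update()
```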
@ -37,15 +34,15 @@ TCP 协议内置了很多验证功能,因此当你[向设备发送数据包][2
但是,在更新被授予启动权限之前,用户(如果有的话)应该能够延迟或忽略更新。
### 3. 附加更新
### 3附加更新
在许多边缘和物联网设备中,目标设备的底层是不可变的。更新只会添加到系统的已知状态。 [Fedora Silverblue][4] 之类的项目正在证明这种模式可以在许多市场上发挥作用,因此这种奢侈可能会变得司空见惯。不过,在那之前,成功应用更新的一部分是了解你将要影响的环境。
在许多边缘和物联网设备中,目标设备的底层是不可变的。更新只会添加到系统的已知状态。 [Fedora Silverblue][4] 之类的项目正在证明这种模式可以在许多领域发挥作用,因此这种奢侈的做法可能会变得司空见惯。不过,在那之前,成功应用更新的一部分是了解你将要影响的环境。
不过,你不需要不可变的核心来应用附加更新。你可以构建一个使用相同概念的系统,将更新作为添加库或包的一种方式,而无需修改旧版本。作为此类更新的最后一步,具有更新路径的可执行文件是你所做的唯一实际修订。
### OTA 更新
世界越来越无线化。对于手机、物联网设备和[边缘计算][5]OTA 更新通常是唯一的选择。实施 OTA 更新策略需要仔细规划并仔细考虑不可能的情况。你最了解的目标设备,因此请在开始编码之前规划好你的更新架构
世界越来越无线化。对于手机、物联网设备和 [边缘计算][5]OTA 更新通常是唯一的选择。实施 OTA 更新策略需要仔细规划,并仔细考虑各种意外情况。你最了解你的目标设备,因此请在开始编码之前规划好你的更新架构。
--------------------------------------------------------------------------------
@ -54,7 +51,7 @@ via: https://opensource.com/article/22/9/plan-ota-updates-edge
作者:[Alan Smithee][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,42 @@
[#]: subject: "A Project For An Open Source 3D-Printed VR Headgear From Europe"
[#]: via: "https://www.opensourceforu.com/2022/09/a-project-for-an-open-source-3d-printed-vr-headgear-from-europe/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "zjsoftceo"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15044-1.html"
来自欧洲的一个开源 3D 打印 VR 头盔项目
======
![](https://www.opensourceforu.com/wp-content/uploads/2022/09/virtual-reality-4-1536x864.jpg)
> 三家欧洲企业创建了一个 6 GHz WiFi 6E 无线开源虚拟现实头盔。
捷克 3D 打印专家 Prusa Research 公司正在与模拟器开发商 Vrgineers 和英国的 Somnium Space 合作开发 Somnium VR ONE 头盔。这款产品可以连接或者独立使用,旨在尽可能地开放,来改变虚拟现实市场中受限的供应。
由于 Android 11 操作系统是一个开源的操作系统,其源代码是公开的,因此它将在不受限制的商业许可下出售。其中央处理单元是高通骁龙 XR2 CPU支持 microSD 存储卡,并拥有 8GB 的 LPDDR5 内存和 512GB UFS 闪存。
它采用新的 6 GHz 的 WiFi 6E 高带宽无线协议,而不是目前拥挤的 5GHz 和 2.4GHz WiFi 频率,以实现更高的带宽和低延迟连接。它包括两个 3.2 英寸 2880RGB * 2880 快速液晶屏幕,具有 120 度水平视野和 100 度垂直视野。
它具有两个用于外部小工具的 USB-C 10 Gbit/s 链路,和一个 USB-C USB 2.0 电池组USB 3.2 Gen2。Somnium Space 与布拉格的 VRgineers 合作,在线销售电子产品和独有的镜头,使用户能够 3D 打印自己的头盔,此外,也会提供完整的头盔。
该企业于 2012 年在布拉格成立,已经拥有 700 多名员工。开源的 Prusa i3 设计是世界上使用最广泛的 3D 打印机,每月从布拉格直接向 160 多个国家运送超过 10000 台 Original Prusa 打印机。
<ruby>合成训练环境<rt>Synthetic Training Environments</rt></ruby>STE是由捷克共和国的模拟器开发商 VRgineers 向企业和政府客户提供的。它创造了被称为 XTAL 的专业 8K 头盔,该头盔被 NASA、空客防务与航天公司和 BAE Systems 公司使用,目前在布拉格、布尔诺和拉斯维加斯拥有一支由 50 名专家组成的国际团队。
总部位于伦敦的 Somnium Space 是一个建立在区块链上的开放、社交和永久虚拟现实平台。由于其独特的 NFT 的去中心化经济,用户可以拥有、交易和交换数字商品而无需获得授权。它已经与 Lynx 以及高通公司和 Ultraleap 公司合作开发了 Lynx R-1这是一个独立的增强现实AR头盔设计。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/a-project-for-an-open-source-3d-printed-vr-headgear-from-europe/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[zjoftceo](https://github.com/zjsoftceo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed


@ -0,0 +1,93 @@
[#]: subject: "Wow! Torvalds Modified Fedora Linux to Run on his Apple M2 Macbook"
[#]: via: "https://news.itsfoss.com/fedora-apple-torvalds/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "littlebirdnest"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-15041-1.html"
Torvalds 为自己的 Apple M2 Macbook 专门修改了 Fedora Linux
======
> Linus Torvalds 让 Fedora Linux Workstation 36 成功运行在 Apple Macbook Air M2 上。666
![Wow! Torvalds Modified Fedora Linux to Run on his Apple M2 Macbook][1]
Linus Torvalds 喜欢写代码和修复代码。当然,这是他的技术专长。
如果你知道的话,他就是那个因为买不起 UNIX转头就创造了 Linux 的家伙。
出于类似的原因,他还在 BitKeeper 不再免费使用后构建了 Git。
即使在今天,他仍继续着他的动手精神和“没有我解决不了的问题”的态度。
他设法在他的 Apple Macbook Air M2 上运行了 Fedora Linux 36 Workstation 版本。
**注意**:从 Asahi Linux 的 Hector Martin 那里得知Linus Torvalds 似乎在这里使用了 [Leif 的工具包](https://github.com/leifliddy/asahi-fedora-builder)。所以,你可能想多了,他并没有从头开始做所有事情来让它发挥作用。
![GIF][3]
多亏了 ZDNet 对 Torvalds 的 [采访](https://linux.cn/article-15039-1.html),我们才得知了这一令人兴奋的尝试。
### Apple M2 芯片上的 Fedora Linux
Apple Macbook Air 是一款出色的笔记本电脑。但是,它不能完全按照消费者想要的方式运行 Linux。
然而Linus Torvalds 似乎是使 Linux 运行在苹果电脑上的天才。
尽管苹果基于 ARM 的 M2 芯片没有 Fedora 移植,但他还是做到了。
请注意,这并不意味着你可以立即在 Macbook Air M2 上运行 Fedora Linux。只有像 Torvalds 这样的 Linux 高手才能让它运行起来。
他说,即使没有图形加速和在 GNOME 桌面环境中缺少某些图形效果(例如屏幕调光),这种体验也很出色。
> 我喜欢这种方式,它使显示更加迅捷。我可能也会在我的其他机器上关掉这些。
事实上,总的来说,这是一项令人兴奋的成就!
### Apple 芯片上 Linux 的现状
不仅是 Linus Torvalds而且每个人都对 Apple M1/M2 芯片的性能印象深刻。
事实上,他利用 Macbook Air M2 发布了 **Linux 内核 5.19**。
> **[Linus Torvalds 使用 Apple MacBook 硬件发布 Linux Kernel 5.19](https://news.itsfoss.com/linux-kernel-5-19-release/)**
尽管我们很想尝试一下,但 Apple 的 M2 还没有为 Linux 做好准备。
幸运的是,像 [Asahi Linux](https://asahilinux.org/) 这样的项目一直在不断改进对 Apple 芯片的支持。他们还设法使 [Linux 在最新的 Apple M2 芯片上运行](https://asahilinux.org/2022/07/july-2022-release/)。
而且,在 Linux 创造者的努力下,我们应该很快就能在 Macbook 上看到完整的 Linux 体验。
到目前为止,你可以借助 Asahi Linux 让它运行起来,但对于大多数用户来说,它还不能作为日常使用的系统。
#### 推荐阅读 📖
有兴趣了解更多关于 Torvalds 的知识吗?我们这里有一个有趣的收藏👇
> **[Linus Torvalds关于 Linux 创造者的 20 个事实](https://itsfoss.com/linus-torvalds-facts/)**
*💬 你如何看待在 Apple 硬件上运行的 Fedora Linux你希望某个发行版可以在 Apple M1/M2 驱动的设备上运行吗?是哪个发行版?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/fedora-apple-torvalds/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[littlebirdnest](https://github.com/littlebirdnest)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/torvalds-fedora-m2-macbook.png
[2]: https://github.com/leifliddy/asahi-fedora-builder
[3]: https://tenor.com/embed/5289253
[4]: https://www.zdnet.com/article/linus-torvalds-talks-rust-on-linux-his-work-schedule-and-life-with-his-m2-macbook-air/
[5]: https://news.itsfoss.com/linux-kernel-5-19-release/
[7]: https://asahilinux.org/
[8]: https://asahilinux.org/2022/07/july-2022-release/
[9]: https://itsfoss.com/linus-torvalds-facts/


@ -1,36 +0,0 @@
[#]: subject: "Google Uses Fully Homomorphic Open Source Duality-Led Encryption Library"
[#]: via: "https://www.opensourceforu.com/2022/09/google-uses-fully-homomorphic-open-source-duality-led-encryption-library/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Google Uses Fully Homomorphic Open Source Duality-Led Encryption Library
======
*Partnership Growth Speeds Up FHE Market Adoption*
In accordance with a news release from Duality Technologies, Google has merged its open source Fully Homomorphic Encryption (FHE) Transpiler, which was developed using the XLS SDK and is accessible on GitHub, with the leading open source fully homomorphic encryption library, OpenFHE. Developer adoption of FHE will increase as a result of making cryptographic knowledge simpler and more approachable.
A class of encryption techniques known as FHE differs from more common encryption techniques in that it enables computation to be done directly on encrypted data without the requirement for a secret key. A community of well-known cryptographers founded OpenFHE, a library with roots in post-quantum open source lattice cryptography.
The library was built for optimal usability, enhanced APIs, modularity, cross-platform portability, and, when combined with hardware, a project accelerator. Developers can operationalize encrypted data using high-level code, such as C++, which is frequently used on unencrypted data, by combining OpenFHE with Google's Transpiler without having to learn cryptography.
The Google Transpiler simplifies the procedure for utilising FHE-powered applications without necessitating the extensive software development expertise currently required to construct FHE from scratch. This fills the gap occasionally encountered by software designers and developers who want to benefit from FHE's capabilities without having to go through a challenging learning curve.
Yuriy Polyakov, senior director of cryptography research and principal scientist at Duality, added, “Our team has achieved significant milestones with our OpenFHE library, and it has quickly become the choice for many of today's technology leaders, like Google. The Google Transpiler provides access to the latest features of OpenFHE for the community of application developers who are not FHE experts.”
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/google-uses-fully-homomorphic-open-source-duality-led-encryption-library/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed


@ -1,40 +0,0 @@
[#]: subject: "A Project For An Open Source 3D-Printed VR Headgear From Europe"
[#]: via: "https://www.opensourceforu.com/2022/09/a-project-for-an-open-source-3d-printed-vr-headgear-from-europe/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "zjsoftceo"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
A Project For An Open Source 3D-Printed VR Headgear From Europe
======
*A 6GHz WiFi 6E wireless open source virtual reality headset has been created by three European businesses.*
On the Somnium VR ONE headgear, the Czech 3D printing expert Prusa Research is collaborating with the UK's Somnium Space and simulator creator Vrgineers. This is intended to change the constrained supply in the virtual reality market by being as open as feasible and can be connected or standalone.
With the Android 11 operating system being an open source operating system with the source code publicly available, it will be sold under an unrestricted commercial licence. The Qualcomm Snapdragon XR2 CPU will be its central processing unit, and it will support microSD memory cards and have 8 GB of LPDDR5 RAM and 512 GB of UFS flash storage.
Instead of the currently congested 5GHz and 2.4Ghz WiFi frequencies, it will employ the new WiFi 6e high bandwidth wireless protocol in the 6GHz frequency for higher bandwidth and low latency connections. It will include two 3.2-inch 2880RGB*2880 Fast LCD screens with a 120-degree horizontal field of vision (FoV) and a 100-degree vertical FoV.
It will have two USB-C 10 Gbit/s links for external gadgets and a USB-C USB2.0 battery pack (USB 3.2 Gen2). Somnium Space will sell the electronics and unique lenses online enabling users to 3D print their own headsets in collaboration with Vrgineers, a VR training company in Prague. There will also be fully constructed headsets available.
The business was established in Prague in 2012, and it already employs over 700 people. The open source Prusa i3 design is the most widely used 3D printer in the world, with direct shipments from Prague to over 160 countries of over 10,000 Original Prusa printers each month.
Synthetic Training Environments (STE) are provided by Vrgineers, a simulator developer in the Czech Republic, to business and governmental clients. It has created the professional 8K headgear known as XTAL, which is used by NASA, Airbus Defense & Space, and BAE Systems, and it currently employs an international team of 50 specialists in Prague, Brno, and Las Vegas.
London-based Somnium Space is an open, social, and permanent virtual reality platform built on blockchain. Users can own, trade, and exchange digital goods without obtaining authorization thanks to its distinct decentralised NFT-based economy. It has already collaborated with Lynx, Qualcomm, and Ultraleap on the Lynx R-1, a standalone augmented reality (AR) headset design.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/a-project-for-an-open-source-3d-printed-vr-headgear-from-europe/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[zjoftceo](https://github.com/zjsoftceo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed


@ -1,90 +0,0 @@
[#]: subject: "Wow! Torvalds Modified Fedora Linux to Run on his Apple M2 Macbook"
[#]: via: "https://news.itsfoss.com/fedora-apple-torvalds/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Wow! Torvalds Modified Fedora Linux to Run on his Apple M2 Macbook
======
Linus Torvalds made Fedora Linux Workstation 36 work with Apple Macbook Air M2. Nice!
![Wow! Torvalds Modified Fedora Linux to Run on his Apple M2 Macbook][1]
Linus Torvalds likes to build and fix things. Of course, he has the technical expertise to tinker with various things.
Not a surprise if you know that he created Linux as a clone of UNIX from scratch because he could not afford a UNIX system.
For a similar reason, he also built Git after BitKeeper was no longer free to use.
He continues his tinkerer spirit and the 'I can fix that' attitude even today.
He managed to run Fedora Linux 36 Workstation edition on his Apple Macbook Air M2.
**Note**: As informed by Hector Martin from Asahi Linux, it seems Linus Torvalds used [Leif's tooling packages][2] here. So, he didn't make everything from scratch to make it work, if you assumed otherwise.
![GIF][3]
We got to spot this exciting ordeal thanks to [ZDNet's interview][4] with Torvalds.
### Fedora Linux on Apple M2 Silicon
Apple Macbook Air is an excellent laptop. But, it cannot entirely run Linux the way a consumer would want.
However, it seems that Linus Torvalds is a genius at making Linux work with Apple computers.
Even though there were no Fedora ports for Apple's ARM-based M2 chip, he did it anyway.
Note that it does not mean you can run Fedora Linux on Macbook Air M2 immediately. It is only suitable for Linux wizards like Torvalds to be able to make it work.
He says the experience is snappy even without graphics acceleration and the lack of some graphical effects on the GNOME desktop environment such as screen dimming.
Indeed, this is an exciting achievement in general!
### The State of Linux on Apple Silicon
Not just Linus Torvalds, but everyone has been impressed with Apple M1/M2 chips for their performance.
In fact, he utilized the Macbook Air M2 to release **Linux Kernel 5.19**.
[Linus Torvalds Uses Apple MacBook Hardware to Release Linux Kernel 5.19][5]
As much as we would love to try it, Apple's M2 is not ready for Linux yet.
Fortunately, projects like [Asahi Linux][7] have constantly been improving Apple silicon support. They have also managed to make [Linux work on the latest Apple M2 chip][8].
And, with efforts from the creator of Linux, it should be sooner than later we get to see a complete Linux experience on Macbook.
As of now, you can make it work with Asahi Linux, but it is still not something most users can rely on as a daily driver.
#### Suggested Read 📖
Interested in learning a bit more on Torvalds? We have an interesting collection here 👇
[Linus Torvalds: 20 Facts About the Creator of Linux][9]
*💬 What do you think about Fedora Linux running on Apple hardware? Do you want a specific distro to run on Apple M1/M2 powered devices? What would be that?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/fedora-apple-torvalds/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/torvalds-fedora-m2-macbook.png
[2]: https://github.com/leifliddy/asahi-fedora-builder
[3]: https://tenor.com/embed/5289253
[4]: https://www.zdnet.com/article/linus-torvalds-talks-rust-on-linux-his-work-schedule-and-life-with-his-m2-macbook-air/
[5]: https://news.itsfoss.com/linux-kernel-5-19-release/
[7]: https://asahilinux.org/
[8]: https://asahilinux.org/2022/07/july-2022-release/
[9]: https://itsfoss.com/linus-torvalds-facts/


@ -0,0 +1,86 @@
[#]: subject: "Penpot is a Solid Open-Source Figma Alternative to Look Out for!"
[#]: via: "https://news.itsfoss.com/penpot-figma-alternative/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: "zjsoftceo"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Penpot is a Solid Open-Source Figma Alternative to Look Out for!
======
Penpot is a free and open-source solution as an alternative to Figma and similar design tools. What do you think?
![Penpot is a Solid Open-Source Figma Alternative to Look Out for!][1]
Adobe is acquiring the popular design tool [Figma][2] for a whopping **$20 billion**.
As usual, it is the big tech eliminating the competition by acquiring businesses. So, not entirely a piece of exciting news.
But, **what's exciting** is we came across a free and open-source design tool that gets its inspiration from Figma and does a few things better!
### Penpot: Free & Open-Source Design Tool in Development
![Penpot UI][3]
[Penpot][4] is an open-source project in active development. It is in its beta phase following its launch on [ProductHunt][5] nearly two years ago.
**Here's what makes Penpot interesting:**
* Free and open-source (of course).
* Option to Self-host.
* Cross-platform.
* Using SVG as the native format.
* Web-based.
* Featuring industry-standard features (inspired by Figma).
You can watch its official video to know the basics of it:
![Penpot for Beginners][6]
The major highlight of Penpot is the use of SVG as its native format. With SVG files, you get compatibility with many vector graphics editing tools.
So you do not get locked down with a proprietary file format that can be accessed using a particular application.
Penpot gives you the absolute best of open standards.
The **CEO of Penpot**, *Pablo Ruiz-Múzquiz*, mentions more about it:
So, using SVG as the native format has a lot of advantages!
At the moment, the project is in its beta stage and constantly improving with plenty of skilled contributors in the project.
**This can turn out to be the most useful open-source alternative to Figma, breaking out of big tech for design tools.**
You can self-host it or use the cloud app to test it out. Sign up at its official website to learn and experiment with it.
You can also check out its [GitHub page][7] to explore more.
[Penpot][8]
This also reminds me of [Akira][9], which aimed to be a native Linux app for UI and UX design. It is still in its early development stage, but such efforts are always appreciated when it involves Linux or the open-source initiative.
*💬 What do you think about Penpot as an open-source alternative to Figma?*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/penpot-figma-alternative/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/content/images/size/w1200/2022/09/penpot-opensource-figma-ft.jpg
[2]: https://www.figma.com/
[3]: https://news.itsfoss.com/content/images/2022/09/penpot-screenshot.jpg
[4]: https://penpot.app/
[5]: https://www.producthunt.com/products/penpot?utm_source=badge-featured&utm_medium=badge#penpot
[6]: https://youtu.be/JozESuPcVpg
[7]: https://github.com/penpot/penpot
[8]: https://penpot.app/
[9]: https://github.com/akiraux/Akira


@ -0,0 +1,192 @@
[#]: subject: "20 Facts About Linus Torvalds, the Creator of Linux and Git"
[#]: via: "https://itsfoss.com/linus-torvalds-facts/"
[#]: author: "Abhishek Prakash https://itsfoss.com/"
[#]: collector: "lkxed"
[#]: translator: "gpchn"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
20 Facts About Linus Torvalds, the Creator of Linux and Git
======
*Brief: Some known, some lesser known; here are 20 facts about Linus Torvalds, creator of the Linux kernel.*
![Linus Torvalds, creator of Linux and Git][1]
[Linus Torvalds][2], a Finnish student, developed a Unix-like operating system while he was doing his master's in the year 1991. Since then, it's sparked a revolution: today it powers most of the web, many embedded devices and every one of the [top 500 supercomputers][3].
I've already written about some less known [facts about Linux][4]. This article is not about Linux. It's about its creator, Linus Torvalds.
I learned a number of things about Torvalds by reading his biography [Just for Fun][5]. If you're interested, you can [order a copy of the biography from Amazon][6]. (This is an [affiliate][7] link.)
### 20 Interesting facts about Linus Torvalds
You'll probably already know some of these facts about Linus but the chances are that you'll learn some new facts about him by reading this.
#### 1. Named after a Nobel prize winner
Linus Benedict Torvalds was born on December 28th 1969 in Helsinki. He comes from a family of journalists. His father [Nils Torvalds][11] is a Finnish politician and a likely candidate for president in future elections.
He was named after [Linus Pauling][12], a double Nobel prize winner in Chemistry and Peace.
#### 2. All the Torvalds in the world are relatives
While you may find several people with the name Linus, you won't find many people with the name Torvalds because the correct spelling is actually Torvald (without the s). His grandfather changed his name from Torvald to Torvalds, adding an s at the end. And thus the Torvalds dynasty (if I can call it that) began.
Since it's such an unusual surname, there are hardly 30 Torvalds in the world and they're all relatives, claims Linus Torvalds in his biography.
![Linus Torvalds with sister Sara Torvalds][13]
#### 3. Commodore Vic 20 was his first computer
At the age of 10, Linus started writing programs in BASIC on his maternal grandfather's Commodore Vic 20. This is when he discovered his love for computers and programming.
#### 4. Second Lieutenant Linus Torvalds
Though he preferred to spend time on computers rather than in athletic activities, he had to attend compulsory military training. He held the rank of Second Lieutenant.
#### 5. He created Linux because he didn't have money for UNIX
In early 1991, unhappy with [MS-DOS][14] and [MINIX][15], Torvalds wanted to buy a UNIX system. Luckily for us, he didn't have enough money. So he decided to make his own clone of UNIX, from scratch.
#### 6. Linux could have been called Freax
In September '91, Linus announced Linux (standing for Linus's MINIX) and encouraged his colleagues to use its source code for wider distribution.
Linus thought that the name Linux was too egotistical. He wanted to change it to Freax (based on free, freak and MINIX), but his friend Ari Lemmke had already created a directory called Linux on his FTP server. And thus the name Linux continued.
#### 7. Linux was his main project at University
“Linux: A Portable Operating System” was the title of his thesis for his M.Sc.
#### 8. He married his student
In 1993, when he was teaching at the University of Helsinki, he gave the task of composing email as homework to the students. Yeah, composing emails was a big deal back then.
A female student named Tove Monni completed the task by sending him an email asking him out on a date. He accepted and three years later the first of their three daughters was born.
Shall I say he started the internet dating trend? Hmm … nah! Let's leave it there ;)
![Linus Torvalds with his wife Tove Monni Torvalds][16]
#### 9. Linus has an asteroid named after him
He has numerous awards to his name, including an asteroid named [9793 Torvalds][17].
#### 10. Linus had to battle for the trademark of Linux
Linux is a trademark registered to Linus Torvalds. Torvalds didn't care about the trademark initially, but in August 1994, a William R. Della Croce, Jr. registered the Linux trademark and started demanding royalties from Linux developers. Torvalds sued him in return and in 1997, the case was settled.
![Who is Linus Torvalds? Know about him in 2 minutes!][18]
#### 11. Steve Jobs wanted him to work on Apple's macOS
In 2000, Apple's founder [Steve Jobs invited him to work on Apple's macOS][19]. Linus refused the lucrative offer and continued to work on the Linux kernel.
#### 12. Linus also created Git
Most people know Linus Torvalds for creating the Linux kernel. But he also created [Git][20], a version control system that is extensively used in software development worldwide.
Till 2005, (then) proprietary service [BitKeeper][21] was used for Linux kernel development. When Bitkeeper shut down its free service, Linus Torvalds created Git on his own because none of the other version control systems met his needs.
#### 13. Linus hardly codes these days
Though Linus works full time on the Linux kernel, he hardly writes any code for it anymore. In fact, most of the code in the Linux kernel is by contributors from around the world. He ensures that things go fine at each release with the help of kernel maintainers.
#### 14. Torvalds hates C++
Linus Torvalds has a strong [dislike for the C++ programming language][22]. He has been very vocal about it. He jokes that the Linux kernel compiles faster than a C++ program.
#### 15. Even Linus Torvalds found Linux difficult to install (you can feel good about yourself now)
A few years ago, Linus said that [he found Debian difficult to install][23]. He is [known to be using Fedora][24] on his main workstation.
#### 16. He loves scuba diving
Linus Torvalds loves scuba diving. He even created [Subsurface][25], a dive logging tool for scuba divers. Youll be surprised that sometimes he even answers general questions on its forum.
![Linus Torvalds in Scuba Gear][26]
#### 17. The foul-mouthed Torvalds has improved his behavior
Torvalds is known for using [mild expletives][27] on the Linux kernel mailing list. This has been criticized by some in the industry. However, it would be difficult to criticize his banter of “[F**k you, NVIDIA][28]” as it prompted better support for the Linux kernel from NVIDIA.
In 2018, [Torvalds took a break from Linux kernel development to improve his behavior][29]. This was done just before he signed the controversial [code of conduct for Linux kernel developers][30].
![Linus Torvalds Middle finger to Nvidia : Fuck You Nvidia][31]
#### 18. He is too shy to speak in public
Linus doesn't feel comfortable with public speaking. He doesn't attend many events. And when he does, he prefers to sit down and be interviewed by the host. This is his favorite way of doing a public talk.
#### 19. Not a social media buff
[Google Plus][32] is the only social media platform he has used. He even spent some time [reviewing gadgets][33] there in his free time. Google Plus is now discontinued so he has no other social media accounts.
#### 20. Torvalds is settled in the USA
Linus moved to the US in 1997 and settled there with his wife Tove and their three daughters. He became a US citizen in 2010. At present, he works full-time on the Linux kernel as part of the [Linux Foundation][34].
It's difficult to say what the net worth of Linus Torvalds is or how much Linus Torvalds earns because this information has never been made public.
![Tove and Linus Torvalds with their daughters Patricia, Daniela and Celeste][35]
Picture credit: [opensource.com][36]
If you're interested in learning more about the early life of Linus Torvalds, I recommend reading his biography entitled [Just for Fun][37].
*Disclaimer: Some of the images here have been taken from the internet. I do not own the copyright to the images. I also do not intend to invade the privacy of the Torvalds family with this article.*
--------------------------------------------------------------------------------
via: https://itsfoss.com/linus-torvalds-facts/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/wp-content/uploads/2017/12/Linus-Torvalds-featured-800x450.png
[2]: https://en.wikipedia.org/wiki/Linus_Torvalds
[3]: https://itsfoss.com/linux-runs-top-supercomputers/
[4]: https://itsfoss.com/facts-linux-kernel/
[5]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[6]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[7]: https://itsfoss.com/affiliate-policy/
[8]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[9]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[10]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[11]: https://en.wikipedia.org/wiki/Nils_Torvalds
[12]: https://en.wikipedia.org/wiki/Linus_Pauling
[13]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_and_sara_Torvalds.jpg
[14]: https://en.wikipedia.org/wiki/MS-DOS
[15]: https://www.minix3.org/
[16]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_torvalds-wife-800x533.jpg
[17]: http://enacademic.com/dic.nsf/enwiki/1928421
[18]: https://youtu.be/eE-ovSOQK0Y
[19]: https://www.macrumors.com/2012/03/22/steve-jobs-tried-to-hire-linux-creator-linus-torvalds-to-work-on-os-x/
[20]: https://en.wikipedia.org/wiki/Git
[21]: https://www.bitkeeper.org/
[22]: https://lwn.net/Articles/249460/
[23]: https://www.youtube.com/watch?v=qHGTs1NSB1s
[24]: https://plus.google.com/+LinusTorvalds/posts/Wh3qTjMMbLC
[25]: https://subsurface-divelog.org/
[26]: https://itsfoss.com/wp-content/uploads/2017/12/Linus_Torvalds_in_SCUBA_gear.jpg
[27]: https://www.theregister.co.uk/2016/08/26/linus_torvalds_calls_own_lawyers_nasty_festering_disease/
[28]: https://www.youtube.com/watch?v=_36yNWw_07g
[29]: https://itsfoss.com/torvalds-takes-a-break-from-linux/
[30]: https://itsfoss.com/linux-code-of-conduct/
[31]: https://itsfoss.com/wp-content/uploads/2012/09/Linus-Torvalds-Fuck-You-Nvidia.jpg
[32]: https://plus.google.com/+LinusTorvalds
[33]: https://plus.google.com/collection/4lfbIE
[34]: https://www.linuxfoundation.org/
[35]: https://itsfoss.com/wp-content/uploads/2017/12/patriciatorvalds.jpg
[36]: https://opensource.com/life/15/8/patricia-torvalds-interview
[37]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[38]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[39]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID
[40]: https://www.amazon.com/dp/0066620732?tag=AAWP_PLACEHOLDER_TRACKING_ID


@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/21/8/first-programming-language"
[#]: author: "Jen Wike Huger https://opensource.com/users/jen-wike"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "gpchn"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "


@ -1,105 +0,0 @@
[#]: subject: (5 useful Moodle plugins to engage students)
[#]: via: (https://opensource.com/article/21/3/moodle-plugins)
[#]: author: (Sergey Zarubin https://opensource.com/users/sergey-zarubin)
[#]: collector: (lujun9972)
[#]: translator: (MareDevi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
5 useful Moodle plugins to engage students
======
Use plugins to give your e-learning platform new capabilities that
motivate students.
![Person reading a book and digital copy][1]
A good e-learning platform is important for education all over the world. Teachers need a way to hold classes, students need a friendly user interface to facilitate learning, and administrators need a way to monitor the educational system's effectiveness.
Moodle is an open source software package that allows you to create a private website with interactive online courses. It's helping people gather virtually, teach and learn from one another, and stay organized while doing it.
What makes Moodle unique is its high usability that can significantly increase with third-party solutions. If you visit the [Moodle plugins directory][2], you'll find over 1,700 plugins developed by the open source community.
Picking the best plugins for your learners might be a challenge with so many choices. To help get you started, here are my top five plugins to add to your e-learning platform.
### Level up!
![Level up Moodle plugin][3]
Level up! Source: <https://levelup.plus/>
Motivating and engaging learners is one of the most difficult tasks for educators. The [Level up plugin][4] allows you to gamify the learning experience by attributing points to students for completing actions and allowing them to show progress and level up. This encourages your students to compete in a healthy atmosphere and be better learners.
What's more, you can take total control over the points your students earn, and they can unlock content when they reach a certain level. All of these features are available for free. If you are ready to pay, you can buy some extra functionality, such as individual rewards and team leaderboards.
### BigBlueButton
![BigBlueButton Moodle plugin][5]
BigBlueButton. Source: <https://bigbluebutton.org/>
[BigBlueButton][6] is probably the most well-known Moodle plugin. This open source videoconferencing solution allows educators to engage remote students with live online classes and group collaboration activities. It offers important features such as real-time screen sharing, audio and video calls, chat, emojis, and breakout rooms. This plugin also allows you to record your live sessions.
BigBlueButton enables you to create multiple activity links within any course, restrict your students from joining a session until you join, create a custom welcome message, manage your recordings, and more. All in all, BigBlueButton has everything you need to teach and participate in online classes.
### ONLYOFFICE
![ONLYOFFICE Moodle plugin][7]
ONLYOFFICE. Source: <https://www.onlyoffice.com/>
The [ONLYOFFICE plugin][8] allows learners and educators to create and edit text documents, spreadsheets, and presentations right in their browser. Without installing any additional apps, they can work with .docx, .xlsx, .pptx, .txt, and .csv files attached to their courses; open .pdf files for viewing; and apply advanced formatting and objects including autoshapes, tables, charts, equations, and more.
Moreover, ONLYOFFICE makes it possible to co-edit documents in real time, which means several users can simultaneously work on the same document. Different permission rights (full access, commenting, reviewing, read-only, and form filling) make it easier to manage access to your documents flexibly.
### Global Chat
![Global Chat Moodle plugin][9]
Global Chat. Source: <https://moodle.org/plugins/block_gchat>
The [Global Chat plugin][10] allows educators and learners to communicate in real time via Moodle. The plugin provides a list of all the users in your courses, and when you click a user's name, it opens a chat window at the bottom of the page so that you can communicate.
With this easy-to-use tool, you don't need to open a separate window to start an online conversation. You can change between web pages, and your conversations will always remain open.
### Custom certificate
![Custom certificate Moodle plugin][11]
Custom certificate. Source: <https://moodle.org/plugins/mod_customcert>
Another effective way to engage students is to offer certificates as a reward for course completion. The promise of a completion certificate helps keep students on track and committed to their training.
The [Custom certificate plugin][12] allows you to generate fully customizable PDF certificates in your web browser. Importantly, the plugin is compatible with GDPR requirements, and the certificates have unique verification codes, so you can use them for authentic accreditation.
### Oodles of Moodle plugins
These are my top five favorite Moodle plugins. You can try them out by [signing up for an account][13] on Moodle.org, or you can host your own installation (or talk to your systems administrator or IT staff to set one up for you).
If these plugins aren't the right options for your learning goals, take a look at the many other plugins available. If you find a good one, leave a comment and tell everyone about it!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/3/moodle-plugins
作者:[Sergey Zarubin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/sergey-zarubin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (Person reading a book and digital copy)
[2]: https://moodle.org/plugins/
[3]: https://opensource.com/sites/default/files/uploads/gamification.png (Level up Moodle plugin)
[4]: https://moodle.org/plugins/block_xp
[5]: https://opensource.com/sites/default/files/uploads/bigbluebutton.png (BigBlueButton Moodle plugin)
[6]: https://moodle.org/plugins/mod_bigbluebuttonbn
[7]: https://opensource.com/sites/default/files/uploads/onlyoffice_editors.png (ONLYOFFICE Moodle plugin)
[8]: https://github.com/logicexpertise/moodle-mod_onlyoffice
[9]: https://opensource.com/sites/default/files/uploads/global_chat.png (Global Chat Moodle plugin)
[10]: https://moodle.org/plugins/block_gchat
[11]: https://opensource.com/sites/default/files/uploads/certificate.png (Custom certificate Moodle plugin)
[12]: https://moodle.org/plugins/mod_customcert
[13]: https://moodle.com/getstarted/


@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/22/5/libsoup-gobject-c"
[#]: author: "Joël Krähemann https://opensource.com/users/joel2001k"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "Donkey-Hao"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
@ -310,7 +310,7 @@ via: https://opensource.com/article/22/5/libsoup-gobject-c
作者:[Joël Krähemann][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
译者:[Donkey-Hao](https://github.com/Donkey-Hao)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,221 +0,0 @@
[#]: subject: "Turn your Python script into a command-line application"
[#]: via: "https://opensource.com/article/22/7/bootstrap-python-command-line-application"
[#]: author: "Mark Meyer https://opensource.com/users/ofosos"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Turn your Python script into a command-line application
======
With scaffold and click in Python, you can level up even a simple utility into a full-fledged command-line interface tool.
![Python programming language logo and Tux the Penguin logo for Linux][1]
Image by: Opensource.com
I've written, used, and seen a lot of loose scripts in my career. They start with someone that needs to semi-automate some task. After a while, they grow. They can change hands many times in their lifetime. I've often wished for a more command-line *tool-like* feeling in those scripts. But how hard is it really to bump the quality level from a one-off script to a proper tool? It turns out it's not that hard in Python.
### Scaffolding
In this article, I start with a little Python snippet. I'll drop it into a `scaffold` module, and extend it with `click` to accept command-line arguments.
```
#!/usr/bin/python
from glob import glob
from os.path import join, basename
from shutil import move
from datetime import datetime
from os import link, unlink
LATEST = 'latest.txt'
ARCHIVE = '/Users/mark/archive'
INCOMING = '/Users/mark/incoming'
TPATTERN = '%Y-%m-%d'
def transmogrify_filename(fname):
    bname = basename(fname)
    ts = datetime.now().strftime(TPATTERN)
    return '-'.join([ts, bname])
def set_current_latest(file):
    latest = join(ARCHIVE, LATEST)
    try:
        unlink(latest)
    except:
        pass
    link(file, latest)
def rotate_file(source):
    target = join(ARCHIVE, transmogrify_filename(source))
    move(source, target)
    set_current_latest(target)
def rotoscope():
    file_no = 0
    folder = join(INCOMING, '*.txt')
    print(f'Looking in {INCOMING}')
    for file in glob(folder):
        rotate_file(file)
        print(f'Rotated: {file}')
        file_no = file_no + 1
    print(f'Total files rotated: {file_no}')
if __name__ == '__main__':
    print('This is rotoscope 0.4.1. Bleep, bloop.')
    rotoscope()
```
All non-inline code samples in this article refer to a specific version of the code you can find at [https://codeberg.org/ofosos/rotoscope][2]. Every commit in that repo describes some meaningful step in the course of this how-to article.
This snippet does a few things:
* Check whether there are any text files in the path specified in `INCOMING`
* If it exists, it creates a new filename with the current timestamp and moves the file to `ARCHIVE`
* Delete the current `ARCHIVE/latest.txt` link and create a new one pointing to the file just added
As an example, this is pretty small, but it gives you an idea of the process.
### Create an application with pyscaffold
First, you need to install the `pyscaffold`, `click`, and [tox Python modules][3].
```
$ python3 -m pip install pyscaffold click tox
```
After installing PyScaffold, change to the directory where the example `rotoscope` project resides, and then execute the following command:
```
$ putup rotoscope -p rotoscope \
--force --no-skeleton -n rotoscope \
-d 'Move some files around.' -l GLWT \
-u http://codeberg.org/ofosos/rotoscope \
--save-config --pre-commit --markdown
```
Pyscaffold overwrote my `README.md`, so restore it from Git:
```
$ git checkout README.md
```
Pyscaffold set up a complete sample project in the docs hierarchy, which I won't cover here but feel free to explore it later. Besides that, Pyscaffold can also provide you with continuous integration (CI) templates in your project.
* packaging: Your project is now PyPi enabled, so you can upload it to a repo and install it from there.
* documentation: Your project now has a complete docs folder hierarchy, based on Sphinx and including a readthedocs.org builder.
* testing: Your project can now be used with the tox test runner, and the tests folder contains all necessary boilerplate to run pytest-based tests.
* dependency management: Both the packaging and test infrastructure need a way to manage dependencies. The `setup.cfg` file solves this and includes dependencies.
* pre-commit hook: This includes the Python source formatter "black" and the "flake8" Python style checker.
Take a look into the tests folder and run the `tox` command in the project directory. It immediately outputs an error. The packaging infrastructure cannot find your package.
Now create a Git tag (for instance, `v0.2`) that the tool recognizes as an installable version. Before committing the changes, take a pass through the auto-generated `setup.cfg` and edit it to suit your use case. For this example, you might adapt the `LICENSE` and project descriptions. Add those changes to Git's staging area; I had to commit them with the pre-commit hook disabled. Otherwise, I'd run into an error because flake8, the Python style checker, complains about lousy style.
```
$ PRE_COMMIT_ALLOW_NO_CONFIG=1 git commit
```
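The tag itself is an ordinary Git tag; for example (the version number here is only an illustration):
```
$ git tag v0.2
```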
It would also be nice to have an entry point into this script that users can call from the command line. Right now, you can only run it by finding the `.py` file and executing it manually. Fortunately, Python's packaging infrastructure has a nice "canned" way to make this an easy configuration change. Add the following to the `options.entry_points` section of your `setup.cfg` :
```
console_scripts =
    roto = rotoscope.rotoscope:rotoscope
```
This change creates a shell command called `roto`, which you can use to call the rotoscope script. Once you install rotoscope with `pip`, you can use the `roto` command.
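As a rough sketch of how that looks in practice, an editable install makes the new command available right away (assuming you run this from the project root; your packaging workflow may differ):
```
$ python3 -m pip install -e .
$ roto
```
At this point `roto` still uses the hardcoded paths; the next section turns them into proper command-line arguments.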
That's that. You have all the packaging, testing, and documentation setup for free from Pyscaffold. You also got a pre-commit hook to keep you (mostly) honest.
### CLI tooling
Right now, there are values hardcoded into the script that would be more convenient as command [arguments][4]. The `INCOMING` constant, for instance, would be better as a command-line parameter.
First, import the [click][5] library. Annotate the `rotoscope()` method with the command annotation provided by Click, and add an argument that Click passes to the `rotoscope` function. Click provides a set of validators, so add a path validator to the argument. Click also conveniently uses the function's docstring as part of the command-line documentation. So you end up with the following method signature:
```
@click.command()
@click.argument('incoming', type=click.Path(exists=True))
def rotoscope(incoming):
    """
    Rotoscope 0.4 - Bleep, blooop.
    Simple sample that move files.
    """
```
The main section calls `rotoscope()`, which is now a Click command. It doesn't need to pass any parameters.
Options can get filled in automatically by [environment variables][6], too. For instance, change the `ARCHIVE` constant to an option:
```
@click.option('archive', '--archive', default='/Users/mark/archive', envvar='ROTO_ARCHIVE', type=click.Path())
```
The same path validator applies again. This time, let Click fill in the environment variable, defaulting to the old constant's value if nothing's provided by the environment.
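With the argument and the option in place, an invocation could look like either of these (the paths are placeholders, not the ones from the article):
```
$ roto /tmp/incoming --archive /tmp/archive
$ ROTO_ARCHIVE=/tmp/archive roto /tmp/incoming
```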
Click can do many more things. It has colored console output, prompts, and subcommands that allow you to build complex CLI tools. Browsing through the Click documentation reveals more of its power.
Now add some tests to the mix.
### Testing
Click has some advice on [running end-to-end tests][7] using the CLI runner. You can use this to implement a complete test (in the [sample project][8], the tests are in the `tests` folder.)
The test sits in a method of a testing class. Most of the conventions follow what I'd use in any other Python project very closely, but there are a few specifics because rotoscope uses `click`. In the `test` method, I create a `CliRunner`. The test uses this to run the command in an isolated file system. Then the test creates `incoming` and `archive` directories and a dummy `incoming/test.txt` file within the isolated file system. Then it invokes the CliRunner just like you'd invoke a command-line application. After the run completes, the test examines the isolated filesystem and verifies that `incoming` is empty, and that `archive` contains two files (the latest link and the archived file.)
```
from os import listdir, mkdir
from click.testing import CliRunner
from rotoscope.rotoscope import rotoscope
class TestRotoscope:
    def test_roto_good(self, tmp_path):
        runner = CliRunner()
        with runner.isolated_filesystem(temp_dir=tmp_path) as td:
            mkdir("incoming")
            mkdir("archive")
            with open("incoming/test.txt", "w") as f:
                f.write("hello")
            result = runner.invoke(rotoscope, ["incoming", "--archive", "archive"])
            assert result.exit_code == 0
            print(td)
            incoming_f = listdir("incoming")
            archive_f = listdir("archive")
            assert len(incoming_f) == 0
            assert len(archive_f) == 2
```
To execute these tests on my console, run `tox` in the project's root directory.
While implementing the tests, I found a bug in my code. When I did the Click conversion, rotoscope just unlinked the latest file, whether it was present or not. The tests started with a fresh file system (not my home folder) and promptly failed. I can prevent this kind of bug by running in a nicely isolated and automated test environment. That'll avoid a lot of "it works on my machine" problems.
### Scaffolding and modules
This completes our tour of advanced things you can do with `scaffold` and `click`. There are many possibilities to level up a casual Python script, and make even your simple utilities into full-fledged CLI tools.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/bootstrap-python-command-line-application
作者:[Mark Meyer][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ofosos
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/python_linux_tux_penguin_programming.png
[2]: https://codeberg.org/ofosos/rotoscope
[3]: https://opensource.com/article/19/5/python-tox
[4]: https://opensource.com/article/21/8/linux-terminal#argument
[5]: https://click.palletsprojects.com
[6]: https://opensource.com/article/19/8/what-are-environment-variables
[7]: https://click.palletsprojects.com/en/8.1.x/testing
[8]: https://codeberg.org/ofosos/rotoscope/commit/dfa60c1bfcb1ac720ad168e5e98f02bac1fde17d


@ -2,7 +2,7 @@
[#]: via: "https://itsfoss.com/komikku-manga-reader/"
[#]: author: "Anuj Sharma https://itsfoss.com/author/anuj/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "


@ -1,153 +0,0 @@
[#]: subject: "How I recovered my Linux system using a Live USB device"
[#]: via: "https://opensource.com/article/22/9/recover-linux-system-live-usb"
[#]: author: "David Both https://opensource.com/users/dboth"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How I recovered my Linux system using a Live USB device
======
The Fedora Live USB distribution provides an effective solution to boot and enter a recovery mode.
![USB drive][1]
Image by: Photo by [Markus Winkler][2] on [Unsplash][3]
I have a dozen or so physical computers in my home lab and even more VMs. I use most of these systems for testing and experimentation. I frequently write about using automation to make sysadmin tasks easier. I have also written in multiple places that I learn more from my own mistakes than I do in almost any other way.
I have learned a lot during the last couple of weeks.
I created a major problem for myself. Having been a sysadmin for years and written hundreds of articles and five books about Linux, I really should have known better. Then again, we all make mistakes, which is an important lesson: You're never too experienced to make a mistake.
I'm not going to discuss the details of my error. It's enough to tell you that it was a mistake and that I should have put a lot more thought into what I was doing before I did it. Besides, the details aren't really the point. Experience can't save you from every mistake you're going to make, but it can help you in recovery. And that's literally what this article is about: Using a Live USB distribution to boot and enter a recovery mode.
### The problem
First, I created the problem, which was essentially a bad configuration for the `/etc/default/grub` file. Next, I used Ansible to distribute the misconfigured file to all my physical computers and run `grub2-mkconfig`. All 12 of them. Really, really fast.
All but two failed to boot. They crashed during the very early stages of Linux startup with various errors indicating that the root (`/`) filesystem could not be located.
I could use the root password to get into "maintenance" mode, but without the root filesystem mounted, it was impossible to access even the simplest tools. Booting directly to the recovery kernel did not work either. The systems were truly broken.
### Recovery mode with Fedora
The only way to resolve this problem was to find a way to get into recovery mode. When all else fails, Fedora provides a really cool tool: The same Live USB thumb drive used to install new instances of Fedora.
After setting the BIOS to boot from the Live USB device, I booted into the Fedora 36 Xfce live user desktop. I opened two terminal sessions next to each other on the desktop and switched to root privilege in both.
I ran `lsblk` in one for reference. I used the results to identify the `/` root partition and the `boot` and `efi` partitions. I used one of my VMs, as seen below. There is no `efi` partition in this case because this VM does not use UEFI.
```
# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0           7:0    0  1.5G  1 loop
loop1           7:1    0    6G  1 loop
├─live-rw     253:0    0    6G  0 dm   /
└─live-base   253:1    0    6G  1 dm  
loop2           7:2    0   32G  0 loop
└─live-rw     253:0    0    6G  0 dm   /
sda             8:0    0  120G  0 disk
├─sda1          8:1    0    1G  0 part
└─sda2          8:2    0  119G  0 part
  ├─vg01-swap 253:2    0    4G  0 lvm  
  ├─vg01-tmp  253:3    0   10G  0 lvm  
  ├─vg01-var  253:4    0   20G  0 lvm  
  ├─vg01-home 253:5    0    5G  0 lvm  
  ├─vg01-usr  253:6    0   20G  0 lvm  
  └─vg01-root 253:7    0    5G  0 lvm  
sr0            11:0    1  1.6G  0 rom  /run/initramfs/live
zram0         252:0    0    8G  0 disk [SWAP]
```
The `/dev/sda1` partition is easily identifiable as `/boot`, and the root partition is pretty obvious as well.
In the other terminal session, I performed a series of steps to recover my systems. The specific volume group names and device partitions such as `/dev/sda1` will differ for your systems. The commands shown here are specific to my situation.
The objective is to boot and get through startup using the Live USB, then mount only the necessary filesystems in an image directory and run the `chroot` command to run Linux in the chrooted image directory. This approach bypasses the damaged GRUB (or other) configuration files. However, it provides a complete running system with all the original filesystems mounted for recovery, both as the source of the tools required and the target of the changes to be made.
Here are the steps and related commands:
1. Create the directory `/mnt/sysimage` to provide a location for the `chroot` directory.
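For completeness, that is a single command run from the live session:
```
# mkdir /mnt/sysimage
```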
2. Mount the root partition on `/mnt/sysimage:`
```
# mount /dev/mapper/vg01-root /mnt/sysimage
```
3. Make `/mnt/sysimage` your working directory:
```
# cd /mnt/sysimage
```
4. Mount the `/boot` and `/boot/efi` filesystems.
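On my VM there is no `efi` partition, so only `/boot` needs mounting here. A minimal sketch, assuming `/dev/sda1` is the boot partition identified with `lsblk` above (the working directory is still `/mnt/sysimage`):
```
# mount /dev/sda1 boot
```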
5. Mount the other main filesystems. Filesystems like `/home` and `/tmp` are not needed for this procedure:
```
# mount /dev/mapper/vg01-usr usr
# mount /dev/mapper/vg01-var var
```
6. Mount important but already mounted filesystems that must be shared between the chrooted system and the original Live system, which is still out there and running:
```
# mount --bind /sys sys
# mount --bind /proc proc
```
7. Be sure to do the `/dev` directory last, or the other filesystems won't mount:
```
# mount --bind /dev dev
```
8. Chroot the system image:
```
# chroot /mnt/sysimage
```
The system is now ready for whatever you need to do to recover it to a working state. However, one time I was able to run my server for several days in this state until I could research and test real fixes. I don't really recommend that, but it can be an option in a dire emergency when things just need to get up and running *now*!
### The solution
The fix was easy once I got each system into recovery mode. Because my systems now worked just as if they had booted successfully, I simply made the necessary changes to `/etc/default/grub` and `/etc/fstab` and ran the `grub2-mkconfig > boot/grub2/grub.cfg` command. I used the `exit` command to exit from chroot and then rebooted the host.
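Inside the chroot, the fix boiled down to a short sequence like the following sketch; the actual edits to `/etc/default/grub` and `/etc/fstab` depend on what was broken, and your editor of choice may differ:
```
# vi /etc/default/grub
# vi /etc/fstab
# grub2-mkconfig > boot/grub2/grub.cfg
# exit
# reboot
```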
Of course, I could not automate the recovery from my mishap. I had to perform this entire process manually on each host—a fitting bit of karmic retribution for using automation to quickly and easily propagate my own errors.
### Lessons learned
Despite their usefulness, I used to hate the "Lessons Learned" sessions we would have at some of my sysadmin jobs, but it does appear that I need to remind myself of a few things. So here are my "Lessons Learned" from this self-inflicted fiasco.
First, the ten systems that failed to boot used a different volume group naming scheme, and my new GRUB configuration failed to consider that. I just ignored the fact that they might possibly be different.
* Think it through completely.
* Not all systems are alike.
* Test everything.
* Verify everything.
* Never make assumptions.
Everything now works fine. Hopefully, I am a little bit smarter, too.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/recover-linux-system-live-usb
作者:[David Both][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/markus-winkler-usb-unsplash.jpg
[2]: https://unsplash.com/@markuswinkler?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/usb?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -2,7 +2,7 @@
[#]: via: "https://www.opensourceforu.com/2022/09/python-microservices-using-flask-on-kubernetes/"
[#]: author: "Krishna Mohan Koyya https://www.opensourceforu.com/author/krishna-mohan-koyya/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,147 +0,0 @@
[#]: subject: "Install Linux Mint with Windows 11 Dual Boot [Complete Guide]"
[#]: via: "https://www.debugpoint.com/linux-mint-install-windows/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "gpchn"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Install Linux Mint with Windows 11 Dual Boot [Complete Guide]
======
A comprehensive guide to installing Linux Mint alongside Windows 11 (or Windows 10) and making a dual-boot system.
If you are a new Linux user trying to install Linux Mint without removing the OEM-installed Windows, follow this guide. After you complete the steps described below, you should have a dual boot system where you can learn and do your work in a Linux system without booting Windows.
### 1. What do you need before you start?
Boot into your Windows system and download the Linux Mint ISO file from the official website. The ISO file is the installation image of Linux Mint, which we will use for this guide.
* On the official website (Figure 1), download the ISO for the Cinnamon desktop edition (which is ideal for everyone).
* [Download link][1]
![Figure 1: Download Linux Mint from the official website][2]
* After downloading, plug in a USB stick to your system. Then write the downloaded ISO file to that USB drive using Rufus or [Etcher][3].
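If you already have a Linux machine handy, you can also write the image from the command line. Here is a minimal sketch; the ISO filename is an assumption, and `/dev/sdX` is a placeholder for your USB stick (double-check the device name with `lsblk` first, since `dd` overwrites its target):
```
sudo dd if=linuxmint-21-cinnamon-64bit.iso of=/dev/sdX bs=4M status=progress oflag=sync
```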
### 2. Prepare a partition to install Linux Mint
Windows laptops generally come with C and D drives. The C drive is where Windows is installed. For a new laptop, the D drive is usually empty (as are any subsequent drives such as E). Now, you have two options to choose from. Number 1 is to **shrink the C drive** to make space for the Linux installation. Number 2 is to **use the additional drives/partitions** such as D or E.
Choose what you want to do.
* If you choose to use the D or E drive for the Linux installation, make sure to first disable BitLocker, which comes enabled on modern OEM-installed Windows laptops.
* Open Windows PowerShell from the Start menu and type the following command (Figure 2) to disable BitLocker. Change the drive letter according to your target drive (here, I have used drive E).
```
manage-bde -off E:
```
![Figure 2: Disable BitLocker in Windows Drives to install Linux][4]
* Now, if you choose to shrink the C drive (or any other drive), open “Disk Management” from the Start menu. It shows your entire disk layout.
* Right-click on the drive you want to shrink (Figure 3) and select “Shrink Volume” to make way for Linux Mint.
* In the next window, give the size of your partition in MB under “Enter the amount of space to shrink in MB” (Figure 4). Obviously, it should be less than or equal to the value mentioned under “Size of available space”. So, for a 100 GB partition, give 100*1024=102400 MB.
* Once done, click on Shrink.
![Example of Shrink Volume option in Disk Partition][5]
![Figure 4: Enter the size of your Linux Partition][6]
* Now, you should see an “Unallocated Space”, as shown below (Figure 5). Right-click on it and choose “New Simple Volume”.
* This wizard will prepare and format the partition with a file system. Note: You can do this in Windows itself or during the Linux Mint installation. The Linux Mint installer also provides the option to create a file system table and ready the partition. I would recommend you do it here.
* In the next series of screens (Figures 6, 7, and 8), give the size of your partition in MB, assign a drive letter (such as D, E, or F), and choose FAT32 as the file system.
* Finally, you should see that your partition is ready for the Linux Mint installation. You will choose this partition during the Mint install in the following steps.
* As a precaution, note down the size of the partition you just created (as in the example in Figure 9) so you can quickly identify it in the installer.
![Figure 5: Unallocated space is created][7]
![Figure 6: New Simple Volume Wizard -page1][8]
![Figure 7: New Simple Volume Wizard -page2][9]
![Figure 8: New Simple Volume Wizard -page3][10]
![Figure 9: Final partition for installing Linux][11]
### 3. Disable Secure Boot in BIOS
* Plug in the USB drive and restart your system.
* When it starts booting, press the applicable function key repeatedly to enter the BIOS. The key may differ for your laptop model. Here's a reference for major laptop brands.
* Disable Secure Boot and make sure to set the boot device priority to the USB stick.
* Then press F10 to save and exit.
| Laptop OEM | Function key to enter BIOS |
| :- | :- |
| Acer | F2 or DEL |
| ASUS | F2 for all PCs, F2 or DEL for motherboards |
| Dell | F2 or F12 |
| HP | ESC or F10 |
| Lenovo | F2 or Fn + F2 |
| Lenovo (Desktops) | F1 |
| Lenovo (ThinkPads) | Enter + F1. |
| MSI | DEL for motherboards and PCs |
| Microsoft Surface Tablets | Press and hold the volume up button. |
| Origin PC | F2 |
| Samsung | F2 |
| Sony | F1, F2, or F3 |
| Toshiba | F2 |
### 4. Install Linux Mint
If all goes well, you should see a menu to install Linux Mint. Choose the “Start Linux Mint…” option.
![Figure 10: Linux Mint GRUB Menu to kick-off installation][12]
* After a moment, you should see the Linux Mint live desktop. On the desktop, there is an icon for installing Linux Mint; use it to launch the installation.
* In the next set of screens, choose your language and keyboard layout, choose whether to install multimedia codecs, and hit the Continue button.
* On the Installation Type window, select the “Something Else” option.
* In the next window (Figure 11), carefully do the following:
* Under the device list, select the partition you just created; you can identify it by the size I asked you to note down earlier.
* Then click on Change, and in the Edit Partition window, select Ext4 as the file system, check the “Format the partition” option, and set the mount point to /.
* Click OK. Then choose the boot loader device for your system; ideally, it should be the first entry in the drop-down list.
* Review the changes carefully, because once you hit Install Now, your disk will be formatted and there is no going back. Once you are comfortable, click on Install Now.
![Figure 11: Choose the target partition to install Linux Mint with Windows 11][13]
In the following screens, select your location, enter your name, and create a user ID and password for logging in to the system. The installation should then start (Figure 12).
Once the installation is complete (Figure 13), remove the USB stick and restart your system.
![Figure 12: Installation is in progress][14]
![Figure 13: Installation is complete][15]
If all goes well, after the successful installation you should see the GRUB menu listing both Windows 11 and Linux Mint, confirming a dual-boot system.
Now you can proceed to use [Linux Mint][16] and experience this fast and excellent Linux distro.
### Wrapping Up
In this tutorial, I have shown you how to create a simple dual boot system with Linux Mint in commercially available Laptops or desktops with OEM-installed Windows. The steps include partitioning, creating a bootable USB, formatting and installation.
Although the above instructions are for Linux Mint 21 Vanessa, they should work fine with all other awesome [Linux Distributions][17] as well.
If you followed this guide, do let me know how your installation went in the comment box below.
And if you are successful, welcome to the freedom!
[Next: How to Install Java 17 in Ubuntu 22.04, 22.10, Linux Mint 21][18]
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/linux-mint-install-windows/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.linuxmint.com/download.php
[2]: https://www.debugpoint.com/wp-content/uploads/2022/09/Download-Linux-Mint-from-the-official-website.jpg
[3]: https://www.debugpoint.com/etcher-bootable-usb-linux/
[4]: https://www.debugpoint.com/wp-content/uploads/2022/09/Disable-BitLocker-in-Windows-Drives-to-install-Linux.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/09/Example-of-Shrink-Volume-option-in-Disk-Partition-1024x453.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/09/Enter-the-size-of-your-Linux-Partition.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/09/Unallocated-space-is-created.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page1.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page2.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page3.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2022/09/Final-partition-for-installing-Linux.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/09/Linux-Mint-GRUB-Menu-to-kick-off-installation.jpg
[13]: https://www.debugpoint.com/wp-content/uploads/2022/09/Choose-the-target-partition-to-install-Linux-Mint-with-Windows-11.jpg
[14]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-in-progress.jpg
[15]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-complete.jpg
[16]: https://www.debugpoint.com/linux-mint
[17]: https://www.debugpoint.com/category/distributions
[18]: https://www.debugpoint.com/install-java-17-ubuntu-mint/

View File

@ -1,140 +0,0 @@
[#]: subject: "Platforms that Help Deploy AI and ML Applications on the Cloud"
[#]: via: "https://www.opensourceforu.com/2022/09/platforms-that-help-deploy-ai-and-ml-applications-on-the-cloud/"
[#]: author: "Dr Kumar Gaurav https://www.opensourceforu.com/author/dr-gaurav-kumar/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Platforms that Help Deploy AI and ML Applications on the Cloud
======
*Artificial intelligence and machine learning are impacting nearly every industry today. This article underlines the various ways in which these are being used in our everyday lives and how some open source cloud platforms are enabling their deployment.*
The goal of artificial intelligence (AI) is to construct machines and automated systems that are able to mimic human cognition. On a global scale, AI is transforming societies, politics, and economies in a variety of ways. Examples of the applications of AI include Google Help, Siri, Alexa, and self-driving cars like Tesla.
Today, AI is being used to solve difficult problems in an effective manner in a wide range of industries. It is being used in the healthcare industry to make more accurate and faster diagnoses than humans. Doctors can use AI to diagnose a disease, and get an alert when a patient's condition is deteriorating.
Data security is critical for every business, and the number of cyberattacks is continually increasing. Using artificial intelligence, the security of data can be improved. An example of this is the integration of intelligent bots to identify software bugs and cyberattacks.
Twitter, WhatsApp, Facebook and Snapchat are just a few of the social media platforms that store and manage billions of profiles by using AI algorithms. AI can arrange and sift through massive amounts of data to find the latest trends, hashtags, and needs of various people.
![Figure 1: Key applications of machine learning][1]
The tourism industry is becoming increasingly reliant on AI, as the latter can help with a variety of travel-related tasks including booking hotels, flights, and the best routes for consumers. For better and faster customer service, chatbots driven by artificial intelligence are being used in the travel industry.
Table 1: Tools and frameworks for machine learning
| Tool/Platform | URL |
| :- | :- |
| Streamlit | https://github.com/streamlit/streamlit |
| TensorFlow | https://www.tensorflow.org/ |
| PyTorch | https://pytorch.org/ |
| scikit-learn | https://scikit-learn.org/ |
| Apache Spark | https://spark.apache.org/ |
| Torch | http://torch.ch/ |
| Hugging Face | https://huggingface.co/ |
| Keras | https://keras.io/ |
| TensorFlowJS | https://www.tensorflow.org/js |
| KNIME | https://www.knime.com/ |
| Apache Mahout | https://mahout.apache.org/ |
| Accord | http://accord-framework.net/ |
| Shogun | http://shogun-toolbox.org/ |
| RapidMiner | https://rapidminer.com/ |
| Blocks | https://github.com/mila-iqia/blocks |
| TuriCreate | https://github.com/apple/turicreate |
| Dopamine | https://github.com/google/dopamine |
| FlairNLP | https://github.com/flairNLP/flair |
### Machine learning in different domains
All techniques and tools that let software applications and gadgets respond and develop on their own are referred to as machine learning (ML). AI can learn without really being explicitly programmed to perform the required action, thanks to machine learning techniques. Rather than relying on predefined computer instructions, the ML algorithm learns a pattern from sample inputs, and then anticipates and executes tasks completely based on the learned pattern. If rigorous algorithms aren't an option, machine learning can be a life-saver. It will pick up the new procedure by analysing prior ones and then putting it into action. ML has cleared the way for technical advancements and technologies that were previously unimaginable in a variety of industries. It is used in a variety of cutting-edge technologies today — from predictive algorithms to Internet TV live streaming.
A notable ML and AI technique is image recognition, which is a method for categorising and detecting a feature or an item in a digital image. Classification and face recognition are done using this method.
![Figure 2: Streamlit cloud for machine learning][2]
The use of machine learning for recommender systems is among its most widely used and well-known applications. In today's e-commerce world, product recommendation is a prominent tool that utilises powerful machine learning techniques. Websites use AI and ML to keep track of past purchases, search trends, and shopping cart history, and then generate product recommendations based on that data.
There is a lot of interest in employing machine learning algorithms in the healthcare industry. Emergency room wait times can be predicted across multiple hospital departments by using an ML algorithm. Details of staff shifts, patient data, and recordings of department discussions and emergency room layouts are all used to help create the algorithm. Machine learning algorithms can be used for detecting a disease, planning treatments, and prognostication.
**Key features of the cloud platforms used for machine learning**:
* Algorithms or features extraction
* Association rule mining
* Big Data based predictive analytics
* Classification, regression and clustering
* Data loading and transformation
* Data preparation, data preprocessing and visualisation
* Dimensionality reduction
* Distributed linear algebra
* Hypothesis tests and kernel methods
* Processing of image, audio, signal and vision data sets
* Model selection and optimisation module
* Preprocessing and dataflow programming
* Recommender systems
* Support for text mining and image mining through plugins
* Visualisation and plotting
### Cloud based deployment of AI and ML applications
The applications of AI and ML can be deployed on cloud platforms. A number of cloud service providers nowadays enable programmers to build models for effective decision-making in their domain.
These cloud based platforms are integrated with pre-trained machine learning and deep learning models on which the applications can be deployed without any coding or with minimum scripting.
![Figure 3: Categories of ML deployments in Streamlit][3]
**Streamlit:** Streamlit gives data scientists and ML experts access to assorted machine learning models. It is open source and compatible with cloud deployments. The ML models can be made ready to be used with data sets in a few moments.
Streamlit provides a range of machine learning models and source code in multiple categories including natural language processing, geography, education, computer vision, etc.
![Figure 4: Hugging Face for machine learning][4]
**Hugging Face:** This is another platform with pre-trained models and architectures for ML and AI in a range of categories. Many corporate giants are using this platform including Facebook AI, Microsoft, Google AI, Amazon Web Services, and Grammarly.
A number of pre-trained and deployment-ready models are available in Hugging Face for different applications including natural language processing and computer vision.
The following tasks can be carried out by using the ML models in Hugging Face:
* Audio-to-audio processing
* Automatic speech recognition
* Computer vision
* Fill-mask
* Image classification
* Image segmentation
* Object detection
* Answering of questions
* Sentence similarity
* Summarisation
* Text classification
* Text generation
* Text-to-speech translation
* Token classification
* Translation classification
The problem solvers available in Hugging Face are optimised and effective, helping models to be deployed rapidly (Figure 5).
![Figure 5: Problem solvers and models in Hugging Face][5]
These cloud based platforms are useful for researchers, practitioners and data scientists in multiple domains, and simplify the development of real-world applications that perform well.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/platforms-that-help-deploy-ai-and-ml-applications-on-the-cloud/
作者:[Dr Kumar Gaurav][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-Key-applications-of-machine-learning.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-2-Streamlit-cloud-for-machine-learning.png
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-3-Categories-of-ML-deployments-in-Streamlit.png
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-4-Hugging-Face-for-machine-learning.png
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-5-Problem-solvers-and-models-in-Hugging-Face.png

View File

@ -0,0 +1,137 @@
[#]: subject: "A Complete Guide to Cloud Service Architectures"
[#]: via: "https://www.opensourceforu.com/2022/09/a-complete-guide-to-cloud-service-architectures/"
[#]: author: "Mir H.S. Quadri https://www.opensourceforu.com/author/shah-quadri/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
A Complete Guide to Cloud Service Architectures
======
*In its roughly 16 years of evolution, cloud computing has evolved to become a technology that is used by almost everyone who uses the Internet. It can be used as a service to support various types of business and consumer requirements. Therefore, multiple service architectures are being used in cloud computing to customise the technology as per modern day needs. This article provides a complete guide to all the service architectures being used today.*
While the idea of having a network of computers collaborating across the world has existed since the early 1960s, the formal conceptualisation of it occurred in 2006 when Eric Schmidt, the then CEO of Google, introduced the term cloud computing in its modern day context.
Cloud computing can be simply understood as a network of remote servers across the world, sharing data and collaborating over the Internet to provide services to businesses and customers. Albeit an arbitrary definition, it does cover the core idea behind cloud computing. The primary motivating factor for such a technology was to create more stickability in data, i.e., to make data more easily accessible across devices whilst reducing the risks of data loss. If a user x has data in only one server, the chances of permanent data loss for x are higher, given that all it takes is one server outage. That is equivalent to the proverbial "putting all your eggs in one basket" method, which is never a good idea, especially when you are dealing with data that can be critical for businesses and consumers. But if you replicate the data of x in multiple servers across the globe, it will have two major benefits. For one, x will still be able to access their data even if a server is facing an outage. Second, the cloud can give x access to their data from the closest available server with the least amount of load. This makes data faster and more easily accessible across different devices for x.
In its roughly 16 years of evolution, cloud computing has gone from being something used simply for backing up photos to becoming the backbone of the Internet. Almost every app today, from Microsoft Office to Asana and Todoist, makes use of cloud computing for real-time access and sharing of data. Almost any app that you can think of uses cloud computing. Everything from Gmail and YouTube, to Instagram and even WhatsApp, uses cloud computing in the background to provide fast, easy, and reliable access to data.
The companies that provide cloud computing services are called cloud service providers. Amazon, Google, Microsoft, Salesforce, Cloud9, etc, all provide cloud as a service in both B2B and B2C contexts.
In the early days, cloud service providers generally offered only three types of services to their customers:
* Software as a Service (SaaS)
* Platform as a Service (PaaS)
* Infrastructure as a Service (IaaS)
However, as the industry requirements have evolved with new technologies such as blockchain and AI coming into the picture, cloud service providers have come up with new models to better serve the varying requirements of their customers. In this article, we are going to go through all the cloud computing service models currently being used in the market.
### The architecture of a cloud
Now that we have an idea of what cloud computing is and how it evolved into becoming a 445 billion dollar industry, let us try to understand the cloud from a technical perspective. A generalised architecture of a cloud can be conceptualised as consisting of two major components — the front-end and the back-end.
![Figure 1: The architecture of a cloud (Courtesy: TechVidvan)][1]
The front-end contains the client infrastructure, i.e., the device and the user interface of the application used for communicating with the cloud. In a real-world context, your smartphone and the Google Drive app are the front-end client infrastructure that can be used for accessing the Google cloud.
The back-end contains the cloud infrastructure, i.e., all the mechanisms and machinery required to run a cloud computing service. The servers, virtual machines, services and storage are all provided by the cloud infrastructure, as shown in Figure 1. Let's quickly go through each component of the back-end to get a complete picture.
* Application: The back-end of whatever app the user or business uses to interact with the cloud via the Internet.
* Service: The infrastructure for the type of service that the cloud provides. We are going to go into detail about all the different types of services in this article.
* Runtime: The provision of runtime and execution made available to the virtual machines.
* Storage: The acquisition and management of user/business data with the flexibility of scaling.
* Infrastructure: The hardware and software required to run the cloud.
* Security and management: Putting security mechanisms in place to protect user/business data as well as managing individual units of the cloud architecture to avoid overload and service outages.
### Software as a Service (SaaS)
Software as a Service is a cloud computing model that provides software and applications as a service over the Internet. A good example of this is Google Drive or Google Workspace. All the apps available in Google Drive such as docs, sheets, slides, forms, etc, can be accessed online using a Web browser and saved automatically to the cloud. You can access the latest version of your documents through any device. All you need to do is log in to your account. This is the benefit of having the Software as a Service model. Instead of having to install anything to your device locally or using your local storage space, you can directly access the software application in the cloud, thus removing a lot of the liabilities that come with localised software. SaaS often follows the pay as you go model, i.e., you pay for the services you need. You can always purchase more storage and/or features by paying more or downgrade your package as per your requirements.
#### Benefits of SaaS
1. SaaS is highly scalable, thanks to the pay as you go model. You can increase/decrease storage and/or the features of the apps as and how you need to.
2. It is considerably cost-effective given the features it provides such as real-time access through any device with any operating system.
3. It involves low effort at the customer-end. No installations or confusing steps are required to initialise the software. You can use it from the comfort of your browser and/or app.
4. Software updates automatically without you having to install it or wait for installation at your end.
### Platform as a Service (PaaS)
Not every tech startup has the required resources to maintain their own infrastructure to run their apps on the cloud. In many cases, companies (especially startups) prefer to have their app hosted on the cloud without having to handle all the backend infrastructure. It is in situations such as these where a Platform as a Service model comes into play. Companies such as Heroku cloud offer PaaS architecture-based cloud solutions for companies and individuals to host and run their apps in the cloud without having any direct contact with the hardware infrastructure. Like SaaS, this model also provides flexibility in choosing only the services you require along with scalability and security from an infrastructural perspective.
#### Benefits of PaaS
1. No hassle of handling the cloud infrastructure. You outsource that to the company that hosts your app in their cloud. This helps you focus solely on your app development life cycle.
2. PaaS is scalable. You can increase or decrease your storage requirements, add-on services, etc, as per your requirements.
3. The only security parameters you set are for your own app. The cloud security is dealt with by your cloud service provider.
4. It is time- and cost-effective for companies and individuals looking to host their apps in the cloud, especially startups that cannot afford to build their own infrastructure.
### Infrastructure as a Service (IaaS)
Infrastructure as a Service goes one step deeper than PaaS, providing customers with even more autonomy. In an IaaS model, the cloud service provider gives you control over the underlying infrastructure of the cloud. Simply put, you get to design your own cloud environment customised to your company's requirements all the way from dedicated servers and virtual machines, to operating systems running on the servers, setting bandwidths, creating your own security protocols, and pretty much everything else that goes into creating a cloud infrastructure. Amazon AWS and Google Compute Engine are great examples of IaaS models. Given the autonomy over the hardware that this model provides, it is also referred to as Hardware as a Service (HaaS).
#### Benefits of IaaS
1. Granular flexibility in the pay as you go model. You get to decide how many VMs you want to run and for how long. You can even pay by the hour.
2. Highly scalable, given that it follows the pay as you go model to its core.
3. Complete autonomy and control over everything in the infrastructure without the hassle of maintaining the servers physically at your company location.
4. Most companies guarantee uptime, security and 24/7 on-site customer support, which can be essential for enterprises.
### Storage as a Service (StaaS)
Google Drive, OneDrive, Dropbox, and iCloud are some of the big names in the industry providing Storage as a Service to their customers. StaaS is as simple as it sounds. If all you require is storage in the cloud that is accessible to you in real-time through any of your devices, then the StaaS model is the one to choose. Many companies and individuals make use of this service model to back up their data.
#### Benefits of StaaS
1. Access your data in its most updated form in real-time with the help of built-in version control systems.
2. Access your data through any type of device with any operating system.
3. Back-up your data in real-time as and how you create, edit, and delete your files.
4. Scale your storage as and how you require. StaaS follows the pay as you go model.
### Anything/Everything as a Service (XaaS)
A hybrid of IaaS, PaaS, SaaS, and StaaS is what is being called the Anything/Everything as a Service model, and it is quickly gaining traction in the cloud community. It is possible for a customer to have requirements so varied that they are a mishmash of all the different models. In such a scenario, complete autonomy is provided to customers to select services from different tiers to create their own custom pay as you go model. This has the benefit of giving complete freedom to the customer to use the cloud on their own terms.
#### Benefits of XaaS
1. Choose what you like, how you like and as you like.
2. Pay only for exactly what you need without having to pay for any base fee predicated on a tier system.
3. Select your infrastructure, platform, and functionality on a granular level.
4. If used appropriately, XaaS can be the most time-, cost- and work-effective method of hosting your application on the cloud.
### Function as a Service (FaaS)
In certain cases, companies or individuals require the benefits of PaaS without having to use all its functionality. For example, trigger-based systems such as cron jobs only require a piece of code or a function to run on a serverless system to achieve a particular objective. For instance, a customer may want to create a website traffic monitoring system that sends a notification the moment a certain number of page downloads occur. In such a case, the customer requirement is simply to run a piece of code in the cloud that keeps checking for a trigger to execute. Using a PaaS model can be a costly solution. This is where Function as a Service comes in. Many companies such as Heroku offer FaaS to their customers to host only a specific piece of code or function that is reactive and only activates upon a trigger.
#### Benefits of FaaS
1. You only pay for the number of executions of the code. You are generally not charged for hosting your code unless it is computationally expensive.
2. It removes all the liability of PaaS while giving you all its benefits.
3. You are not responsible for the underlying infrastructure in any way. Therefore, you can simply upload your code without having to worry about any maintenance of the virtual machines.
4. FaaS provides you with the ability to be agile, i.e., to write functional code.
### Blockchain Platform as a Service (BPaaS)
Blockchain has taken the tech industry by storm in recent years. It is one of the most in-demand technologies right now, surpassed marginally by AI and data science related technologies. What makes blockchain so attractive is its open-ledger architecture providing security, scalability, and transparency. These features are necessary for many applications such as banking, electoral systems, and even social media. With such wide-ranging applications, it becomes necessary to be able to host such products on the cloud with a model that specifically caters to the needs of this technology. This is where BPaaS comes into the picture. Many companies today, including big names such as Amazon AWS and Microsoft Azure, are providing BPaaS solutions for customers specifically looking to host blockchain based apps in the cloud.
#### Benefits of BPaaS
1. It caters to the specific needs of the blockchain industry such as support for custom languages used for writing smart contracts.
2. Supports integrations with pre-eminent blockchains such as Ethereum by providing API bridges.
3. Supports custom databases used in the application life cycle of blockchain technologies.
4. It has all the goodness of the cloud with the pay as you go feature, scalability, security, and ease of access.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/a-complete-guide-to-cloud-service-architectures/
作者:[Mir H.S. Quadri][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/shah-quadri/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-The-architecture-of-a-cloud-2.jpg

View File

@ -0,0 +1,75 @@
[#]: subject: "Developing Low Latency Applications in the Cloud with AI and ML"
[#]: via: "https://www.opensourceforu.com/2022/09/developing-low-latency-applications-in-the-cloud-with-ai-and-ml/"
[#]: author: "Bala Kalavala https://www.opensourceforu.com/author/bala-kalavala/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Developing Low Latency Applications in the Cloud with AI and ML
======
*This article looks at the key considerations that are critical to the success of delivering low latency applications in the cloud. It also outlines how to build an interactive low latency application.*
“Productivity is never an accident. It is always the result of a commitment to excellence, intelligent planning, and focused effort,” said Paul J. Meyer, motivational speaker.
This was meant in the context of humans. With the evolution of technology and machines taking over mundane repetitive tasks from humans, this statement applies to machines as well today. Acceleration in the pace of change has led users to believe it's about time technology delivers instant response to requests, especially in the interactive world of entertainment (such as online video games like World of Warcraft or Fortress, where hundreds of users participate simultaneously in voice conversations).
Edge computing has been around for a long time; however, what it means to different industries and organisations has evolved differently over time. Cloud native computing has brought yet another meaning to the edge, as a resource belonging to cloud service providers that can be leveraged on demand. Persistent efforts in the field of artificial intelligence (AI) over the last 50+ years have made it evolve from a scalable efficiency model to a scalable learning model by combining it with machine learning (ML).
The Internet of Things (IoT) has led to yet another increase in the demand for edge computing. The massive data that connected devices create by communicating with each other over the Internet needs to be processed efficiently in order to be used properly. This has led to demand for low latency applications in the cloud.
### Business considerations of low latency applications
While designing an application, business considerations must be the topmost priority in ensuring successful deployment of the solution. Here are the key business considerations that are critical to the success of delivering low latency applications.
* Localised real-time data processing will be efficient and deliver faster data driven decisions with the use of AI and ML.
* Security at the edge, as well as data encryption at rest and in transit, along with site key management for private and public key encryption will enable always encrypted communication.
* Low operational costs with agility, as bandwidth and throughput concerns typically are addressed with solutions that cost significant resources financially or otherwise.
* Optimal affordable storage at the edge and also at the hub as connected devices generate large amounts of data on a daily basis. Data analysis must be optimised at every point.
### Technical considerations of low latency applications
A lack of understanding or planning of technical considerations leads to the vast majority of project failures. Let's look at some critical technical considerations that are important in the design and development of low latency applications.
* Infrastructure deployment and configuration management for the edge can be challenging. Hardware and software stack decisions must be made carefully ensuring the deployed architecture meets the expected performance considerations.
* Edge network visibility must be built beyond real-time traffic monitoring to include predictive and proactive analytics, leveraging AI and ML to create insights into edge performance.
* Connectivity at the edge for set application configurations should be designed for latency, bandwidth, throughput, and quality of service to ensure that typical outer edge communication issues with centralised and distributed systems are managed. Evolution of 5G has made this journey a little smoother.
### A low latency application for an interactive experience
Now that we are through with the fundamentals, let's look at how a low latency application is built in an example use case. The application we are going to build is an interactive experience where the human and machine interaction is expected to occur in real-time or, at least, the human interacting with the machine must not feel a lag in response. One could argue that we already have this in Alexa, Google Assistant, etc. Yes, we do for end consumers for specific search-based responses. Let's look at a reference architecture for an interactive experience application that an organisation tailors for its own end users.
![Figure 1: Low latency reference architecture for interactive experience][1]
An interactive experience application requires near-real-time responses. This can be done by applying a multichannel framework at the experience layer to build the organisation's brand image. End user personalisation in a fragmented communication allows for quick consumption and processing of data, limiting any latency aspects in the last step (commonly known as last mile delivery of content). Machine learning models are built with ONNX Runtime, that's built on the Open Neural Network Exchange (ONNX) open standard which has a JavaScript library. Later, the data required is serialised with open source tools like MLeap or equivalents of it, which is deserialised back for MLeap runtime to power real-time API services in the omnichannel experience layer and beyond.
The middle services layer is designed with an omnichannel framework, where end user focused data is precomputed with the support of ML learnings using open source tools like Spark, Scikit-learn or TensorFlow.
The data is then exported to MLeapBundle, which can be deployed at the edge data centre. This approach reduces any network latency concerns typical data centre connectivity will have, allowing for enterprise grade processing power for real-time machine learning.
The final step in request processing is to interface with core and third party systems of the organisation for the data necessary in experience analytics. This data runs in a typical data centre or in cloud native services. The end-to-end low latency application can be developed using proven open source tools.
The technical architecture shown in Figure 2 is a reference implementation of a low latency application that could be developed for an interactive experience for various common use cases. Each of the open source tools selected has many alternatives as well. Depending on the capability and functional need of the solution, the appropriate tools may be swapped to build the right-sized implementation. Links to each of the tools used, with a brief on their purpose, are given in the References at the end of this article.
![Figure 2: Low latency technical architecture for interactive experience][2]
Most popular cloud service providers are embracing advancements in low latency solutions. They either offer a cloud native offering that's wrapped around familiar open source tools (e.g., Apache Spark) or they build their own, providing developers the opportunity to select the right tools that are cost-effective, but meet the expectations of low latency and good performance.
Deploying a scalable low latency solution that meets the needs of an organisation can enable it to deliver better products and services to its customers. Though there is considerable effort involved in deploying scalable low latency solutions the right way, the investment and risks are worth it.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/developing-low-latency-applications-in-the-cloud-with-ai-and-ml/
作者:[Bala Kalavala][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/bala-kalavala/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-Low-latency-reference-architecture-for-interactive-experience.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-2-Low-latency-technical-architecture.jpg

View File

@ -0,0 +1,153 @@
[#]: subject: "Fix the apt-key deprecation error in Linux"
[#]: via: "https://opensource.com/article/22/9/deprecated-linux-apt-key"
[#]: author: "Chris Hermansen https://opensource.com/users/clhermansen"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Fix the apt-key deprecation error in Linux
======
Follow these steps and you can run apt update with no warnings or errors related to deprecated key configurations.
This morning, after returning home from a mini vacation, I decided to run `apt update` and `apt upgrade` from the command line just to see whether there had been any updates while I was offline. After issuing the update command, something didn't seem quite right; I was seeing messages along the lines of:
```
W: https://updates.example.com/desktop/apt/dists/xenial/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
```
True, it's just a warning, but still there's that scary word, deprecation, which usually means it's going away soon. So I thought I should take a look. Based on what I found, I thought my experience would be worth sharing.
It turns out that I have older configurations for some repositories, artifacts of installation processes from "back in the day," that needed adjustment. Taking my prompt from the warning message, I ran `man apt-key` at the command line, which provided several interesting bits of information. Near the beginning of the man page:
```
apt-key is used to manage the list of keys used by apt to authenticate packages. Packages which have been authenticated using these keys are considered trusted.
Use of apt-key is deprecated, except for the use of apt-key del in maintainer scripts to remove existing keys from the main keyring. If such usage of apt-key is desired, the additional installation of the GNU Privacy Guard suite (packaged in gnupg) is required.
apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.
```
Last available in "Debian 11 and Ubuntu 22.04" is pretty much *right now* for me. Time to fix this!
### Fixing the apt-key deprecation error
Further on in the man page, there's the deprecation section mentioned in the warning from apt update:
```
DEPRECATION
Except for using apt-key del in maintainer scripts, the use of apt-key is deprecated. This section shows how to replace the existing use of apt-key.
If your existing use of apt-key add looks like this:
wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -
Then you can directly replace this with (though note the recommendation below):
wget -qO- https://myrepo.example/myrepo.asc | sudo tee /etc/apt/trusted.gpg.d/myrepo.asc
Make sure to use the "asc" extension for ASCII armored keys and the "gpg" extension for the binary OpenPGP format (also known as "GPG key public ring"). The binary OpenPGP format works for all apt versions, while the ASCII armored format works for apt version >= 1.4.
Recommended: Instead of placing keys into the /etc/apt/trusted.gpg.d directory, you can place them anywhere on your filesystem by using the Signed-By option in your sources.list and pointing to the filename of the key. See sources.list(5) for details. Since APT 2.4, /etc/apt/keyrings is provided as the recommended location for keys not managed by packages. When using a deb822-style sources.list, and with apt version >= 2.4, the Signed-By option can also be used to include the full ASCII armored keyring directly in the sources.list without an additional file.
```
If you, like me, have keys from non-repository stuff added with `apt-key`, then here are the steps to transition:
1. Determine which keys are in the `apt-key` keyring, `/etc/apt/trusted.gpg`
2. Remove them
3. Find and install replacements in `/etc/apt/trusted.gpg.d/` or in `/etc/apt/keyrings/`
### 1. Finding old keys
The command `apt-key list` shows the keys in `/etc/apt/trusted.gpg`:
```
$ sudo apt-key list
[sudo] password:
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2017-04-05 [SC]
      DBE4 6B52 81D0 C816 F630  E889 D980 A174 57F6 FB86
uid           [ unknown] Example <support@example.com>
sub   rsa4096 2017-04-05 [E]
pub   rsa4096 2016-04-12 [SC]
      EB4C 1BFD 4F04 2F6D DDCC  EC91 7721 F63B D38B 4796
uid           [ unknown] Google Inc. (Linux Packages Signing Authority) <linux-packages-keymaster@google.com>
sub   rsa4096 2021-10-26 [S] [expires: 2024-10-25]
[...]
```
Also shown afterward are the keys held in files in the `/etc/apt/trusted.gpg.d` folder.
**[[ Related read How to import your existing SSH keys into your GPG key ]][2]**
### 2. Removing old keys
The group of quartets of hex digits, for example `DBE4 6B52...FB86`, is the identifier required to delete the unwanted keys:
```
$ sudo apt-key del "DBE4 6B52 81D0 C816 F630  E889 D980 A174 57F6 FB86"
```
This gets rid of the Example key. That's literally just an example, and in reality you'd get rid of keys that actually exist. For instance, I ran the same command for each of the real keys on my system, including keys for Google, Signal, and Ascensio. Keys on your system will vary, depending on what you have installed.
### 3. Adding keys
Getting the replacement keys is dependent on the application. For example, Open Whisper offers its key and an explanation of what to do to install it, which I decided not to follow as it puts the key in `/usr/share/keyrings`. Instead, I did this:
```
$ wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
$ sudo mv signal-desktop-keyring.gpg /etc/apt/trusted.gpg.d/
$ sudo chown root:root /etc/apt/trusted.gpg.d/signal-desktop-keyring.gpg
$ sudo chmod ugo+r /etc/apt/trusted.gpg.d/signal-desktop-keyring.gpg
$ sudo chmod go-w /etc/apt/trusted.gpg.d/signal-desktop-keyring.gpg
```
Ascensio also offers instructions for installing OnlyOffice that include dealing with the GPG key. Again, I modified their instructions to suit my needs:
```
$ gpg --no-default-keyring --keyring gnupg-ring:~/onlyoffice.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys CB2DE8E5
$ sudo mv onlyoffice.gpg /etc/apt/trusted.gpg.d/
$ sudo chown root:root /etc/apt/trusted.gpg.d/onlyoffice.gpg
$ sudo chmod ugo+r /etc/apt/trusted.gpg.d/onlyoffice.gpg
$ sudo chmod go-w /etc/apt/trusted.gpg.d/onlyoffice.gpg
```
As for the Google key, it is managed (correctly, it appears) through the `.deb` package, and so a simple reinstall with `dpkg -i` was all that was needed. Finally, I ended up with this:
```
$ ls -l /etc/apt/trusted.gpg.d
total 24
-rw-r--r-- 1 root root 7821 Sep  2 10:55 google-chrome.gpg
-rw-r--r-- 1 root root 2279 Sep  2 08:27 onlyoffice.gpg
-rw-r--r-- 1 root root 2223 Sep  2 08:02 signal-desktop-keyring.gpg
-rw-r--r-- 1 root root 2794 Mar 26  2021 ubuntu-keyring-2012-cdimage.gpg
-rw-r--r-- 1 root root 1733 Mar 26  2021 ubuntu-keyring-2018-archive.gpg
```
### Expired keys
The last problem key I had was from an outdated installation of QGIS. The key had expired, and I'd set it up to be managed by `apt-key`. I ended up following their instructions to the letter, both for installing a new key in `/etc/apt/keyrings` and for their suggested format for the `/etc/apt/sources.list.d/qgis.sources` installation configuration.
**[[ Download the Linux cheat sheets for apt or dnf ]][3]**
### Linux system maintenance
Now you can run `apt update` with no warnings or errors related to deprecated key configurations. We `apt` users just need to remember to adjust any old installation instructions that depend on `apt-key`. Instead of using `apt-key`, install the key into `/etc/apt/trusted.gpg.d/` or `/etc/apt/keyrings/`, using `gpg` as needed.
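If a repository still uses a classic one-line entry in `/etc/apt/sources.list.d/`, the equivalent pattern is the `signed-by` option. The repository URL, suite, and key name below are placeholders:
```
deb [signed-by=/etc/apt/keyrings/example-archive-keyring.gpg] https://example.com/apt stable main
```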
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/deprecated-linux-apt-key
作者:[Chris Hermansen][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/mistake_bug_fix_find_error.png
[2]: https://opensource.com/article/19/4/gpg-subkeys-ssh-multiples
[3]: https://opensource.com/downloads/apt-cheat-sheet

View File

@ -0,0 +1,322 @@
[#]: subject: "How to Install Kubernetes Cluster on Debian 11 with Kubeadm"
[#]: via: "https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/"
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install Kubernetes Cluster on Debian 11 with Kubeadm
======
Are you looking for an easy guide to installing a Kubernetes cluster on Debian 11 (Bullseye)?
This step-by-step guide demonstrates how to install a Kubernetes cluster on Debian 11 with the kubeadm utility.
A Kubernetes (k8s) cluster consists of master and worker nodes, which are used to run containerized applications. The master node acts as the control plane, while the worker nodes provide the environment for the actual workloads.
##### Prerequisites
* Minimal Installed Debian 11
* 2 CPU / vCPU
* 2 GB RAM
* 20 GB free disk space
* Sudo User with Admin rights
* Stable Internet Connectivity
##### Lab Setup
For the demonstration, I am using three Debian 11 systems with the following details:
* Master Node (k8s-master) 192.168.1.236
* Worker Node 1 (k8s-worker1) 192.168.1.237
* Worker Node 2 (k8s-worker2) 192.168.1.238
Without any further delay, let's jump into the installation steps.
### 1) Set Hostname and update /etc/hosts file
Use the `hostnamectl` command to set the hostname on the master and worker nodes.
```
$ sudo hostnamectl set-hostname "k8s-master"       // Run on master node
$ sudo hostnamectl set-hostname "k8s-worker1"      // Run on 1st worker node
$ sudo hostnamectl set-hostname "k8s-worker2"      // Run on 2nd worker node
```
Add the following entries to the `/etc/hosts` file on all the nodes:
```
192.168.1.236       k8s-master
192.168.1.237       k8s-worker1
192.168.1.238       k8s-worker2
```
### 2) Disable Swap on all nodes
For kubelet to work smoothly, it is recommended to disable swap. Run the following commands on the master and worker nodes to turn off swap:
```
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
### 3) Configure Firewall Rules for Kubernetes Cluster
If the OS firewall is enabled on your Debian systems, allow the following ports on the master and worker nodes respectively.
On the master node, run:
```
$ sudo ufw allow 6443/tcp
$ sudo ufw allow 2379/tcp
$ sudo ufw allow 2380/tcp
$ sudo ufw allow 10250/tcp
$ sudo ufw allow 10251/tcp
$ sudo ufw allow 10252/tcp
$ sudo ufw allow 10255/tcp
$ sudo ufw reload
```
On the worker nodes, run:
```
$ sudo ufw allow 10250/tcp
$ sudo ufw allow 30000:32767/tcp
$ sudo ufw reload
```
Note: If the firewall is disabled on your Debian 11 systems, you can skip this step.
### 4) Install Containerd runtime on all nodes
Containerd is an industry-standard container runtime, and it must be installed on all master and worker nodes.
Before installing containerd, set the following kernel parameters on all the nodes:
```
$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
```
To make the above changes take effect, run:
```
$ sudo sysctl --system
```
Now, install containerd by running the following apt commands on all the nodes:
```
$ sudo apt  update
$ sudo apt -y install containerd
```
Configure containerd so that it works with Kubernetes by running the following command on all the nodes:
```
$ containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
```
Set the cgroup driver to systemd on all the nodes.
Edit the file `/etc/containerd/config.toml`, look for the section `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]`, and add `SystemdCgroup = true`:
```
$ sudo vi /etc/containerd/config.toml
```
![systemdCgroup-true-containerd-config-toml][1]
Save and close the file.
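If you prefer a non-interactive edit, a `sed` substitution like the one below should also work, assuming the default configuration generated by `containerd config default` already contains a `SystemdCgroup = false` line (recent containerd releases emit it; otherwise stick with the manual edit above). Verify the result with `grep`:
```
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ grep SystemdCgroup /etc/containerd/config.toml
```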
Restart and enable the containerd service on all the nodes:
```
$ sudo systemctl restart containerd
$ sudo systemctl enable containerd
```
### 5) Enable Kubernetes Apt Repository
Enable the Kubernetes apt repository on all the nodes by running:
```
$ sudo apt install gnupg gnupg2 curl software-properties-common -y
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/cgoogle.gpg
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
```
### 6) Install Kubelet, Kubectl and Kubeadm on all nodes
Run the following apt commands on all the nodes to install the Kubernetes cluster components kubelet, kubectl, and kubeadm:
```
$ sudo apt update
$ sudo apt install kubelet kubeadm kubectl -y
$ sudo apt-mark hold kubelet kubeadm kubectl
```
### 7) Create Kubernetes Cluster with Kubeadm
Now we are all set to create the Kubernetes cluster. Run the following command only on the master node:
```
$ sudo kubeadm init --control-plane-endpoint=k8s-master
```
Output,
![Kubernetes-Control-Plane-Initialization-Debian11][2]
The above output confirms that the control plane has been initialized successfully. The output also contains the commands a regular user needs to interact with the cluster, as well as the command for joining worker nodes to this cluster.
To start interacting with the cluster, run the following commands on the master node:
```
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Run the following kubectl commands to get node and cluster information:
```
$ kubectl get nodes
$ kubectl cluster-info
```
Output of above commands,
![Nodes-Cluster-Info-Kubectl][3]
Join both worker nodes to the cluster by running the kubeadm join command.
Note: Copy the exact command from the output of the kubeadm init command. In my case, this is the command:
```
$ sudo kubeadm join k8s-master:6443 --token ta622t.enl212euq7z87mgj \
  --discovery-token-ca-cert-hash sha256:2be58f54458d0e788c96b8841f811069019161f9a3dd8502a38c773e5c6ead17
```
Output from Worker Node 1,
![Worker-Node1-Join-Kunernetes-Cluster][4]
Output from Worker Node 2,
![Worker-Node2-Join-Kubernetes-Cluster][5]
Check the node status by running the following command from the master node:
```
$ kubectl get nodes
NAME          STATUS     ROLES           AGE     VERSION
k8s-master    NotReady   control-plane   23m     v1.25.0
k8s-worker1   NotReady   <none>          9m27s   v1.25.0
k8s-worker2   NotReady   <none>          2m19s   v1.25.0
$
```
To bring the nodes to the Ready state, we must install a Pod network add-on such as Calico or Flannel.
### 8) Install Calico Pod Network Addon
On the master node, run the following command to install Calico:
```
$ kubectl apply -f https://projectcalico.docs.tigera.io/manifests/calico.yaml
```
Output,
![Install-calico-pod-network-addon-debian11][6]
Allow the Calico ports in the OS firewall by running the following ufw commands on all the nodes:
```
$ sudo ufw allow 179/tcp
$ sudo ufw allow 4789/udp
$ sudo ufw allow 51820/udp
$ sudo ufw allow 51821/udp
$ sudo ufw reload
```
Verify the status of the Calico pods by running:
```
$ kubectl get pods -n kube-system
```
![Calico-Pods-Status-Kuberenetes-Debian11][7]
Perfect. Now check the node status again:
![Nodes-status-after-calico-Installation][8]
Great, the output above confirms that the master and worker nodes are in the Ready state. The cluster is now ready for workloads.
### 9) Test Kubernetes Cluster Installation
To test the Kubernetes cluster installation, let's deploy an nginx-based application via a Deployment. Run the following commands:
```
$ kubectl create deployment nginx-app --image=nginx --replicas 2
$ kubectl expose deployment nginx-app --name=nginx-web-svc --type NodePort --port 80 --target-port 80
$ kubectl describe svc nginx-web-svc
```
Output of above commands,
![Nginx-Based-App-Kubernetes-Cluster-Debian11][9]
Try to access the nginx-based application using the following curl command along with NodePort 30036.
Note: In the curl command, we can use either worker node's hostname.
```
$ curl http://k8s-worker1:30036
```
![Access-Nginx-Based-App-via-NodePort-Kubernetes-Debian11][10]
The output above confirms that we are able to access our nginx-based application.
That's all for this guide. I hope you found it informative and were able to install a Kubernetes cluster on Debian 11 smoothly. Kindly post your queries and feedback in the comments section below.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/wp-content/uploads/2022/09/systemdCgroup-true-containerd-config-toml.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Kubernetes-Control-Plane-Initialization-Debian11.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Nodes-Cluster-Info-Kubectl.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Worker-Node1-Join-Kunernetes-Cluster.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Worker-Node2-Join-Kubernetes-Cluster.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Install-calico-pod-network-addon-debian11.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Calico-Pods-Status-Kuberenetes-Debian11.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Nodes-status-after-calico-Installation.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Nginx-Based-App-Kubernetes-Cluster-Debian11.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2022/09/Access-Nginx-Based-App-via-NodePort-Kubernetes-Debian11.png

View File

@ -0,0 +1,357 @@
[#]: subject: "Using Python and NetworkManager to control the network"
[#]: via: "https://fedoramagazine.org/using-python-and-networkmanager-to-control-the-network/"
[#]: author: "Beniamino Galvani https://fedoramagazine.org/author/bengal/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Using Python and NetworkManager to control the network
======
![][1]
Photo by [Taylor Vick][2] on [Unsplash][3]
[NetworkManager][4] is the default network management service on Fedora and several other Linux distributions. Its main purpose is to take care of things like setting up interfaces, adding addresses and routes to them and configuring other network related aspects of the system, such as DNS.
There are other tools that offer similar functionality. However one of the advantages of NetworkManager is that it offers a powerful API. Using this API, other applications can inspect, monitor and change the networking state of the system.
This article first introduces the API of NetworkManager and presents how to use it from a Python program. In the second part it shows some practical examples: how to connect to a wireless network or to add an IP address to an interface programmatically via NetworkManager.
### The API
NetworkManager provides a D-Bus API. [D-Bus][5] is a message bus system that allows processes to talk to each other; using D-Bus, a process that wants to offer some services can register on the bus with a well-known name (for example, “org.freedesktop.NetworkManager”) and expose some objects, each identified by a path. Using *d-feet*, a graphical tool to inspect D-Bus objects, we can see the object tree exposed by the NetworkManager service:
![][6]
Each object has properties, methods and signals, grouped into different interfaces. For example, the following is a simplified view of the interfaces for the second device object:
![][7]
We see that there are different interfaces; the *org.freedesktop.NetworkManager.Device* interface contains some properties common to all devices, like the state, the MTU and IP configurations. Since this device is Ethernet, it also has a *org.freedesktop.NetworkManager.Device.Wired* D-Bus interface containing other properties such as the link speed.
The full documentation for the [D-Bus API of NetworkManager is here.][8]
A client can connect to the NetworkManager service using the well-known name and perform operations on the exposed objects. For example, it can invoke methods, access properties or receive notifications via signals. In this way, it can control almost every aspect of network configuration. In fact, all the tools that interact with NetworkManager (nmcli, nmtui, GNOME control center, the KDE applet, Cockpit) use this API.
### libnm
When developing a program, it can be convenient to automatically instantiate objects from the objects available on D-Bus and keep their properties synchronized; or to be able to have method calls on those objects automatically dispatched to the corresponding D-Bus method. Such objects are usually called *proxies* and are used to hide the complexity of D-Bus communication from the developer.
For this purpose, the NetworkManager project provides a library called **libnm**, written in C and based on GNOME's GLib and GObject. The library provides C language bindings for functionality provided by NetworkManager. Being a GLib library, it is usable from other languages as well via GObject introspection, as explained below.
The library maps fairly closely to the D-Bus API of NetworkManager. It wraps remote D-Bus objects as native GObjects, and D-Bus signals and properties to GObject signals and properties. Furthermore, it provides helpful accessors and utility functions.
### Overview of libnm objects
The diagram below shows the most important objects in libnm and their relationship:
![][9]
*NMClient* caches all the objects instantiated from D-Bus. The object is typically created at the beginning of the program and provides a way to access other objects.
A *NMDevice* represents a network interface, physical (as Ethernet, Infiniband, Wi-Fi, etc.) or virtual (as a bridge or an IP tunnel). Each device type supported by NetworkManager has a dedicated subclass that implements type-specific properties and methods. For example, a [NMDeviceWifi][10] has properties related to the wireless configuration and to access points found during the scan, while a [NMDeviceVlan][11] has properties describing its VLAN-id and the parent device.
*NMClient* also provides a list of *NMRemoteConnection* objects. *NMRemoteConnection* is one of the two implementations of the *NMConnection* interface. A *connection* (or *connection profile*) contains all the configuration needed to connect to a specific network.
The difference between a *NMRemoteConnection* and a *NMSimpleConnection* is that the former is a proxy for a connection existing on D-Bus while the latter is not. In particular, *NMSimpleConnection* can be instantiated when a new blank connection object is required. This is useful, for example, when adding a new connection to NetworkManager.
The last object in the diagram is *NMActiveConnection*. This represents an active connection to a specific network using settings from a *NMRemoteConnection*.
### GObject introspection
[GObject introspection][12] is a layer that acts as a bridge between a C library using GObject and programming language runtimes such as JavaScript, Python, Perl, Java, Lua, .NET, Scheme, etc.
When the library is built, sources are scanned to generate introspection metadata describing, in a language-agnostic way, all the constants, types, functions, signals, etc. exported by the library. The resulting metadata is used to automatically generate bindings to call into the C library from other languages.
One form of metadata is a GObject Introspection Repository (GIR) XML file. GIRs are mostly used by languages that generate bindings at compile time. The GIR can be translated into a machine-readable format called Typelib that is optimized for fast access and lower memory footprint; for this reason it is mostly used by languages that generate bindings at runtime.
[This page][13] lists all the introspection bindings for other languages. For a Python example we will use [PyGObject][14] which is included in the *python3-gobject* RPM on Fedora.
### A basic example
Let's start with a simple Python program that prints information about the system:
```
import gi
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM
client = NM.Client.new(None)
print("version:", client.get_version())
```
At the beginning we import the introspection module and then the Glib and NM modules. Since there could be multiple versions of the NM module in the system, we make certain to load the right one. Then we create a client object and print the version of NetworkManager.
Next, we want to get a list of devices and print some of their properties:
```
devices = client.get_devices()
print("devices:")
for device in devices:
    print(" - name:", device.get_iface())
    print("   type:", device.get_type_description())
    print("   state:", device.get_state().value_nick)
```
The device state is an enum of type *NMDeviceState* and we use value_nick to get its description. The output is something like:
```
version: 1.41.0
devices:
 - name: lo
   type: loopback
   state: unmanaged
 - name: enp1s0
   type: ethernet
   state: activated
 - name: wlp4s0
   type: wifi
   state: activated
```
In the libnm documentation we see that the [NMDevice][15] object has a get_ip4_config() method which returns a NMIPConfig object and provides access to addresses, routes and other parameters currently set on the device. We can print them with:
```
    ip4 = device.get_ip4_config()
    if ip4 is not None:
        print("   addresses:")
        for a in ip4.get_addresses():
            print("    - {}/{}".format(a.get_address(),
                                       a.get_prefix()))
        print("   routes:")
        for r in ip4.get_routes():
            print("    - {}/{} via {}".format(r.get_dest(),
                                              r.get_prefix(),
                                              r.get_next_hop()))
```
From this, the output for enp1s0 becomes:
```
 - name: enp1s0
   type: ethernet
   state: activated
   addresses:
    - 192.168.122.191/24
    - 172.26.1.1/16
   routes:
    - 172.26.0.0/16 via None
    - 192.168.122.0/24 via None
    - 0.0.0.0/0 via 192.168.122.1
```
### Connecting to a Wi-Fi network
Now that we have mastered the basics, lets try something more advanced. Suppose we are in the range of a wireless network and we want to connect to it.
As mentioned before, a connection profile describes all the settings required to connect to a specific network. Conceptually, we'll need to perform two different operations: first insert a new connection profile into NetworkManager's configuration, and second activate it. Fortunately, the API provides the method [nm_client_add_and_activate_connection_async()][16] that does everything in a single step. When calling the method we need to pass at least the following parameters:
* the NMConnection we want to add, containing all the needed properties;
* the device to activate the connection on;
* the callback function to invoke when the method completes asynchronously.
We can construct the connection with:
```
def create_connection():
    connection = NM.SimpleConnection.new()
    ssid = GLib.Bytes.new("Home".encode("utf-8"))
    s_con = NM.SettingConnection.new()
    s_con.set_property(NM.SETTING_CONNECTION_ID,
                       "my-wifi-connection")
    s_con.set_property(NM.SETTING_CONNECTION_TYPE,
                       "802-11-wireless")
    s_wifi = NM.SettingWireless.new()
    s_wifi.set_property(NM.SETTING_WIRELESS_SSID, ssid)
    s_wifi.set_property(NM.SETTING_WIRELESS_MODE,
                        "infrastructure")
    s_wsec = NM.SettingWirelessSecurity.new()
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_KEY_MGMT,
                        "wpa-psk")
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_PSK,
                        "z!q9at#0b1")
    s_ip4 = NM.SettingIP4Config.new()
    s_ip4.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")
    s_ip6 = NM.SettingIP6Config.new()
    s_ip6.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")
    connection.add_setting(s_con)
    connection.add_setting(s_wifi)
    connection.add_setting(s_wsec)
    connection.add_setting(s_ip4)
    connection.add_setting(s_ip6)
    return connection
```
The function creates a new *NMSimpleConnection* and sets all the needed properties. All the properties are grouped into *settings*. In particular, the *NMSettingConnection* setting contains general properties such as the profile name and its type. *NMSettingWireless* indicates the wireless network name (SSID) and that we want to operate in “infrastructure” mode, that is, as a wireless client. The wireless security setting specifies the authentication mechanism and a password. We set both IPv4 and IPv6 to “auto” so that the interface gets addresses via DHCP and IPv6 autoconfiguration.
All the properties supported by NetworkManager are described in the *nm-settings* man page and in the “Connection and Setting API Reference”[section][17] of the libnm documentation.
To find a suitable interface, we loop through all devices on the system and return the first Wi-Fi device.
```
def find_wifi_device(client):
    for device in client.get_devices():
        if device.get_device_type() == NM.DeviceType.WIFI:
            return device
    return None
```
What is missing now is a callback function, but it's easier if we look at it later. We can proceed by invoking the add_and_activate_connection_async() method:
```
import gi
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM
# other functions here...
main_loop = GLib.MainLoop()
client = NM.Client.new(None)
connection = create_connection()
device = find_wifi_device(client)
client.add_and_activate_connection_async(
    connection, device, None, None, add_and_activate_cb, None
)
main_loop.run()
```
To support multiple asynchronous operations without blocking execution of the whole program, libnm uses an [event loop][18] mechanism. For an introduction to event loops in GLib see [this tutorial][19]. The call to main_loop.run() waits until there are events (such as the callback for our method invocation, or any update from D-Bus). Event processing continues until the main loop is explicitly terminated. This happens in the callback:
```
def add_and_activate_cb(client, result, data):
    try:
        ac = client.add_and_activate_connection_finish(result)
        print("ActiveConnection {}".format(ac.get_path()))
        print("State {}".format(ac.get_state().value_nick))
    except Exception as e:
        print("Error:", e)
    main_loop.quit()
```
Here, we use client.add_and_activate_connection_finish() to get the result for the asynchronous method. The result is a *NMActiveConnection* object and we print its D-Bus path and state.
Note that the callback is invoked as soon as the active connection is created. It may still be attempting to connect. In other words, when the callback runs we don't have a guarantee that the activation completed successfully. If we want to ensure that, we would need to monitor the active connection state until it changes to *activated* (or to *deactivated* in case of failure). In this example, we just print that the activation started, or why it failed, and then we quit the main loop; after that, the main_loop.run() call will end and our program will terminate.
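As a rough sketch of what that monitoring could look like, the callback could subscribe to the active connection's state-changed signal instead of quitting right away. This is only an illustration (error handling omitted; it assumes the NM.ActiveConnectionState enum and the state-changed signal of NMActiveConnection), not part of the example above:
```
def ac_state_changed_cb(ac, state, reason):
    # Quit the loop only once activation clearly succeeded or failed
    if state == NM.ActiveConnectionState.ACTIVATED:
        print("Activation completed successfully")
        main_loop.quit()
    elif state == NM.ActiveConnectionState.DEACTIVATED:
        print("Activation failed, reason:", reason)
        main_loop.quit()

def add_and_activate_cb(client, result, data):
    ac = client.add_and_activate_connection_finish(result)
    # Keep the main loop running and watch the state instead of quitting
    ac.connect("state-changed", ac_state_changed_cb)
```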
### Adding an address to a device
Once there is a connection active on a device, we might decide that we want to configure an additional IP address on it.
There are different ways to do that. One way would be to modify the profile and activate it again similar to what we saw in the previous example. Another way is by changing the runtime configuration of the device without updating the profile on disk.
To do that, we use the [reapply()][20] method. It requires at least the following parameters:
* the NMDevice on which to apply the new configuration;
* the NMConnection containing the configuration.
Since we only want to change the IP address and leave everything else unchanged, we first need to retrieve the current configuration of the device (also called the “*applied connection*”). Then we update it with the static address and reapply it to the device.
The applied connection, not surprisingly, can be queried with the [get_applied_connection()][21] method of NMDevice. Note that the method also returns a version id that can be useful during the reapply to avoid race conditions with other clients. For simplicity we are not going to use it.
In this example we suppose that we already know the name of the device we want to update:
```
import gi
import socket
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM
# other functions here...
main_loop = GLib.MainLoop()
client = NM.Client.new(None)
device = client.get_device_by_iface("enp1s0")
device.get_applied_connection_async(0, None, get_applied_cb, None)
main_loop.run()
```
The callback function retrieves the applied connection from the result, changes the IPv4 configuration and reapplies it:
```
def get_applied_cb(device, result, data):
    (connection, v) = device.get_applied_connection_finish(result)
    s_ip4 = connection.get_setting_ip4_config()
    s_ip4.add_address(NM.IPAddress.new(socket.AF_INET,
                                       "172.25.12.1",
                                       24))
    device.reapply_async(connection, 0, 0, None, reapply_cb, None)
```
Omitting exception handling for brevity, the reapply callback is as simple as:
```
def reapply_cb(device, result, data):
    device.reapply_finish(result)
    main_loop.quit()
```
When the program quits, we will see the new address configured on the interface.
### Conclusion
This article introduced the D-Bus and libnm API of NetworkManager and presented some practical examples of its usage. Hopefully it will be useful when you need to develop your next project that involves networking!
Besides the examples presented here, the NetworkManager git tree includes [many others][22] for different programming languages. To stay up-to-date with the news from NetworkManager world, follow the [blog][23].
### References
* [NetworkManager documentation][24]
* [PyGObject documentation][25]
* [Notes on GMainLoop and GMainContext][26]
* [Notes on NetworkManager D-Bus API][27]
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/using-python-and-networkmanager-to-control-the-network/
作者:[Beniamino Galvani][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/bengal/
[b]: https://github.com/lkxed
[1]: https://fedoramagazine.org/wp-content/uploads/2022/08/python_and_networkmanager-816x345.jpg
[2]: https://unsplash.com/es/@tvick?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/computer-network?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://networkmanager.dev/
[5]: https://www.freedesktop.org/wiki/Software/dbus/
[6]: https://fedoramagazine.org/wp-content/uploads/2022/08/d-feet-objects.png
[7]: https://fedoramagazine.org/wp-content/uploads/2022/08/dev.png
[8]: https://networkmanager.dev/docs/api/latest/
[9]: https://fedoramagazine.org/wp-content/uploads/2022/08/libnm.png
[10]: https://networkmanager.dev/docs/libnm/latest/NMDeviceWifi.html
[11]: https://networkmanager.dev/docs/libnm/latest/NMDeviceVlan.html
[12]: https://gi.readthedocs.io/en/latest/
[13]: https://gi.readthedocs.io/en/latest/users.html
[14]: https://pygobject.readthedocs.io/en/latest/
[15]: https://networkmanager.dev/docs/libnm/latest/NMDevice.html
[16]: https://networkmanager.dev/docs/libnm/latest/NMClient.html#nm-client-add-and-activate-connection-async
[17]: https://networkmanager.dev/docs/libnm/latest/ch03.html
[18]: https://en.wikipedia.org/wiki/Event_loop
[19]: https://developer.gnome.org/documentation/tutorials/main-contexts.html
[20]: https://networkmanager.dev/docs/libnm/latest/NMDevice.html#nm-device-reapply-async
[21]: https://networkmanager.dev/docs/libnm/latest/NMDevice.html#nm-device-get-applied-connection-async
[22]: https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/tree/1.40.0/examples
[23]: https://networkmanager.dev/blog/
[24]: https://networkmanager.dev/docs/developers/
[25]: https://pygobject.readthedocs.io/en/latest/
[26]: https://developer.gnome.org/documentation/tutorials/main-contexts.html
[27]: https://networkmanager.dev/blog/notes-on-dbus/

View File

@ -0,0 +1,71 @@
[#]: subject: "What is OpenRAN?"
[#]: via: "https://opensource.com/article/22/9/open-radio-access-networks"
[#]: author: "Stephan Avenwedde https://opensource.com/users/hansic99"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What is OpenRAN?
======
Open Radio Access Network defines open standards between the various components of radio access networks.
![4 open music players compared: VLC, QMMP, Clementine, and Amarok][1]
Image by: Opensource.com
If you own and use a smartphone capable of connecting to arbitrary computers all over the world, then you are a user of Radio Access Networks (RAN). A RAN is provided by your cellular provider, and it handles wireless connections between your smartphone and your communication provider.
While your smartphone may be running an open source operating system (Android) and the server you try to access is probably running Linux, there's a lot of proprietary technology in between to make the connection happen. While you may have a basic understanding of how networking works locally, this knowledge stops when you plug a SIM card into your smartphone in order to make a connection with a cell tower possible. In fact, the majority of software and hardware components in and around a cell tower are still closed source, which of course has some drawbacks. This is where OpenRAN comes into play.
The OpenRAN initiative (shorthand for Open Radio Access Network) was started by the [O-RAN Alliance][2], a worldwide community of mobile operators, vendors, and research and academic institutions. The initiative aims to define open standards between the various components of radio access networks. Until now, interoperability between components from different manufacturers was not possible.
### Radio Access Network
But what exactly is a RAN? In a nutshell, a RAN establishes a wireless connection to devices (smartphones, for example) and connects them to the core network of the communication company. In the context of a RAN, devices are denoted as User Equipment (UE).
The tasks of a RAN can be summarized as follows:
* Authentication of UE
* The handover of UE to another RAN (if the UE is moving)
* Forwarding the data between the UE and the core network
* Provision of the data for accounting functions (billing of services or the transmitted data)
* Control of access to the various services
### OpenRAN
A RAN usually consists of proprietary components. OpenRAN defines functional units and interfaces between them:
* Radio Unit (RU): The RU is connected to the antenna and sends, receives, amplifies, and digitizes radio signals.
* Distributed Unit (DU): Handles the [PHY][3], [MAC][4] and [RLC][5] layer.
* Centralised Unit (CU): Handles the [RRC][6] and [PDCP][7] layer.
* RAN Intelligent Controller (RIC): Control and optimization of RAN elements and resources.
Units are connected to each other by standardized, open interfaces. Furthermore, if the units can be virtualized and deployed in the cloud or on an [edge device][8], then it's called a **vRAN** (virtual Radio Access Network). The basic principle of vRAN is to decouple the hardware from the software by using a software-based virtualization layer. Using a vRAN improves flexibility in terms of scalability and the underlying hardware.
### OpenRAN for everyone
By the definition of functional units and the interfaces between them, OpenRAN enables interoperability of components from different manufacturers. This reduces the dependency for cellular providers of specific vendors and makes communication infrastructure more flexible and resilient. As a side-effect, using clearly defined functional units and interfaces drives innovation and competition. With vRAN, the use of standard hardware is possible. With all these advantages, OpenRAN is a prime example of how open source benefits everyone.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/open-radio-access-networks
作者:[Stephan Avenwedde][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/osdc-lead-stereo-radio-music.png
[2]: https://www.o-ran.org/
[3]: https://en.wikipedia.org/wiki/Physical_layer#PHY
[4]: https://en.wikipedia.org/wiki/Medium_access_control
[5]: https://en.wikipedia.org/wiki/Radio_Link_Control
[6]: https://en.wikipedia.org/wiki/Radio_Resource_Control
[7]: https://en.wikipedia.org/wiki/Packet_Data_Convergence_Protocol
[8]: https://www.redhat.com/en/topics/edge-computing/what-is-edge-computing?intcmp=7013a000002qLH8AAM

View File

@ -0,0 +1,36 @@
[#]: subject: "Google Uses Fully Homomorphic Open Source Duality-Led Encryption Library"
[#]: via: "https://www.opensourceforu.com/2022/09/google-uses-fully-homomorphic-open-source-duality-led-encryption-library/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "littlebirdnest"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
谷歌使用完全同态开源对偶主导的加密库
======
合作伙伴关系的增长加速了 FHE 市场的采用
根据 Duality Technologies 的新闻稿,谷歌已将其开源项目“完全同态加密FHE转译器”该项目使用 GitHub 上开源的 XLS SDK 开发)与领先的开源全同态加密库 OpenFHE 合并。让开发人员适应 FHE将使密码学知识更简单、更易于理解。
一类称为 FHE 的加密技术不同于更常见的加密技术,因为它可以直接对加密数据进行计算,而无需密钥。一个由知名密码学家组成的社区创建了 OpenFHE这是一个根深于后量子开源晶格密码学的加密库。
该库旨在实现最佳可用性、增强的 API、模块化、跨平台可移植性以及与硬件结合时的项目加速器。开发人员可以使用高级代码例如 C++)操作加密数据,而 C++ 经常用于未加密的数据,通过将 OpenFHE 与 Google 的 Transpiler 相结合,就无需学习密码学。
Google 的转译器简化了构建 FHE 驱动的应用程序的过程,而无需目前从头开始构建 FHE 所需的大量软件开发专业知识。这填补了软件设计人员和开发人员时常遇到的空白:他们希望从 FHE 的功能中受益,而不必经历那条充满挑战的学习曲线。
Duality 密码学研究高级主管兼首席科学家 Yuriy Polyakov 补充说:“我们的团队通过 OpenFHE 库实现了重要的里程碑,它已迅速成为当今许多技术领导者(例如谷歌)的选择。Google 转译器为那些并非同态加密FHE专家的开发者社区例如应用程序开发人员提供了 OpenFHE 同态加密的最新技术。”
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/09/google-uses-fully-homomorphic-open-source-duality-led-encryption-library/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[littlebirdnest](https://github.com/littlebirdnest)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed

View File

@ -0,0 +1,225 @@
[#]: subject: "Turn your Python script into a command-line application"
[#]: via: "https://opensource.com/article/22/7/bootstrap-python-command-line-application"
[#]: author: "Mark Meyer https://opensource.com/users/ofosos"
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
将你的 Python 脚本转换为命令行程序
======
使用 Python 中的 scaffold 和 click 库,你可以将一个简单的实用程序升级为一个成熟的命令行界面工具。
![Python 吉祥物和 Linux 的吉祥物企鹅][1]
Image by: Opensource.com
在我的职业生涯中,我写过、用过和看到过很多松散的脚本。一些人需要半自动化的任务,于是它们诞生了。一段时间后,它们变得越来越大。它们在一生中可能转手很多次。我常常希望这些脚本提供更多的命令行**类似工具**的感觉。但是,从一次性脚本到合适的工具,真正提高质量水平有多难呢?事实证明这在 Python 中并不难。
### Scaffolding
在本文中,我将从一小段 Python 代码开始。我将把它应用到 `scaffold` 模块中,并使用 `click` 库扩展它以接受命令行参数。
```
#!/usr/bin/python
from glob import glob
from os.path import join, basename
from shutil import move
from datetime import datetime
from os import link, unlink
LATEST = 'latest.txt'
ARCHIVE = '/Users/mark/archive'
INCOMING = '/Users/mark/incoming'
TPATTERN = '%Y-%m-%d'
def transmogrify_filename(fname):
    bname = basename(fname)
    ts = datetime.now().strftime(TPATTERN)
    return '-'.join([ts, bname])
def set_current_latest(file):
    latest = join(ARCHIVE, LATEST)
    try:
        unlink(latest)
    except:
        pass
    link(file, latest)
def rotate_file(source):
    target = join(ARCHIVE, transmogrify_filename(source))
    move(source, target)
    set_current_latest(target)
def rotoscope():
    file_no = 0
    folder = join(INCOMING, '*.txt')
    print(f'Looking in {INCOMING}')
    for file in glob(folder):
        rotate_file(file)
        print(f'Rotated: {file}')
        file_no = file_no + 1
    print(f'Total files rotated: {file_no}')
if __name__ == '__main__':
    print('This is rotoscope 0.4.1. Bleep, bloop.')
    rotoscope()
```
本文的所有非内联代码示例,你都可以在 [https://codeberg.org/ofosos/rotoscope][2] 中找到特定版本的代码。该仓库中的每个提交都描述了本文操作过程中一些有意义的步骤。
这个片段做了几件事:
* 检查 `INCOMING` 指定的路径中是否有文本文件
* 如果存在,则使用当前时间戳创建一个新文件名,并将其移动到 `ARCHIVE`
* 删除当前的 `ARCHIVE/latest.txt` 链接,并创建一个指向刚刚添加文件的新链接
作为一个示例,它很简单,但它会让你理解这个过程。
### 使用 pyscaffold 创建应用程序
首先,你需要安装 `pyscaffold`、`click` 和 `tox` [Python 库][3]。
```
$ python3 -m pip install pyscaffold click tox
```
安装 `pyscaffold` 后,切换到示例 `rotoscope` 项目所在的目录,然后执行以下命令:
```
$ putup rotoscope -p rotoscope \
--force --no-skeleton -n rotoscope \
-d 'Move some files around.' -l GLWT \
-u http://codeberg.org/ofosos/rotoscope \
--save-config --pre-commit --markdown
```
Pyscaffold 会重写我的 `README.md`,所以从 Git 恢复它:
```
$ git checkout README.md
```
Pyscaffold 在文档中说明了如何设置一个完整的示例项目我不会在这里介绍你之后可以探索。除此之外Pyscaffold 还可以在项目中为你提供持续集成CI模板。
* 打包: 你的项目现在启用了 PyPi所以你可以将其上传到一个仓库并从那里安装它。
* 文档: 你的项目现在有了一个完整的文档文件夹层次结构,它基于 Sphinx包括一个 readthedocs.org 构建器。
* 测试: 你的项目现在可以与 tox 一起使用,测试文件夹包含运行基于 pytest 的测试所需的所有样板文件。
* 依赖管理: 打包和测试基础结构都需要一种管理依赖关系的方法。`setup.cfg` 文件解决了这个问题,它包含所有依赖项。
* 预提交钩子: 包含 Python 源代码格式工具 "black" 和 Python 风格检查器 "flake8"。
查看测试文件夹并在项目目录中运行 `tox` 命令,它会立即输出一个错误:打包基础设施无法找到相关库。
现在创建一个 `Git` 标记(例如 `v0.2`),此工具会将其识别为可安装版本。在提交更改之前,浏览一下自动生成的 `setup.cfg` 并根据需要编辑它。对于此示例,你可以修改 `LICENSE` 和项目描述,将这些更改添加到 Git 的暂存区,我必须禁用预提交钩子,然后提交它们。否则,我会遇到错误,因为 Python 风格检查器 flake8 会抱怨糟糕的格式。
```
$ PRE_COMMIT_ALLOW_NO_CONFIG=1 git commit
```
如果这个脚本有一个入口点,用户可以从命令行调用,那就更好了。现在,你只能通过找 `.py` 文件并手动执行它来运行。幸运的是Python 的打包基础设施有一个很好的“罐装”方式,可以轻松地进行配置更改。将以下内容添加到 `setup.cfg``options.entry_points` 部分:
```
console_scripts =
    roto = rotoscope.rotoscope:rotoscope
```
这个更改会创建一个名为 `roto` 的 shell 命令,你可以使用它来调用 rotoscope 脚本,使用 `pip` 安装 rotoscope 后,可以使用 `roto` 命令。
就是这样,你可以从 Pyscaffold 免费获得所有打包、测试和文档设置。你还获得了一个预提交钩子来保证(大部分情况下)你按照设定规则提交。
### CLI 工具
现在,一些值会硬编码到脚本中,它们作为命令[参数][4]会更方便。例如,将 `INCOMING` 常量作为命令行参数会更好。
首先,导入 [click][5] 库,使用 click 提供的命令装饰器对 `rotoscope()` 方法进行装饰,并添加一个 Click 传递给 `rotoscope` 函数的参数。Click 提供了一组验证器因此要向参数添加一个路径验证器。Click 还方便地使用函数的内嵌字符串作为命令行文档的一部分。所以你最终会得到以下方法签名:
```
@click.command()
@click.argument('incoming', type=click.Path(exists=True))
def rotoscope(incoming):
    """
    Rotoscope 0.4 - Bleep, blooop.
    Simple sample that move files.
    """
```
主函数会调用 `rotoscope()`,它现在是一个 Click 命令,不需要传递任何参数。
选项也可以使用[环境变量][6]自动填充。例如,将 `ARCHIVE` 常量改为一个选项:
```
@click.option('archive', '--archive', default='/Users/mark/archive', envvar='ROTO_ARCHIVE', type=click.Path())
```
使用相同的路径验证器。这一次,让 Click 填充环境变量,如果环境变量没有提供任何内容,则默认为旧常量的值。
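下面是一个示意性的用法例子(假设你已经通过 `pip install -e .` 在本地安装了本项目,目录路径仅为演示):
```
$ ROTO_ARCHIVE=/tmp/archive roto /tmp/incoming
$ roto /tmp/incoming --archive /tmp/archive
```
两条命令效果相同:前者通过环境变量指定归档目录,后者通过 `--archive` 选项显式指定。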
Click 可以做更多的事情,它有彩色的控制台输出、提示和子命令,可以让你构建复杂的 CLI 工具。浏览 Click 文档会发现它的更多功能。
现在添加一些测试。
### 测试
Click 对使用 CLI 运行器[运行端到端测试][7]提供了一些建议。你可以用它来实现一个完整的测试(在[示例项目][8]中,测试在 `tests` 文件夹中。)
测试位于测试类的一个方法中。大多数约定与我在任何其他 Python 项目中使用的非常接近,但有一些细节,因为 rotoscope 使用 `click`。在 `test` 方法中,我创建了一个 `CliRunner`。测试使用它在一个隔离的文件系统中运行此命令。然后测试在隔离的文件系统中创建 `incoming``archive` 目录和一个虚拟的 `incoming/test.txt` 文件,然后它调用 CliRunner就像你调用命令行应用程序一样。运行完成后测试会检查隔离的文件系统并验证 `incoming` 为空,并且 `archive` 包含两个文件(最新链接和存档文件)。
```
from os import listdir, mkdir
from click.testing import CliRunner
from rotoscope.rotoscope import rotoscope
class TestRotoscope:
    def test_roto_good(self, tmp_path):
        runner = CliRunner()
        with runner.isolated_filesystem(temp_dir=tmp_path) as td:
            mkdir("incoming")
            mkdir("archive")
            with open("incoming/test.txt", "w") as f:
                f.write("hello")
            result = runner.invoke(rotoscope, ["incoming", "--archive", "archive"])
            assert result.exit_code == 0
            print(td)
            incoming_f = listdir("incoming")
            archive_f = listdir("archive")
            assert len(incoming_f) == 0
            assert len(archive_f) == 2
```
要在控制台上执行这些测试,在项目的根目录中运行 `tox`
在执行测试期间,我在代码中发现了一个错误。当我进行 Click 转换时rotoscope 只是取消了最新文件的链接,无论它是否存在。测试从一个新的文件系统(不是我的主文件夹)开始,很快就失败了。我可以通过在一个很好的隔离和自动化测试环境中运行来防止这种错误。这将避免很多“它在我的机器上正常工作”的问题。
### Scaffolding 和模块
本文到此结束,我们可以使用 `scaffold``click` 完成一些高级操作。有很多方法可以升级一个普通的 Python 脚本,甚至可以将你的简单实用程序变成成熟的 CLI 工具。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/7/bootstrap-python-command-line-application
作者:[Mark Meyer][a]
选题:[lkxed][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ofosos
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/python_linux_tux_penguin_programming.png
[2]: https://codeberg.org/ofosos/rotoscope
[3]: https://opensource.com/article/19/5/python-tox
[4]: https://opensource.com/article/21/8/linux-terminal#argument
[5]: https://click.palletsprojects.com
[6]: https://opensource.com/article/19/8/what-are-environment-variables
[7]: https://click.palletsprojects.com/en/8.1.x/testing
[8]: https://codeberg.org/ofosos/rotoscope/commit/dfa60c1bfcb1ac720ad168e5e98f02bac1fde17d

View File

@ -0,0 +1,153 @@
[#]: subject: "How I recovered my Linux system using a Live USB device"
[#]: via: "https://opensource.com/article/22/9/recover-linux-system-live-usb"
[#]: author: "David Both https://opensource.com/users/dboth"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
我如何使用 Live USB 设备恢复我的 Linux 系统
======
Fedora Live USB 发行版为引导和进入恢复模式提供了有效的解决方案。
![USB 驱动器][1]
图片来源:[Markus Winkler][2] 发布于 [Unsplash][3]
我的家庭实验室里有十几台物理计算机以及更多的虚拟机。我使用这些系统中的大多数进行测试和实验。我经常写关于使用自动化来简化系统管理任务的文章。我还在多个地方写过,我从自己的错误中学到的东西比几乎任何其他方式都多。
在过去的几周里,我学到了很多东西。
我给自己制造了一个大问题。作为系统管理员多年,写了数百篇关于 Linux 的文章和五本书,我应该知道得更清楚。话又说回来,我们都会犯错,这是一个重要的教训:你永远不会因为有经验而不犯错。
我不打算讨论我的错误的细节。告诉你这是一个错误就足够了,在我做之前我应该多考虑一下我在做什么。此外,细节并不是重点。经验不能让你免于犯下的每一个错误,但它可以帮助你恢复。这就是本文要讨论的内容:使用 Live USB 发行版启动并进入恢复模式。
### 问题
首先,我创建了问题,这本质上是 `/etc/default/grub` 文件的错误配置。接下来,我使用 Ansible 将错误配置的文件分发到我所有的物理计算机并运行 `grub2-mkconfig`。全部 12 个。这真的,真的很快。
除了两台之外,所有的都无法启动。它们在 Linux 启动的早期阶段崩溃,出现各种无法定位 `/root` 文件系统的错误。
我可以使用 root 密码进入“维护”模式,但是如果没有挂载 `/root`,即使是最简单的工具也无法访问。直接引导到恢复内核也不起作用。系统真的被破坏了。
### Fedora 恢复模式
解决此问题的唯一方法是找到进入恢复模式的方法。当一切都失败时Fedora 提供了一个非常酷的工具:用于安装 Fedora 新实例的同一个 Live USB 驱动器。
将 BIOS 设置为从 Live USB 设备启动后,我启动到 Fedora 36 Xfce live 用户桌面。我在桌面上打开了两个相邻的终端会话,并在两者中都切换到了 root 权限。
我在一个中运行了 `lsblk` 以供参考。我使用结果来识别 `/` 根分区以及 `boot``efi` 分区。我使用了我的一台虚拟机,如下所示。在这种情况下没有 `efi` 分区,因为此 VM 不使用 UEFI。
```
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 1.5G 1 loop
loop1 7:1 0 6G 1 loop
├─live-rw 253:0 0 6G 0 dm /
└─live-base 253:1 0 6G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:0 0 6G 0 dm /
sda 8:0 0 120G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 119G 0 part
├─vg01-swap 253:2 0 4G 0 lvm
├─vg01-tmp 253:3 0 10G 0 lvm
├─vg01-var 253:4 0 20G 0 lvm
├─vg01-home 253:5 0 5G 0 lvm
├─vg01-usr 253:6 0 20G 0 lvm
└─vg01-root 253:7 0 5G 0 lvm
sr0 11:0 1 1.6G 0 rom /run/initramfs/live
zram0 252:0 0 8G 0 disk [SWAP]
```
`/dev/sda1` 分区很容易识别为 `/boot`,根分区也很明显。
在另一个终端会话中,我执行了一系列步骤来恢复我的系统。特定的卷组名称和设备分区(例如 `/dev/sda1`)因系统而异。此处显示的命令特定于我的情况。
目标是使用 Live USB 引导并完成启动,然后在一个镜像目录中只挂载必要的文件系统,再运行 `chroot` 命令,在这个 chroot 镜像目录中运行 Linux。这种方法绕过了损坏的 GRUB或其他配置文件同时提供了一个完整的运行系统其中挂载了所有原始文件系统它既是恢复所需工具的来源也是要进行更改的目标。
以下是步骤和相关命令:
1. 创建目录 `/mnt/sysimage` 以提供 `chroot` 目录的位置。
2. 将根分区挂载到 `/mnt/sysimage`
```
# mount /dev/mapper/vg01-root /mnt/sysimage
```
3. 将 `/mnt/sysimage` 设为你的工作目录:
```
# cd /mnt/sysimage
```
4. 挂载 `/boot` 和 `/boot/efi` 文件系统。
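以上面 `lsblk` 的输出为例(分区名因系统而异;`/boot/efi` 仅 UEFI 系统才有,如果有,再把对应的 ESP 分区挂载到 `boot/efi`
```
# mount /dev/sda1 boot
```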
5. 挂载其他主要文件系统。此步骤不需要挂载像 `/home` 和 `/tmp` 这样的文件系统:
```
# mount /dev/mapper/vg01-usr usr
# mount /dev/mapper/vg01-var var
```
6. 挂载那些必须在 chroot 系统和仍在外部运行的原始 Live 系统之间共享的重要文件系统:
```
# mount --bind /sys sys
# mount --bind /proc proc
```
7. 一定要最后操作 `/dev` 目录,否则其他文件系统不会挂载:
```
# mount --bind /dev dev
```
8. chroot 系统镜像:
```
# chroot /mnt/sysimage
```
系统现在已经准备好了,无论你需要做什么,都可以把它恢复到一个工作状态。然而,有一次我能够在这种状态下运行我的服务器数天,直到我能够研究和测试真正的修复方法。我并不推荐这样做,但在紧急情况下,当有任务需要启动和运行时,这可能是一个选择。
### 解决方案
当我让每个系统进入恢复模式后,修复就很容易了。因为我的系统此时就像成功启动后一样工作,我只需对 `/etc/default/grub` 和 `/etc/fstab` 进行必要的更改,并运行 `grub2-mkconfig > /boot/grub2/grub.cfg` 命令。然后我使用 `exit` 命令退出 chroot重启主机。
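整理成命令序列大致如下(在 chroot 环境中执行;具体的配置项和分区以你的系统为准):
```
# vi /etc/default/grub                    # 修正错误的配置项
# grub2-mkconfig > /boot/grub2/grub.cfg
# exit                                    # 退出 chroot
# reboot
```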
当然,我无法自动从我的意外事故中恢复过来。我必须在每台主机上手动执行整个过程,这是使用自动化快速和容易地传播我自己的错误的一点报应。
### 得到教训
尽管“经验教训”会议很有用,我以前在一些系统管理员工作中却很讨厌参加这种会议,但看来我确实需要提醒自己一些事情。因此,这里是我从这场自作自受的惨败中获得的“教训”。
首先,无法引导的十个系统使用了不同的卷组命名方案,而我的新 GRUB 配置没有考虑到这一点。我只是忽略了它们可能不同的事实。
* 彻底考虑清楚。
* 并非所有系统都相同。
* 测试一切。
* 验证一切。
* 永远不要做假设。
现在一切正常。希望我也聪明一点。
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/9/recover-linux-system-live-usb
作者:[David Both][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/markus-winkler-usb-unsplash.jpg
[2]: https://unsplash.com/@markuswinkler?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/usb?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

View File

@ -0,0 +1,161 @@
[#]: subject: "Install Linux Mint with Windows 11 Dual Boot [Complete Guide]"
[#]: via: "https://www.debugpoint.com/linux-mint-install-windows/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lkxed"
[#]: translator: "gpchn"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 Windows 11 双引导安装 Linux Mint [完整指南]
======
将 Linux Mint 与 Windows 11或 Windows 10同时安装并制作双引导系统的完整指南。
如果您是 Linux 新用户,想在不删除 OEM 预装的 Windows 的情况下安装 Linux Mint请遵循本指南。完成下面描述的步骤后您将拥有一个双引导系统可以在 Linux 中学习和完成工作,而无需引导 Windows。
### 1. 开始之前您需要什么?
启动到您的 Windows 系统,并从官方网站下载 Linux Mint ISO 文件。ISO 文件是 Linux Mint 的安装映像,我们将在本指南中使用它。
* 在官网图1下载 Cinnamon 桌面版的 ISO适合大家
* [下载链接][1]
![图1从官网下载Linux Mint][2]
* 下载后,将 U 盘插入您的系统。然后使用 Rufus 或 [Etcher][3] 将上面下载的 .ISO 文件写入该 USB 驱动器。
### 2. 准备一个分区来安装 Linux Mint
理想情况下Windows 笔记本电脑通常配备 C 盘和 D 盘。C 盘是安装 Windows 的地方。对于新的笔记本电脑D 盘通常是空的(后续的驱动器,如 E 盘等,也是如此)。现在,您有两个选项可供选择:一是**缩小 C 盘**,为 Linux 安装腾出空间;二是**使用其他驱动器/分区**,例如 D 盘或 E 盘。
选择您希望的方法。
* 如果您选择使用 D 盘或 E 盘进行 Linux 安装,请确保先禁用 BitLocker 以及现代 OEM 预装的 Windows 笔记本电脑附带的类似保护功能。
* 从开始菜单打开 Windows PowerShell 并键入以下命令(图 2以禁用 BitLocker。请根据您的目标驱动器更改驱动器号这里我使用了驱动器 E
```
manage-bde -off E
```
![图2禁用 Windows 驱动器中的 BitLocker 以安装 Linux][4]
* 现在,如果您选择缩小 C 盘(或任何其他驱动器),请从开始菜单打开“磁盘管理”,它将显示您的整个磁盘布局。
* 右键单击要缩小的驱动器并选择“压缩卷Shrink Volume图 3以便为 Linux Mint 腾出位置。
* 在下一个窗口中,在“输入要缩小的空间量(以 MB 为单位)”下以 MB 为单位提供您的分区大小(图 4。显然它应该小于或等于“可用空间大小”中提到的值。因此对于 100 GB 的分区,给出 100*1024=102400 MB。
* 完成后,单击“压缩”。
![磁盘分区中的压缩卷选项示例][5]
![图4输入 Linux 分区的大小][6]
* 现在,您应该会看到一个“未分配空间”,如下所示(图 5。右键单击它并选择“新建简单卷”。
* 此向导将使用文件系统准备和格式化分区。注意:您可以在 Windows 本身中或在 Linux Mint 安装期间执行此操作。 Linux Mint 安装程序还为您提供了创建文件系统表和准备分区的选项。我建议您在这里做。
* 在接下来的一系列屏幕中(图 6、7 和 8以 MB 为单位给出分区大小,分配驱动器号(例如 D、E、F并将文件系统设为 FAT32。
* 最后,您应该会看到您的分区已准备好安装 Linux Mint。您应该在 Mint 安装期间按照以下步骤选择此选项。
* 作为预防措施,记下分区大小(您刚刚在图 9 中作为示例创建)以便在安装程序中快速识别它。
![图5创建未分配空间][7]
![图6新建简单卷向导-page1][8]
![图7新建简单卷向导-page2][9]
![图8新建简单卷向导-page3][10]
![图9安装Linux的最终分区][11]
### 3. 在 BIOS 中禁用安全启动
* 插入 USB 驱动器并重新启动系统。
* 开机时反复按相应的功能键进入 BIOS。不同品牌和型号的笔记本电脑按键可能不同下表列出了主要笔记本电脑品牌的参考按键。
* 您应该禁用安全启动Secure Boot并确保将启动设备优先级设置为 U 盘。
* 然后按 F10 保存并退出。
| 笔记本电脑品牌 | 进入 BIOS 的功能键 |
| :- | :- |
| Acer | F2 或 DEL |
| ASUS | 所有 PC 为 F2主板为 F2 或 DEL |
| Dell | F2 或 F12 |
| HP | ESC 或 F10 |
| Lenovo | F2 或 Fn + F2 |
| Lenovo台式机 | F1 |
| LenovoThinkPad 系列) | Enter + F1 |
| MSI | 主板和 PC 均为 DEL |
| Microsoft Surface 平板 | 按住音量增大键 |
| Origin PC | F2 |
| Samsung | F2 |
| Sony | F1、F2 或 F3 |
| Toshiba | F2 |
### 4. 安装 Linux Mint
如果一切顺利,您应该会看到一个安装 Linux Mint 的菜单。选择“启动 Linux Mint……”选项。
![图 10Linux Mint GRUB 菜单启动安装][12]
* 片刻之后,您应该会看到 Linux Mint Live 桌面。在桌面上,您应该会看到一个安装 Linux Mint 的图标以启动安装。
* 在下一组屏幕中,选择您的语言、键盘布局、选择安装多媒体编解码器并点击继续按钮。
* 在安装类型窗口中,选择其他选项。
* 在下一个窗口(图 11仔细选择以下内容
* 在设备下,选择刚刚创建的分区;您可以通过我之前提到的要记下的尺寸来识别它。
* 然后点击“更改”在“编辑分区”窗口中选择 Ext4 作为文件系统,勾选“格式化分区”选项,并将挂载点设为 “/”。
* 单击确定,然后为您的系统选择引导加载程序;理想情况下,它应该是下拉列表中的第一个条目。
* 仔细检查更改。因为一旦您点击立即安装,您的磁盘将被格式化,并且无法恢复。当您认为一切准备就绪,请单击立即安装。
![图11选择Windows 11安装Linux Mint的目标分区][13]
在以下屏幕中,选择您的位置,输入您的姓名并创建用于登录系统的用户 ID 和密码。安装应该开始(图 12
安装完成后(图 13取出 U 盘并重新启动系统。
![图12安装中][14]
![图13安装完成][15]
如果一切顺利,在成功安装为双引导系统后,您应该会看到带有 Windows 11 和 Linux Mint 的 GRUB。
现在您可以继续使用 [Linux Mint][16] 并体验快速而出色的 Linux 发行版。
### 总结
在本教程中,我向您展示了如何在预装了 OEM Windows 的笔记本电脑或台式机上创建一个简单的 Linux Mint 双启动系统。这些步骤包括分区、创建可引导 USB、格式化和安装。
尽管上述说明以 Linux Mint 21 “Vanessa” 为例,但它们同样适用于其他出色的 [Linux 发行版][17]。
如果您遵循本指南,请在下面的评论框中告诉我您的安装情况。
如果您成功了,欢迎来到自由!
[下一篇:如何在 Ubuntu 22.04、22.10、Linux Mint 21 中安装 Java 17][18]
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/linux-mint-install-windows/
作者:[Arindam][a]
选题:[lkxed][b]
译者:[gpchn](https://github.com/gpchn)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lkxed
[1]: https://www.linuxmint.com/download.php
[2]: https://www.debugpoint.com/wp-content/uploads/2022/09/Download-Linux-Mint-from-the-official-website.jpg
[3]: https://www.debugpoint.com/etcher-bootable-usb-linux/
[4]: https://www.debugpoint.com/wp-content/uploads/2022/09/Disable-BitLocker-in-Windows-Drives-to-install-Linux.jpg
[5]: https://www.debugpoint.com/wp-content/uploads/2022/09/Example-of-Shrink-Volume-option-in-Disk-Partition-1024x453.jpg
[6]: https://www.debugpoint.com/wp-content/uploads/2022/09/Enter-the-size-of-your-Linux-Partition.jpg
[7]: https://www.debugpoint.com/wp-content/uploads/2022/09/Unallocated-space-is-created.jpg
[8]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page1.jpg
[9]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page2.jpg
[10]: https://www.debugpoint.com/wp-content/uploads/2022/09/New-Simple-Volume-Wizard-page3.jpg
[11]: https://www.debugpoint.com/wp-content/uploads/2022/09/Final-partition-for-installing-Linux.jpg
[12]: https://www.debugpoint.com/wp-content/uploads/2022/09/Linux-Mint-GRUB-Menu-to-kick-off-installation.jpg
[13]: https://www.debugpoint.com/wp-content/uploads/2022/09/Choose-the-target-partition-to-install-Linux-Mint-with-Windows-11.jpg
[14]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-in-progress.jpg
[15]: https://www.debugpoint.com/wp-content/uploads/2022/09/Installation-is-complete.jpg
[16]: https://www.debugpoint.com/linux-mint
[17]: https://www.debugpoint.com/category/distributions
[18]: https://www.debugpoint.com/install-java-17-ubuntu-mint/

View File

@ -0,0 +1,138 @@
[#]: subject: "Platforms that Help Deploy AI and ML Applications on the Cloud"
[#]: via: "https://www.opensourceforu.com/2022/09/platforms-that-help-deploy-ai-and-ml-applications-on-the-cloud/"
[#]: author: "Dr Kumar Gaurav https://www.opensourceforu.com/author/dr-gaurav-kumar/"
[#]: collector: "lkxed"
[#]: translator: "misitebao"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
帮助在云端部署人工智能AI和机器学习ML应用程序的平台
======
_人工智能和机器学习正在影响当今几乎每个行业。本文重点介绍了这些技术在我们日常生活中的各种使用方式以及一些开源云平台如何实现其部署。_
人工智能 (AI) 的目标是构建能够模仿人类认知的机器和自动化系统。在全球范围内,人工智能正在以多种方式改变着社会、政治和经济。人工智能应用的例子包括谷歌帮助 (Google Help)、Siri、Alexa 和 Tesla (特斯拉) 等自动驾驶汽车。
如今,人工智能正被广泛使用,以有效的方式解决各行各业的难题。它被用于医疗保健行业,以做出比人类更准确、更快速的诊断。医生可以使用人工智能来诊断疾病,并在患者病情恶化时收到警报。
数据安全对每个企业都至关重要,网络攻击的数量也在不断增加。使用人工智能,可以提高数据的安全性。这方面的一个例子是集成智能机器人来识别软件错误和网络攻击。
推特 (Twitter)、WhatsApp、Facebook (脸书) 和 Snapchat 只是使用 AI 算法存储和管理数十亿个人资料的社交媒体平台中的一小部分。人工智能可以整理和筛选大量数据,以找到最新趋势、标签和各种各样人的需求。
![图 1机器学习的主要应用][1]
旅游业越来越依赖人工智能,因为后者可以帮助完成各种与旅行相关的任务,包括为消费者预订酒店、航班和最佳路线。为了提供更好、更快的客户服务,由人工智能驱动的聊天机器人正被用于旅游业。
表 1: 机器学习的工具和框架
| 工具/平台 | 链接 |
| :------------ | :------------------------------------- |
| Streamlit | https://github.com/streamlit/streamlit |
| TensorFlow | https://www.tensorflow.org/ |
| PyTorch | https://pytorch.org/ |
| scikit-learn | https://scikit-learn.org/ |
| Apache Spark | https://spark.apache.org/ |
| Torch | http://torch.ch/ |
| Hugging Face | https://huggingface.co/ |
| Keras | https://keras.io/ |
| TensorFlowJS | https://www.tensorflow.org/js |
| KNIME | https://www.knime.com/ |
| Apache Mahout | https://mahout.apache.org/ |
| Accord | http://accord-framework.net/ |
| Shogun | http://shogun-toolbox.org/ |
| RapidMiner | https://rapidminer.com/ |
| Blocks | https://github.com/mila-iqia/blocks |
| TuriCreate | https://github.com/apple/turicreate |
| Dopamine | https://github.com/google/dopamine |
| FlairNLP | https://github.com/flairNLP/flair |
### 不同领域的机器学习
让软件应用程序和小工具自行响应和开发的所有技术和工具都称为机器学习 (ML)。多亏了机器学习技术人工智能可以在没有真正被明确编程来执行所需操作的情况下进行学习。ML 算法不依赖于预定义的计算机指令而是从样本输入中学习一个模式然后完全基于学习到的模式来预测和执行任务。如果不能选择严格的算法机器学习可以成为救命稻草。它将通过分析以前的程序来选择新程序然后将其付诸实施。ML 为技术进步和以前在各种行业中无法想象的技术扫清了道路。如今,它被用于各种尖端技术 — 从预测算法到互联网电视直播。
一个值得注意的 ML 和 AI 技术是图像识别,它是一种对数字图像中的特征或项进行分类和检测的方法。分类和人脸识别是使用这种方法完成的。
![图 2用于机器学习的 Streamlit 云][2]
在推荐系统中使用机器学习是其最广泛使用和知名的应用之一。在当今的电子商务世界中,产品推荐是一种利用强大的机器学习技术的突出工具。网站使用人工智能和机器学习来跟踪过去的购买、搜索趋势和购物车历史,然后根据这些数据生成产品推荐。
在医疗保健行业中使用机器学习算法引起了很多兴趣。通过使用 ML 算法,可以跨多个医院部门预测急诊室等待时间。员工轮班的详细信息、患者数据以及科室讨论和急诊室布局的记录都用于帮助创建算法。机器学习算法可用于检测疾病、计划治疗和预测。
**用于机器学习的云平台的主要特点**:
- 算法或特征提取
- 关联规则挖掘
- 基于大数据的预测分析
- 分类、回归和聚类
- 数据加载和转换
- 数据准备、数据预处理和可视化
- 降维
- 分布式线性代数
- 假设检验和核方法
- 处理图像、音频、信号和视觉数据集
- 模型选择和优化模块
- 预处理和数据流编程
- 推荐系统
- 通过插件支持文本挖掘和图像挖掘
- 可视化和绘图
### 基于云的 AI 和 ML 应用程序部署
AI 和 ML 的应用可以部署在云平台上。如今,许多云服务提供商使程序员能够构建模型以在其领域内进行有效的决策。
这些基于云的平台与预先训练的机器学习和深度学习模型集成在一起,无需任何编码或最少的脚本即可在这些模型上部署应用程序。
![图 3Streamlit 中机器学习部署的类别][3]
**Streamlit** Streamlit 让数据科学家和机器学习专家能够访问各种机器学习模型。它是开源的并且与云部署兼容。ML 模型可以在几分钟内准备好与数据集一起使用。
Streamlit 提供一系列机器学习模型和多个类别的源代码,包括自然语言处理、地理、教育、计算机视觉等。
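下面是一个极简的示意性例子(假设已通过 `pip install streamlit` 安装,并以 `streamlit run app.py` 运行;内容仅作演示,并非上述任何具体模型):
```
import streamlit as st

# 一个最小的交互式页面:输入文本,实时显示统计结果
st.title("简单的文本统计演示")
text = st.text_area("请输入一段文本")
if text:
    st.write("字符数:", len(text))
    st.write("单词数:", len(text.split()))
```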
![图 4用于机器学习的 Hugging Face][4]
**Hugging Face** 这是另一个平台,为各种类别的 ML 和 AI 提供预先训练的模型和架构。许多企业巨头都在使用这个平台,包括 Facebook AI、微软、谷歌 AI、亚马逊网络服务和 Grammarly。
Hugging Face 中提供了许多预训练和部署就绪的模型,用于不同的应用程序,包括自然语言处理和计算机视觉。
使用 Hugging Face 中的 ML 模型可以执行以下任务:
- 音频到音频处理
- 自动语音识别
- 计算机视觉
- 填充蒙版
- 图像分类
- 图像分割
- 物体检测
- 问题应答
- 句子相似度
- 总结
- 文本分类
- 文本生成
- 文本到语音翻译
- 令牌分类
- 翻译分类
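例如,借助 transformers 库的 pipeline 接口,只需几行代码即可调用上述许多任务的预训练模型。下面是一个示意性例子(假设已安装 `transformers` 及其依赖的深度学习框架,首次运行会自动下载默认模型):
```
from transformers import pipeline

# 情感分析任务pipeline 会自动加载一个默认的预训练模型
classifier = pipeline("sentiment-analysis")
print(classifier("Open source makes machine learning accessible."))
```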
Hugging Face 中可用的问题解决器经过优化且有效,有助于快速部署模型(图 5
![图 5Hugging Face 中的问题解决器和模型][5]
这些基于云的平台对多个领域的研究人员、从业者和数据科学家非常有用,并简化了性能良好的实际应用程序的开发。
---
via: https://www.opensourceforu.com/2022/09/platforms-that-help-deploy-ai-and-ml-applications-on-the-cloud/
作者:[Dr Kumar Gaurav][a]
选题:[lkxed][b]
译者:[Misite Bao](https://github.com/misitebao)
校对:[校对者 ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux 中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-1-Key-applications-of-machine-learning.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-2-Streamlit-cloud-for-machine-learning.png
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-3-Categories-of-ML-deployments-in-Streamlit.png
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-4-Hugging-Face-for-machine-learning.png
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/08/Figure-5-Problem-solvers-and-models-in-Hugging-Face.png