Merge pull request #40 from LCTT/master

sync
This commit is contained in:
SamMa 2022-05-26 09:09:22 +08:00 committed by GitHub
commit 6e8271dea3
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
14 changed files with 1386 additions and 259 deletions


@ -3,138 +3,141 @@
[#]: author: "Pradeep Kumar https://www.linuxtechi.com/author/pradeep/"
[#]: collector: "lkxed"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14637-1.html"
图解 Fedora 36 Workstation 安装步骤
图解 Fedora 36 工作站安装步骤
======
针对 fedora 用户的好消息Fedora 36 操作系统已经正式发布了。这个发布版本是针对工作站 (桌面) 和服务器的。下面是 Fedora 36 workstation 的新的特征和改进:
![](https://img.linux.net.cn/data/attachment/album/202205/26/085318lbeqqwwevbzzwb4o.jpg)
给 Fedora 用户的好消息Fedora 36 操作系统已经正式发布了。这个发布版本是针对工作站(桌面)和服务器的。下面是 Fedora 36 工作站版的新的特征和改进:
* GNOME 42 是默认的桌面环境
* 移除用于支持联网的 ifcfg 文件,并引入密钥文件来进行配置
* 新的 Linux 内核版本 5.17
* 软件包更新为新版本,如 PHP 8.1、gcc 12、OpenSSL 3.0、Ansible 5、OpenJDK 17、Ruby 3.1、Firefox 98 和 LibreOffice 7.3
* RPM 软件包数据库从 /var 移动到了 /usr 文件夹。
* Noto Font 是默认的字体,它将提供更好的用户体验。
* RPM 软件包数据库从 `/var` 移动到了 `/usr` 文件夹。
* Noto 字体是默认的字体,它将提供更好的用户体验。
在这篇指南中,我们将涵盖如何图解安装 Fedora 36 workstation 。在跳入安装步骤前,请确保你的系统满足下面的必要条件。
在这篇指南中,我们将图解安装 Fedora 36 工作站的步骤。在进入安装步骤前,请确保你的系统满足下面的必要条件。
* 最少 2GB RAM (或者更多)
* 最少 2GB 内存(或者更多)
* 双核处理器
* 25 GB 硬盘磁盘空间 (或者更多)
* 可启动媒介盘
* 25 GB 硬盘磁盘空间(或者更多)
* 可启动介质
心动不如行动,让我们马上深入安装步骤。
### 1) 下载 Fedora 36 Workstation 的 ISO 文件
### 1、下载 Fedora 36 工作站的 ISO 文件
使用下面的链接来从 fedora 官方网站下载 ISO 文件。
使用下面的链接来从 Fedora 官方网站下载 ISO 文件。
* [下载 Fedora Workstation][1]
> **[下载 Fedora Workstation][1]**
iso 文件下载后,接下来将其刻录到 USB 驱动器,使其可启动。
ISO 文件下载后,接下来将其刻录到 U 盘,使其可启动。
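在 Linux 上,可以用 `dd` 命令把 ISO 写入 U 盘(下面的 ISO 文件名和设备名 `/dev/sdX` 仅为假设的示例;写入会清空 U 盘上的数据,请先确认设备名):

```
$ sudo dd if=Fedora-Workstation-Live-x86_64-36-1.5.iso of=/dev/sdX bs=4M status=progress oflag=sync
```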
### 2) 使用可启动媒介盘启动系统
### 2、使用可启动介质启动系统
现在,转向到目标系统,重新启动它,并在 BIOS 设置中将可启动媒介盘从硬盘驱动器启动更改为从 USB 驱动器(可启动媒介盘)启动。在系统使用可启动媒介盘启动后,我们将获得下面的屏幕。
现在,转向到目标系统,重新启动它,并在 BIOS 设置中将可启动介质从硬盘驱动器更改为 U 盘(可启动介质)启动。在系统使用可启动介质启动后,我们将看到下面的屏幕。
![Choose-Start-Fedora-Workstation-Live-36][2]
选择第一个选项 "Start Fedora-Workstation-Live 36" ,并按下 enter 按键
选择第一个选项 “Start Fedora-Workstation-Live 36” ,并按下回车键。
### 3) 选择安装到硬盘驱动器
### 3选择安装到硬盘驱动器
![Select-Install-to-Hardrive-Fedora-36-workstation][3]
选择 "<ruby>安装到硬盘<rt>Install to Hard Drive</rt></ruby>" 选项来继续安装。
选择 <ruby>安装到硬盘<rt>Install to Hard Drive</rt></ruby> 选项来继续安装。
### 4) 选择你的首选语言
### 4选择你的首选语言
选择你的首选语言来适应你的安装过程
选择你的首选语言来适应你的安装过程
![Language-Selection-Fedora36-Installation][4]
单击 <ruby>继续<rt>Continue</rt></ruby> 按钮
单击 <ruby>继续<rt>Continue</rt></ruby> 按钮
### 5) 选择安装目标
### 5选择安装目标
在这一步骤中,我们将看到下面的安装摘要屏幕,在这里,我们可以配置下面的东西
* 键盘布局
* 时间和日期 (时区)
* 安装目标 选择你想要安装 fedora 36 workstation 的硬盘。
* <ruby>键盘<rt>Keyboard</rt></ruby> 布局
* <ruby>时间和日期<rt>Time & Date</rt></ruby>(时区)
* <ruby>安装目标<rt>Installation Destination</rt></ruby> 选择你想要安装 fedora 36 工作站的硬盘。
![Default-Installation-Summary-Fedora36-workstation][5]
单击 "<ruby>安装目标<rt>Installation Destination</rt></ruby>" 按钮
单击 <ruby>安装目标<rt>Installation Destination</rt></ruby>” 按钮。
在下面的屏幕中,选择用于安装 fedora 的硬盘驱动器。也可以从存储的 "<ruby>存储配置<rt>Storage configuration</rt></ruby>" 标签页中选择其中一个选项。
在下面的屏幕中,选择用于安装 Fedora 的硬盘驱动器。也从 “<ruby>存储配置<rt>Storage configuration</rt></ruby>” 标签页中选择一个选项。
* <ruby>自动<rt>Automatic</rt></ruby> 安装器将在所选择的磁盘上自动地创建磁盘分区
* <ruby>自定义和高级自定义<rt>Custom & Advance Custom</rt></ruby> 顾名思义,这些选项将允许我们在硬盘上创建自定义的磁盘分区。
* <ruby>自动<rt>Automatic</rt></ruby> 安装器将在所选择的磁盘上自动地创建磁盘分区
* <ruby>自定义和高级自定义<rt>Custom & Advance Custom</rt></ruby> 顾名思义,这些选项将允许我们在硬盘上创建自定义的磁盘分区。
在这篇指南中,我们将使用第一个选项 "<ruby>自动<rt>Automatic</rt></ruby>"
在这篇指南中,我们将使用第一个选项 <ruby>自动<rt>Automatic</rt></ruby>
![Automatic-Storage-configuration-Fedora36-workstation-installation][6]
单击 "<ruby>完成<rt>Done</rt></ruby>" 按钮,来继续安装
单击 <ruby>完成<rt>Done</rt></ruby>” 按钮,来继续安装。
### 6) 在安装前
### 6在安装前
单击 "<ruby>开始安装<rt>Begin Installation</rt></ruby>" 按钮,来开始 Fedora 36 workstation 的安装
单击 <ruby>开始安装<rt>Begin Installation</rt></ruby>” 按钮,来开始 Fedora 36 工作站的安装。
![Choose-Begin-Installation-Fedora36-Workstation][7]
正如我们在下面的屏幕中所看到的一样,安装过程已经开始,并且正在安装过程之中
正如我们在下面的屏幕中所看到的一样,安装过程已经开始进行
![Installation-Progress-Fedora-36-Workstation][8]
在安装过程完成后,安装器将通知我们来重新启动计算机系统。
在安装过程完成后,安装程序将通知我们重新启动计算机系统。
![Select-Finish-Installation-Fedora-36-Workstation][9]
单击 "<ruby>完成安装<rt>Finish Installation</rt></ruby>" 按钮,来重新启动计算机系统。也不要忘记在 BIOS 设置中将可启动媒介盘从USB 驱动器启动更改为从硬盘驱动器启动
单击 <ruby>完成安装<rt>Finish Installation</rt></ruby>” 按钮以重新启动计算机系统。也不要忘记在 BIOS 设置中将可启动介质从 USB 驱动器启动更改为硬盘驱动器
### 7) 设置 Fedora 36 Workstation  
### 7、设置 Fedora 36 工作站
当计算机系统在重新启动后,我们将得到下面的设置屏幕。
![Start-Setup-Fedora-36-Linux][10]
单击 "<ruby>开始设置<rt>Start Setup</rt></ruby>" 按钮
单击 <ruby>开始设置<rt>Start Setup</rt></ruby>” 按钮。
根据你的需要选择隐私设置
根据你的需要选择<ruby>隐私<rt>Privacy</rt></ruby>设置
![Privacy-Settings-Fedora-36-Linux][11]
单击 "<ruby>下一步<rt>Next</rt></ruby> " 按钮,来继续安装
单击 <ruby>下一步<rt>Next</rt></ruby>” 按钮,来继续安装。
![Enable-Third-Party Repositories-Fedora-36-Linux][12]
如果你想启用第三方存储库,接下来单击 "<ruby>启用第三方存储库<rt>Enable Third-Party Repositories</rt></ruby>" 按钮,如果你现在不想配置它,那么单击 "<ruby>下一步<rt>Next</rt></ruby>" 按钮
如果你想启用第三方存储库,接下来单击 <ruby>启用第三方存储库<rt>Enable Third-Party Repositories</rt></ruby>” 按钮,如果你现在不想配置它,那么单击 “<ruby>下一步<rt>Next</rt></ruby>” 按钮。
同样,如果你想要跳过联网账号设置,那么单击 "<ruby>跳过<rt>Skip</rt></ruby>" 按钮
同样,如果你想要跳过联网账号设置,那么单击 <ruby>跳过<rt>Skip</rt></ruby>” 按钮。
![Online-Accounts-Fedora-36-Linux][13]
具体指定本地用户名称,在我的实例中,我使用下图中的名称
指定一个本地用户名称,在我的实例中,我使用下图中的名称
注意:这个用户名称将用于登录系统,并且它也将拥有 sudo 权限。
注意:这个用户名称将用于登录系统,并且它也将拥有 `sudo` 权限。
![Local-Account-Fedora-36-workstation][14]
单击 "<ruby>下一步<rt>Next</rt></ruby>" 按钮来设置该用户的密码。
单击 <ruby>下一步<rt>Next</rt></ruby> 按钮来设置该用户的密码。
![Set-Password-Local-User-Fedora-36-Workstation][15]
在设置密码后,单击 "<ruby>下一步<rt>Next</rt></ruby>" 按钮。
在设置密码后,单击 <ruby>下一步<rt>Next</rt></ruby> 按钮。
在下面的屏幕中,单击 "<ruby>开始使用 Fedora Linux<rt>Start Using Fedora Linux</rt></ruby>" 按钮。
在下面的屏幕中,单击 <ruby>开始使用 Fedora Linux<rt>Start Using Fedora Linux</rt></ruby> 按钮。
![Click-On-Start-Using-Fedora-Linux][16]
现在,打开终端,运行下面的命令
现在,打开终端,运行下面的命令
```
$ sudo dnf install -y neofetch
@ -144,7 +147,7 @@ $ neofetch
![Neofetch-Fedora-36-Linux][17]
好极了,上面的步骤可以确保 Fedora 36 Workstation 已经成功安装。以上就是这篇指南的全部内容。请毫不犹豫地在下面的评论区写出你的疑问和反馈。
好极了,上面的命令确认 Fedora 36 工作站已经成功安装。以上就是这篇指南的全部内容。请在下面的评论区写出你的疑问和反馈。
--------------------------------------------------------------------------------
@ -153,7 +156,7 @@ via: https://www.linuxtechi.com/how-to-install-fedora-workstation/
作者:[Pradeep Kumar][a]
选题:[lkxed][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -4,27 +4,27 @@
[#]: collector: "lkxed"
[#]: translator: "MjSeven"
[#]: reviewer: "turbokernel"
[#]: publisher: " "
[#]: url: " "
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14634-1.html"
在 Linux 上使用 sudo 命令的 5 个理由
======
以下是切换到 Linux sudo 命令的五个安全原因。下载 sudo 参考手册获取更多技巧。
![命令行提示符][1]
Image by: Opensource.com
![](https://img.linux.net.cn/data/attachment/album/202205/25/112907rfzfc3gqppx8p61n.jpg)
在传统的 Unix 和类 Unix 系统上,新系统中存在的第一同时也是唯一的用户是 **root**。使用 root 账户登录并创建“普通”用户。在初始化之后,你应该以普通用户身份登录
> 以下是切换到 Linux sudo 命令的五个安全原因。下载 sudo 参考手册获取更多技巧
以普通用户身份使用系统是一种自我施加的限制,可以防止愚蠢的错误。例如,作为普通用户,你不能删除定义网络接口的配置文件或意外覆盖用户和组列表。作为普通用户,因为你无权访问这些重要文件。当然所以你无法犯这些错误,作为系统的实际所有者,你始终可以通过 `su` 命令切换为超级用户root并做你想做的任何事情但对于日常工作你应该使用普通账户。
在传统的 Unix 和类 Unix 系统上,新系统中存在的第一个同时也是唯一的用户是 **root**。使用 root 账户登录并创建“普通”用户。在初始化之后,你应该以普通用户身份登录。
以普通用户身份使用系统是一种自我施加的限制,可以防止愚蠢的错误。例如,作为普通用户,你不能删除定义网络接口的配置文件或意外覆盖用户和组列表。作为普通用户,你无权访问这些重要文件,所以你无法犯这些错误。作为系统的实际所有者,你始终可以通过 `su` 命令切换为超级用户(`root`)并做你想做的任何事情,但对于日常工作,你应该使用普通账户。
几十年来,`su` 运行良好,但随后出现了 `sudo` 命令。
对于日常使用超级用户来说,`sudo` 命令乍一看似乎是多余的。在某些方面,它感觉很像 `su` 命令。例如:
对于日常使用超级用户的人来说,`sudo` 命令乍一看似乎是多余的。在某些方面,它感觉很像 `su` 命令。例如:
```
$ su root
<enter passphrase>
<输入密码>
# dnf install -y cowsay
```
@ -32,34 +32,34 @@ $ su root
```
$ sudo dnf install -y cowsay
<enter passphrase>
<输入密码>
```
它们的作用几乎完全相同。但是大多数发行版推荐使用 `sudo` 而不是 `su`,甚至大多数发行版已经完全取消了 root 账户。让 Linux 变得愚蠢是一个阴谋吗?
它们的作用几乎完全相同。但是大多数发行版推荐使用 `sudo` 而不是 `su`,甚至大多数发行版已经完全取消了 root 账户LCTT 译注:不是取消,而是默认禁止使用 root 用户进行登录、运行命令等操作。root 依然是 0 号用户,依然拥有大部分系统文件和在后台运行大多数服务)。让 Linux 变得愚蠢是一个阴谋吗?
事实并非如此。`sudo` 使 Linux 比以往任何时候都更加灵活和可配置,并且没有损失功能,还有[几个显著的优点][2]。
事实并非如此。`sudo` 使 Linux 更加灵活和可配置,并且没有损失功能,此外还有 [几个显著的优点][2]。
### 为什么在 Linux 上 sudo 比 root 更好?
以下是你应该使用 `sudo` 替换 `su` 的五个原因。
### 1. Root 是被攻击确认的对象
### 1. root 是确认的攻击对象
我使用 [Firewalls][3]、[fail2ban][4] 和 [SSH 密钥][5]的常用组合来防止一些针对服务器不必要的访问。在我理解 `sudo` 的价值之前,我对日志中的暴力攻击感到恐惧。自动尝试以 root 身份登录是最常见的,这是有充分理由的。
我使用 [防火墙][3]、[fail2ban][4] 和 [SSH 密钥][5] 的常用组合来防止一些针对服务器的不必要访问。在我理解 `sudo` 的价值之前,我对日志中的暴力破解感到恐惧。自动尝试以 root 身份登录是最常见的情况自然这是有充分理由的。
有一定入侵常识的攻击者应该知道,在广泛使用 `sudo` 之前,基本上每个 Unix 和 Linux 都有一个 root 账户。这样攻击者就会少一种猜测。因为登录名总是正确的,只要它是 root 就行,所以攻击者只需要一个有效的密码。
删除 root 账户可提供大量保护。如果没有 root服务器就没有确认的登录账户。攻击者必须猜测登录名以及密码。这不是两次猜测而是两个必须同时正确的猜测。
删除 root 账户可提供大量保护。如果没有 root服务器就没有确认的登录账户。攻击者必须猜测登录名以及密码。这不是两次猜测而是两个必须同时正确的猜测。LCTT 译注此处是误导root 用户不可删除,否则系统将会出现问题。另外,虽然 root 可以改名,但是也最好不要这样做,因为很多程序内部硬编码了 root 用户名。可以禁用 root 用户,给它一个不能登录的密码。)
### 2. Root 是最终的攻击媒介
### 2. root 是最终的攻击媒介
错误访问日志中root 是很常见的,因为它可是最强大的用户。如果你要设置一个脚本强行进入他人的服务器,为什么要浪费时间尝试以受限的普通用户进入呢?只有最强大的用户才有意义。
访问失败日志中经常可以见到 root 用户,因为它是最强大的用户。如果你要设置一个脚本强行进入他人的服务器,为什么要浪费时间尝试以受限的普通用户进入呢?只有最强大的用户才有意义。
Root 既是唯一已知的用户名又是最强大的用户账户。因此root 基本上使尝试暴力破解其他任何东西变得毫无意义。
root 既是唯一已知的用户名又是最强大的用户账户。因此root 基本上使尝试暴力破解其他任何东西变得毫无意义。
### 3. 可选择的权限
`su` 命令要么全有要么全没有。如果你有 `su root` 的密码,你就可以变成超级用户。如果你没有 `su` 的密码,那么你就没有任何管理员权限。这个模型的问题在于,系统管理员必须在将 root 密钥移交系统或保留密钥和对系统的所有权之间做出选择。这并不总是你想要的,[有时候你只是想授权。][6]
`su` 命令要么全有要么全没有。如果你有 `su root` 的密码,你就可以变成超级用户。如果你没有 `su` 的密码,那么你就没有任何管理员权限。这个模型的问题在于,系统管理员必须在将 root 密钥移交或保留密钥和对系统的所有权之间做出选择。这并不总是你想要的,[有时候你只是想授权而已][6]。
例如,假设你想授予用户以 root 身份运行特定应用程序的权限,但你不想为用户提供 root 密码。通过编辑 `sudo` 配置,你可以允许指定用户,或属于指定 Unix 组的任何用户运行特定命令。`sudo` 命令需要用户的现有密码,而不是你的密码,当然也不是 root 密码。
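作为示意,下面是一段最小的 `sudoers` 配置(请用 `visudo` 编辑;其中的用户名、组名和命令路径均为假设的示例):

```
# 允许用户 alice 以 root 身份只运行 dnf用户名为假设的示例
alice ALL=(root) /usr/bin/dnf

# 允许 wheel 组的所有成员以任意身份运行任意命令
%wheel ALL=(ALL) ALL
```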
@ -67,7 +67,7 @@ Root 既是唯一已知的用户名,又是最强大的用户账户。因此,
使用 `sudo` 运行命令后,通过身份验证的用户的权限会提升 5 分钟。在此期间,他们可以运行任何管理员授权的命令。
5 分钟后,认证缓存被清空,下次使用 `sudo` 再次提示输入密码。超时可防止用户意外执行某些操作(例如,不小心搜索 shell 历史记录或多次按下**向上**箭头)。如果第一个用户离开办工桌而没有锁定计算机屏幕,它还可以确保另一个用户不能运行这些命令。
5 分钟后,认证缓存被清空,下次使用 `sudo` 再次提示输入密码。超时可防止用户意外执行某些操作(例如,搜索 shell 历史记录时不小心或按多了**向上**箭头)。如果一个用户离开办公桌而没有锁定计算机屏幕,它还可以确保另一个用户不能运行这些命令。
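顺带一提,这个超时时间可以通过 `sudoers` 中的 `timestamp_timeout` 选项调整(下面的数值仅为示意),也可以随时运行 `sudo -k` 立即清空认证缓存:

```
# 在 sudoers 中(用 visudo 编辑):将认证缓存的有效期改为 10 分钟
Defaults timestamp_timeout=10
```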
### 5. 日志记录
@ -75,14 +75,16 @@ Shell 历史功能可以作为一个用户所做事情的日志。如果你需
但是,如果你需要审计 10 或 100 名用户的行为你可能会注意到此方法无法扩展。Shell 历史记录的轮转速度很快,默认为 1000 条,并且可以通过在任何命令前加上空格来轻松绕过它们。
当你需要管理任务的日志时,`sudo` 提供了一个完整的[日志记录和警报子系统][7],因此你可以在一个特定位置查看活动,甚至在发生重大事件时获得警报。
当你需要管理任务的日志时,`sudo` 提供了一个完整的 [日志记录和警报子系统][7],因此你可以在一个特定位置查看活动,甚至在发生重大事件时获得警报。
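作为示意,下面的 `sudoers` 设置可以让 `sudo` 把所有活动记录到一个独立的日志文件中(文件路径为假设的示例):

```
# 在 sudoers 中(用 visudo 编辑):启用独立的 sudo 日志文件
Defaults logfile=/var/log/sudo.log
```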
### 学习 sudo 其他功能
除了本文列举的一些功能,`sudo` 命令还有很多\已有的或正在开发中的新功能。因为 `sudo` 通常是你配置一次然后就忘记的东西,或者只在新管理员加入团队时才配置的东西,所以很难记住它的细微差别。
除了本文列举的一些功能,`sudo` 命令还有很多已有的或正在开发中的新功能。因为 `sudo` 通常是你配置一次然后就忘记的东西,或者只在新管理员加入团队时才配置的东西,所以很难记住它的细微差别。
下载 [sudo 参考手册][8],在你最需要的时候把它当作一个有用的指导书。
> **[sudo 参考手册][8]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/use-sudo-linux


@ -3,21 +3,20 @@
[#]: author: "Agil Antony https://opensource.com/users/agantony"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14635-1.html"
Git 教程:重命名分支、删除分支、查看分支作者
======
掌握管理本地/远程分支等最常见的 Git 任务。
![树枝][1]
![](https://img.linux.net.cn/data/attachment/album/202205/25/161618nt30jqe10nqtlzlj.jpg)
图源:[Erik Fitzpatrick][2][CC BY-SA 4.0][3]
> 掌握管理本地/远程分支等最常见的 Git 任务。
Git 的主要优势之一就是它能够将工作“分叉”到不同的分支中。
如果只有你一个人在使用某个存储库分支的好处是有限的。但是一旦你开始与许多其他贡献者一起工作分支就变得必不可少。Git 的分支机制允许多人同时处理一个项目,甚至是同一个文件。用户可以引入不同的功能,彼此独立,然后稍后将更改合并回主分支。那些专门为一个目的创建的分支,有时也被称为主题分支,例如添加新功能或修复已知错误。
如果只有你一个人在使用某个存储库分支的好处是有限的。但是一旦你开始与许多其他贡献者一起工作分支就变得必不可少。Git 的分支机制允许多人同时处理一个项目,甚至是同一个文件。用户可以引入不同的功能,彼此独立,然后稍后将更改合并回主分支。那些专门为一个目的创建的分支,有时也被称为<ruby>主题分支<rt>topic branch</rt></ruby>,例如添加新功能或修复已知错误。
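下面是一个创建并合并主题分支的最小示意(分支名和提交信息均为假设的示例;`git switch` 需要 Git 2.23 及以上版本,旧版本可用 `git checkout -b` 代替):

```
$ git switch -c feature/login    # 创建并切换到一个主题分支
$ git commit -a -m "add login"   # ……在分支上修改并提交
$ git switch main                # 回到主分支
$ git merge feature/login        # 将主题分支的更改合并回来
```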
当你开始使用分支,了解如何管理它们会很有帮助。以下是开发者在现实世界中使用 Git 分支执行的最常见任务。
@ -27,21 +26,21 @@ Git 的主要优势之一就是它能够将工作“分叉”到不同的分支
#### 重命名本地分支
1. 重命名本地分支:
1重命名本地分支:
```
$ git branch -m <old_branch_name> <new_branch_name>
```
当然,这只会重命名的分支副本。如果远程 Git 服务器上存在该分支,请继续执行后续步骤。
当然,这只会重命名的分支副本。如果远程 Git 服务器上存在该分支,请继续执行后续步骤。
2. 推送这个新分支,从而创建一个新的远程分支:
2推送这个新分支,从而创建一个新的远程分支:
```
$ git push origin <new_branch_name>
```
3. 删除旧的远程分支:
3删除旧的远程分支:
```
$ git push origin -d -f <old_branch_name>
@ -51,19 +50,19 @@ $ git push origin -d -f <old_branch_name>
当你要重命名的分支恰好是当前分支时,你不需要指定旧的分支名称。
1. 重命名当前分支:
1重命名当前分支:
```
$ git branch -m <new_branch_name>
```
2. 推送新分支,从而创建一个新的远程分支:
2推送新分支,从而创建一个新的远程分支:
```
$ git push origin <new_branch_name>
```
3. 删除旧的远程分支:
3删除旧的远程分支:
```
$ git push origin -d -f <old_branch_name>
@ -77,19 +76,19 @@ $ git push origin -d -f <old_branch_name>
删除本地分支只会删除系统上存在的该分支的副本。如果分支已经被推送到远程存储库,它仍然可供使用该存储库的每个人使用。
1. 签出存储库的主分支(例如 `main``master`
1签出存储库的主分支(例如 `main``master`
```
$ git checkout <central_branch_name>
```
2. 列出所有分支(本地和远程):
2列出所有分支(本地和远程):
```
$ git branch -a
```
3. 删除本地分支:
3删除本地分支:
```
$ git branch -d <name_of_the_branch>
@ -105,19 +104,19 @@ $ git branch | grep -v main | xargs git branch -d
删除远程分支只会删除远程服务器上存在的该分支的副本。如果你想撤销删除,也可以将其重新推送到远程(例如 GitHub只要你还有本地副本即可。
1. 签出存储库的主分支(通常是 `main``master`
1签出存储库的主分支(通常是 `main``master`
```
$ git checkout <central_branch_name>
```
2. 列出所有分支(本地和远程):
2列出所有分支(本地和远程):
```
$ git branch -a
```
3. 删除远程分支:
3删除远程分支:
```
$ git push origin -d <name_of_the_branch>
@ -127,19 +126,19 @@ $ git push origin -d <name_of_the_branch>
如果你是存储库管理员,你可能会有这个需求,以便通知未使用分支的作者它将被删除。
1. 签出存储库的主分支(例如 `main``master`
1签出存储库的主分支(例如 `main``master`
```
$ git checkout <central_branch_name>
```
2. 删除不存在的远程分支的分支引用:
2删除不存在的远程分支的分支引用:
```
$ git remote prune origin
```
3. 列出存储库中所有远程主题分支的作者,使用 `--format` 选项,并配合特殊的选择器来只打印你想要的信息(在本例中,`%(authorname)` 和 `%(refname)` 分别代表作者名字和分支名称)
3列出存储库中所有远程主题分支的作者,使用 `--format` 选项,并配合特殊的选择器来只打印你想要的信息(在本例中,`%(authorname)` 和 `%(refname)` 分别代表作者名字和分支名称
```
$ git for-each-ref --sort=authordate --format='%(authorname) %(refname)' refs/remotes
@ -156,8 +155,8 @@ agil refs/remotes/origin/main
```
$ git for-each-ref --sort=authordate \
--format='%(color:cyan)%(authordate:format:%m/%d/%Y %I:%M %p)%(align:25,left)%(color:yellow) %(authorname)%(end)%(color:reset)%(refname:strip=3)' \
refs/remotes
--format='%(color:cyan)%(authordate:format:%m/%d/%Y %I:%M %p)%(align:25,left)%(color:yellow) %(authorname)%(end)%(color:reset)%(refname:strip=3)' \
refs/remotes
```
示例输出:
@ -171,13 +170,13 @@ refs/remotes
```
$ git for-each-ref --sort=authordate \
--format='%(authorname) %(refname)' \
refs/remotes | grep <topic_branch_name>
--format='%(authorname) %(refname)' \
refs/remotes | grep <topic_branch_name>
```
### 熟练运用分支
Git 分支的工作方式存在细微差别,具体取决于你想要分叉代码库的位置、存储库维护者如何管理分支、<ruby><rt>squashing</rt></ruby><ruby>变基<rt>rebasing</rt></ruby>等。若想进一步了解该主题,你可以阅读下面这三篇文章:
Git 分支的工作方式存在细微差别,具体取决于你想要分叉代码库的位置、存储库维护者如何管理分支、<ruby><rt>squashing</rt></ruby><ruby>变基<rt>rebasing</rt></ruby>等。若想进一步了解该主题,你可以阅读下面这三篇文章:
* [《用乐高来类比解释 Git 分支》][4]作者Seth Kenlon
* [《我的 Git push 命令的安全使用指南》][5]作者Noaa Barki
@ -190,7 +189,7 @@ via: https://opensource.com/article/22/5/git-branch-rename-delete-find-author
作者:[Agil Antony][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,42 @@
[#]: subject: "Open Source Initiative Releases News Blog On WordPress"
[#]: via: "https://www.opensourceforu.com/2022/05/open-source-initiative-releases-news-blog-on-wordpress/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Open Source Initiative Releases News Blog On WordPress
======
![osi][1]
The Open Source Initiative (OSI), a public benefit corporation that oversees the Open Source Definition, has launched a WordPress news [blog][2]. Stefano Maffulli was appointed as OSI's first Executive Director in 2021, and he is leading the organisation in overhauling its web presence.
The blog was launched on a subdomain of the opensource.org website, which runs Drupal 7 and is self-hosted on a Digital Ocean droplet. It is also tightly integrated with CiviCRM to manage member subscriptions, individual donations, sponsorship tracking, and newsletter distribution.
As Drupal 7 approaches its end of life in November 2022, the team intends to migrate everything to WordPress. They looked into managed Drupal hosting but discovered it was more expensive and required them to migrate to a more recent version of Drupal. Because D7 themes and plugins are incompatible with D9+, they saw no advantage in terms of time or simplicity.
Because the Tavern's theme wasn't yet on GitHub, Maffulli hired a developer to create a simple child theme based on the Twenty Twenty-Two default theme using WordPress' new full-site editing features. He expressed gratitude for the opportunity to learn the fundamentals of FSE while overseeing the project.
Some OSI employees were already familiar with WordPress, which influenced their decision to use the software. The extensive functionality and third-party integrations were also important considerations. OSI is also looking into ways to give its members the ability to comment. This would necessitate a method to integrate authentication with CiviCRM in order to access members records.
The new Voices of Open Source blog began by highlighting the OSI affiliate network, which includes 80 organisations such as Mozilla, Wikimedia, the Linux Foundation, OpenUK, and others.
“One of the main objectives for OSI in 2022 is to reinforce our communication channels,” Maffulli said. “We're improving the perception of OSI as a reliable, trustworthy organization. The OSI didn't have a regular publishing schedule before, nor a content plan. Now we have established a regular cadence, publishing at least once a week (often more), commenting on recent news like a win against a patent troll or court decisions about open source licenses, featuring our sponsors, and offering opinions on topics of interest for the wider community. It's a starting point to affirm OSI as a convener of conversations among various souls of the open source communities.”
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/open-source-initiative-releases-news-blog-on-wordpress/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/05/osi-e1653464907238.jpg
[2]: https://blog.opensource.org/


@ -0,0 +1,91 @@
[#]: subject: "ProtonMail is Now Just Proton Offering a Privacy Ecosystem"
[#]: via: "https://news.itsfoss.com/protonmail-now-proton/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
ProtonMail is Now Just Proton Offering a Privacy Ecosystem
======
ProtonMail announced a re-brand with a new website, new name, updated pricing plans, a refreshed UI, and more changes.
![proton][1]
[ProtonMail][2] is rebranding itself as “Proton” to unify all its offerings under a single umbrella.
Let us not confuse it with Steam's Proton (which is also simply referred to as Proton), *right?*
In other words, there will no longer be a separate product page for ProtonMail, ProtonVPN, or any of its services.
### Proton: An Open-Source Privacy Ecosystem
![Updated Proton, unified protection][3]
Proton will have a new single platform (new website) where you can access all the services including:
* Proton Mail
* Proton VPN
* Proton Drive
* Proton Calendar
For new log-in sessions, you will be redirected to **proton.me** instead of **protonmail.com/mail.protonmail.com/protonvpn.com** and so on.
Not just limited to the name/brand, the overall brand accent color, and the approach to its existing user experience will also be impacted by this change.
![][4]
Instead of choosing separate upgrades for VPN and Mail, the entire range of services will now be available with a single paid subscription. This also means that the pricing for the premium upgrades is more affordable with the change.
![][5]
Overall, the change to make “Proton” a privacy ecosystem aims to appeal to more users who aren't interested in learning the tech jargon to understand how it all works.
You can take a look at all the details on its new official website ([proton.me][6]).
The new website looks much cleaner, more organized, and a bit more commercially attractive.
### What's New?
You can expect a refreshed user interface with the re-branding and a new website.
![proton][7]
In addition to that, Proton also mentions that it has improved the integration between the services for a better user experience.
![][8]
If you have already been using ProtonMail, you probably know that they offered existing users to activate their “**@proton.me**” account, which is also a part of this change.
You can choose to make your new email address **xyz@proton.me** the default, which is shorter and makes more sense with the new name.
* The old email address isn't going away, but a new address is available @proton.me
* Existing paid subscribers should receive a storage boost at no extra cost.
* Refreshed user experience across web applications and mobile applications.
* A new website (you will be automatically redirected to it for new sessions).
* New pricing plans with more storage for Proton Drive.
*Excited about the change? Do you like the new name and the approach behind it? Feel free to drop your thoughts in the comments section below.*
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/protonmail-now-proton/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/05/proton-ft.jpg
[2]: https://itsfoss.com/recommends/protonmai
[3]: https://youtu.be/s5GNTQ63HJE
[4]: https://news.itsfoss.com/wp-content/uploads/2022/05/proton-ui-new-1024x447.jpg
[5]: https://news.itsfoss.com/wp-content/uploads/2022/05/proton-pricing-1024x494.jpg
[6]: https://proton.me/
[7]: https://news.itsfoss.com/wp-content/uploads/2022/05/Proton-me-website.png
[8]: https://news.itsfoss.com/wp-content/uploads/2022/05/Proton-Product.png


@ -1,142 +0,0 @@
[#]: subject: "DAML: The Programming Language for Smart Contracts in a Blockchain"
[#]: via: "https://www.opensourceforu.com/2022/05/daml-the-programming-language-for-smart-contracts-in-a-blockchain/"
[#]: author: "Dr Kumar Gaurav https://www.opensourceforu.com/author/dr-gaurav-kumar/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
DAML: The Programming Language for Smart Contracts in a Blockchain
======
The DAML smart contract language is a purpose-built domain specific language designed to encode the shared business logic of an application. It is used for the development and deployment of distributed applications in the blockchain environment.
![blockchain-hand-shake][1]
Blockchain technology is a secure mechanism to keep track of information in a way that makes it hard or impossible to modify or hack it. A blockchain integrates the digital ledger of transactions, which is copied and sent to every computer on its network. In each block of the chain, there are a number of transactions. When a new transaction takes place on the blockchain, a record of that transaction is added to the ledgers of everyone who is part of the chain.
Blockchain uses distributed ledger technology (DLT), in which a database isn't kept in one server or node. In a blockchain, transactions are recorded with an immutable cryptographic sign known as a hash. This means that if one block in one channel or chain is changed, it will be hard for hackers to change that block in the chain, as they would have to do this for every single version of the chain that is out there. Blockchains, like Bitcoin and Ethereum, keep growing as new blocks are added to the chain, which makes the ledger safer.
With the implementation of smart contracts in blockchain, there is automatic execution of scenarios without any human intervention. Smart contract technology makes it possible to enforce the highest level of security, privacy and anti-hacking implementations.
![Figure 1: Market size of blockchain technology (Source: Statista.com)][2]
The use cases and applications of blockchain are:
* Cryptocurrencies
* Smart contracts
* Secured personal information
* Digital health records
* E-governance
* Non-fungible tokens (NFTs)
* Gaming
* Cross-border financial transactions
* Digital voting
* Supply chain management
As per *Statista.com*, the blockchain technology market has been growing very fast over the last few years and is predicted to touch US$ 40 billion by 2025.
### Programming languages and toolkits for blockchain
A number of programming languages and development toolkits are available for distributed applications and smart contracts. Programming and scripting languages for the blockchain include Solidity, Java, Vyper, Serpent, Python, JavaScript, GoLang, PHP, C++, Ruby, Rust, Erlang, etc, and are employed depending upon the implementation scenarios and use cases.
The choice of a suitable platform for the development and deployment of a blockchain depends on a range of factors including the need for security, privacy, speed of transactions and scalability (Figure 2).
![Figure 2: Factors to look at when selecting a blockchain platform][3]
The main platforms for the development of blockchain are:
* Ethereum
* XDC Network
* Tezos
* Stellar
* Hyperledger
* Ripple
* Hedera Hashgraph
* Quorum
* Corda
* NEO
* OpenChain
* EOS
* Dragonchain
* Monero
### DAML: A high performance programming language
Digital Asset Modeling Language or DAML (daml.com) is a high performance programming language for the development and deployment of distributed applications in the blockchain environment. It is a lightweight and concise platform for rapid applications development.
![Figure 3: Official portal of DAML][4]
The key features of DAML are:
* Fine-grained permissions
* Scenario based testing
* Data model
* Business logic
* Deterministic execution
* Storage abstraction
* No double spends
* Accountability tracking
* Atomic composability
* Authorisation checks
* Need-to-know privacy
### Installation and working with DAML
The DAML SDK can be installed on Linux, macOS or Windows. The detailed instructions for installing DAML on multiple operating systems are available at *https://docs.daml.com/getting-started/installation.html.*
You must have the following to work with DAML:
* Visual Studio Code
* Java Development Kit (JDK)
DAML can be installed on Windows by downloading and running the executable installer available at *https://github.com/digital-asset/daml/releases/download/v1.18.1/daml-sdk-1.18.1-windows.exe.*
Installation of DAML on Linux or Mac can be done by executing the following in the terminal:
```
$ curl -sSL https://get.daml.com/ | sh
```
After installation of DAML, a new blockchain-based app can be created, as shown in Figures 4 and 5.
![Figure 4: Creating a new app][5]
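For reference, the commands behind Figure 4 typically look like the following (a sketch based on the DAML getting-started flow; the app name myapp matches the folder used below, and the template name is an assumption):

```
$ daml new myapp --template create-daml-app
$ cd myapp
$ daml start
```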
In another terminal, navigate into the new app's ui folder and install the project dependencies:
![Figure 5: Running DAML][6]
```
WorkingDirectory>cd myapp/ui
WorkingDirectory>npm install
WorkingDirectory>npm start
```
The web UI is started, and the app can be accessed in a web browser at the URL *http://localhost:3000/*.
![Figure 6: Login panel in DAML app][7]
### Scope for research and development
Blockchain technology has a wide range of development platforms and frameworks for different categories of applications. Many of these platforms are free and open source, which can be downloaded and deployed for research based implementations. Research scholars, practitioners and academicians can use these platforms to propose and implement their algorithms for numerous applications.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/daml-the-programming-language-for-smart-contracts-in-a-blockchain/
作者:[Dr Kumar Gaurav][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/blockchain-hand-shake.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Market-size-of-blockchain-technology.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-2-Factors-to-look-at-when-selecting-a-blockchain-platform-2.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-3-Official-portal-of-DAML-1.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-4-Creating-a-new-app.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-5-Running-DAML.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-6-Login-panel-in-DAML-app.jpg


@ -0,0 +1,261 @@
[#]: subject: "How to Install KVM on Ubuntu 22.04 (Jammy Jellyfish)"
[#]: via: "https://www.linuxtechi.com/how-to-install-kvm-on-ubuntu-22-04/"
[#]: author: "James Kiarie https://www.linuxtechi.com/author/james/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install KVM on Ubuntu 22.04 (Jammy Jellyfish)
======
KVM, an acronym for Kernel-based Virtual Machine, is an open source virtualization technology integrated into the Linux kernel. It is a type 1 (bare-metal) hypervisor, in that it enables the Linux kernel itself to act as the hypervisor.
KVM allows users to create and run multiple guest machines, which can be either Windows or Linux. Each guest machine runs independently of the other virtual machines and the underlying OS (the host system), and has its own computing resources such as CPU, RAM, network interfaces, and storage, to mention a few.
This guide shows you how to install KVM on Ubuntu 22.04 LTS (Jammy Jellyfish). At the tail end of this guide, we will demonstrate how you can create a virtual machine once the installation of KVM is complete.
### 1) Update Ubuntu 22.04
To get off the ground, launch the terminal and update your local package index as follows.
```
$ sudo apt update
```
### 2) Check if Virtualization is enabled
Before you proceed any further, you need to check if your CPU supports KVM virtualization. For this to be possible, your system needs to have either an Intel processor with VT-x (vmx) or an AMD processor with AMD-V (svm).
This is achieved by running the following command. If the output is greater than 0, then virtualization is enabled. Otherwise, virtualization is disabled and you need to enable it.
```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```
![SVM-VMX-Flags-Cpuinfo-linux][1]
From the above output, you can deduce that virtualization is enabled since the result printed is greater than 0. If virtualization is not enabled, be sure to enable the virtualization feature in your system's BIOS settings.
In addition, you can verify if KVM virtualization is enabled by running the following command:
```
$ kvm-ok
```
For this to work, you need to have the cpu-checker package installed; otherwise, you will bump into the error 'Command kvm-ok not found'.
Directly below the error, you will get instructions on how to resolve this issue, which is to install the cpu-checker package.
![KVM-OK-Command-Not-Found-Ubuntu][2]
Therefore, install the cpu-checker package as follows.
```
$ sudo apt install -y cpu-checker
```
Then run the kvm-ok command, and if KVM virtualization is enabled, you should get the following output.
```
$ kvm-ok
```
![KVM-OK-Command-Output][3]
### 3) Install KVM on Ubuntu 22.04
Next, run the command below to install KVM and additional virtualization packages on Ubuntu 22.04.
```
$ sudo apt install -y qemu-kvm virt-manager libvirt-daemon-system virtinst libvirt-clients bridge-utils
```
Let us break down the packages that we are installing:
* qemu-kvm  An open source emulator and virtualization package that provides hardware emulation.
* virt-manager  A GTK-based graphical interface for managing virtual machines via the libvirt daemon.
* libvirt-daemon-system  A package that provides the configuration files required to run the libvirt daemon.
* virtinst  A set of command-line utilities for provisioning and modifying virtual machines.
* libvirt-clients  A set of client-side libraries and APIs for managing and controlling virtual machines & hypervisors from the command line.
* bridge-utils  A set of tools for creating and managing bridge devices.
### 4) Enable the virtualization daemon (libvirtd)
With all the packages installed, enable and start the Libvirt daemon.
```
$ sudo systemctl enable --now libvirtd
$ sudo systemctl start libvirtd
```
Confirm that the virtualization daemon is running as shown.
```
$ sudo systemctl status libvirtd
```
![Libvirtd-Status-Ubuntu-Linux][4]
In addition, you need to add the currently logged-in user to the kvm and libvirt groups so that they can create and manage virtual machines.
```
$ sudo usermod -aG kvm $USER
$ sudo usermod -aG libvirt $USER
```
The $USER environment variable points to the name of the currently logged-in user. To apply this change, you need to log out and log back in again.
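As a quick alternative to logging out, you can start a subshell with the new group applied (a temporary workaround for the current session only):

```
$ newgrp libvirt
$ groups
```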
### 5) Create Network Bridge (br0)
If you are planning to access the KVM virtual machines from outside your Ubuntu 22.04 system, then you must map the VM's interface to a network bridge. A virtual bridge named virbr0 is created automatically when KVM is installed, but it is intended for testing purposes.
To create a network bridge, create the file 01-netcfg.yaml with the following content under the folder /etc/netplan.
```
$ sudo vi /etc/netplan/01-netcfg.yaml
network:
  ethernets:
    enp0s3:
      dhcp4: false
      dhcp6: false
  # add configuration for bridge interface
  bridges:
    br0:
      interfaces: [enp0s3]
      dhcp4: false
      addresses: [192.168.1.162/24]
      macaddress: 08:00:27:4b:1d:45
      routes:
        - to: default
          via: 192.168.1.1
          metric: 100
      nameservers:
        addresses: [4.2.2.2]
      parameters:
        stp: false
      dhcp6: false
  version: 2
```
Save and exit the file.
Note: These details are as per my setup, so replace the IP address entries, interface name, and MAC address as per your setup.
To apply the above changes, run netplan apply:
```
$ sudo netplan apply
```
To verify the network bridge br0, run the below ip command:
```
$ ip add show
```
![Network-Bridge-br0-ubuntu-linux][5]
### 6) Launch KVM Virtual Machine Manager
With KVM installed, you can begin creating your virtual machines using the virt-manager GUI tool. To get started, use the GNOME search utility and search for Virtual Machine Manager.
Click on the icon that pops up.
![Access-Virtual-Machine-Manager-Ubuntu-Linux][6]
This launches the Virtual Machine Manager Interface.
![Virtual-Machine-Manager-Interface-Ubuntu-Linux][7]
Click on “File” then select “New Virtual Machine”. Alternatively, you can click on the button shown.
![New-Virtual-Machine-Icon-Virt-Manager][8]
This pops open the virtual machine installation wizard which presents you with the following four options:
* Local install media (ISO image or CDROM)
* Network install (HTTP, HTTPS, and FTP)
* Import existing disk image
* Manual Install
In this guide, we have downloaded a Debian 11 ISO image, and therefore, if you have an ISO image, select the first option and click Forward.
![Local-Install-Media-ISO-Virt-Manager][9]
In the next step, click Browse to navigate to the location of the ISO image.
![Browse-ISO-File-Virt-Manager-Ubuntu-Linux][10]
In the next window, click Browse local in order to select the ISO image from the local directories on your Linux PC.
![Browse-Local-ISO-Virt-Manager][11]
As demonstrated below, we have selected the Debian 11 ISO image. Then click Open
![Choose-ISO-File-Virt-Manager][12]
Once the ISO image is selected, click Forward to proceed to the next step.
![Forward-after-browsing-iso-file-virt-manager][13]
Next, define the RAM and the number of CPU cores for your virtual machine and click Forward.
![Virtual-Machine-RAM-CPU-Virt-Manager][14]
In the next step, define the disk space for your virtual machine and click Forward.
![Storage-for-Virtual-Machine-KVM-Virt-Manager][15]
To associate the virtual machine's NIC with the network bridge, click on Network selection and choose the br0 bridge.
![Network-Selection-KVM-Virtual-Machine-Virt-Manager][16]
Finally, click Finish to complete the virtual machine setup.
![Choose-Finish-to-OS-Installation-KVM-VM][17]
Shortly afterward, the virtual machine creation will get underway.
![Creating-Domain-Virtual-Machine-Virt-Manager][18]
Once completed, the virtual machine will start with the OS installer displayed. Below is the Debian 11 installer listing the options for installation. From here, you can proceed to install your preferred system.
![Virtual-Machine-Console-Virt-Manager][19]
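If you prefer the command line over the GUI, the same virtual machine can also be created with the virt-install utility (a sketch; the VM name, resources, ISO path, and os-variant value below are example values to adapt to your setup):

```
$ sudo virt-install \
  --name debian11-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /path/to/debian-11.iso \
  --os-variant debian11 \
  --network bridge=br0 \
  --graphics spice
```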
### Conclusion
And that's it. In this guide, we have demonstrated how you can install the KVM hypervisor on Ubuntu 22.04. Your feedback on this guide is most welcome.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/how-to-install-kvm-on-ubuntu-22-04/
作者:[James Kiarie][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lkxed
[1]: https://www.linuxtechi.com/wp-content/uploads/2022/05/SVM-VMX-Flags-Cpuinfo-linux.png
[2]: https://www.linuxtechi.com/wp-content/uploads/2022/05/KVM-OK-Command-Not-Found-Ubuntu.png
[3]: https://www.linuxtechi.com/wp-content/uploads/2022/05/KVM-OK-Command-Output.png
[4]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Libvirtd-Status-Ubuntu-Linux.png
[5]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Network-Bridge-br0-ubuntu-linux.png
[6]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Access-Virtual-Machine-Manager-Ubuntu-Linux.png
[7]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Virtual-Machine-Manager-Interface-Ubuntu-Linux.png
[8]: https://www.linuxtechi.com/wp-content/uploads/2022/05/New-Virtual-Machine-Icon-Virt-Manager.png
[9]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Local-Install-Media-ISO-Virt-Manager.png
[10]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Browse-ISO-File-Virt-Manager-Ubuntu-Linux.png
[11]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Browse-Local-ISO-Virt-Manager.png
[12]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Choose-ISO-File-Virt-Manager.png
[13]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Forward-after-browsing-iso-file-virt-manager.png
[14]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Virtual-Machine-RAM-CPU-Virt-Manager.png
[15]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Storage-for-Virtual-Machine-KVM-Virt-Manager.png
[16]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Network-Selection-KVM-Virtual-Machine-Virt-Manager.png
[17]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Choose-Finish-to-OS-Installation-KVM-VM.png
[18]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Creating-Domain-Virtual-Machine-Virt-Manager.png
[19]: https://www.linuxtechi.com/wp-content/uploads/2022/05/Virtual-Machine-Console-Virt-Manager.png


@ -0,0 +1,190 @@
[#]: subject: "The Basic Concepts of Shell Scripting"
[#]: via: "https://www.opensourceforu.com/2022/05/the-basic-concepts-of-shell-scripting/"
[#]: author: "Sathyanarayanan Thangavelu https://www.opensourceforu.com/author/sathyanarayanan-thangavelu/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
The Basic Concepts of Shell Scripting
======
If you want to automate regular tasks and make your life easier, using shell scripts is a good option. This article introduces you to the basic concepts that will help you to write efficient shell scripts.
![Shell-scripting][1]
A shell script is a computer program designed to be run by the UNIX shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing of text. A script that sets up the environment, runs the program, and does any necessary cleanup or logging, is called a wrapper.
### Identification of shell prompt
You can identify whether the shell prompt on a Linux-based computer belongs to a normal user or a super user by looking at the prompt symbol in the terminal window. The # symbol is used for a super user and the $ symbol is used for a user with standard privileges.
![Figure 1: Manual of date command][2]
### Basic commands
The shell comes with many commands that can be executed in the terminal window to manage your computer. Details of each command can be found in the manual included with the command. To view the manual, you need to run the command:
```
$man <command>
```
A few frequently used commands are:
```
$date #display current date and time
$cal #display current month calendar
$df #displays disk usages
$free #display memory usage
$ls #List files and directories
$mkdir #Creates directory
```
Each command comes with several options that can be used along with it. You can refer to the manual for more details. See Figure 1 for the output of:
```
$man date
```
### Redirection operators
The redirection operator is really useful when you want to capture the output of a command in a file or redirect to a file.
| Command | Effect |
| :- | :- |
| $ls -l /usr/bin >file | redirects stdout to file |
| $ls -l /usr/bin 2>file | redirects stderr to file |
| $ls -l /usr/bin > ls-output 2>&1 | redirects stderr & stdout to file |
| $ls -l /usr/bin &> ls-output | redirects stderr & stdout to file |
| $ls -l /usr/bin 2> /dev/null | discards stderr (/dev/null is the bit bucket) |
### Brace expansion
Brace expansion is one of the powerful options UNIX has. It helps do a lot of operations with minimal commands in a single line instruction. For example:
```
$echo Front-{A,B,C}-Back
Front-A-Back Front-B-Back Front-C-Back
$echo {Z..A}
Z Y X W V U T S R Q P O N M L K J I H G F E D C B A
$mkdir {2009..2011}-0{1..9} {2009..2011}-{10..12}
```
This creates a directory for each of the 12 months of every year from 2009 to 2011 (36 directories in all).
### Environment variables
An environment variable is a dynamic-named value that can affect the way running processes will behave on a computer. This variable is a part of the environment in which a process runs.
| Command | Description |
| :- | :- |
| printenv | Print part or all of the environment |
| set | Set shell options |
| export | Export the environment to subsequently executed programs |
| alias | Create an alias for a command |
### Network commands
Network commands are very useful for troubleshooting issues on the network and to check the particular port connecting to the client.
| Command | Description |
| :- | :- |
| ping | Send ICMP packets |
| traceroute | Print the route packets take to a network host |
| netstat | Print network connections, routing tables, interface stats |
| ftp/lftp | Internet file transfer program |
| wget | Non-interactive network downloader |
| ssh | OpenSSH SSH client (remote login program) |
| scp | Secure copy |
| sftp | Secure file transfer program |
### Grep commands
Grep commands are useful to find the errors and debug the logs in the system. It is one of the powerful tools that shell has.
| Command | Description |
| :- | :- |
| grep -h .zip file.list | . matches any character |
| grep -h ^zip file.list | lines starting with zip |
| grep -h zip$ file.list | lines ending with zip |
| grep -h ^zip$ file.list | lines containing only zip |
| grep -h [^bz]zip file.list | zip preceded by a character other than b or z |
| grep -h ^[A-Za-z0-9] file.list | lines starting with an alphanumeric character |
### Quantifiers
Here are some examples of quantifiers:
| Quantifier | Meaning |
| :- | :- |
| ? | Match an element zero or one time |
| * | Match an element zero or more times |
| + | Match an element one or more times |
| {} | Match an element a specific number of times |
### Text processing
Text processing is another important task in the current IT world. Programmers and administrators can use the commands to dice, cut and process texts.
| Command | Description |
| :- | :- |
| cat -A $FILE | Find any CTRL characters introduced |
| sort file1.txt file2.txt file3.txt > final_sorted_list.txt | Sort all files at once |
| ls -l \| sort -nr -k 5 | Sort with the 5th column as the key field |
| sort --key=1,1 --key=2n distro.txt | Sort on key field 1,1, then sort the second column numerically |
| sort foo.txt \| uniq -c | Find repetitions |
| cut -f 3 distro.txt | Cut column 3 |
| cut -c 7-10 | Cut characters 7 - 10 |
| cut -d : -f 1 /etc/passwd | Use : as the delimiter |
| sort -k 3.7nbr -k 3.1nbr -k 3.4nbr distro.txt | Sort on the 3rd field by its 7th, 1st, and 4th characters |
| paste file1.txt file2.txt > newfile.txt | Merge two files |
| join file1.txt file2.txt | Join two files on a common field |
### Hacks and tips
In Linux, we can go back to our history of commands by either using simple commands or control options.
| Command | Description |
| :- | :- |
| clear | Clears the screen |
| history | Displays the command history |
| script filename | Captures all command execution in a file |
Tips:
> History search : CTRL + {R, P}
> !number : run the command with that history number
> !! : repeat the last command
> !?string : most recent command containing string
> !string : most recent command starting with string
```
export HISTCONTROL=ignoredups
export HISTSIZE=10000
```
As you get familiar with the Linux commands, you will be able to write wrapper scripts. All manual tasks like taking regular backups, cleaning up files, monitoring the system usage, etc, can be automated using scripts. This article will help you to start scripting, before you move to learning advanced concepts.
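As a starting point, here is a minimal wrapper-script sketch that automates one such task (the paths and the 7-day retention period are assumptions to adapt):

```
#!/bin/bash
# Back up a directory, log the result, and prune archives older than 7 days.
SRC=/home/user/documents      # hypothetical source directory
DEST=/backup                  # hypothetical backup destination
LOG=/var/log/backup.log       # hypothetical log file

mkdir -p "$DEST"
if tar -czf "$DEST/docs-$(date +%F).tar.gz" "$SRC"; then
    echo "$(date): backup OK" >> "$LOG"
else
    echo "$(date): backup FAILED" >> "$LOG"
fi
find "$DEST" -name 'docs-*.tar.gz' -mtime +7 -delete
```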
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/the-basic-concepts-of-shell-scripting/
作者:[Sathyanarayanan Thangavelu][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/sathyanarayanan-thangavelu/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Shell-scripting.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Manual-of-date-command.jpg


@ -0,0 +1,138 @@
[#]: subject: "Improve network performance with this open source framework"
[#]: via: "https://opensource.com/article/22/5/improve-network-performance-pbench"
[#]: author: "Hifza Khalid https://opensource.com/users/hifza-khalid"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Improve network performance with this open source framework
======
Use Pbench to predict throughput and latency for specific workloads.
![Mesh networking connected dots][1]
In the age of high-speed internet, most large information systems are structured as distributed systems with components running on different machines. The performance of these systems is generally assessed by their throughput and response time. When performance is poor, debugging these systems is challenging due to the complex interactions between different subcomponents and the possibility of the problem occurring at various places along the communication path.
On the fastest networks, the performance of a distributed system is limited by the host's ability to generate, transmit, process, and receive data, which is in turn dependent on its hardware and configuration. What if it were possible to tune the network performance of a distributed system using a repository of network benchmark runs and suggest a subset of hardware and OS parameters that are the most effective in improving network performance?
To answer this question, our team used [Pbench][2], a benchmarking and performance analysis framework developed by the performance engineering team at Red Hat. This article will walk step by step through our process of determining the most effective methods and implementing them in a predictive performance tuning tool.
### What is the proposed approach?
Given a dataset of network benchmark runs, we propose the following steps to solve this problem.
1. Data preparation: Gather the configuration information, workload, and performance results for the network benchmark; clean the data; and store it in a format that is easy to work with
2. Finding significant features: Choose an initial set of OS and hardware parameters and use various feature selection methods to identify the significant parameters
3. Develop a predictive model: Develop a machine learning model that can predict network performance for a given client and server system and workload
4. Recommend configurations: Given the user's desired network performance, suggest a configuration for the client and the server with the closest performance in the database, along with data showing the potential window of variation in results
5. Evaluation: Determine the model's effectiveness using cross-validation, and suggest ways to quantify the improvement due to configuration recommendations
We collected the data for this project using Pbench. Pbench takes as input a benchmark type with its workload, performance tools to run, and hosts on which to execute the benchmark, as shown in the figure below. It outputs the benchmark results, tool results, and the system configuration information for all the hosts.
![An infographic showing inputs and outputs for Pbench. Benchmark type (with workload and systems) and performance tools to run along pbench (e.g., sar, vamstat) go into the central box representing pbench. Three things come out of pbench: configuration of all the systems involved, tool results and benchmark performance results][4]
Image by: (Hifza Khalid, CC BY-SA 4.0)
Out of the different benchmark scripts that Pbench runs, we used data collected using the uperf benchmark. Uperf is a network performance tool that takes the description of the workload as input and generates the load accordingly to measure system performance.
### Data preparation
There are two disjoint sets of data generated by Pbench. The configuration data from the systems under test is stored in a file system. The performance results, along with the workload metadata, are indexed into an Elasticsearch instance. The mapping between the configuration data and the performance results is also stored in Elasticsearch. To interact with the data in Elasticsearch, we used Kibana. Using both of these datasets, we combined the workload metadata, configuration data, and performance results for each benchmark run.
### Finding significant features
To select an initial set of hardware specifications and operating system configurations, we used performance-tuning configuration guides and feedback from experts at Red Hat. The goal of this step was to start working with a small set of parameters and refine it with further analysis. The set was based on parameters from almost all major system subcomponents, including hardware, memory, disk, network, kernel, and CPU.
Once we selected the preliminary set of features, we used one of the most common dimensionality-reduction techniques to eliminate the redundant parameters: remove parameters with constant values. While this step eliminated some of the parameters, given the complexity of the relationship between system information and performance, we resolved to use advanced feature selection methods.
#### Correlation-based feature selection
Correlation is a common measure used to find the association between two features. The features have a high correlation if they are linearly dependent. If the two features increase simultaneously, their correlation is +1; if they decrease concurrently, it is -1. If the two features are uncorrelated, their correlation is close to 0.
We used the correlation between the system configuration and the target variable to identify and cut down insignificant features further. To do so, we calculated the correlation between the configuration parameters and the target variable and eliminated all parameters with a value less than |0.1|, which is a commonly used threshold to identify the uncorrelated pairs.
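As an illustration, a minimal sketch of this filtering step (pandas assumed; the CSV file and the target column name are hypothetical):

```
# Python sketch: drop configuration columns whose absolute Pearson
# correlation with the target falls below the 0.1 threshold.
import pandas as pd

df = pd.read_csv("benchmark_runs.csv")        # hypothetical dataset
corr = df.corr(numeric_only=True)["throughput"].drop("throughput")
selected = corr[corr.abs() >= 0.1].index.tolist()
print(selected)
```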
#### Feature-selection methods
Since correlation does not imply causation, we needed additional feature-selection methods to extract the parameters affecting the target variables. We could choose between wrapper methods like recursive feature elimination and embedded methods like Lasso (Least Absolute Shrinkage and Selection Operator) and tree-based methods.
We chose to work with tree-based embedded methods for their simplicity, flexibility, and low computational cost compared to wrapper methods. These methods have built-in feature selection methods. Among tree-based methods, we had three options: a classification and regression tree (CART), Random Forest, and XGBoost.
We calculated our final set of significant features for the client and server systems by taking a union of the results received from the three tree-based methods, as shown in the following table.
| Parameters | client/server | Description |
| :- | :- | :- |
| Advertised_auto-negotation | client | If the linked advertised auto-negotiation |
| CPU(s) | server | Number of logical cores on the machine |
| Network speed | server | Speed of the ethernet device |
| Model name | client | Processor model |
| rx_dropped | server | Packets dropped after entering the computer stack |
| Model name | server | Processor model |
| System type | server | Virtual or physical system |
### Develop a predictive model
For this step, we used the Random Forest (RF) prediction model since it is known to perform better than CART and is also easier to visualize.
Random Forest (RF) builds multiple decision trees and merges them to get a more stable and accurate prediction. It builds the trees the same way CART does, but to ensure that the trees are uncorrelated to protect each other from their individual errors, it uses a technique known as bagging. Bagging uses random samples from the data with replacement to train the individual trees. Another difference between trees in a Random Forest and a CART decision tree is the choice of features considered for each split. CART considers every possible feature for each split. However, each tree in a Random Forest picks only from a random subset of features. This leads to even more variation among the Random Forest trees.
The RF model was constructed separately for both the target variables.
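A minimal sketch of this modeling step (scikit-learn assumed; the file and column names are hypothetical stand-ins for the dataset described above):

```
# Python sketch: fit one Random Forest regressor per target variable
# and report its R2 score on held-out data.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("benchmark_runs.csv")                  # hypothetical dataset
features = ["cpus", "network_speed", "rx_dropped"]      # stand-ins for the table above
for target in ("throughput", "latency"):
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df[target], random_state=0)
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X_train, y_train)
    print(target, "held-out R2:", rf.score(X_test, y_test))
```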
### Recommend configurations
For this step, given desired throughput and response time values, along with the workload of interest, our tool searches through the database of benchmark runs to return the configuration with the performance results closest to what the user requires. It also returns the standard deviation for various samples of that run, suggesting potential variation in the actual results.
### Evaluation
To evaluate our predictive model, we used a repeated [K-Fold cross-validation][5] technique. It is a popular choice to get an accurate estimate of the efficiency of the predictive model.
To evaluate the predictive model with a dataset of 9,048 points, we used k equal to 10 and repeated the cross-validation method three times. The accuracy was calculated using the two metrics given below.
* R2 score: The proportion of the variance in the dependent variable that is predictable from the independent variable(s). Its value varies between -1 and 1.
* Root mean squared error (RMSE): It measures the average squared difference between the estimated values and the actual values and returns its square root.
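A sketch of how such an evaluation can be run (scikit-learn assumed; synthetic toy data stands in for the 9,048-point benchmark dataset):

```
# Python sketch: 10-fold cross-validation repeated 3 times,
# scored with R2 and RMSE.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=500, n_features=7, random_state=0)  # toy data
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
r2 = cross_val_score(model, X, y, scoring="r2", cv=cv)
rmse = -cross_val_score(model, X, y, scoring="neg_root_mean_squared_error", cv=cv)
print("mean R2: %.3f, mean RMSE: %.3f" % (r2.mean(), rmse.mean()))
```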
Based on the above two criteria, the results for the predictive model with throughput and latency as target variables are as follows:
* Throughput (trans/sec):
  * R2 score: 0.984
  * RMSE: 0.012
* Latency (usec):
  * R2 score: 0.930
  * RMSE: 0.025
### What does the final tool look like?
We implemented our approach in a tool shown in the following figure. The tool is implemented in Python. It takes as input the dataset containing the information about benchmark runs as a CSV file, including client and server configuration, workload, and the desired values for latency and throughput. The tool uses this information to predict the latency and throughput results for the user's client server system. It then searches through the database of benchmark runs to return the configuration that has performance results closest to what the user requires, along with the standard deviation for that run. The standard deviation is part of the dataset and is calculated using repeated samples for one iteration or run.
![An infographic showing inputs and outputs for the Performance Predictor and Tuner (PPT). The inputs are client and server sosreports (tarball), workload, and expected latency and throughput. The outputs are latency (usec) and throughput (trans/sec), configuration for the client and the server, and the standard deviation for results.][6]
Image by: (Hifza Khalid, CC BY-SA 4.0)
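Putting the earlier sketches together, the tool's overall flow can be approximated as: predict performance for the user's system, then look up the closest real run. Everything here, including the `stddev` column, continues the hypothetical names introduced above:

```
# Hypothetical end-to-end flow, reusing the earlier sketches.
user = pd.DataFrame([{"cpus": 16, "net_speed": 10000,
                      "rx_dropped": 0, "system_type": "physical"}])
user = pd.get_dummies(user).reindex(columns=features.columns, fill_value=0)

pred_tput = throughput_model.predict(user)[0]
pred_lat = latency_model.predict(user)[0]

best_run = recommend(runs, pred_tput, pred_lat, workload="tcp_stream")
print(best_run[["throughput", "latency", "stddev"]])
```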
### What were the challenges with this approach?
While working on this problem, there were several challenges that we addressed. The first major challenge was gathering benchmark data, which required learning Elasticsearch and Kibana, the two industrial tools used by Red Hat to index, store, and interact with [Pbench][7] data. Another difficulty was dealing with the inconsistencies in data, missing data, and errors in the indexed data. For example, workload data for the benchmark runs was indexed in Elasticsearch, but one of the crucial workload parameters, runtime, was missing. For that, we had to write extra code to access it from the raw benchmark data stored on Red Hat servers.
Once we overcame the above challenges, we spent a large chunk of our effort trying out almost all the feature selection techniques available and figuring out a representative set of hardware and OS parameters for network performance. It was challenging to understand the inner workings of these techniques, their limitations, and their applications and analyze why most of them did not apply to our case. Because of space limitations and shortage of time, we did not discuss all of these methods in this article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/improve-network-performance-pbench
作者:[Hifza Khalid][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/hifza-khalid
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/mesh_networking_dots_connected.png
[2]: https://distributed-system-analysis.github.io/pbench/
[3]: https://distributed-system-analysis.github.io/pbench/
[4]: https://opensource.com/sites/default/files/2022-05/pbench%20figure.png
[5]: https://vitalflux.com/k-fold-cross-validation-python-example/
[6]: https://opensource.com/sites/default/files/2022-05/PPT.png
[7]: https://github.com/distributed-system-analysis/pbench
[8]: https://github.com/distributed-system-analysis/pbench

View File

@ -0,0 +1,108 @@
[#]: subject: "Machine Learning: Classification Using Python"
[#]: via: "https://www.opensourceforu.com/2022/05/machine-learning-classification-using-python/"
[#]: author: "Gayatri Venugopal https://www.opensourceforu.com/author/gayatri-venugopal/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Machine Learning: Classification Using Python
======
In machine learning (ML), a set of data is analysed to predict a result. Python is considered one of the best programming language choices for ML. In this article, we will discuss machine learning with respect to classification using Python.
![machine-learning-classification][1]
Let's say you want to teach a child to differentiate between apples and oranges. There are various ways to do this. You could ask the child to touch both kinds of fruits so that they get familiar with the shape and the softness. You could also show them multiple examples of apples and oranges, so that they can visually spot the differences. The technological equivalent of this process is known as machine learning.
Machine learning teaches computers to solve a particular problem, and to get better at it through experience. The example discussed here is a classification problem, where the machine is given various labelled examples, and is expected to label an unlabelled sample using the knowledge it acquired from the labelled samples. A machine learning problem can also take the form of regression, where it is expected to predict a real-valued solution to a given problem based on known samples and their solutions. Classification and regression are broadly termed supervised learning. Machine learning can also be unsupervised, where the machine identifies patterns in unlabelled data, and forms clusters of samples with similar patterns. Another form of machine learning is reinforcement learning, where the machine learns from its environment by making mistakes.
### Classification
Classification is the process of predicting the label of a given set of points based on the information obtained from known points. The class, or label, associated with a data set could be binary or multiple in nature. As an example, if we have to label the sentiment associated with a sentence, we could label it as positive, negative or neutral. On the other hand, problems where we have to predict whether a fruit is an apple or an orange will have binary labels. Table 1 gives a sample data set for a classification problem.
In this table, the value of the last column, i.e., loan approval, is expected to be predicted based on the other variables. In the subsequent sections, we will learn how to train and evaluate a classifier using Python.
| Age | Credit rating | Job | Property owned | Loan approval |
| :- | :- | :- | :- | :- |
| 35 | good | yes | yes | yes |
| 32 | poor | yes | no | no |
| 22 | fair | no | no | no |
| 42 | good | yes | no | yes |
Table 1
### Training and evaluating a classifier
In order to train a classifier, we need a data set containing labelled examples. Though the process of cleaning the data is not covered in this section, it is recommended that you read about various data preprocessing and cleaning techniques before feeding your data set to a classifier. In order to process the data set in Python, we will import the pandas package and use its data frame structure. You may then choose from a variety of classification algorithms such as decision tree, support vector classifier, random forest, XGBoost, AdaBoost, etc. We will look at the random forest classifier, which is an ensemble classifier formed using multiple decision trees.
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

# X (features) and y (labels) are assumed to have been prepared
# from the cleaned data set
classifier = RandomForestClassifier()

# create a train-test split, holding out 33% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)

classifier.fit(X_train, y_train)  # train the classifier on the training set
y_pred = classifier.predict(X_test)  # evaluate the classifier on unseen data
print("Accuracy: ", metrics.accuracy_score(y_test, y_pred))  # compare the predictions with the actual values in the test set
```
Although this program uses accuracy as the performance metric, a combination of metrics should be used, as accuracy tends to produce unrepresentative results when the test set is imbalanced. For instance, a model that gives the same prediction for every record will still score a high accuracy if most of the records in the test set belong to the class it predicts.
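As one example of such a combination, scikit-learn's `classification_report` prints precision, recall, and the F1 score per class alongside the overall accuracy. This short sketch continues from the variables in the snippet above:

```
from sklearn.metrics import classification_report

# Per-class precision, recall and F1 are far more informative than
# plain accuracy when the test set is imbalanced.
print(classification_report(y_test, y_pred))
```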
### Tuning a classifier
Tuning refers to the process of modifying the values of the hyperparameters of a model in order to improve its performance. A hyperparameter is a parameter whose value can be changed to improve the learning process of the algorithm.
The following code depicts random search hyperparameter tuning. In this, we define a search space from which the algorithm will pick different values, and choose the one that produces the best results:
```
from sklearn.model_selection import RandomizedSearchCV

# define the search space
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
grid = {"min_samples_split": min_samples_split,
        "min_samples_leaf": min_samples_leaf}

# n_iter is the number of parameter settings sampled from the search
# space; this grid has only 3 x 3 = 9 combinations in total
classifier = RandomizedSearchCV(classifier, grid, n_iter=9)
classifier.fit(X_train, y_train)
# classifier.best_score_ and classifier.best_params_ hold the best
# performance of the model and the parameter values that produced it
```
### Voting classifier
You can also combine multiple classifiers and their predictions to create a model that gives a single prediction based on the individual ones. When only the number of classifiers that voted for each label is considered, the process is called hard voting. In soft voting, each classifier generates a probability of a given record belonging to each class, and the voting classifier predicts the class with the highest average probability.
A code snippet for creating a soft voting classifier is given below:
```
from sklearn.ensemble import VotingClassifier

# rf_clf, ada_clf, xgb_clf, et_clf and gb_clf are assumed to be
# previously created classifier instances
soft_voting_clf = VotingClassifier(
    estimators=[("rf", rf_clf), ("ada", ada_clf), ("xgb", xgb_clf),
                ("et", et_clf), ("gb", gb_clf)],
    voting="soft")
soft_voting_clf.fit(X_train, y_train)
```
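For comparison, a hard voting classifier only changes the `voting` argument; the individual classifiers (`rf_clf`, `ada_clf`, and so on) are assumed to be the same instances used above:

```
# Hard voting: each classifier casts one vote and the majority label wins.
hard_voting_clf = VotingClassifier(
    estimators=[("rf", rf_clf), ("ada", ada_clf), ("xgb", xgb_clf),
                ("et", et_clf), ("gb", gb_clf)],
    voting="hard")
hard_voting_clf.fit(X_train, y_train)
```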
This article has summarised the use of classifiers, tuning a classifier and the process of combining the results of multiple classifiers. Do use this as a reference point and explore each area in detail.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/machine-learning-classification-using-python/
作者:[Gayatri Venugopal][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/gayatri-venugopal/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/machine-learning-classification.jpg

View File

@ -0,0 +1,198 @@
[#]: subject: "Migrate databases to Kubernetes using Konveyor"
[#]: via: "https://opensource.com/article/22/5/migrating-databases-kubernetes-using-konveyor"
[#]: author: "Yasu Katsuno https://opensource.com/users/yasu-katsuno"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Migrate databases to Kubernetes using Konveyor
======
Konveyor Tackle-DiVA-DOA helps database engineers easily migrate database servers to Kubernetes.
![Ships at sea on the web][1]
Kubernetes Database Operator is useful for building scalable database servers as a database (DB) cluster. But because you have to create new artifacts expressed as YAML files, migrating existing databases to Kubernetes requires a lot of manual effort. This article introduces a new open source tool named Konveyor [Tackle-DiVA-DOA][2] (Data-intensive Validity Analyzer-Database Operator Adaptation). It automatically generates deployment-ready artifacts for database operator migration. And it does that through datacentric code analysis.
### What is Tackle-DiVA-DOA?
Tackle-DiVA-DOA (DOA, for short) is an open source datacentric database configuration analytics tool in Konveyor Tackle. It imports target database configuration files (such as SQL and XML) and generates a set of Kubernetes artifacts for database migration to operators such as [Zalando Postgres Operator][3].
![A flowchart shows a database cluster with three virtual machines and SQL and XML files transformed by going through Tackle-DiVA-DOA into a Kubernetes Database Operator structure and a YAML file][4]
Image by: (Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)
DOA finds and analyzes the settings of an existing system that uses a database management system (DBMS). Then it generates manifests (YAML files) of Kubernetes and the Postgres operator for deploying an equivalent DB cluster.
![A flowchart shows the four elements of an existing system (as described in the text below), the manifests generated by them, and those that transfer to a PostgreSQL cluster][5]
Image by: (Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)
Database settings of an application consist of DBMS configurations, SQL files, DB initialization scripts, and program codes to access the DB.
* DBMS configurations include parameters of DBMS, cluster configuration, and credentials. DOA stores the configuration to `postgres.yaml` and secrets to `secret-db.yaml` if you need custom credentials.
* SQL files are used to define and initialize tables, views, and other entities in the database. These are stored in the Kubernetes ConfigMap definition `cm-sqls.yaml`.
* Database initialization scripts typically create databases and schema and grant users access to the DB entities so that SQL files work correctly. DOA tries to find initialization requirements from scripts and documents or guesses if it can't. The result will also be stored in a ConfigMap named `cm-init-db.yaml`.
* Code to access the database, such as host and database name, is in some cases embedded in program code. These are rewritten to work with the migrated DB cluster.
### Tutorial
DOA is expected to run within a container and comes with a script to build its image. Make sure Docker and Bash are installed in your environment, and then run the build script as follows:
```
$ cd /tmp
$ git clone https://github.com/konveyor/tackle-diva.git
$ cd tackle-diva/doa
$ bash util/build.sh
$ docker image ls diva-doa
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
diva-doa     2.2.0     5f9dd8f9f0eb   14 hours ago   1.27GB
diva-doa     latest    5f9dd8f9f0eb   14 hours ago   1.27GB
```
This builds DOA and packages it as a container image. Now DOA is ready to use.
The next step executes the bundled `run-doa.sh` wrapper script, which runs the DOA container. Specify the Git repository of the target database application. This example uses a Postgres database in the [TradeApp][6] application. You can use the `-o` option to set the location of output files and the `-i` option to give the name of the database initialization script:
```
$ cd /tmp/tackle-diva/doa
$ bash run-doa.sh -o /tmp/out -i start_up.sh \
      https://github.com/saud-aslam/trading-app
[OK] successfully completed.
```
The `/tmp/out/` directory and `/tmp/out/trading-app`, a directory with the target application name, are created. In this example, the application name is `trading-app`, which is the GitHub repository name. The generated artifacts (the YAML files) are placed under the application-name directory:
```
$ ls -FR /tmp/out/trading-app/
/tmp/out/trading-app/:
cm-init-db.yaml  cm-sqls.yaml  create.sh*  delete.sh*  job-init.yaml  postgres.yaml  test/
/tmp/out/trading-app/test:
pod-test.yaml
```
The prefix of each YAML file denotes the kind of resource that the file defines. For instance, each `cm-*.yaml` file defines a ConfigMap, and `job-init.yaml` defines a Job resource. At this point, `secret-db.yaml` is not created, and DOA uses credentials that the Postgres operator automatically generates.
Now you have the resource definitions required to deploy a PostgreSQL cluster on a Kubernetes instance. You can deploy them using the utility script `create.sh`. Alternatively, you can use the `kubectl create` command:
```
$ cd /tmp/out/trading-app
$ bash create.sh  # or simply "kubectl apply -f ."
configmap/trading-app-cm-init-db created
configmap/trading-app-cm-sqls created
job.batch/trading-app-init created
postgresql.acid.zalan.do/diva-trading-app-db created
```
The Kubernetes resources are created, including `postgresql` (a resource of the database cluster created by the Postgres operator), `service`, `rs`, `pod`, `job`, `cm`, `secret`, `pv`, and `pvc`. For example, you can see four database pods named `trading-app-*`, because the number of database instances is defined as four in `postgres.yaml`.
```
$ kubectl get all,postgresql,cm,secret,pv,pvc
NAME                                        READY   STATUS      RESTARTS   AGE
pod/trading-app-db-0                        1/1     Running     0          7m11s
pod/trading-app-db-1                        1/1     Running     0          5m
pod/trading-app-db-2                        1/1     Running     0          4m14s
pod/trading-app-db-3                        1/1     Running     0          4m
NAME                                      TEAM          VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
postgresql.acid.zalan.do/trading-app-db   trading-app   13        4      1Gi                                     15m   Running
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/trading-app-db          ClusterIP   10.97.59.252    <none>        5432/TCP   15m
service/trading-app-db-repl     ClusterIP   10.108.49.133   <none>        5432/TCP   15m
NAME                         COMPLETIONS   DURATION   AGE
job.batch/trading-app-init   1/1           2m39s      15m
```
Note that the Postgres operator comes with a user interface (UI). You can find the created cluster on the UI. You need to export the endpoint URL to open the UI on a browser. If you use minikube, do as follows:
```
$ minikube service postgres-operator-ui
```
A browser window then opens automatically, showing the UI.
![Screenshot of the UI showing the Cluster YAML definition on the left with the Cluster UID underneath it. On the right of the screen a header reads "Checking status of cluster," and items in green under that heading show successful creation of manifests and other elements][7]
Image by: (Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)
Now you can get access to the database instances using a test pod. DOA also generated a pod definition for testing.
```
$ kubectl apply -f /tmp/out/trading-app/test/pod-test.yaml # creates a test Pod
pod/trading-app-test created
$ kubectl exec trading-app-test -it -- bash  # login to the pod
```
The database hostname and the credentials to access the DB are injected into the pod, so you can access the database using them. Execute `psql` meta-commands to show all the tables and views in the database:
```
# printenv DB_HOST; printenv PGPASSWORD
(values of the variable are shown)
# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dt'
             List of relations
 Schema |      Name      | Type  |  Owner  
--------+----------------+-------+----------
 public | account        | table | postgres
 public | quote          | table | postgres
 public | security_order | table | postgres
 public | trader         | table | postgres
(4 rows)
# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dv'
                List of relations
 Schema |         Name          | Type |  Owner  
--------+-----------------------+------+----------
 public | pg_stat_kcache        | view | postgres
 public | pg_stat_kcache_detail | view | postgres
 public | pg_stat_statements    | view | postgres
 public | position              | view | postgres
(4 rows)
```
After the test is done, log out from the pod and remove the test pod:
```
# exit
$ kubectl delete -f /tmp/out/trading-app/test/pod-test.yaml
```
Finally, delete the created cluster using a script:
```
$ bash delete.sh
```
### Welcome to Konveyor Tackle world!
To learn more about application refactoring, you can check out the [Konveyor Tackle site][8], join the community, and access the source code on [GitHub][9].
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/5/migrating-databases-kubernetes-using-konveyor
作者:[Yasu Katsuno][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/yasu-katsuno
[b]: https://github.com/lkxed
[1]: https://opensource.com/sites/default/files/lead-images/kubernetes_containers_ship_lead.png
[2]: https://github.com/konveyor/tackle-diva/tree/main/doa
[3]: https://github.com/zalando/postgres-operator
[4]: https://opensource.com/sites/default/files/2022-05/tackle%20illustration.png
[5]: https://opensource.com/sites/default/files/2022-05/existing%20system%20tackle.png
[6]: https://github.com/saud-aslam/trading-app
[7]: https://opensource.com/sites/default/files/2022-05/postgreSQ-.png
[8]: https://www.konveyor.io/tools/tackle/
[9]: https://github.com/konveyor/tackle-diva

View File

@ -0,0 +1,95 @@
[#]: subject: "Package is “set to manually installed”? What does it Mean?"
[#]: via: "https://itsfoss.com/package-set-manually-installed/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Package is “set to manually installed”? What does it Mean?
======
If you use the apt command to install packages in the terminal, you'll see all kinds of output.
If you pay attention and read the output, sometimes you'll notice a message that reads:
**package_name set to manually installed**
Have you ever wondered what this message means and why you don't see it for all packages? Let me share some details in this explainer.
### Understanding “Package set to manually installed”
You'll see this message when you try installing an already installed library or development package. This dependency package was installed automatically with another package. The dependency package gets removed with the apt autoremove command if the main package is removed.
But since you tried to install the dependency package explicitly, your Ubuntu system thinks that you need this package independent of the main package. And hence the package is marked as manually installed so that it is not removed automatically.
Not very clear, right? Take the example of [installing VLC on Ubuntu][1].
Since the main vlc package depends on a number of other packages, those packages are automatically installed with it.
![installing vlc with apt ubuntu][2]
If you check the [list of installed packages][3] that have vlc in their name, you'll see that, except for vlc, the rest are marked automatic. This indicates that these packages were installed automatically (with vlc) and they will be removed automatically with the apt autoremove command (when vlc is uninstalled).
![list installed packages vlc ubuntu][4]
Now suppose you decide to install “vlc-plugin-base” for some reason. If you run the apt install command on it, the system tells you that the package is already installed. At the same time, it changes the mark from automatic to manual because the system thinks that you need this vlc-plugin-base explicitly, as you tried to manually install it.
![package set manually][5]
You can see that its status has been changed to [installed] from [installed,automatic].
![listing installed packages with vlc][6]
Now, let me remove VLC and run the autoremove command. You can see that “vlc-plugin-base” is not in the list of packages to be removed.
![autoremove vlc ubuntu][7]
Check the list of installed packages again. vlc-plugin-base is still installed on the system.
![listing installed packages after removing vlc][8]
You can see two more vlc-related packages here. These are the dependencies for the vlc-plugin-base package and this is why they are also present on the system but marked automatic.
I believe things are clearer now with the examples. Let me add a bonus tip for you.
### Reset package to automatic
If the state of the package was changed from automatic to manual, you can set it back to automatic in the following manner:
```
sudo apt-mark auto package_name
```
![set package to automatic][9]
### Conclusion
This is not a major issue and doesn't stop you from doing your work on your system. However, knowing these little things increases your knowledge a little.
**Curiosity may have killed the cat but it makes a penguin smarter**. That's an original quote to add humor to this otherwise dull article :)
Let me know if you would like to read more such articles that may seem insignificant but help you understand your Linux system a tiny bit better.
--------------------------------------------------------------------------------
via: https://itsfoss.com/package-set-manually-installed/
作者:[Abhishek Prakash][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-latest-vlc/
[2]: https://itsfoss.com/wp-content/uploads/2022/05/installing-vlc-with-apt-ubuntu-800x489.png
[3]: https://itsfoss.com/list-installed-packages-ubuntu/
[4]: https://itsfoss.com/wp-content/uploads/2022/05/list-installed-packages-vlc-ubuntu-800x477.png
[5]: https://itsfoss.com/wp-content/uploads/2022/05/package-set-manually.png
[6]: https://itsfoss.com/wp-content/uploads/2022/05/listing-installed-packages-with-vlc.png
[7]: https://itsfoss.com/wp-content/uploads/2022/05/autoremove-vlc-ubuntu.png
[8]: https://itsfoss.com/wp-content/uploads/2022/05/listing-installed-packages-after-removing-vlc.png
[9]: https://itsfoss.com/wp-content/uploads/2022/05/set-package-to-automatic.png

View File

@ -3,19 +3,19 @@
[#]: author: "sk https://ostechnix.com/author/sk/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: reviewer: "turbokernel"
[#]: publisher: " "
[#]: url: " "
如何在 Fedora 36 中重置 Root 密码
在 Fedora 36 中如何重置 Root 密码
======
在 Fedora 中重置忘记的密码。
在 Fedora 中重置忘记的 root 密码。
是否忘记了 Fedora 中的 root 密码?或者您想更改 Fedora 系统中的 root 用户密码?没问题!本简要指南将引导你完成在 Fedora 操作系统中更改或重置 root 密码的步骤。
是否忘记了 Fedora 中的 root 密码?或者您想更改 Fedora 系统中的 root 用户密码?没问题!本手册将指导您在 Fedora 操作系统中完成更改或重置 root 密码的步骤。
**注意:** 本指南已在 Fedora 36 和 35 版本上进行了正式测试。下面提供的步骤与在 Fedora Silverblue 和旧 Fedora 版本中重置 root 密码的步骤相同。
**注意:** 本手册已在 Fedora 36 和 35 版本上进行了正式测试。下面提供的步骤与在 Fedora Silverblue 和旧 Fedora 版本中重置 root 密码的步骤相同。
**步骤 1** - 打开 Fedora 系统并按 **ESC** 键,直到看到 GRUB 启动菜单。出现 GRUB 菜单后,选择要引导的内核并点击 **e**编辑选定的引导条目。
**步骤 1** - 打开 Fedora 系统并按 **ESC** 键,直到看到 GRUB 启动菜单。出现 GRUB 菜单后,选择要引导的内核并按下 **e** 编辑选定的引导条目。
![Grub Menu In Fedora 36][1]
@ -23,13 +23,13 @@
![Find ro Kernel Parameter In Grub Entry][2]
**步骤 3** - 将 **“ro”** 参数替换为 **“rw init=/sysroot/bin/sh”**当然不带引号)。请注意 “`rw`” 和 “`init=/sysroot`...” 之间的空格。修改后内核参数行应如下所示。
**步骤 3** - 将 **“ro”** 参数替换为 **“rw init=/sysroot/bin/sh”**(不带引号)。请注意 “`rw`” 和 “`init=/sysroot`...” 之间的空格。修改后内核参数行应如下所示。
![Modify Kernel Parameters][3]
**步骤 4** - 上更改参数后,按 **Ctrl+x** 进入紧急模式,即单用户模式。
**步骤 4** - 上述步骤更改参数后,按 **Ctrl+x** 进入紧急模式,即单用户模式。
在紧急模式下,输入以下命令以读/写模式挂载根(`/`文件系统
在紧急模式下,输入以下命令以 **读/写** 模式挂载根文件系统`/`)。
```
chroot /sysroot/
@ -37,13 +37,13 @@ chroot /sysroot/
![Mount Root Filesystem In Read, Write Mode In Fedora Linux][4]
**步骤 5** - 现在使用 `passwd` 命令更改 root 密码:
**步骤 5** - 现在使用 `passwd` 命令重置 root 密码:
```
passwd root
```
输入两次 root 密码。我建议使用强密码。
输入两次 root 密码。我建议使用强密码。
![Reset Or Change Root Password In Fedora][5]
@ -65,7 +65,7 @@ exit
reboot
```
等待 SELinux 重新标记完成。这将需要几分钟,具体取决于文件系统的大小和硬盘的速度。
等待 SELinux 重新标记完成。这将需要几分钟,具体时长取决于文件系统的大小和硬盘的速度。
![SELinux Filesystem Relabeling In Progress][7]
@ -73,7 +73,7 @@ reboot
![Login To Fedora As Root User][8]
如你所见,在 Fedora 36 中重置 root 密码的步骤非常简单,并且与**[在 RHEL 中重置 root 密码][9]**及其克隆版本(如 CentOS、AlmaLinux 和 Rocky Linux完全相同。
如你所见,在 Fedora 36 中重置 root 密码的步骤非常简单,并且与**[在 RHEL 中重置 root 密码][9]**及其衍生版本(如 CentOS、AlmaLinux 和 Rocky Linux完全相同。
--------------------------------------------------------------------------------
@ -82,7 +82,7 @@ via: https://ostechnix.com/reset-root-password-in-fedora/
作者:[sk][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[turbokernel](https://github.com/turbokernel)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,142 @@
[#]: subject: "DAML: The Programming Language for Smart Contracts in a Blockchain"
[#]: via: "https://www.opensourceforu.com/2022/05/daml-the-programming-language-for-smart-contracts-in-a-blockchain/"
[#]: author: "Dr Kumar Gaurav https://www.opensourceforu.com/author/dr-gaurav-kumar/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
DAML区块链中智能合约的编程语言
======
DAML 智能合约语言是一种专门设计的特定领域语言,用于编码应用的共享业务逻辑。它用于区块链环境中分布式应用的开发和部署。
![blockchain-hand-shake][1]
区块链技术是一种安全机制,以一种使人难以或不可能修改或入侵的方式来跟踪信息。区块链整合了交易的数字账本,它被复制并发送至其网络上的每台计算机。在链的每个区块中,都有一些交易。当区块链上发生新的交易时,该交易的记录就会被添加到属于该链的每个人的账簿中。
区块链使用分布式账本技术DLT数据库并不保存在单一的服务器或节点中。在区块链中交易以一种被称为“哈希”的不可更改的加密签名来记录。这意味着如果链上的某个区块被改动黑客将难以篡改它因为他们必须对网络中每一个版本的链副本都做同样的修改。比特币和以太坊等区块链随着新区块不断加入而持续增长这使得账本更加安全。
随着区块链中智能合约的实施,在没有任何人工干预的情况下,有自动执行的场景。智能合约技术使得执行最高级别的安全、隐私和反黑客实施成为可能。
![Figure 1: Market size of blockchain technology (Source: Statista.com)][2]
区块链的用例和应用是:
* 加密货币
* 智能合约
* 安全的个人信息
* 数字健康记录
* 电子政务
* 不可伪造的代币NFT
* 游戏
* 跨境金融交易
* 数字投票
* 供应链管理
根据 *Statista.com* 的数据,过去几年来,区块链技术市场的规模一直在快速增长,预计到 2025 年将达到 400 亿美元。
### 区块链的编程语言和工具箱
有许多编程语言和开发工具包可用于分布式应用和智能合约。区块链的编程和脚本语言包括 Solidity、Java、Vyper、Serpent、Python、JavaScript、GoLang、PHP、C++、Ruby、Rust、Erlang 等,并根据实施场景和用例进行使用。
选择一个合适的平台来开发和部署区块链,取决于一系列因素,包括对安全、隐私、交易速度和可扩展性的需求(图 2
![Figure 2: Factors to look at when selecting a blockchain platform][3]
开发区块链的主要平台有:
* Ethereum
* XDC Network
* Tezos
* Stellar
* Hyperledger
* Ripple
* Hedera Hashgraph
* Quorum
* Corda
* NEO
* OpenChain
* EOS
* Dragonchain
* Monero
### DAML一种高性能的编程语言
数字资产建模语言或 DAMLdaml.com是一种高性能的编程语言用于开发和部署区块链环境中的分布式应用。它是一个轻量级和简洁的平台用于快速应用开发。
![Figure 3: Official portal of DAML][4]
DAML 的主要特点是:
* 细粒度的权限
* 基于场景的测试
* 数据模型
* 业务逻辑
* 确定性的执行
* 存储抽象化
* 无重复开销
* 负责任的跟踪
* 原子的可组合性
* 授权检查
* 基于需要知道need-to-know原则的隐私
### 安装和使用 DAML
DAML SDK 可以安装在 Linux、macOS 或 Windows 上。在多个操作系统上安装 DAML 的详细说明可访问 *https://docs.daml.com/getting-started/installation.html*
你必须具备以下条件才能使用 DAML
* Visual Studio Code
* Java 开发套件JDK
在 Windows 上,可以通过下载并运行可执行安装程序来安装 DAML下载地址为 *https://github.com/digital-asset/daml/releases/download/v1.18.1/daml-sdk-1.18.1-windows.exe*。
在 Linux 或 Mac 上安装 DAML 可以通过在终端执行以下内容来完成:
```
$ curl -sSL https://get.daml.com/ | sh
```
安装 DAML 后,可以创建基于区块链的新应用,如图 4 和 5 所示。
![Figure 4: Creating a new app][5]
在另一个终端中,进入新建应用的目录并安装项目依赖:
![Figure 5: Running DAML][6]
```
WorkingDirectory>cd myapp/ui
WorkingDirectory>npm install
WorkingDirectory>npm start
```
Web UI 随之启动,该应用可在 Web 浏览器中通过 URL *http://localhost:3000/* 访问。
![Figure 6: Login panel in DAML app][7]
### 研究和开发的范围
区块链技术为不同类别的应用提供了广泛的开发平台和框架。其中许多平台是免费和开源的,可以下载和部署以用于基于研究的实现。研究学者、从业者和院士可以使用这些平台为众多应用提出和实施他们的算法。
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/05/daml-the-programming-language-for-smart-contracts-in-a-blockchain/
作者:[Dr Kumar Gaurav][a]
选题:[lkxed][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/blockchain-hand-shake.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-1-Market-size-of-blockchain-technology.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-2-Factors-to-look-at-when-selecting-a-blockchain-platform-2.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-3-Official-portal-of-DAML-1.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-4-Creating-a-new-app.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-5-Running-DAML.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/04/Figure-6-Login-panel-in-DAML-app.jpg