Merge pull request #6 from LCTT/master

与上游同步
This commit is contained in:
jx.zeng 2020-07-04 22:34:30 +08:00 committed by GitHub
commit b17bc16978
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
48 changed files with 3543 additions and 1765 deletions


@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12371-1.html)
[#]: subject: (How Cloud-init can be used for your Raspberry Pi homelab)
[#]: via: (https://opensource.com/article/20/5/cloud-init-raspberry-pi-homelab)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
在你的树莓派家庭实验室中使用 Cloud-init
======
> 了解了云行业的标准,该向你的家庭实验室自动添加新设备和用户了。
![](https://img.linux.net.cn/data/attachment/album/202007/01/203559wt8tnnnxnc6jcnn8.jpg)
[Cloud-init][2](可以说)是一个标准,云提供商用它来为云实例提供初始化和配置数据。它最常用于新实例的首次启动,以自动完成网络设置、账户创建和 SSH 密钥安装等使新系统上线所需的任何事情,以便用户可以访问它。
在之前的一篇文章《[修改磁盘镜像来创建基于树莓派的家庭实验室][3]》中,我展示了如何为像树莓派这样的单板计算机定制操作系统镜像以实现类似的目标。有了 Cloud-init就不需要向镜像中添加自定义数据。一旦在镜像中启用了它你的虚拟机、物理服务器甚至是小小的树莓派都可以表现得像你自己的 “家庭私有云” 中的云计算实例。新机器只需插入、打开,就可以自动成为你的[家庭实验室][4]的一部分。
说实话Cloud-init 的设计并没有考虑到家庭实验室。正如我所提到的,你可以很容易地修改给定的一套系统磁盘镜像,以启用 SSH 访问并在第一次启动后对它们进行配置。Cloud-init 是为大规模的云提供商设计的,这些提供商需要容纳许多客户,维护一组小的镜像,并为这些客户提供访问实例的机制,而无需为每个客户定制一个镜像。拥有单个管理员的家庭实验室则不会面临同样的挑战。
不过Cloud-init 在家庭实验室中也不是没有可取之处。教育是我的家庭私有云项目的目标之一,而为你的家庭实验室设置 Cloud-init 是一个很好的方式可以获得大大小小的云提供商大量使用的技术的经验。Cloud-init 也是其他初始配置选项的替代方案之一。与其为家庭实验室中的每台设备定制每个镜像、ISO 等,并在你要进行更改时面临繁琐的更新,不如直接启用 Cloud-init。这减少了技术债务 —— 还有什么比*个人*技术债务更糟糕的吗?最后,在你的家庭实验室中使用 Cloud-init 可以让你的私有云实例与你拥有的或将来可能拥有的任何公有云实例表现相同 —— 这是真正的[混合云][5]。
### 关于 Cloud-init
当为 Cloud-init 配置的实例启动并且服务开始运行时(实际上是 systemd 中的四个服务,以处理启动过程中的依赖关系),它会检查其配置中的[数据源][6],以确定其运行在什么类型的云中。每个主要的云提供商都有一个数据源配置,告诉实例在哪里以及如何检索配置信息。然后,实例使用数据源信息检索云提供商提供的配置信息(如网络信息和实例识别信息)和客户提供的配置数据(如要复制的授权密钥、要创建的用户账户以及许多其他可能的任务)。
检索数据后Cloud-init 再对实例进行配置:设置网络、复制授权密钥等,最后完成启动过程。然后,远程用户就可以访问它,准备好使用 [Ansible][7] 或 [Puppet][8] 等工具进行进一步的配置,或者准备好接收工作负载并开始分配任务。
### 配置数据
如上所述Cloud-init 使用的配置数据来自两个潜在来源:云提供商和实例用户。在家庭实验室中,你扮演着这两种角色:作为云提供商提供网络和实例信息,作为用户提供配置信息。
#### 云提供商元数据文件
在你的云提供商角色中,你的家庭实验室数据源将为你的私有云实例提供一个元数据文件。这个[元数据][9]文件包含实例 ID、云类型、Python 版本Cloud-init 用 Python 编写并使用 Python或要分配给主机的 SSH 公钥等信息。如果你不使用 DHCP或 Cloud-init 支持的其他机制,如镜像中的配置文件或内核参数),元数据文件还可能包含网络信息。
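作为示意,一个 NoCloud 风格的 `meta-data` 文件大致如下(实例 ID、主机名和网络参数均为假设的示例值):

```yaml
instance-id: homelab-node-001
local-hostname: raspberrypi-1
# 如果不使用 DHCP,也可以在这里提供静态网络配置(示例值)
# network-interfaces: |
#   iface eth0 inet static
#   address 192.168.1.10
#   netmask 255.255.255.0
#   gateway 192.168.1.1
```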
#### 用户提供的用户数据文件
Cloud-init 的真正价值在于用户数据文件。[用户数据][10]文件由用户提供给云提供商,并包含在数据源中,它将实例从一台普通的机器变成了用户舰队的一员。用户数据文件可以以可执行脚本的形式出现,与正常情况下脚本的工作方式相同;也可以以云服务配置 YAML 文件的形式出现,利用 [Cloud-init 的模块][11] 来执行配置任务。
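作为示意,下面是一个利用 cloud-config 模块的最小 `user-data` 文件(用户名、密钥和软件包均为假设的示例值):

```yaml
#cloud-config
# 创建用户并安装 SSH 公钥(示例值)
users:
  - name: homelab
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3Nza... user@example.com
# 首次启动时更新软件包并安装常用工具
package_update: true
packages:
  - tmux
  - git
```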
### 数据源
数据源是由云提供商提供的服务,它为实例提供了元数据和用户数据文件。实例镜像或 ISO 被配置为告知实例正在使用什么数据源。
例如,亚马逊 AWS 提供了一个 [link-local][12] 文件,它将用实例的自定义数据来响应实例的 HTTP 请求。其他云提供商也有自己的机制。幸运的是,对于家庭私有云项目来说,也有 NoCloud 数据源。
[NoCloud][13] 数据源允许通过内核命令以键值对的形式提供配置信息,或通过挂载的 ISO 文件系统以用户数据和元数据文件的形式提供。这些对于虚拟机来说很有用,尤其是与自动化搭配来创建虚拟机。
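作为示意,下面的 shell 片段展示了如何为 NoCloud 数据源制作一个种子 ISO(文件内容均为假设的示例值;`genisoimage` 在某些发行版中叫 `mkisofs`):

```shell
# 准备 NoCloud 所需的两个文件(内容为示例值)
mkdir -p seed
cat > seed/meta-data <<'EOF'
instance-id: homelab-vm-001
local-hostname: cloud-test
EOF
cat > seed/user-data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3Nza... user@example.com
EOF
# 打包为卷标为 "cidata" 的 ISO,cloud-init 的 NoCloud 数据源会自动识别并挂载它
if command -v genisoimage >/dev/null; then
  genisoimage -output seed.iso -volid cidata -joliet -rock seed/user-data seed/meta-data
fi
```

把生成的 `seed.iso` 挂载到虚拟机上,首次启动时 cloud-init 就会读取其中的配置。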
还有一个 NoCloudNet 数据源,它的行为类似于 AWS EC2 数据源,提供一个 IP 地址或 DNS 名称,通过 HTTP 从这里检索用户数据和元数据。这对于你的家庭实验室中的物理机器来说是最有帮助的,比如树莓派、[NUC][14] 或多余的服务器设备。虽然 NoCloud 可以工作,但它需要更多的人工关注 —— 这是云实例的反模式。
### 家庭实验室的 Cloud-init
我希望这能让你了解到 Cloud-init 是什么,以及它对你的家庭实验室有何帮助。它是一个令人难以置信的工具,被主要的云提供商所接受,在家里使用它可以是为了教育和乐趣,并帮助你自动向实验室添加新的物理或虚拟服务器。之后的文章将详细介绍如何创建简单的静态和更复杂的动态 Cloud-init 服务,并指导你将它们纳入你的家庭私有云。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/cloud-init-raspberry-pi-homelab
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: https://cloudinit.readthedocs.io/
[3]: https://linux.cn/article-12277-1.html
[4]: https://opensource.com/article/19/3/home-lab
[5]: https://www.redhat.com/en/topics/cloud-computing/what-is-hybrid-cloud
[6]: https://cloudinit.readthedocs.io/en/latest/topics/datasources.html
[7]: https://www.ansible.com/
[8]: https://puppet.com/
[9]: https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html#
[10]: https://cloudinit.readthedocs.io/en/latest/topics/format.html
[11]: https://cloudinit.readthedocs.io/en/latest/topics/modules.html
[12]: https://en.wikipedia.org/wiki/Link-local_address
[13]: https://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html
[14]: https://en.wikipedia.org/wiki/Next_Unit_of_Computing


@ -0,0 +1,141 @@
[#]: collector: (lujun9972)
[#]: translator: (nophDog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12375-1.html)
[#]: subject: (How to know if you're ready to switch from Mac to Linux)
[#]: via: (https://opensource.com/article/20/6/mac-to-linux)
[#]: author: (Marko Saric https://opensource.com/users/markosaric)
你是否已经准备好从 Mac 切换到 Linux 了?
======
> 你几乎可以在 Linux 上做任何你在 Mac 上可以做的事情 —— 这是你拥有一个开源操作系统的自由。
![](https://img.linux.net.cn/data/attachment/album/202007/02/222534g8bdabsllplnzl6c.jpg)
我[从 Mac 转到 Linux][2] 已经两年了。在使用 Linux 之前,我用了 15 年的 Apple 系统,而当我在 2018 年安装第一个 Linux 发行版时,还只是一个纯粹的新手。
这些日子以来,我只用 Linux,我可以用它完成任何任务。浏览网页、观看 Netflix 影片、写作以及编辑我的 WordPress [博客][3],甚至还在上面跑我的[开源网页分析项目][4]。
我甚至还不是一个开发者。Linux 被认为不适合日常使用、对非技术人员不够友好的日子已经一去不复返了。
最近有很多关于 Mac 的讨论,越来越多的人已经在考虑转到 Linux。我打算分享我在切换过程中的一些经验,帮助其他新手也能从容转移。
### 你该不该换?
在换系统之前,最好想清楚,因为有时候 Linux 可能跟你预期不一样。如果你仍希望跟 Apple Watch 无缝配对、可以用 FaceTime 给朋友打电话、或者你想打开 iMovie 看视频,那最好还是不要换了。这些都是 Apple 的专有产品,你只能在 Apple 的“围墙花园”里面使用。如果离不开 Apple 的生态系统,那么 Linux 可能不太适合你。
我对 Apple 生态没有太多挂念,我不用 iPhone所以跟手机的协作没那么必要。我也不用 iCloud、FaceTime当然也包括 Siri。我早就对开源充满兴趣只是一直没有行动。
### 检查你的必备软件清单
我还在使用 Mac 的时候,就已经开始探索开源软件,我发现大部分在 Mac 上使用的软件,在 Linux 也可以运行。
很熟悉用火狐浏览网页吗?在 Linux 上它也可以运行。想用 VLC 看视频?它也有 Linux 版本。喜欢用 Audacity 录制、编辑音频?它正在 Linux 上等着你呢。你用 OBS Studio 直播?在 Linux 直接下载安装吧。一直用 Telegram 跟朋友和家人保持联系吗Linux 上当然少不了它。
此外Linux 不仅仅意味着开源软件。你最喜欢的大部分(也可能是所有)非 Apple 专有软件,都能在 Linux 见到它们的身影。Spotify、Slack、Zoom、Stream、Discord、Skype、Chrome 以及很多闭源软件,都可以使用。而且,在你 Mac 浏览器里面运行的任何东西,同样能够运行在 Linux 浏览器。
你能在 Linux 找到你的必备软件,或者更好的替代品吗?请再三确认,做到有备无患。用你最常用的搜索引擎,在网上检索一下。搜索“软件名 + Linux” 或者“软件名 + Linux 替代品”,然后再去 [Flathub][5] 网站查看你能在 Linux 用 Flatpak 安装的专有软件有哪些。
### 请牢记Linux 不等于 Mac
如果你希望能够从 Mac 轻松转移到 Linux我相信有一点很重要你需要保持包容的思想以及愿意学习新操作系统的心态。Linux 并不等于 Mac所以你需要给自己一些时间去接触并了解它。
如果你想让 Linux 用起来、看起来跟你习惯的 macOS 一模一样,那么 Linux 可能也不适合你。尽管你可以通过各种方法[把 Linux 桌面环境打造得跟 macOS 相似][14],但我觉得要想成功转移到 Linux最好的办法是从拥抱 Linux 开始。
试试新的工作流,该怎么用就怎么用。不要总想着把 Linux 变成其它东西。你会跟我一样,像享受 Mac 一样享受 Linux甚至能有更好的体验感。
还记得你第一次使用 Mac 的时候吧:你肯定花了不少时间去习惯它的用法。那么,请给 Linux 同样多的时间和关怀。
### 选择一个 Linux 发行版
有别于 Windows 和 macOSLinux 不止一个单一的操作系统。不同的 Linux 操作系统被称作发行版,开始使用 Linux 之后,我尝试过好几个不同的发行版。我也用过不同的桌面环境,或者图形界面。在美观度、易用性、工作流以及集成软件上,它们有很大差异。
尽管作为 Mac 的替代品,被提及最多的是 [ElementaryOS][6] 和 [Pop!_OS][7],但我仍建议从 [Fedora 工作站][8] 开始,理由如下:
- 使用 [Fedora 介质写入器][9],容易安装
- 几乎可以支持你所有的硬件,开箱即用
- 支持最新的 Linux 软件
- 运行原生无改动的 GNOME 桌面环境
- 有一个大型开发团队以及一个庞大的社区在背后支持
在我看来,对从 macOS 过来的新手来说,[GNOME][10] 是易用性、一致性、流畅性和用户体验最好的桌面环境。它拥有 Linux 世界中最多的开发资源和用户基数,所以你的使用体验会很好。
Fedora 可以为你打开一扇 Linux 的大门,当你适应之后,就可以开始进一步探索各个发行版、桌面环境,甚至窗口管理器之类的玩意了。
### 熟悉 GNOME
GNOME 是 Fedora 和许多其他 Linux 发行版的默认桌面环境。它最近 [升级到 GNOME 3.36][11],带来了 Mac 用户会喜欢的现代设计。
一定要做好心理准备Linux、Fedora 工作站和 GNOME 并不是 Apple 和 macOS。GNOME 非常干净、简约、现代、独创。它不会分散你的注意力,没有桌面图标,没有可见的坞站,窗口上甚至没有最小化和最大化按钮。但是不要慌张,如果你去尝试,它会证明这是你用过最好、最有生产力的操作系统。
GNOME 不会给你带来困扰。启动之后你唯一能看到的东西只有顶栏和背景图片。顶栏由这几样东西组成“活动”在左边时间和日期在中间这也是你的通知中心右边是网络、蓝牙、VPN、声音、亮度、电池等托盘图标之类的东西。
#### 为什么 GNOME 像 Mac
你会注意到一些跟 macOS 的相似之处,例如窗口吸附、空格预览(用起来跟 “Quick Look” 一模一样)。
如果你把鼠标光标移动到左上角,点击顶栏的“活动”,或者按下键盘上的超级键(`Super` 键,也就是 Mac 上的 `⌘` 键),你会看到“活动概览”。它有点像 macOS 系统上“调度中心”和“聚焦搜索”的结合体。它会在屏幕中间展示已打开软件和窗口的概览。在左手边,你可以看到坞站,上面有你打开的软件和常用软件,所有打开的软件下面会有一个指示标志。在右手边,你可以看到不同的工作区。
在顶栏中间,有一个搜索框。只要你开始输入,焦点就会转移到搜索框。它能搜索你已经安装的软件和文件内容,可以在软件中心搜索指定的软件、进行计算、向你展示时间或者天气,当然它能做的还有很多。它就像“聚焦”一样。只需开始输入你要搜索的内容,按下回车就可以打开软件或者文件。
你也能看到一列安装好的软件(更像 Mac 上的“启动台”),点击坞站中的“显示应用”图标,或者按 `Super + A` 就行。
总体来说Linux 是一个轻量级的系统,即使在很老的硬件上也能跑得很顺畅,跟 macOS 比起来仅仅占用很少的磁盘空间。并且不像 macOS你可以删除任何你不想要或不需要的预装软件。
#### 自定义你的 GNOME 设置
浏览一下 GNOME 设置,熟悉它的选项,做一些更改,让它用起来更舒服。下面是一些我装好 GNOME 必做的事情。
- 在“鼠标和触摸板”中,我禁用“自然滚动”、启用“轻触点击”。
- 在“显示”中,我打开“夜光”功能,在晚上,屏幕会让颜色变暖,减少眼睛疲劳。
- 我也安装了 [GNOME 优化][12],因为它可以更改额外的设置选项。
- 在“GNOME 优化”中,我启用了 “Over-Amplification” 设置,这样就能获得更高的音量。
- 在“GNOME 优化”中,相比默认的亮色主题,我更喜欢 “Adwaita Dark” 主题。
#### 习惯使用键盘操作
GNOME 是一个极度以键盘为中心的桌面环境,所以尽量多使用键盘。在 GNOME 设置中的“键盘快捷键”部分,你可以找到各个快捷键。
你也可以根据自己的理想工作流程来设置键盘快捷键。我将我最常用的应用程序设置为使用超级键打开。比如说,`Super + B` 打开我的浏览器,`Super + F` 打开“文件”,`Super + T` 打开终端。我还把 `Ctrl + Q` 设置成关闭窗口。
我使用 `Super + Tab` 在打开的应用程序之间切换,`Super + H` 隐藏一个窗口,`F11` 全屏打开软件,`Super + Left` 把窗口吸附到屏幕左边,`Super + Right` 把窗口吸附到屏幕右边,等等。
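下面是一个在 GNOME 中用 `gsettings` 从命令行添加自定义快捷键的示意(schema 路径是 GNOME 的标准路径,快捷键名称和命令为示例值;需要在 GNOME 会话中运行才能实际生效):

```shell
# 辅助函数:向 GNOME 添加一个自定义快捷键
add_shortcut() {  # 用法: add_shortcut <名称> <命令> <按键>
  local base="org.gnome.settings-daemon.plugins.media-keys"
  local path="/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/"
  if ! command -v gsettings >/dev/null; then
    echo "skip: no gsettings"   # 不在 GNOME 环境中时跳过
    return 0
  fi
  gsettings set "$base" custom-keybindings "['$path']"
  gsettings set "$base.custom-keybinding:$path" name "$1"
  gsettings set "$base.custom-keybinding:$path" command "$2"
  gsettings set "$base.custom-keybinding:$path" binding "$3"
}

# 例如:Super+T 打开终端(对应上文的用法)
add_shortcut 'Terminal' 'gnome-terminal' '<Super>t'
```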
### 在 Mac 上尝试 Linux 之后再做决定
在完全安装 Linux 之前,先在你的 Mac 上尝试 Fedora。从 [Fedora 官网][9]下载 ISO 镜像。使用 [Etcher][13] 将 ISO 镜像写入 USB 驱动器,然后在启动时按住 `Option` 键,这样你就可以在即用模式下尝试了。
现在你无需在 Mac 上安装任何东西就可以探索 Fedora 工作站了。试试各种东西,看看能否正常工作:能不能连接 WiFi?触控板是否正常?有没有声音?等等。
也记得花时间来尝试 GNOME。测试我上面提到的不同功能。打开一些安装好的软件。如果一切看起来都还不错如果你喜欢这样的 Fedora 工作站和 GNOME并且很肯定这就是你想要的那么把它安装到你的 Mac 吧。
尽情探索 Linux 世界吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/mac-to-linux
作者:[Marko Saric][a]
选题:[lujun9972][b]
译者:[nophDog](https://github.com/nophDog)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/markosaric
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
[2]: https://markosaric.com/linux/
[3]: https://markosaric.com/how-start-blog/
[4]: https://plausible.io/open-source-website-analytics
[5]: https://flathub.org/apps
[6]: https://opensource.com/article/20/2/macbook-linux-elementary
[7]: https://support.system76.com/articles/pop-basics/
[8]: https://getfedora.org/
[9]: https://getfedora.org/en/workstation/download/
[10]: https://www.gnome.org/
[11]: https://www.gnome.org/news/2020/03/gnome-3-36-released/
[12]: https://wiki.gnome.org/Apps/Tweaks
[13]: https://www.balena.io/etcher/
[14]: https://linux.cn/article-12361-1.html


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12369-1.html)
[#]: subject: (4 essential tools to set up your Python environment for success)
[#]: via: (https://opensource.com/article/20/6/python-tools)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
@ -12,35 +12,35 @@
> 选择的这些工具将简化你的 Python 环境,以实现顺畅和一致的开发实践。
![Python programming language logo with question marks][1]
![](https://img.linux.net.cn/data/attachment/album/202007/01/123009yolmlzp1yu1y88ew.jpg)
Python 是一门出色的通用编程语言,经常作为第一门编程语言来教授。二十年来,我为它撰写了很多本书,而它仍然是[我的首选语言][2]。虽然通常来说这门语言是简洁明了的,但是(正如 [xkcd][3] 所说的),从来没有人说过配置 Python 环境也是一样的简单。
Python 是一门出色的通用编程语言,经常作为第一门编程语言来教授。二十年来,我为它撰写了很多本书,而它仍然是[我的首选语言][2]。虽然通常来说这门语言是简洁明了的,但是(正如 [xkcd][3] 讽刺的),从来没有人说过配置 Python 环境也是一样的简单。
![xkcd python illustration][4]
*一个复杂的Python环境。 [xkcd][3]*
在日常生活中有很多使用 Python 的方法。我将解释我是如何使用这些 Python 生态系统工具的,坦诚的说,我仍在寻找更多替代品。
在日常生活中有很多使用 Python 的方法。我将解释我是如何使用这些 Python 生态系统工具的。但坦诚的说,我仍在寻找更好的替代品。
### 使用 pyenv 来管理 Python 版本
我发现在你的机器上运行一个特定版本的 Python 的最好方法是使用 `pyenv`。这个软件可以在 Linux、Mac OS X 和 WSL2 上工作:这是我通常关心的三个 “类 UNIX” 环境。
我发现在机器上运行一个特定版本的 Python 的最好方法是使用 `pyenv`。这个软件可以在 Linux、Mac OS X 和 WSL2 上工作:这是我通常关心的三个 “类 UNIX” 环境。
安装 `pyenv` 本身有时会有点棘手。一种方法是使用专用的 [pyenv 安装程序][5],它使用 `curl | bash` 方法来进行(详见说明)。
安装 `pyenv` 本身有时会有点棘手。一种方法是使用专用的 [pyenv 安装程序][5],它使用 `curl | bash` 方法来进行(详见说明)。
如果你是在 Mac 上(或者你运行 Homebrew 的其他系统),你可以按照[这里][6]的说明来安装和使用 `pyenv`
按照说明安装和设置了 `pyenv` 之后,你可以使用 `pyenv global` 来设置一个 “默认的” Python 版本。一般来说,你会选择你 “最喜欢的” 版本。这通常是最新的稳定版本,但如果有其他考虑因素也可能做不同的选择。
按照说明安装和设置了 `pyenv` 之后,你可以使用 `pyenv global` 来设置一个 “默认的” Python 版本。一般来说,你会选择你的 “首选” 版本。这通常是最新的稳定版本,但如果有其他考虑因素也可能做不同的选择。
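作为示意,以下是 `pyenv` 的几个典型命令(假设 `pyenv` 已按说明安装并初始化;版本号仅为示例,函数封装只是为了在没有 pyenv 的机器上也能安全演示):

```shell
# 如果 pyenv 可用则执行实际命令,否则只打印提示
pyenv_demo() {
  if command -v pyenv >/dev/null; then
    pyenv install --skip-existing 3.8.3   # 安装指定版本(已存在则跳过)
    pyenv global 3.8.3                    # 设为 "默认的" Python 版本
    pyenv versions                        # 列出所有已安装的版本
  else
    echo "pyenv not installed"
  fi
}
pyenv_demo
```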
### 使用 virtualenvwrapper 让虚拟环境更简单
使用 `pyenv` 安装 Python 的一个好处是,你后继安装的所有后续 Python 解释器环境都是你自己的,而不属于你的操作系统
使用 `pyenv` 安装 Python 的一个好处是,你之后安装的所有 Python 解释器环境都是你自己的,而不是操作系统层面的。
虽然在 Python 本身内部安装东西通常不是最好的选择,但有一个例外:在上面选择的 “最喜欢的” Python 中,安装并配置 `virtualenvwrapper`。这样你就可以瞬间创建和切换到虚拟环境。
虽然在 Python 本身内部安装东西通常不是最好的选择,但有一个例外:在上面选择的 “首选” Python 中,安装并配置 `virtualenvwrapper`。这样你就可以瞬间创建和切换到虚拟环境。
我在[这篇文章中][7]具体介绍了如何安装和使用 `virtualenvwrapper`
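作为参考,下面是一段假设性的 shell 启动文件(如 `~/.bashrc`)片段,展示了 `virtualenvwrapper` 配置好之后的典型形态(具体路径取决于你的安装方式):

```shell
# ~/.bashrc 片段:启用 virtualenvwrapper(路径为示例)
export WORKON_HOME="$HOME/.virtualenvs"
source "$(pyenv which virtualenvwrapper.sh)"

# 之后即可随时创建和切换虚拟环境:
#   mkvirtualenv runner    # 创建名为 runner 的虚拟环境
#   workon runner          # 瞬间切换到该环境
```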
这里我推荐一个独特的工作流程。你可以制作一个虚拟环境,这样你就可以大量重复使用它来运行许多<ruby>运行器<rt>runner</rt></ruby>。在这个环境中,安装你最喜欢的运行器 —— 也就是你会经常用来运行其他软件的软件。就目前而言,我的首选是 `tox`
这里我推荐一个独特的工作流程:你可以制作一个可以大量重复运行的虚拟环境,用来做<ruby>运行器<rt>runner</rt></ruby>。在这个环境中,可以安装你最喜欢的运行器 —— 也就是你会经常用来运行其他软件的软件。就目前而言,我的首选是 `tox`
### 使用 tox 作为 Python 运行器
@ -51,13 +51,13 @@ $ workon runner
$ tox
```
这个工作流程之所以重要,是因为我要在多个版本的 Python 和多个版本的依赖中测试我的代码。这意味着在 `tox` 运行器中会有多个环境。有些人会尝试在最新的依赖关系中运行,有些人会尝试在冻结的依赖关系中运行(接下来会有更多的介绍),我也可能会用 `pip-compile` 在本地生成这些环境。
这个工作流程之所以重要,是因为我要在多个版本的 Python 和多个版本的依赖中测试我的代码。这意味着在 `tox` 运行器中会有多个环境。一些会尝试在最新的依赖关系中运行,一些会尝试在冻结的依赖关系中运行(接下来会有更多的介绍),我也可能会用 `pip-compile` 在本地生成这些环境。
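作为示意,一个与上述流程对应的最小 `tox.ini` 可能长这样(Python 版本、依赖文件名和测试目录均为假设的示例):

```ini
[tox]
envlist = py37,py38

[testenv]
deps = -rrequirements.txt
commands = pytest tests/
```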
附注:我目前正在[研究使用 nox][9] 作为 `tox` 的替代品。原因超出了本文的范,但值得一试。
附注:我目前正在[研究使用 nox][9] 作为 `tox` 的替代品。原因超出了本文的范围,但它值得一试。
### 使用 pip-compile 进行 Python 依赖性管理
Python 是一种动态编程语言,这意味着它在每次执行代码时都会加载其依赖关系。确切了解每个依赖项的具体运行版本可能意味着是平稳运行代码还是意外崩溃。这意味着我们必须考虑依赖管理工具。
Python 是一种动态编程语言,这意味着它在每次执行代码时都会加载其依赖关系。能否确切了解每个依赖项的具体运行版本可能意味着是平稳运行代码还是意外崩溃。这意味着我们必须考虑依赖管理工具。
对于每个新项目,我都会包含一个 `requirements.in` 文件,(通常)只有以下内容:
@ -65,15 +65,15 @@ Python 是一种动态编程语言,这意味着它在每次执行代码时都
.
```
是的,没错。只有一个点的单行。我在 `setup.py` 文件中记录了 “松” 的依赖关系,比如 `Twisted>=17.5`。这与 `Twisted==18.1` 这样的确切依赖关系形成了鲜明对比,后者在需要一个特性或错误修复时升级到新版本的库变得更加困难
是的,没错。只有一个点的单行。我在 `setup.py` 文件中记录了 “宽松” 的依赖关系,比如 `Twisted>=17.5`。这与 `Twisted==18.1` 这样的确切依赖关系形成了鲜明对比,后者在需要一个特性或错误修复时,难以升级到新版本的库。
`.` 的意思是 “当前目录”,它使用当前目录下的 `setup.py` 作为依赖关系的来源。
这意味着使用 `pip-compile requirements.in > requirements.txt` 创建一个冻结的依赖文件。你可以在 `virtualenvwrapper` 创建的虚拟环境中或者 `tox.ini` 中使用这个依赖文件。
这意味着使用 `pip-compile requirements.in > requirements.txt` 创建一个冻结的依赖文件。你可以在 `virtualenvwrapper` 创建的虚拟环境中或者 `tox.ini` 中使用这个依赖文件。
有时,从 `requirements-dev.in`(内容:`.[dev]`)生成`requirements-dev.txt` 或从 `requirements-test.in`(内容:`.[test]`)生成 `requirements-test.txt` 很有用
有时,也可以`requirements-dev.in`(内容:`.[dev]`)生成 `requirements-dev.txt`或从 `requirements-test.in`(内容:`.[test]`)生成 `requirements-test.txt`
我正在研究在这个流程中是否应该用 [dephell][10] 代替 `pip-compile`。`dephell` 工具有许多有趣的功能,比如使用异步 HTTP 请求来下载依赖项。
我正在研究在这个流程中是否应该用 [dephell][10] 代替 `pip-compile`。`dephell` 工具有许多有趣的功能,比如使用异步 HTTP 请求来下载依赖项。
### 结论
@ -97,8 +97,8 @@ via: https://opensource.com/article/20/6/python-tools
[3]: https://xkcd.com/1987/
[4]: https://opensource.com/sites/default/files/uploads/python_environment_xkcd_1.png (xkcd python illustration)
[5]: https://github.com/pyenv/pyenv-installer
[6]: https://opensource.com/article/20/4/pyenv
[7]: https://opensource.com/article/19/6/python-virtual-environments-mac
[6]: https://linux.cn/article-12241-1.html
[7]: https://linux.cn/article-11086-1.html
[8]: https://opensource.com/article/19/5/python-tox
[9]: https://nox.thea.codes/en/stable/
[10]: https://github.com/dephell/dephell


@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12373-1.html)
[#]: subject: (Missing Photoshop on Linux? Use PhotoGIMP and Convert GIMP into Photoshop)
[#]: via: (https://itsfoss.com/photogimp/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
@ -18,7 +18,7 @@
但是,习惯了 Photoshop 的人们发现,在切换到 GIMP 时很难忘记他们长期养成的肌肉记忆。这可能会使某些人感到沮丧,因为使用新的界面意味着要学习大量的键盘快捷键,并花时间查找各个工具的位置。
为了帮助从 Photoshop 切换到 GIMP 的人,[Diolinux][4] 介绍了一个在 GIMP 中模仿 Adobe Photoshop 的工具。
为了帮助从 Photoshop 切换到 GIMP 的人,[Diolinux][4] 推出了一个在 GIMP 中模仿 Adobe Photoshop 的工具。
### PhotoGIMP在 Linux 中为 GIMP 提供 Adobe Photoshop 的外观
@ -34,29 +34,25 @@
* 添加新的默认设置以最大化画布空间
* 添加类似于 Adobe Photoshop 的键盘快捷键
PhotoGIMP 还在自定义 .desktop 文件中添加新的图标和名称。让我们看看如何使用它。
PhotoGIMP 还在自定义 `.desktop` 文件中添加新的图标和名称。让我们看看如何使用它。
### 在 Linux 上安装 PhotoGIMP (适合中级到专业用户)
PhotoGIMP 本质上是一个补丁。在 Linux 中下载并[解压 zip 文件][7],你将在解压出的文件夹中找到以下隐藏的文件夹:
* icons其中包含新的 PhotoGIMP 图标
* .local包含个性化的 .desktop 文件,以便你在系统菜单中看到的是 PhotoGIMP 而不是 GIMP
* .var包含 GIMP 补丁的主文件夹
* `.icons`:其中包含新的 PhotoGIMP 图标
* `.local`:包含个性化的 `.desktop` 文件,以便你在系统菜单中看到的是 PhotoGIMP 而不是 GIMP
* `.var`:包含 GIMP 补丁的主文件夹
你应该[使用 Ctrl+H 快捷键在 Ubuntu 中显示隐藏文件][8]。
警告:建议你备份 GIMP 配置文件,以便在不喜欢 PhotoGIMP 时可以还原。只需将 GIMP 配置文件复制到其他位置。
警告:建议你备份 GIMP 配置文件,以便在不喜欢 PhotoGIMP 时可以还原。只需将 GIMP 配置文件复制到其他位置即可备份
目前PhotoGIMP 主要与通过 [Flatpak][9] 安装的 GIMP 兼容。如果你使用 Flatpak 安装了 GIMP那么只需将这些隐藏的文件夹复制粘贴到家目录中它将 GIMP 转换为 Adobe Photoshop 类似的设置。
但是,如果你通过 apt、snap 或发行版的包管理器安装了 GIMP那么必须找到 GIMP 的配置文件夹,然后粘贴 PhotoGIMP 的 .var 目录的内容。当出现询问时,请选择合并选项并替换同名的现有文件。
但是,如果你通过 apt、snap 或发行版的包管理器安装了 GIMP那么必须找到 GIMP 的配置文件夹,然后粘贴 PhotoGIMP 的 `.var` 目录的内容。当出现询问时,请选择合并选项并替换同名的现有文件。
我[使用 apt 在 Ubuntu 20.04 中安装了 GIMP][10]。对我来说GIMP 配置文件在 \~/.config/GIMP/2.10。我复制了 .var/app/org.gimp.GIMP/config/GIMP/2.10 目录,并启动 GIMP 查看 PhotoGIMP 的启动页。
我[使用 apt 在 Ubuntu 20.04 中安装了 GIMP][10]。对我来说GIMP 配置文件在 `~/.config/GIMP/2.10`。我复制了 `.var/app/org.gimp.GIMP/config/GIMP/2.10` 目录,并启动 GIMP 查看 PhotoGIMP 的启动页。
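以我的情况为例,备份和套用补丁的操作大致如下(路径以 apt 安装的 GIMP 2.10 为例,`PhotoGIMP` 解压目录名为假设;`mkdir -p` 仅为演示时确保目录存在):

```shell
GIMP_CFG="$HOME/.config/GIMP/2.10"
PATCH="./PhotoGIMP/.var/app/org.gimp.GIMP/config/GIMP/2.10"
mkdir -p "$GIMP_CFG" "$PATCH"        # 演示用:确保两个目录都存在
cp -a "$GIMP_CFG" "${GIMP_CFG}.bak"  # 先备份现有配置,便于还原
cp -a "$PATCH/." "$GIMP_CFG/"        # 合并补丁内容,覆盖同名文件
```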
这是打了 PhotoGIMP 补丁后的 GIMP 界面:
@ -64,15 +60,15 @@ PhotoGIMP 本质是一个补丁。在 Linux 中下载并[解压 zip 文件][7]
我尝试了几个 Photoshop 快捷键来检查它所做的更改,一切似乎可以正常工作。
[下载 PhotoGIMP][12]
- [下载 PhotoGIMP][12]
我还找到了 [Snap 包形式的 PhotoGIMP][13],但它是 2019 年的,我不确定它是否可以在所有地方使用,或者仅适用于 snap 安装。
**总结**
### 总结
这不是第一个类似的项目。几年前,我们有一个类似的项目叫 Gimpshop。Gimpshop 项目在过去的几年中没有任何进展,可以肯定地认为该项目已经死亡。现在有一个名为 Gimpshop 的网站,但那是冒名者建立的,试图以 Gimpshop 的名义获利。
我不是 Adobe Photoshop 用户。我甚至不是 GIMP 专家,这就是为什么 Its FOSS 上的 [GIMP 教程][14] 用 Dimitrios 的原因。
我不是 Adobe Photoshop 用户。我甚至不是 GIMP 专家,这就是为什么我们的 [GIMP 教程][14] 是由 Dimitrios 撰写的。
因此,我无法评论 PhotoGIMP 项目的实用性。如果你熟悉这两种软件,那么应该能够比我更好地进行判断。
@ -85,7 +81,7 @@ via: https://itsfoss.com/photogimp/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,22 +1,24 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12379-1.html)
[#]: subject: (Learn Shell Scripting for Free With These Resources [PDF, Video Courses and Interactive Websites])
[#]: via: (https://itsfoss.com/shell-scripting-resources/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
通过这些资源免费学习 Shell 脚本编程——PDF视频课程和互动网站
学习 Shell 脚本编程的免费资源
======
_**那么, 你想学习 shell 脚本编程吗?或者你想提升现有的 bash 知识?我收集了以下免费的资源来帮助你学习 shell 脚本编程。**_
> 你想学习 shell 脚本编程吗?或者你想提升现有的 bash 知识?我收集了以下免费的资源来帮助你学习 shell 脚本编程。
LCTT 译注:毫无疑问,这些都是英文的)
shell 是一个命令行解释器,它允许你输入命令并获得输出。当你在使用终端的时候,你就已经在看 shell 了。
是的shell 是一个你可以和它进行交互的命令行界面,你可以通过它给操作系统某种指令。虽然有不同类型的 shell**[bash][1]**GNU Bourne-Again Shell是在各 Linux 发行版中最流行的。
是的shell 是一个你可以和它进行交互的命令行界面,你可以通过它给操作系统某种指令。虽然有不同类型的 shell但是 [bash][1]GNU Bourne-Again Shell是在各 Linux 发行版中最流行的。
当谈到 shell 脚本编程的时候,也就意味着——用户希望使用脚本来执行多条命令来获得一个输出。
当谈到 shell 脚本编程的时候,也就意味着 —— 用户希望使用脚本来执行多条命令来获得一个输出。
也许你需要学习 shell 脚本编程作为你的课程或者工作的一部分。了解 shell 脚本编程也可以帮助你在 Linux 中自动化某些重复的任务。
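举个小例子,下面这个假设的 bash 脚本把“创建目录、批量重命名、列出结果”几条命令串成了一次可重复执行的自动化任务(文件名均为示例):

```shell
#!/usr/bin/env bash
set -e
# 示例:把一批 .txt 文件批量重命名为 .bak
mkdir -p demo && cd demo
touch a.txt b.txt notes.log
for f in *.txt; do
  mv "$f" "${f%.txt}.bak"   # 去掉 .txt 后缀,换成 .bak
done
ls
```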
@ -28,107 +30,89 @@ shell 是一个命令行解释器,它允许你输入命令并获得输出。
还没在你的系统上安装 Linux不用担心。有很多种方法[在 Windows 上使用 Linux 终端][3]。你也可以在某些情况下[使用在线 Linux 终端][4]来练习 shell 脚本编程。
#### 1\. 学习 Shell——互动网站
#### 1、学习 Shell —— 互动网站
![][5]
如果你正在找一个互动网站来学习 shell 脚本编程,并且还可以在线试试,Learn Shell 是一个不错的起点。
如果你正在找一个互动网站来学习 shell 脚本编程,并且还可以在线试试,“[学习 Shell][6]” 是一个不错的起点。
它涵盖了基础知识,并且也提供了一些高级的练习。通常,内容还是简明扼要的——因此,我建议你看看这个网站。
它涵盖了基础知识,并且也提供了一些高级的练习。通常,内容还是简明扼要的 —— 因此,我建议你看看这个网站。
[Learn Shell][6]
#### 2\. Shell 脚本编程教程——门户网站
#### 2、Shell 脚本编程教程 —— 门户网站
![][7]
Shell scripting tutorial 是一个完全专注于 shell 脚本编程的网站。你可以选择免费阅读其中的资源,也可以购买 PDF、实体书籍和电子书来支持他们。
“[Shell 脚本编程教程][8]” 是一个完全专注于 shell 脚本编程的网站。你可以选择免费阅读其中的资源,也可以购买 PDF、实体书籍和电子书来支持他们。
当然,花钱买纸质的版本或者电子书不是强制的。但是,这些免费资源查看起来还是很方便的。
[Shell Scripting Tutorial][8]
#### 3\. Shell 脚本——Udemy免费视频课程
#### 3、UdemyShell 脚本 —— 免费视频课程
![][9]
毫无疑问,[Udemy][10] 是最受欢迎的在线课程平台之一。而且,除了付费认证课程之外,它还提供了不包含证书的免费内容。
Shell Scripting 是 Udemy 上推荐度最高的免费课程之一。你不需要花费任何费用就可以注册这门课。
“[Shell 脚本][11]” 是 Udemy 上推荐度最高的免费课程之一。你不需要花费任何费用就可以注册这门课。
[Shell Scripting Udemy][11]
#### 4\. Bash Shell Scripting——Udemy免费视频课程
#### 4、UdemyBash Shell 脚本编程 —— 免费视频课程
![][12]
Udemy 上另一个专注于 bash shell 脚本编程的有趣且免费的课程。与前面提到的课程相比,这个资源似乎更受欢迎。所以,你可以注册这门课,看看它都教些什么。
Udemy 上另一个专注于 [bash shell 脚本编程][29]的有趣且免费的课程。与前面提到的课程相比,这个资源似乎更受欢迎。所以,你可以注册这门课,看看它都教些什么。
别忘了 Udemy 的免费课程不能提供证书。但是,它确实是一个让人印象深刻的免费 shell 脚本编程学习资源。
#### 5\. Bash Academy——互动游戏在线门户
#### 5、Bash 研究院 —— 互动游戏在线门户
![][13]
顾名思义,Bash Academy 专注于向用户提供 bash shell 的教学。
顾名思义,“[Bash 研究院][15]” 专注于向用户提供 bash shell 的教学。
尽管它没有很多的内容,它还是非常适合初学者和有一定经验的用户。不仅仅局限于指导——它也可以提供交互式的游戏来练习,不过目前已经不能用了。
尽管它没有很多的内容,它还是非常适合初学者和有一定经验的用户。不仅仅局限于指导 —— 它也可以提供交互式的游戏来练习,不过目前已经不能用了。
因此,如果这个足够有趣,你可以去看看这个 [Github 页面][14],并且如果你愿意的话,还可以 fork 它并对现有资源进行改进。
因此,如果这个足够有趣,你可以去看看这个 [Github 页面][14],并且如果你愿意的话,还可以复刻它并对现有资源进行改进。
[Bash Academy][15]
#### 6\. Bash Scripting LinkedIn Learning免费视频课程
#### 6、LinkedIn学习 Bash 脚本编程 —— 免费视频课程
![][16]
LinkedIn 提供了大量免费课程来帮助你提升技能,并为更多工作做好准备。你还可以找到一些专注于 shell 脚本编程的课程,这些课程有助于重温基本技能,或者在这个过程中获得一些高级技能。
在这里,我提供一个 bash 脚本编程的课程链接,你还可以发现其他类似的免费课程。
在这里,我提供一个 [学习 Bash 脚本编程][17] 的课程链接,你还可以发现其他类似的免费课程。
[Bash Scripting (LinkedIn Learning)][17]
#### 7\. Advanced Bash Scripting Guide (免费 PDF 书籍)
#### 7、高级 Bash 脚本编程指南 —— 免费 PDF 书籍
![][18]
这是一个令人印象深刻的高级 bash 脚本编程指南,并且可以获得到它的 PDF 版本。这个 PDF 资源没有版权限制,是完全免费的。
这是一个令人印象深刻的《[高级 Bash 脚本编程指南][19]》,并且可以免费获得它的 PDF 版本。这个 PDF 资源没有版权限制,属于公有领域,是完全免费的。
尽管这个资源主要提供的是高级知识,但初学者同样可以参考这个 PDF 来开始学习 shell 脚本编程。
[Advanced Bash Scripting Guide [PDF]][19]
#### 8\. Bash Notes for Professionals免费 PDF 书籍)
#### 8、专业 Bash 笔记 —— 免费 PDF 书籍
![][20]
如果你已经对 Bash Shell 脚本编程比较熟悉或者只是想快速总结一下,那这是一个很好的参考。
这个可以免费下载的书有 100 多页,通过简单的描述和例子,这本书涵盖了各种各样的主题。
这本可以免费下载的《[专业 Bash 笔记][21]》有 100 多页,通过简单的描述和例子,涵盖了各种各样的主题。
[下载 Bash Notes for Professional][21]
#### 9\. Tutorialspoint——门户网站
#### 9、Tutorialspoint —— 门户网站
![][22]
Tutorialspoint 是一个非常流行的学习各种编程语言的门户网站。我想说这对于初学者学习基础知识非常好。
“[Tutorialspoint][24]” 是一个非常流行的学习各种编程语言的门户网站。我想说这对于初学者学习基础知识非常好。
也许这不太适合作为一个详细的资源——但是应该是不错的免费资源。
也许这不太适合作为一个详细的资源——但是应该是不错的免费资源。
[Tutorialspoint][24]
#### 10\. 旧金山城市学院在线笔记——门户网站
#### 10、旧金山城市学院在线笔记 —— 门户网站
![][25]
也许这不是最好的免费资源——但是如果你已经为学习 shell 脚本编程做好了探索每种资源的准备,为什么不看看旧金山城市学院的在线笔记呢?
也许这不是最好的免费资源 —— 但是如果你已经为学习 shell 脚本编程做好了探索每种资源的准备,为什么不看看旧金山城市学院的 “[在线笔记][26]” 呢?
当我在网上随便搜索关于 shell 脚本编程的资源的时候,我偶然遇到了这个资源。
同样需要注意的是,在线笔记可能会有点过时。但是,这应该还是一个值得探索的有趣资源。
[旧金山城市学院笔记][26]
同样需要注意的是,这个在线笔记可能会有点过时。但是,这应该还是一个值得探索的有趣资源。
#### 荣誉奖:Linux 手册
@ -136,12 +120,13 @@ Tutorialspoint 是一个非常流行的学习各种编程语言的门户网站
不要忘记bash 手册也应该是一个相当不错的免费资源,可以用它来查看命令和使用方法。
尽管不是专门为你掌握 shell 脚本编程而量身打造的,它依然是一个你可以免费使用的重要网络资源。你可以选择访问在线手册,或者直接打开终端然后输入以下命令:
尽管不是专门为你掌握 shell 脚本编程而量身打造的,它依然是一个你可以免费使用的重要网络资源。你可以选择访问在线手册,或者直接打开终端然后输入以下命令:
```
man bash
```
#### 总结
### 总结
有很多很受欢迎的付费资源,比如这些[最好的 Linux 书籍][28]。从网络上的一些免费资源开始学习 shell 脚本编程还是很方便的。
@ -156,7 +141,7 @@ via: https://itsfoss.com/shell-scripting-resources/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[Yufei-Yan](https://github.com/Yufei-Yan)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -190,3 +175,4 @@ via: https://itsfoss.com/shell-scripting-resources/
[26]: https://fog.ccsf.edu/~gboyd/cs160b/online/index.html
[27]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?ssl=1
[28]: https://itsfoss.com/best-linux-books/
[29]: https://www.udemy.com/course/complete-bash-shell-scripting/


@ -0,0 +1,110 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12376-1.html)
[#]: subject: (Linux Mint 20 is Officially Available Now! The Performance and Visual Improvements Make it an Exciting New Release)
[#]: via: (https://itsfoss.com/linux-mint-20-download/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Linux Mint 20 正式发布了!你该升级吗?
======
![](https://img.linux.net.cn/data/attachment/album/202007/03/083110avnb4rwi0rwzh56r.jpg)
Linux Mint 20 “Ulyana” 终于发布了,可以下载了。
Linux Mint 19 基于 Ubuntu 18.04 LTS而 [Mint 20][1] 则基于 [Ubuntu 20.04 LTS][2] —— 所以你会发现很多不同的地方、改进的地方,可能更棒了。
既然它来了,让我们来看看它的新功能,在哪里下载它,以及如何升级你的系统。
### Linux Mint 20 有什么新东西?
我们制作了一段关于 Linux Mint 20 的初步视觉印象的视频,让大家更好地了解。
- [video](https://youtu.be/7knHfN-NUZk)
说到 Linux Mint 20 的发布,有很多事情要谈。虽然我们已经介绍了 Linux Mint 20 的新的关键[功能][1],但我还是在这里提几点,让大家一目了然。
* Nemo 文件管理器在生成缩略图方面的性能提升
* 一些重新设计的颜色主题
* Linux Mint 20 将禁止 APT 使用 Snapd
* 一个新的图形用户界面工具,用于通过本地网络共享文件
* 改进对多显示器的支持
* 改进对笔记本电脑的混合图形支持
* 不再有 32 位版本
除了这些变化之外,你还会注意到 Cinnamon 4.6 桌面更新后的一些视觉变化。
以下是 Linux Mint 20 Cinnamon 版的一些截图。
![Mint 20 Welcome Screen][4]
![Mint 20 Color Themes][5]
![Mint 20 Nemo File Manager][6]
![Mint 20 Nemo File Manager Blue Color Theme][7]
![Mint 20 Wallpapers][8]
![Mint 20 Redesigned Gdebi Installer][9]
![Mint 20 Warpinator Tool for Sharing Files on Local Network][10]
![Mint 20 Terminal][11]
### 升级到 Linux Mint 20你需要知道什么
如果你已经在使用 Linux Mint你可以选择升级到 Linux Mint 20。
* 如果你使用的是 Linux Mint 20 测试版,你可以升级到 Mint 20 稳定版。
* 如果你正在使用 Linux Mint 19.3(这是 Mint 19 的最新迭代),你可以将系统升级到 Linux Mint 20而不需要进行重新安装
* Linux Mint 20 没有 32 位版本。如果你**使用 32 位的 Mint 19 系列,你将无法升级到 Mint 20**
* 如果你使用的是 Linux Mint 18 系列,你必须先通过 Mint 19 系列升级。在我看来,重新安装 Mint 20 会比较省时省事
* 如果你使用的是 Linux Mint 17、16、15 或更低版本,你一定不要再使用它们了。这些版本已经不支持了
我们有一个详细的指南,展示了从 18.3 到 19 [升级 Linux Mint 版本][12]的步骤。我猜测 Mint 20 的步骤应该也是一样的。我们的团队会对 Mint 19.3 到 Mint 20 的升级做一些测试,并在适用的情况下更新这个指南。
在你继续升级之前,请确保备份你的数据和[使用 Timeshift 创建系统快照][13]。
### 下载 Linux Mint 20
你可以直接前往其官方下载页面,获取最新的稳定版 ISO。你会发现官方支持的桌面环境的 ISO,即 Cinnamon、MATE 和 Xfce。
此外,还为那些网络连接缓慢或不稳定的用户提供了 Torrent 链接。
- [下载 Linux Mint 20][14]
如果你只是想在不更换主系统的情况下试一试,我建议先[在 VirtualBox 中安装 Linux Mint 20][15],看看这是不是你喜欢的东西。
你试过 Linux Mint 20 了吗?你对这个版本有什么看法?请在下面的评论区告诉我你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-mint-20-download/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-12297-1.html
[2]: https://itsfoss.com/download-ubuntu-20-04/
[3]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-welcome-screen.png?fit=800%2C397&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-color-themes.png?fit=800%2C396&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-nemo-file-manager.png?fit=800%2C397&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-nemo-file-manager-blue-color-theme.png?fit=800%2C450&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-wallpapers.png?fit=800%2C450&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-redesigned-gdebi-installer.png?fit=800%2C582&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-warpinator.png?fit=800%2C397&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-terminal.png?fit=800%2C540&ssl=1
[12]: https://itsfoss.com/upgrade-linux-mint-version/
[13]: https://itsfoss.com/backup-restore-linux-timeshift/
[14]: https://linuxmint.com/download.php
[15]: https://itsfoss.com/install-linux-mint-in-virtualbox/


@ -1,53 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (State of software engineering, JavaScript is the future, and more industry trends)
[#]: via: (https://opensource.com/article/20/4/state-software-engineering-javascript-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
State of software engineering, JavaScript is the future, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [State of Software Engineering in 2020][2]
> Software is moving fast, and it is fusing into all other areas of industry. As it is a growing field, learning to program and improving your skills in software engineering can have get you great returns in the future. Moreover, identifying the fastest growing areas of software and investing your time into them can get you to even better places. Keep learning and try to find opportunities that you can capitalize on or products that can serve a niche in a growing field of software. When that niche becomes mainstream, you can end up with a successful product in your hands, which can become your future success. If it fails, it will be an immense experience on the path to becoming a product person.
**The impact**: Learn COBOL, see the world!
## [Why JavaScript is the programming language of the future][3]
> JavaScript has one of the most mature if not THE most mature ecosystems a programming language could ever have. The community for JavaScript is vast, and the entry barrier is extremely low.
**The impact**: The only knowledge I have of the veracity of this statement comes from the JavaScript people I follow on Twitter. If you can indeed extrapolate from them, then JavaScript has a pretty good shot.
## [Why Linux containers are a CIO's best friend][4]
> "A big take-away for CIOs is that fit enterprises increasingly view IT as a point of leverage for the business. Having a clear and consistent overall business strategy ranks as one of the most distinctive traits of fit enterprises," said Gartner VP and Distinguished Analyst Andy Roswell-Jones, in Gartner's report on the survey. "In such organizations, digital technology will drive that strategy."
**The impact**: Point of leverage meaning that if the IT an organization is selected in reference to and in support of that organization's overall strategy there will be an outsized lift on execution against that strategy. The corollary is that without a clear and consistent business strategy no technology can save your enterprise.
_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/state-software-engineering-javascript-and-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://quanticdev.com/articles/software-engineering-in-2020/
[3]: https://www.freecodecamp.org/news/future-of-javascript/
[4]: https://www.ciodive.com/news/linux-containers-kubernetes/575506/


@ -1,78 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The rebirth of Mapzen, new projects bolster PyTorch, faster AI object detection, and other open source news)
[#]: via: (https://opensource.com/article/20/4/news-march-25)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
The rebirth of Mapzen, new projects bolster PyTorch, faster AI object detection, and other open source news
======
Catch up on the biggest open source headlines from the past two weeks.
![][1]
In this edition of our open source news roundup, we take a look at the rebirth of Mapzen, two new projects to bolster PyTorch, new open source object detection software, and more!
### Mapzen makes a comeback
While its technology is used by open source projects like OpenStreetMap and by firms like Foursquare, open source mapping company Mapzen couldn't sustain itself as a business.  Mapzen initially closed its doors in 2018, but it has [a new lease on life][2] with the support of the Linux Foundation.
As a project under the Urban Computing Foundation (UCF), Mapzen "encompasses six independent projects and communities involved in developing a truly open platform for mapping, search, navigation and transit data." Being under the UCF's umbrella enables Mapzen's developers to "collaborate on and build a common set of open-source tools connecting cities, autonomous vehicles, and smart infrastructure." They can also tap such UCF members as Google, IBM, and the University of California San Diego for support. Mapzen projects are available under their [GitHub organization][3].
### New open source projects to bolster PyTorch
PyTorch, the open source machine learning framework originating out of Facebook, has been getting a lot of love lately from both its creator and from AWS. The two firms have [released open source projects to bolster PyTorch][4].
Facebook is sharing TorchServe, "a model-serving framework for PyTorch that will make it easier for developers to put their models into production." AWS's contribution is TorchElastic, "a library that makes it easier for developers to build fault-tolerant training jobs on Kubernetes clusters." PyTorch's product manager Joe Spisak [told VentureBeat][5] that by using the two projects developers can run "training over a number of nodes without the training job actually failing; it will just continue gracefully, and once those nodes come back online, it can basically restart the training."
You can find the code for [TorchServe][6] and [TorchElastic][7] on GitHub.
### Microsoft and Huazhong University release object detection AI
One of the more difficult tasks facing artificial intelligence systems is to accurately detect and identify objects in photos and videos. Researchers from Microsoft and China's Huazhong University have [released an open source tool][8] that does the job quickly and efficiently.
Called Fair Multi-Object Tracking (FairMOT for short), the tool "outperforms state-of-the-art models on public data sets at 30 frames per second" (almost normal video speed). It took researchers about 30 hours to train the software using data from the MOT Challenge, which is "a framework for validating people-tracking algorithms." The team behind FairMOT believes that the tool can be used in "industries ranging from elder care to security, and perhaps be used to track the spread of illnesses like COVID-19."
You can view the source code and training models for FairMOT in [this GitHub repository][9].
#### In other news
* [Will a small open-source effort from Japan disrupt the autonomous space?][10]
* [Google open-sources data set to train and benchmark AI sound separation models][11]
* [Docker builds open source community around Compose Specification][12]
* [Sophos Sandboxie is now available as an open-source tool][13]
* [PyCon has moved to an online-only event and is available now][14]
Thanks, as always, to Opensource.com staff members and [Correspondents][15] for their help this week.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/news-march-25
作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
[2]: https://www.zdnet.com/article/mapzen-open-source-mapping-project-revived-under-the-urban-computing-foundation/
[3]: https://github.com/mapzen/
[4]: https://techcrunch.com/2020/04/21/aws-and-facebook-launch-an-open-source-model-server-for-pytorch/
[5]: https://venturebeat.com/2020/04/21/facebook-partners-with-aws-on-pytorch-1-5-upgrades-like-torchserve-for-model-serving/
[6]: https://github.com/pytorch/serve
[7]: https://github.com/pytorch/elastic
[8]: https://venturebeat.com/2020/04/08/researchers-open-source-state-of-the-art-object-tracking-ai/
[9]: https://github.com/ifzhang/FairMOT
[10]: https://www.forbes.com/sites/rahulrazdan/2020/04/04/will-a-small-open-source-effort-from-japan-disrupt-the--autonomous-space-/#6e6819f01cc5
[11]: https://venturebeat.com/2020/04/09/google-open-sources-data-set-to-train-and-benchmark-ai-sound-separation-models/
[12]: https://sdtimes.com/softwaredev/docker-builds-open-source-community-around-compose-specification/
[13]: https://securityaffairs.co/wordpress/101397/malware/sandboxie-sandbox-open-source.html
[14]: https://us.pycon.org/2020/online/
[15]: https://opensource.com/correspondent-program


@ -1,71 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Java security, mainframes having a moment, and more industry trends)
[#]: via: (https://opensource.com/article/20/4/java-mainframes-dev-skills-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Java security, mainframes having a moment, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [How secure is Java compared to other languages?][2]
> In this article, we'll look at how the most commonly used programming languages rank in terms of security. I'll explain some factors that make one language less secure than another, and why identified vulnerabilities have increased so much in the past few years. Finally, I'll suggest a few ways Java developers can reduce vulnerabilities in code.  
**The impact**: If software is eating the world, then hackers are... I guess the thrush thriving in the gullet? Hyperbole aside, the more stuff made of software, the more incentive clever people have to try and figure out how to do things they probably shouldn't be able to. This applies to Java too.
## [Mainframes are having a moment][3]
> In addition to being abundant, mainframe jobs pay well, and so far, appear not to be as affected by the pandemic as other areas of tech employment. Salaries for entry-level enterprise computing jobs [average US $70,100 a year][4] [PDF], according to a 2019 report from tech analyst [Forrester Research][5] commissioned by IBM. As recently as this week, jobs boards such as [Indeed][6] and [Dice.com][7] listed hundreds or in some cases thousands of openings for mainframe positions at all levels. Advertised pay ranges from $30 to $35 an hour for a junior mainframe developer to well over $150,000 a year for a mainframe database administration manager.
**The impact**: That is much, much better than a poke in the eye.
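To make the hourly and annual figures quoted above comparable, here's a rough annualization. This is a sketch: the 2,080-hour work year (40 hours/week, 52 weeks) is my assumption, not something stated in the report.

```python
# Annualize the hourly pay range quoted above.
# Assumption: 40 hours/week x 52 weeks = 2,080 hours/year (not stated in the article).
HOURS_PER_YEAR = 40 * 52

junior_low = 30 * HOURS_PER_YEAR   # $62,400/year
junior_high = 35 * HOURS_PER_YEAR  # $72,800/year
manager = 150_000                  # "well over $150,000 a year"

print(junior_low, junior_high, manager)  # 62400 72800 150000
```

Under that assumption, even the junior hourly range lands in the same neighborhood as the $70,100 entry-level average cited from the Forrester report.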
## [The developer skills on the rise, and in decline][8]
> Indeed.com analysed job postings using a list of 500 key technology skill terms to see which ones employers are looking for more these days and which are falling out of favour. Such research has helped identify cutting-edge skills over the past five years, with some previous years' risers now well established, thanks to explosive growth.
**The impact**: The "on the rise" skills outnumber the "in decline" skills. Bad news for browser developers...
## [The IT Pro Podcast: Building cloud-native apps][9]
> The cloud is eating enterprise IT, and while on-premise applications are going to be around for a long time to come, the importance of being able to successfully take advantage of cloud technologies should not be understated. However, it's one thing to simply port an existing application to the cloud, but developing software to be run in cloud environments is a different matter altogether.
**The impact**: What is technology if not manifested mindset?
## [Communication is key to culture change][10]
> The outcome is staggering. Business teams feel invested in the development of the solution, they feel a sense of excitement and ownership. So much so, they go out into the corridors of the organisation to evangelise and promote the solution. Conversely, this improves the status of the developers within the business. It allows them to integrate with other stakeholders, contribute to new processes and help to achieve common goals. 
**The impact**: As a communications person, I couldn't agree more. Communication is the difference between an organization and a movement.
_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/java-mainframes-dev-skills-more-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.javaworld.com/article/3537561/how-secure-is-java-compared-to-other-languages.html
[3]: https://spectrum.ieee.org/tech-talk/computing/software/mainframes-programming-language-cobol-news-coronavirus
[4]: https://www.ibm.com/downloads/cas/1EPYAP5D
[5]: https://go.forrester.com/
[6]: https://www.indeed.com/q-Mainframe-jobs.html
[7]: https://www.dice.com/jobs/q-Mainframe-jobs
[8]: https://www.techcentral.ie/10-developer-skills-on-the-rise-and-five-on-the-decline/
[9]: https://www.itpro.co.uk/cloud/355348/the-it-pro-podcast-building-cloud-native-apps
[10]: https://www.verdict.co.uk/culture-service-digital-enterprise/


@ -1,116 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Good News! You Can Now Buy the De-Googled /e/OS Smartphone from Fairphone)
[#]: via: (https://itsfoss.com/fairphone-with-e-os/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Good News! You Can Now Buy the De-Googled /e/OS Smartphone from Fairphone
======
Fairphone is known for its ethical (or fair) approach to making smartphones.
Normally, the ethical approach means that workers are paid well, the smartphone's build materials are safer for the planet, and the phone is durable and sustainable. And they've already done a good job with their [Fairphone 1][1], [Fairphone 2][2], and [Fairphone 3][3] smartphones.
Now, to take things up a notch, Fairphone has teamed up with [/e/OS][4], a de-Googled Android fork, to launch a separate edition of [Fairphone 3][3] (its latest smartphone) that comes with **/e/OS** out of the box.
In case you didn't know about the mobile operating system, you can read our [interview with Gael Duval (founder of /e/OS)][5] to learn more about it.
While we already have some privacy-focused smartphones like the [Librem 5][6], the Fairphone 3 with /e/OS is something different at its core. In this article, I'll highlight the key things you need to know before ordering a Fairphone 3 with /e/OS loaded.
### The First Privacy Conscious &amp; Sustainable Phone
You may have noticed a privacy-focused smartphone manufactured in some corner of the world, like [Librem 5][7].
But for the most part, it looks like the Fairphone 3 is the first privacy-conscious sustainable phone to get the spotlight.
![][8]
The de-Googled operating system /e/OS ensures, among other things, that the smartphone does not rely on Google services to function. Hence, /e/OS should be a great choice on the Fairphone 3 for privacy-focused users.
Also, supporting /e/OS out of the box wasn't just the manufacturer's decision but its community's.
As per their announcement, they mention:
> For many, fairer technology isn't just about the device and its components, it is also about the software that powers the product; and when Fairphone community members were asked what their preferred alternative operating system (OS) was for the next Fairphone, the Fairphone 3, they voted for /e/OS.
So, it looks like the users do prefer to have /e/OS on their smartphones.
### Fairphone 3: Overview
![][9]
To tell you what I think about it, let me first share the technical specifications of the phone:
* Dual Nano-SIM (4G LTE/3G/2G support)
* **Display:** 5.65-inch LCD (IPS) with Corning Gorilla Glass 5 protection
* **Screen Resolution**: 2160 x 1080
* **RAM:** 4 GB
* **Chipset**: Qualcomm Snapdragon 632
* **Internal Storage:** 64 GB
* **Rear Camera:** 12 MP (IMX363 sensor)
* **Front Camera:** 8 MP
* Bluetooth 5.0
* WiFi 802.11a/b/g/n/ac
* NFC Supported
* USB-C
* Expandable Storage supported
So, on paper, it sounds like a decent budget smartphone. But pricing and availability will be important factors, keeping in mind that it's a one-of-a-kind smartphone and we don't really have alternatives to compare it to.
It's not just unique for privacy-focused users; it is also potentially the easiest phone to fix (as suggested by [iFixit's teardown][10]).
### Fairphone 3 with /e/OS: Pre-Order, Price &amp; Availability
![][11]
As for availability, the Fairphone 3 with /e/OS can be pre-ordered through the [online shop of /e/OS][12] for **€479.90** across Europe.
If you are an existing Fairphone 3 user, you can also install /e/OS from the [available build here][13].
You get 2 years of warranty along with a 14-day return policy.
[Pre-Order Fairphone 3 With /e/OS][12]
### My Thoughts On Fairphone 3 with /e/OS
It's important to consider that the smartphone targets a particular group of consumers, so it's quite obvious that it isn't meant for everyone. The specifications on paper may look good, but they're not necessarily the best bang for the buck.
Also, looking at the smartphone market right now, specifications and value for money matter more than what we privacy-focused users want.
But it's definitely impressive, and I believe it's going to get good attention, especially among privacy-aware people who don't want their smartphone spying on them.
With the launch of the Fairphone 3 with /e/OS, less tech-savvy people can now get an out-of-the-box privacy-focused smartphone experience.
What do you think about the Fairphone 3 with /e/OS? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/fairphone-with-e-os/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Fairphone_1
[2]: https://en.wikipedia.org/wiki/Fairphone_2
[3]: https://shop.fairphone.com/en/?ref=header
[4]: https://e.foundation/
[5]: https://itsfoss.com/gael-duval-interview/
[6]: https://itsfoss.com/librem-5-available/
[7]: https://itsfoss.com/librem-linux-phone/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/Fairphone-3-battery.png?ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/fairphone-3.png?ssl=1
[10]: https://www.ifixit.com/Teardown/Fairphone+3+Teardown/125573
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/fairphone-e-os.png?ssl=1
[12]: https://e.foundation/product/e-os-fairphone-3/
[13]: https://doc.e.foundation/devices/FP3/


@ -1,85 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The success of virtual conferences, Retropie comes to Raspberry Pi 4, and other open source news)
[#]: via: (https://opensource.com/article/20/5/news-may-9)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
The success of virtual conferences, Retropie comes to Raspberry Pi 4, and other open source news
======
Catch up on the biggest open source headlines from the past two weeks.
![][1]
In this week's edition of our open source news roundup, we look at the success of virtual conferences, the continued impact of open source on COVID-19, Retropie's new support for the Raspberry Pi 4, and more open source news.
### Virtual conferences report record attendance
The technology industry, and non-profits supporting open source software, greatly [depend on conferences][2] to connect their community together. There has been an open question of whether moving to an online alternative would be effective or not. The last two weeks have given us reason to say virtual conferences are a huge success, and there are multiple paths to getting there.
The first success goes to [Red Hat Summit][3], a conference Red Hat puts on each year to showcase its technology and interact with the open source community. Last year it was held at the Boston Convention and Exhibition Center in Boston, MA, with a record-breaking 8,900 people in attendance. This year, due to COVID-19, Red Hat took it virtual with what it called the [Red Hat Summit 2020 Virtual Experience][3]. The final attendance number, as [reported by IT World Canada][4], was 80,000 people.
The explosive growth of online events continued this week, with GitHub Satellite [reporting][5] over 40,000 attendees for its multiday event.
![Example streaming for #DIDevOps][6]
*Streaming example of [Desert Island DevOps][7]*
Another success with a different twist came in the shape of 3-D avatars in the popular Animal Crossing game. Desert Island DevOps [reported][8] over 8,500 attendees in a simulated space and received [a lot of praise][9] from attendees and speakers alike. 
### Open source continues to speed COVID-19 response
Emergency response requires speed and safety to be top concerns, which makes open source licensing and designs even more valuable. In our current battle with COVID-19, there is a need to increase the inventory of medical equipment such as ventilators and PPE, as well as to develop treatments and medications. An open source approach is proving to have a major impact.
A recent victory comes in the form of a [ventilator design][10] announced by Nvidia Corporation. Described as "low-cost and easy-to-assemble," the ventilators are expected to cost much less to build than other models on the market, making them a great option for medical professionals who have been working so hard to protect their patients.
Developing [open source medications][11] may also provide vast benefits. Vaccine research and development is taking practices perfected in the open source world of the Linux kernel and applying them to how medications are developed. That focus may make merit more central to the process than profitability. The absence of patent and copyright restrictions is also noted to speed the process of discovery.
### Retropie announces support for Raspberry Pi 4
Many of us are passing the time by playing games while we stay at home. If you're into console gaming and nostalgia, [Retropie][12] gives Raspberry Pi enthusiasts a set of classic games to dig through. Last week the team behind the project announced [support for the latest Raspberry Pi 4][13] hardware, [released 24 June 2019][14].
#### In other news
* [Fedora 32 Linux Official Release][15]
* [Jitsi open source conferencing gaining interest][16]
* [Free Wayland Book Available][17]
* [Inkscape 1.0 Released][18]
Thanks, as always, to Opensource.com staff members and [Correspondents][19] for their help this week.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/news-may-9
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
[2]: https://opensource.com/article/20/5/pycon-covid-19
[3]: https://www.redhat.com/en/summit
[4]: https://www.itworldcanada.com/article/over-80000-people-tune-into-virtual-red-hat-summit-crushing-last-years-record/430090?sc_cid=701f2000000u72aAAA
[5]: https://twitter.com/MishManners/status/1258232215814586369
[6]: https://opensource.com/sites/default/files/uploads/stream_example.jpg (Example streaming for #DIDevOps)
[7]: https://desertedisland.club/about/
[8]: https://desertedislanddevops.com/
[9]: https://www.vice.com/en_us/article/z3bjga/this-tech-conference-is-being-held-on-an-animal-crossing-island
[10]: https://blogs.nvidia.com/blog/2020/05/01/low-cost-open-source-ventilator-nvidia-chief-scientist/
[11]: https://www.fastcompany.com/90498448/how-open-source-medicine-could-prepare-us-for-the-next-pandemic
[12]: https://opensource.com/article/19/1/retropie
[13]: https://retropie.org.uk/2020/04/retropie-4-6-released-with-raspberry-pi-4-support/
[14]: https://opensource.com/article/19/6/raspberry-pi-4
[15]: https://fedoramagazine.org/announcing-fedora-32/
[16]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/open-source-videoconferences
[17]: https://www.phoronix.com/scan.php?page=news_item&px=Wayland-Book-Free
[18]: https://inkscape.org/news/2020/05/04/introducing-inkscape-10/
[19]: https://opensource.com/correspondent-program


@ -1,71 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (David vs Goliath! Microsoft and an Obscure KDE Project Fight Over “MAUI”)
[#]: via: (https://itsfoss.com/microsoft-maui-kde-row/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
David vs Goliath! Microsoft and an Obscure KDE Project Fight Over “MAUI”
======
Remember the [interview with Uri Herrera][1], the creator of [Nitrux Linux][2]? Uri also works on a couple of other Linux-related projects, and one of them is the Maui project.
MauiKit (styled as MAUI) is an acronym for Multi-Adaptable User Interfaces. It is an open source framework for developing cross-platform applications. It's been in development since 2018 and is now [part of KDE's incubation program, KDE Invent][3].
Why am I talking about Maui? Because Microsoft has [renamed one of its projects (Xamarin.Forms) to .NET MAUI][4]. The MAUI in .NET MAUI stands for Multi-platform App UI. It is also a framework for building cross-platform applications.
You see the confusion here? Both MAUI projects are frameworks for building cross-platform applications.
### The debate over the use of “MAUI”
![][5]
MauiKit developers are obviously [not happy with this move by Microsoft][6].
> We like to believe that this is an unfortunate event caused by an oversight during the brainstorming session to select a new and appealing name for their product and not an attempt at using the brand weight and marketing-might that a corporation such as Microsoft and their subsidiary Xamarin possess to step over a competing framework. A UI framework that, as of today, is still the first result in Google when searching for the term "Maui UI framework" but that due to the might of GitHub (another Microsoft subsidiary) and Microsoft's website (specifically, their blog) SEO that will change over time.
A couple of issues were opened on the GitHub repository of .NET MAUI to bring their attention to this name clash.
The discussion got heated as some Microsoft MVPs and contributors (not Microsoft employees) started making arguments like MauiKit is a small project with fewer GitHub stars and no big companies use it.
Microsoft's Program Manager [David Ortinau][7] closed the thread with the message, "official legal name is .NET Multi-platform App UI and MAUI is an acronym, code name. This has been through legal review".
![Microsoft's official response][8]
This is the [main thread][9] that you can follow on GitHub if you want.
### Is it really an issue?
It may seem like a non-issue at first glance, but two projects with the same aim and the same name are bound to create confusion. It would have been best if Microsoft had avoided it altogether.
By the way, this is not the first time Microsoft has had a name clash with a Linux-related project. As [Phoronix noted][10], a few years ago it was GNOME developers who were frustrated with Microsoft over naming a project GVFS (later renamed to Virtual File System for Git), as it collided with their GVFS (GNOME Virtual File System).
By the looks of it, Microsoft is not going to backtrack on MAUI. It could even go ahead and trademark MAUI. They have got all the money and power after all.
I wonder what would have been the case if an obscure small project used the same name as one of Microsofts projects.
--------------------------------------------------------------------------------
via: https://itsfoss.com/microsoft-maui-kde-row/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/nitrux-linux/
[2]: https://nxos.org/
[3]: https://invent.kde.org/maui/mauikit
[4]: https://devblogs.microsoft.com/dotnet/introducing-net-multi-platform-app-ui/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/david-vs-goliath.jpg?ssl=1
[6]: https://nxos.org/news/official-statement-regarding-xamarin-forms-rebranding-as-maui/
[7]: https://github.com/davidortinau
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/microsoft-response-maui.png?ssl=1
[9]: https://github.com/dotnet/maui/issues/35
[10]: https://www.phoronix.com/scan.php?page=news_item&px=Microsoft-KDE-MAUI


@ -1,61 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Fujitsu delivers exascale supercomputer that you can soon buy)
[#]: via: (https://www.networkworld.com/article/3545816/fujitsu-delivers-exascale-supercomputer-that-you-can-soon-buy.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Fujitsu delivers exascale supercomputer that you can soon buy
======
Deployment will run through next year, but Fugaku is already being put to work. Against COVID-19, naturally.
MaxiPhoto / Getty Images
Fujitsu has delivered all the components needed for a supercomputer in Japan that is expected to break the exaFLOP barrier when it comes online next year, and that delivery means that the same class of hardware will be available soon for enterprise customers.
The supercomputer, called Fugaku, is being assembled and brought online now at the RIKEN Center for Computational Science. The installation of the 400-plus-rack machine started in December 2019, and full operation is scheduled for fiscal 2021, according to a Fujitsu spokesman.
[10 of the world's fastest supercomputers][1]
All told, Fugaku will have a total of 158,976 processors, each with 48 cores at 2.2 GHz. The partially deployed supercomputer's performance is already [half an exaFLOP][2] of 64-bit double-precision floating-point performance, and it looks to be the first machine to reach a full exaFLOP. Intel says its supercomputer Aurora, being built for the Department of Energy's Argonne National Laboratory in Chicago, will be delivered by 2021, and it will break the exaFLOP barrier, too.
An exaFLOP is one quintillion (10¹⁸) floating-point operations per second, or 1,000 petaFLOPS.
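As a quick sanity check on the figures above, here is a back-of-the-envelope sketch. It only does unit conversion and multiplies the processor and core counts quoted in this article; no FLOPs-per-cycle figure is given, so no peak-performance estimate is attempted.

```python
# Unit sanity check for the figures above.
# 1 exaFLOP = 10^18 FLOPS; 1 petaFLOP = 10^15 FLOPS.
EXA = 10**18
PETA = 10**15
print(EXA // PETA)  # 1000 -> 1 exaFLOP is indeed 1,000 petaFLOPS

# Total cores in the full Fugaku system, per the article's counts.
processors = 158_976
cores_per_processor = 48
print(processors * cores_per_processor)  # 7630848 cores
```

That works out to more than 7.6 million Arm cores in the fully deployed machine.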
Fujitsu announced last November a partnership with Cray, an HPE company, to sell Cray-branded supercomputers with the custom processor used in Fugaku. Cray already has deployed four systems for early evaluation located at Stony Brook University, Oak Ridge National Laboratory, Los Alamos National Laboratory, and the University of Bristol in Britain.
According to Cray, systems have been shipped to customers interested in early evaluation, and it is planning to officially launch the A64fx system featuring the Cray Programming Environment later this summer.
Fugaku is remarkable in that it contains no GPUs but instead uses a [custom-built Arm processor][3] designed entirely for high-performance computing. The motherboard has no memory slots; the memory is on the CPU die. If you look at the Top500 list now and proposed exaFLOP computers planned by the Department of Energy, they all use power-hungry GPUs.
As a result, the Fugaku prototype topped the Green500 ranking last fall as the most energy-efficient supercomputer in the world. Nvidia's new Ampere A100 GPU may best the A64fx in performance, but with its 400-watt power draw it will use a lot more power.
### Working to fight COVID-19
While construction marches on, RIKEN CCS and the Japanese Ministry of Education, Culture, Sports, Science and Technology have already started using the functioning parts of Fugaku to perform computations needed in research to fight the coronavirus. The projects it is working on include research into the characteristics of the virus, identifying potential drug compounds to combat it, research into diagnosis and treatment, and insights into the spread of infections and their socio-economic impact.
It's not the only supercomputer being turned against COVID-19. Government and industry organizations with supercomputers have [joined together][4] in the effort, and CERN, the European nuclear research organization, [redeployed][5] a soon-to-be-retired supercomputer with more than 100,000 cores on Folding@Home, the distributed-computing project that's [seeking a way to thwart the virus's entry into human cells][6].
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3545816/fujitsu-delivers-exascale-supercomputer-that-you-can-soon-buy.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[2]: https://twitter.com/ProfMatsuoka/status/1261194036276154368
[3]: https://www.networkworld.com/article/3535812/can-fujitsu-beat-nvidia-in-the-hpc-race.html
[4]: https://www.networkworld.com/article/3533426/covid-19-tech-giants-government-agencies-add-supercomputing-to-the-fight.html
[5]: https://home.cern/news/news/cern/cern-contributes-computers-combatting-covid-19
[6]: https://www.networkworld.com/article/3535080/thousands-of-home-pcs-break-exaflop-barrier.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world


@ -1,87 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source YouTube Alternative PeerTube Needs Your Support to Launch Version 3)
[#]: via: (https://itsfoss.com/peertube-v3-campaign/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Open Source YouTube Alternative PeerTube Needs Your Support to Launch Version 3
======
[PeerTube][1] (developed by [Framasoft][2]) is a free and open-source decentralized alternative to YouTube somewhat like [LBRY][3]. As the name suggests, it relies on [peer-to-peer connections][4] to operate the video hosting services.
You can choose to self-host your own instance and still have access to videos from other instances (a federated network, just like [Mastodon][5]).
It has been in active development for a few years now. And, to take it up a notch, they have decided to launch a crowdfunding campaign for the next major release.
The funding campaign will help them develop PeerTube v3.0, with some amazing key features planned for release this fall.
![PeerTube Instance Example][6]
### PeerTube: Brief Overview
In addition to what I just mentioned above, PeerTube is a fully functional peer-to-peer video platform. The best thing about it is that it's open source and free. So, you can check it out on [GitHub][7] if you want.
You can watch their official video here:
**Note:** Since PeerTube relies on peer-to-peer connections, other peers may see your IP address. If that concerns you, try using one of the [best VPNs available][8].
### PeerTubes Crowdfunding Campaign For v3 Launch
You'll be excited to know that the **€60,000** crowdfunding campaign had already managed to **raise €10,000 on day one** (at the time of writing).
Now, coming to the details: the campaign aims to gather **funds for the next 6 months of development, with a v3 release planned for November 2020.** It looks like a lot of work for a single full-time developer, but they intend to release v3 with their existing funds whether or not they reach the funding goal.
In their [announcement post][9], the PeerTube team mentioned:
> We feel like we need to develop it, that we have to. Imposing a condition stating « if we do not get our 60,000€, then there will not be a v3 » here, would be a lie, marketing manipulation : this is not the kind of relation we want to maintain with you.
Next, let's talk about the new features they've planned to introduce in the next 6 months:
* Upon reaching the **€10,000 goal,** they plan to work on introducing a globalized video index to make it easier to search for videos across multiple instances.
  * With the **€20,000** goal, PeerTube will dedicate one month to improving the moderation tools.
  * With the **€40,000** goal, they will work on the UX/UI of playlists, so a playlist will look better when you embed it. In addition, the plugin system will be improved to make it easier to contribute to PeerTube's code.
  * If the campaign reaches its **€60,000** goal, PeerTube's live-streaming feature will be introduced.
You can also find the details of their [roadmap on their site][10].
### Wrapping Up
The ability to have a globally inter-connected video index across multiple instances is something that was needed, and it will also allow you to configure your own index.
The content moderation improvements are also a huge deal, because it is not easy to manage a decentralized network of video hosting services. While they aim to prevent censorship, strict moderation is required to make PeerTube a comfortable place to watch videos.
Even though I'm not sure how useful PeerTube's live-streaming feature will be at launch, it is going to be something exciting to keep an eye on.
We at It's FOSS made a token donation of 25 Euros. I would also encourage you to donate and help this open source project achieve its financial goal for version 3 development.
[Support PeerTube][11]
--------------------------------------------------------------------------------
via: https://itsfoss.com/peertube-v3-campaign/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://joinpeertube.org
[2]: https://framasoft.org/en/
[3]: https://itsfoss.com/lbry/
[4]: https://en.wikipedia.org/wiki/Peer-to-peer
[5]: https://itsfoss.com/mastodon-open-source-alternative-twitter/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/peertube-instance-screenshot.jpg?ssl=1
[7]: https://github.com/Chocobozzz/PeerTube
[8]: https://itsfoss.com/best-vpn-linux/
[9]: https://framablog.org/2020/05/26/our-plans-for-peertube-v3-progressive-fundraising-live-streaming-coming-next-fall/
[10]: https://joinpeertube.org/roadmap
[11]: https://joinpeertube.org/roadmap#support

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Edge investments, data navigators, and more industry trends)
[#]: via: (https://opensource.com/article/20/6/open-source-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)
Edge investments, data navigators, and more industry trends
======
A weekly look at open source community and industry trends.
![Person standing in front of a giant computer screen with numbers, data][1]
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
## [Call to Participate: 1H 2020 CNCF Cloud Native Survey][2]
> The information gathered from the survey is used by CNCF to better understand the current cloud native ecosystem. It can be used by the community as a data point to consider as they develop their cloud native strategies. Help out CNCF and the community by filling out the [survey][3]! The results will be open sourced and shared on [GitHub][4], as well as in a report in the June time frame. To see last year's results, read the [2019 survey report][5].
**The impact**: The CNCF has a lot going on; help them prioritize your priorities.
## [Where are edge computing investments going?][6]
> We are seeing five main pools of capital flowing into edge computing:
>
> 1. Earlier stage and higher risk VCs and private equity (PE);
> 2. Later stage and lower risk infrastructure funds;
> 3. Public cloud providers looking to exploit the assets of telecoms operators (and others);
> 4. Tech companies carving out a role in edge computing as a new opportunity or to support their existing business;
> 5. Telecoms operators themselves looking to build positions beyond basic infrastructure.
>
**The impact**: It is still early days in edge computing; early enough to get your wildly impractical open source edge startup funded from one of these pools.
## [The New Stack Context: Is Kubernetes the New App Server?][7]
> “Most enterprise OpenStack vendors focused on sort of a public cloud competition path, if you will. However, I think because of that deeply rooted infrastructure focus, most of those vendors didn't acknowledge the value of the platform services that the public cloud offered,” she said. “There's something I heard once, where everybody thinks that their layer in the stack is where the hard problems are. That every layer above them is easy. If you're deeply entrenched in infrastructure thinking, you don't appreciate the ways in which that ecosystem is developing above you.”
**The impact**: That right there is why there is so much talk about the importance of empathy in software product development.
## [Happy Developers: Navigators of the data age][8]
> Data is not the new gold or oil, it's the new oxygen. Every part of the modern business needs it, ranging from sales to marketing to product, all the way through security, data-science, and of course to engineering itself. However, the pursuit and effort to obtain data is not about blindly collecting, as opposed to what some vendors of big-data solutions might be claiming. Data is about quality before quantity. Each voyage is about getting to the right data at the right time and how to derive the right products from it. You don't want to drown in data, you want to swim in it. As historian Yuval Noah Harari put it in his bestselling book [Homo Deus: A History of Tomorrow][9]: “In ancient times having power meant having access to data. Today having power means knowing what to ignore.”
**The impact**: In the short term this is true, but only as far as it enables surviving in the longer term to the point where the blindly collected mass data becomes retroactively scrutable. Collect it all, ignore what you don't need right now, and return to the rest later when you know more and have more resources.
_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/open-source-industry-trends
作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.cncf.io/blog/2020/05/14/call-to-participate-1h-2020-cncf-cloud-native-survey/
[3]: https://www.surveymonkey.com/r/GG26PL5
[4]: https://github.com/cncf/surveys
[5]: https://www.cncf.io/wp-content/uploads/2020/03/CNCF_Survey_Report.pdf
[6]: https://data-economy.com/where-are-edge-computing-investments-going/
[7]: https://thenewstack.io/the-new-stack-context-is-kubernetes-the-new-app-server/
[8]: https://www.cncf.io/blog/2020/05/18/happy-developers-navigators-of-the-data-age/
[9]: https://www.goodreads.com/work/quotes/45087110

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hurry up! $100 PineTab Linux Tablet is Finally Available for Pre-order)
[#]: via: (https://itsfoss.com/pinetab-linux-tablet/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Hurry up! $100 PineTab Linux Tablet is Finally Available for Pre-order
======
Most of you must already be aware of Pine64's flagship products, the [PinePhone][1] and the Pinebook (or [Pinebook Pro][2]).
The PineTab was planned to be made available back in 2019; however, PinePhone and Pinebook production was prioritized over it. And, with factory lines closed due to the COVID-19 pandemic, the plan for the PineTab was postponed further.
Finally, you will be happy to know that you can now pre-order the **PineTab Linux tablet** for just **$100.**
Even though the PineTab is meant for early adopters, I'll give you a brief description of its specifications and what you can expect it to do.
### PineTab specification
![][3]
The PineTab is a $100 Linux tablet to which you can also attach a keyboard and some other modules to make the most of it.
So, for just $100, it isn't aiming to be “just another tablet” but something more functional for users who want a genuinely useful tablet.
Before we talk more about it, let's run through the specifications:
* Display: 10-inch 720p IPS Screen
* Quad-core A64 SoC
* 2 GB LPDDR3 RAM
* 2 MP front-facing camera and 5 MP rear camera
* 64 GB eMMC flash storage
* SD Card support
* USB 2.0, USB-OTG, Digital video output, Micro USB
* 6000mAh Battery
You can also add a magnetic backlit keyboard to the PineTab for an additional $20.
You can see it in action here:
[Subscribe to our YouTube channel for more Linux videos][4]
For the first batch of PineTab, they are shipping the tablet with [UBports Ubuntu Touch][5]. In their [recent blog post][6], Pine64 also clarified why they chose UBports Ubuntu Touch:
> The reason for this choice being that Ubuntu Touch works well for a traditional tablet use-case and, at the same time, converts into a more traditional desktop experience when the magnetic keyboard is attached.
They've also mentioned that the PineTab's software will be convergent with both the PinePhone and the PineBook.
### PineTab Expansion Options
![][7]
To expand the functionality of the PineTab, there's an adapter board onto which you can attach the expansions you want.
The adapter board is already present inside; you just need to remove the back cover and a single screw to swap or add expansions.
The following expansions will be available to start with:
* M.2 SATA SSD add-on
* M.2 LTE (and GPS) add-on
* [LoRa][8] module add-on
* RTL-SDR module add-on
It is worth noting that you can only use one of the expansions at a time, no matter how many expansions are attached to the board.
Some expansions, like the LTE or LoRa module, will probably make the PineTab a great point-of-sale terminal as well.
As of now, there's no information on what each add-on for the expansion board would cost, but hopefully we'll get to know more details soon.
### How to get PineTab Linux tablet
PineTab is now available for pre-order. If you are planning to get one, you should hurry up. From my experience with Pine devices, the pre-order might close in a couple of days. You can order it from their website:
[Pre-order PineTab][9]
What are your thoughts on PineTab? Are you going to order one when it goes live? Let me know your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/pinetab-linux-tablet/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/pinephone/
[2]: https://itsfoss.com/pinebook-pro/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/pinetab-keyboard.jpg?ssl=1
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://ubports.com/
[6]: https://www.pine64.org/2020/05/15/may-update-pinetab-pre-orders-pinephone-qi-charging-more/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/pinetab-expansion-board.jpg?fit=800%2C161&ssl=1
[8]: https://en.wikipedia.org/wiki/LoRa#LoRaWAN
[9]: https://store.pine64.org/?product=pinetab-10-1-linux-tablet-with-detached-backlit-keyboard

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Mint 20 is Officially Available Now! The Performance and Visual Improvements Make it an Exciting New Release)
[#]: via: (https://itsfoss.com/linux-mint-20-download/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Linux Mint 20 is Officially Available Now! The Performance and Visual Improvements Make it an Exciting New Release
======
Linux Mint 20 “Ulyana” is finally released and available to download.
Linux Mint 19 was based on Ubuntu 18.04 LTS and [Mint 20][1] is based on [Ubuntu 20.04 LTS][2] — so you will find a lot of things different, improved, and potentially better.
Now that it's here, let's take a look at its new features, where to download it, and how to upgrade your system.
### Linux Mint 20: Whats New?
We have made a video of our initial visual impressions of Linux Mint 20 to give you a better idea:
[Subscribe to our YouTube channel for more Linux videos][3]
There are a lot of things to talk about when it comes to the Linux Mint 20 release. While we have already covered the new key [features in Linux Mint 20][1], I'll mention a few points here for a quick glance:
  * Performance improvements in the Nemo file manager for thumbnail generation
  * Some reworked color themes
  * Linux Mint 20 will forbid APT from using Snapd
  * A new GUI tool to share files over the local network
  * Improved multi-monitor support
  * Improved hybrid graphics support for laptops
  * No 32-bit releases anymore
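On the Snapd point: Mint blocks it through an APT preferences file that pins the snapd package to a negative priority, so APT will never install it. A sketch of what such a pin looks like (the exact file path and values in Mint 20 may differ):

```
Package: snapd
Pin: release a=*
Pin-Priority: -10
```

A negative Pin-Priority tells APT to never select the package, even as a dependency; deleting the file restores normal behavior if you do want snaps.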
In addition to all these changes, you will also notice some visual changes with Cinnamon 4.6 desktop update.
Here are some screenshots of Linux Mint 20 Cinnamon edition. Click on the images to see in full screen.
![Mint 20 Welcome Screen][4]
![Mint 20 Color Themes][5]
![Mint 20 Nemo File Manager][6]
![Mint 20 Nemo File Manager Blue Color Theme][7]
![Mint 20 Wallpapers][8]
![Mint 20 Redesigned Gdebi Installer][9]
![Mint 20 Warpinator Tool for Sharing Files on Local Network][10]
![Mint 20 Terminal][11]
### Upgrading to Linux Mint 20: What you need to know
If you are already using Linux Mint, you may have the option to upgrade to Linux Mint 20.
  * If you are using the Linux Mint 20 beta version, you can upgrade to the Mint 20 stable version.
  * If you're using Linux Mint 19.3 (the latest iteration of Mint 19), you can upgrade your system to Linux Mint 20 without needing to perform a clean installation.
  * There is no 32-bit version of Linux Mint 20. If you are **using the 32-bit Mint 19 series, you won't be able to upgrade to Mint 20**.
  * If you are using the Linux Mint 18 series, you'll have to upgrade through the Mint 19 series first. A fresh install of Mint 20 would be less time-consuming and troublesome, in my opinion.
  * If you are using Linux Mint 17, 16, 15 or lower, you should not use them anymore. These versions are no longer supported.
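The eligibility rules above can be condensed into a small shell helper. This is purely an illustrative sketch (the function name and messages are hypothetical, not part of any Mint tooling):

```shell
# Hypothetical helper encoding the Mint 20 upgrade-eligibility rules above.
# $1: installed Mint version (e.g. 19.3); $2: architecture (uname -m output)
can_upgrade_to_20() {
    ver="$1"; arch="$2"
    # Mint 20 ships no 32-bit images, so only x86_64 installs can upgrade.
    if [ "$arch" != "x86_64" ]; then
        echo "no: Mint 20 has no 32-bit build"
        return 1
    fi
    case "$ver" in
        20*)           echo "yes: already on the Mint 20 series" ;;
        19.3)          echo "yes: direct upgrade is supported" ;;
        19|19.1|19.2)  echo "no: update to 19.3 first" ;;
        18*)           echo "no: go through the 19.x series, or do a fresh install" ;;
        *)             echo "no: unsupported release, do a fresh install" ;;
    esac
}

can_upgrade_to_20 19.3 x86_64   # yes: direct upgrade is supported
can_upgrade_to_20 18.3 x86_64   # no: go through the 19.x series, or do a fresh install
```

In other words, the only direct path to Mint 20 is from a 64-bit 19.3 (or 20 beta) install; everything else needs an intermediate upgrade or a clean installation.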
It's FOSS has a detailed guide showing the steps to [upgrade your Linux Mint version][12] from 18.3 to 19. I am guessing the steps should be the same for Mint 20 as well. The It's FOSS team will be testing the Mint 19.3 to Mint 20 upgrade and will update the guide as applicable.
Before you go on upgrading, make sure to back up your data and [create system snapshots using Timeshift][13].
### Download Linux Mint 20
You can simply head over to the official download page and grab the latest stable ISO. You'll find ISOs for the officially supported desktop environments, i.e. Cinnamon, MATE, and Xfce.
Torrent links are also available for those who have a slow or inconsistent internet connection.
[Download Linux Mint 20][14]
If you just want to try it out without replacing your main system, I suggest [installing Linux Mint 20 in VirtualBox][15] first to see if it is something you would like.
Have you tried Linux Mint 20 yet? What do you think about the release? Let me know your thoughts in the comments section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-mint-20-download/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-mint-20/
[2]: https://itsfoss.com/download-ubuntu-20-04/
[3]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-welcome-screen.png?fit=800%2C397&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-color-themes.png?fit=800%2C396&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-nemo-file-manager.png?fit=800%2C397&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-nemo-file-manager-blue-color-theme.png?fit=800%2C450&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-wallpapers.png?fit=800%2C450&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-redesigned-gdebi-installer.png?fit=800%2C582&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-warpinator.png?fit=800%2C397&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/mint-20-terminal.png?fit=800%2C540&ssl=1
[12]: https://itsfoss.com/upgrade-linux-mint-version/
[13]: https://itsfoss.com/backup-restore-linux-timeshift/
[14]: https://linuxmint.com/download.php
[15]: https://itsfoss.com/install-linux-mint-in-virtualbox/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (openSUSE Leap 15.2 Released With Focus on Containers, AI and Encryption)
[#]: via: (https://itsfoss.com/opensuse-leap-15-2-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
openSUSE Leap 15.2 Released With Focus on Containers, AI and Encryption
======
[openSUSE][1] Leap 15.2 has finally landed with some useful changes and improvements.
Also, considering the exciting announcement of [Closing the Leap Gap][2], the release of openSUSE Leap 15.2 brings us one step closer to SLE ([SUSE Linux Enterprise][3]) binaries being integrated into openSUSE Leap 15.3 next.
Let's take a look at what has changed and improved in openSUSE Leap 15.2.
### openSUSE Leap 15.2: Key Changes
![][4]
Overall, openSUSE Leap 15.2 release involves security updates, major new packages, bug fixes, and other improvements.
In their press release, **Marco Varlese**, a developer on the project, mentions:
> “Leap 15.2 represents a huge step forward in the Artificial Intelligence space. I am super excited that openSUSE end-users can now finally consume Machine Learning / Deep Learning frameworks and applications via our repositories to enjoy a stable and up-to-date ecosystem.”
Even though this hints at the changes involved, here's what's new in openSUSE Leap 15.2:
#### Adding Artificial Intelligence (AI) and Machine Learning packages
Unquestionably, Artificial Intelligence (AI) and Machine Learning are some of the most disruptive technologies to learn.
To facilitate that for its end-users, openSUSE Leap 15.2 has added a bunch of important packages for new open source technologies:
* [Tensorflow][5]
* [PyTorch][6]
* [ONNX][7]
* [Grafana][8]
* [Prometheus][9]
#### Introducing a Real-Time Kernel
![][10]
With openSUSE Leap 15.2, a real-time kernel is introduced to manage the timing of [microprocessors][11] so they can efficiently handle time-critical events.
The addition of a real-time kernel is a big deal for this release. **Gerald Pfeifer** (chair of the project's board) shared his thoughts with the following statement:
> “The addition of a real time kernel to openSUSE Leap unlocks new possibilities. Think edge computing, embedded devices, data capturing, all of which are seeing immense growth. Historically many of these have been the domain of proprietary approaches; openSUSE now opens the floodgates for developers, researchers and companies that are interested in testing real time capabilities or maybe even in contributing. Another domain open source helps open up!”
#### Inclusion of Container Technologies
With the latest release, you will notice that [Kubernetes][12] is included as an official package. This should make it easy for end-users to automate deployments, and to scale and manage containerized applications.
[Helm][13] (the package manager for Kubernetes) also comes baked in. And not just that: you will also find several other additions here and there that make it easier to secure and deploy containerized applications.
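To give a sense of the workflow the new packages enable, here is a minimal Kubernetes Deployment manifest of the kind you could apply on a Leap 15.2 host with the bundled tools. The names and container image are illustrative, not anything shipped by openSUSE:

```yaml
# Hypothetical example: a three-replica web deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.19
        ports:
        - containerPort: 80
```

Applying a manifest like this with `kubectl apply -f demo-web.yaml`, or packaging it into a Helm chart, is exactly the kind of deployment automation the paragraph above refers to.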
#### Updates to openSUSE Installer
![][14]
openSUSE's installer was already pretty good. But with the latest Leap 15.2 release, they have added more information, compatibility with right-to-left languages like Arabic, and subtle changes to make it easier to select options at installation time.
#### Improvements to YaST
While [YaST][15] is already a pretty powerful installation and configuration tool, this release adds the ability to create and manage a Btrfs file system and to enforce advanced encryption techniques.
Of course, you may be aware of the availability of [openSUSE on Windows Subsystem for Linux][16]. With Leap 15.2, YaST compatibility with WSL has improved, per the release notes.
#### Desktop Environment Improvements
![][17]
The available desktop environments have been updated to their latest versions, including [KDE Plasma 5.18 LTS][18] and [GNOME 3.34][19].
You will also find the updated [Xfce 4.14][20] desktop available in openSUSE Leap 15.2.
If you're curious to know all the details of the latest release, you may refer to the [official release announcement][21].
### Download & Availability
As of now, you should be able to find Linode cloud images of Leap 15.2. Eventually, other cloud hosting services like Amazon Web Services, Azure, and more will offer it as well.
You can also grab the DVD ISO or the network image file from the official website.
To upgrade your current installation, I'd recommend following the [official instructions][22].
[openSUSE Leap 15.2][23]
Have you tried openSUSE Leap 15.2 yet? Feel free to let me know what you think!
--------------------------------------------------------------------------------
via: https://itsfoss.com/opensuse-leap-15-2-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.opensuse.org/
[2]: https://www.suse.com/c/sle-15-sp2-schedule-and-closing-the-opensuse-leap-gap/
[3]: https://www.suse.com/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/opensuse-leap-15-2-gnome.png?ssl=1
[5]: https://www.tensorflow.org
[6]: https://pytorch.org
[7]: https://onnx.ai
[8]: https://grafana.com
[9]: https://prometheus.io/docs/introduction/overview/
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/opensuse-leap-15-2-terminal.png?ssl=1
[11]: https://en.wikipedia.org/wiki/Microprocessor
[12]: https://kubernetes.io
[13]: https://helm.sh
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/opensuse-leap-15-2.png?ssl=1
[15]: https://yast.opensuse.org/
[16]: https://itsfoss.com/opensuse-bash-on-windows/
[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/opensue-leap-15-2-kde.png?ssl=1
[18]: https://itsfoss.com/kde-plasma-5-18-release/
[19]: https://itsfoss.com/gnome-3-34-release/
[20]: https://www.xfce.org/about/news/?post=1565568000
[21]: https://en.opensuse.org/Release_announcement_15.2
[22]: https://en.opensuse.org/SDB:System_upgrade
[23]: https://software.opensuse.org/distributions/leap

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Should API-restricting licenses qualify as open source?)
[#]: via: (https://opensource.com/article/20/6/api-copyright)
[#]: author: (Richard Fontana https://opensource.com/users/fontana)
Should API-restricting licenses qualify as open source?
======
A look at how a closely-watched legal case about copyright and APIs might affect open source licensing.
![Two government buildings][1]
In its 2014 _[Oracle v. Google][2]_ decision, the United States Court of Appeals for the Federal Circuit held that the method declarations and "structure, sequence, and organization" (SSO) of the Java SE API were protected by copyright. This much-criticized result contradicted a decades-old industry and professional consensus assumption that APIs were in the public domain, reflected in an ongoing common practice of successive reimplementation of APIs, and persisting even after the general copyrightability of software was settled by statute. Unsurprisingly, that consensus shaped the view of APIs from within open source. Open source licenses, in particular, do not address APIs, and their conditions have not customarily been understood to apply to APIs.
If the copyrightability ruling survives its current [review][3] by the United States Supreme Court, there is reason to worry that _Oracle v. Google_ will eventually have some detrimental impact on open source licensing. License authors might draft new licenses that would explicitly extend familiar kinds of open source license conditions to activities merely involving APIs. There could also be comparable efforts to advance _Oracle v. Google_-influenced reinterpretations of existing open source licenses.
We've already seen an example of a new open source-like license that restricts APIs. Last year, Holochain, through its lawyer [Van Lindberg][4], submitted the [Cryptographic Autonomy License][5] (CAL) for approval by the Open Source Initiative (OSI). The [1.0-Beta][6] draft included source availability and licensing requirements placed on works merely containing or derivative of interfaces included in or derived from the licensed work. (CAL 1.0-Beta was [rejected][7] by the OSI for reasons other than the interface copyleft feature. Subsequent revisions of CAL removed explicit references to interfaces, and the OSI approved CAL 1.0 earlier this year.) Licenses like CAL 1.0-Beta would extend copyleft to reimplementations of APIs having no code in common with the original. Though less likely, new permissive licenses might similarly extend notice preservation requirements to mere copies of APIs.
In my view, API-restricting licenses, though otherwise FOSS-like, would not qualify for the open source label. To simplify what is actually a complicated and contentious issue, let's accept the view that the license approval decisions of the OSI, interpreting the [Open Source Definition][8] (OSD), are the authoritative basis for determining whether a license is open source. The OSD makes no mention of software interfaces. Some advocates of a relaxation of standards for approving open source licenses have argued that if a type of restriction is not explicitly prohibited by the OSD, it should be considered acceptable in an open source license. To guard against this tactic, which amounts to ["gaming" the OSD][9], the OSI [clarified][10] in 2019 that the purpose of the approval process is to ensure that approved licenses not only conform to the OSD but also provide software freedom.
Though [Luis Villa has raised concerns][11] that it gives rise to a "[no true Scotsman][12]" problem, I believe the emphasis on software freedom as a grounding principle will enable the OSI to deal effectively and in a well-reasoned, predictable way with cases where license submissions expose unforeseen gaps or vagueness in the OSD, which is politically difficult for the OSI to revise. (Disclosure: I was on the OSI board when this change to the license review process was made.) It is also an honest acknowledgment that the OSD, like the [Free Software Definition][13] maintained by the Free Software Foundation, is an unavoidably imperfect and incomplete attempt to distill the underlying community norms and expectations surrounding what FOSS is.
Software freedom is the outgrowth of a long-lived culture. Judging whether a license that extends FOSS-normative conditions to APIs provides software freedom should begin with an examination of tradition. This leads to a straightforward conclusion. As noted above, from nearly the earliest days of programming and continuing without interruption through the rise of the modern open source commons, software developers have shared and acted on a belief in an unconditional right to reimplement software interfaces. From a historical perspective, it is difficult to think of anything as core to software freedom as this right to reimplement.
The inquiry cannot be entirely backward-looking, however, since the understanding of software freedom necessarily changes in response to new societal or technological developments. It is worth asking whether a departure from the traditional expectation of unrestricted APIs would advance the broader goals of open source licensing. At first glance, this might seem to be true for copyleft licensing, since, in theory, compliant adoption of API copyleft licenses could expand the open source software commons. But expanding the scope of copyleft to API reimplementations—software traditionally seen as unrelated to the original work—would violate another open source norm, the limited reach of open source licenses, which is partially captured in [OSD 9][14].
Another observation is that software freedom is endangered by licensing arrangements that are excessively complex and unpredictable and that make compliance too difficult. This would likely be true of API-restricting FOSS-like licenses, especially on the copyleft side. For example, copyleft licenses typically place conditions on the grant of permission to prepare derivative works. Trying to figure out what is a derivative work of a Java method declaration, or the SSO of a set of APIs, could become a compliance nightmare. Would it include reimplementations of APIs? Code merely invoking APIs? The fundamental vagueness of _Oracle v. Google_-style API copyright bears some resemblance to certain kinds of software patent claims. It is not difficult to imagine acquirers of copyrights covered by API-restrictive licenses adopting the litigation strategies of patent trolls. In addition to this risk, accepting API-restrictive licenses as open source would further legitimize API copyrightability in jurisdictions like the United States, where the legal issue is currently unsettled.
_Oracle v. Google_-influenced interpretations of existing open source licenses would similarly extend familiar open source license conditions to activities merely involving APIs. Such reinterpretations would transform these licenses into ones that fail to provide software freedom and advance the goals of open source, for the same reasons that apply to the new license case. In addition, they would upend the intentions and expectations of the authors of those licenses, as well as nearly all of their licensors and licensees.
It might be argued that because open source licenses are principally ([though not exclusively][15]) copyright licenses, it is necessary, if not beneficial, for their conditions to closely track the expansion of copyright to APIs. This is not so for new open source licenses, which can be drafted explicitly to nullify the impact of _Oracle v. Google_. As for reinterpretations of existing open source licenses, while the issue of API copyrightability remains unsettled, it would not be appropriate to abandon traditional interpretations in favor of anticipating what an _Oracle v. Google_-influenced court, unfamiliar with open source culture, would decide. Litigation over open source licenses continues to be uncommon, and influential open source license interpretations have emerged in the technical community with little regard to how courts might act. In any event, courts engaged in interpreting commonly-used open source licenses may well be persuaded to treat APIs as unconstrained.
Some have suggested that interpretation of the GPL should take full advantage of the scope of underlying copyright rights. This is related to a view of copyleft as a "[hack on copyright][16]" or a "[judo move][17]" that "[return[s] the violent force of the oppressor against the oppressor itself][18]." It can be detected in the [copyleft tutorial][19] sponsored by the Software Freedom Conservancy and the FSF, which [says][20]: "The strongest copylefts strive to [use] the exclusive rights that copyright grants to authors as extensively as possible to maximize software freedom." It might seem logical for someone with this perspective to specifically promote an API copyright interpretation of the GPL. But I know of no advocate of strong copyleft who has done so, and the text and interpretive history of the GPL do not support such a reading.
A somewhat different view of API copyright and GPL interpretation, occasionally voiced, is that _Oracle v. Google_ may put the doctrine of strong copyleft on a surer legal foundation. Similarly, it has sometimes been asserted that strong copyleft rested on some notion of API copyrightability all along, which suggests that _Oracle v. Google_ provides some retroactive legal legitimacy. The latter view is not held by the FSF, which in an earlier era had [opposed the expansion of copyright][21] to user interfaces. This stance made its way into GPLv2, which has a [largely overlooked provision][22] authorizing the original licensor to exclude countries that would restrict "distribution and/or use … either by patents or by copyrighted interfaces." The FSF also [severely criticized][23] Oracle's claim of copyright ownership of Java APIs. And the FSF has never questioned the right to reimplement APIs of GPL-licensed software under non-GPL licenses (as has happened, for example, with the FSF-copyrighted [GNU Readline][24] and the BSD-licensed [libedit][25]). If there were shown to be some legal deficiency in strong copyleft theory that API copyrightability could somehow fix, I believe it would be better either to live with a weaker understanding of GPL copyleft or to pursue revisions to the GPL that would reformulate strong copyleft without relying on API copyright.
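The Readline/libedit point above can be shown in miniature. The following is a hypothetical Python sketch (none of these names or this code come from either project): two independently written classes expose the same interface, so caller code written against that interface works with either one, which is exactly what the traditional right to reimplement protects.

```python
# Hypothetical illustration only: none of this code is from Readline or libedit.

class OriginalEditor:
    """One implementation of a small line-editing "API":
    read_line() and history() are the interface callers rely on."""

    def __init__(self):
        self._lines = []

    def read_line(self, prompt):
        line = prompt + "input"  # a real editor would read from a terminal
        self._lines.append(line)
        return line

    def history(self):
        return list(self._lines)


class Reimplementation:
    """An independently written module exposing the *same* names and
    signatures, with entirely different internals (a tuple, not a list)."""

    def __init__(self):
        self._lines = ()

    def read_line(self, prompt):
        line = prompt + "input"
        self._lines = self._lines + (line,)
        return line

    def history(self):
        return list(self._lines)


def run_session(editor):
    """Caller code depends only on the interface, so either class works."""
    editor.read_line("> ")
    editor.read_line("> ")
    return len(editor.history())
```

Because `run_session` is written against the interface alone, swapping one implementation for the other requires no change to the caller; that substitutability is what developers have traditionally assumed APIs permit.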
If API copyrightability survives Supreme Court review, it would then be appropriate for license stewards, licensors of existing open source licenses, and drafters of new open source licenses to take constructive steps to minimize the impact on open source. Stewards of widely used open source licenses, where they exist, could publish interpretive guidance clarifying that APIs are not restricted by the license. Updates to existing open source licenses and entirely new licenses could make unrestricted APIs an explicit policy. Licensors of existing open source licenses could make clear, in standardized license notices or through external commitments, that they will not treat open source license conditions as imposing any restriction on activities merely involving APIs.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/api-copyright
作者:[Richard Fontana][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fontana
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov2.png?itok=n36__lZj (Two government buildings)
[2]: http://www.cafc.uscourts.gov/sites/default/files/opinions-orders/13-1021.Opinion.5-7-2014.1.PDF
[3]: https://www.scotusblog.com/case-files/cases/google-llc-v-oracle-america-inc/
[4]: https://twitter.com/vanl?lang=en
[5]: https://github.com/holochain/cryptographic-autonomy-license
[6]: http://lists.opensource.org/pipermail/license-review_lists.opensource.org/2019-April/004028.html
[7]: http://lists.opensource.org/pipermail/license-review_lists.opensource.org/2019-June/004248.html
[8]: https://opensource.org/osd
[9]: https://twitter.com/webmink/status/1121873263125118977?s=20
[10]: https://opensource.org/approval
[11]: https://twitter.com/luis_in_brief/status/1143884765654687744
[12]: https://en.wikipedia.org/wiki/No_true_Scotsman
[13]: https://www.gnu.org/philosophy/free-sw.en.html
[14]: https://opensource.org/osd#not-restrict-other-software
[15]: https://opensource.com/article/18/3/patent-grant-mit-license
[16]: https://sfconservancy.org/blog/2012/feb/01/gpl-enforcement/
[17]: https://gondwanaland.com/mlog/2014/12/01/copyleft-org/#gpl-and-cc-by-sa-differences
[18]: https://dustycloud.org/blog/field-guide-to-copyleft/#sec-2-2
[19]: https://copyleft.org/guide/
[20]: https://copyleft.org/guide/comprehensive-gpl-guidech2.html#x5-120001.2.2
[21]: https://www.gnu.org/bulletins/bull21.html
[22]: https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html#section8
[23]: https://www.fsf.org/blogs/licensing/fsf-statement-on-court-of-appeals-ruling-in-oracle-v-google
[24]: https://tiswww.case.edu/php/chet/readline/rltop.html
[25]: http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libedit/


@ -0,0 +1,97 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How an open project's governance model evolves)
[#]: via: (https://opensource.com/open-organization/20/7/evolving-project-governance)
[#]: author: (Bryan Behrenshausen https://opensource.com/users/bbehrens)
How an open project's governance model evolves
======
As open projects mature, their governance models inevitably change.
Here's how we're evolving ours.
![A two way street sign][1]
As we continue renovating the Open Organization community, we've been asking hard questions about how we want [that community][2] to function. What do we expect of one another, and of the new contributors yet to join us? How will we work best together? And how will we keep one another accountable for achieving our shared goals?
When open projects and communities discuss expectations like these, they're talking about "governance." In this case, "governance" refers to the [various processes by which rights and responsibilities get distributed][3] throughout a group. As community member [Jen Kelchner puts it][4], "it's the framework that creates the structure of the organizational system and the rules by which the parts of that structure can and do interact with one another."
A community's governance model explains _how_ that community functions. Maintaining a system of governance often helps a community _define_ and _describe_ the roles people play in that community.
Those shared definitions and descriptions are important. First, they allow community members to speak a common language about values and ambitions. They also advance community members' ability to contribute, because [they _make explicit_ the rules][5] everyone is playing by. And they ensure community members receive the various types of status and social capital they need and deserve.
The best governance models are flexible and adaptable. They grow with their communities. As the Open Organization community grows, so does its governance model. We've needed to revisit how we describe our community and the opportunities for contribution it affords people. ([We also fix typos][6].)
Let us describe what we're doing.
### New commitments
Through this conversation, we've been able to update the Open Organization [project][7] description and vision.
That vision initially took shape nearly five years ago, when the Open Organization Ambassador team first formed. At the time, Red Hat community architects [Jason Hibbets][8] and [Bryan Behrenshausen][9] drafted a document describing what a community of passionate advocates for [open organizational principles][10] _might_ look like. The vision was entirely aspirational, describing what could be—rather than what _was_. It served as a beacon to attract passionate contributors to a still-nascent project.
As soon as the community _did_ attract new members, however, those members promptly wrote their _own_ mission and vision for the Open Organization project, articulating their identity and purpose. And as we've grown, we've realized that we're all committed to even more than we originally described. Our community is adept at translating [open organization principles][10] for various audiences and contexts, and at helping different communities connect to our language and culture through _their_ own languages and cultures.
For example, [Laura Hilliger][11] has long seen an overlap between openness and cooperatives ([as have others in the community][12]). She's spoken about that overlap and lived it through her career—serving as a translator between two "radical" economic and communal ideas that use different terminology but are seeking the same kind of fair-mindedness in their activities and collaborations. Other Open Org community members have written about [Agile methodologies][13] and their association with open principles, open principles at work [in educational organizations][14], and more.
Our commitment to this sort of "translation work" wasn't highlighted in our working project description, so we [updated the description to include it][15].
### Role playing
Another gap we wanted to fill was the lack of a clear description of the types of contributions one can make to this community. We wanted to talk about our contributors in a slightly more nuanced way, which we indicated in the initial project vision and then extrapolated into a fully fledged "[Community Roles][16]" wiki page.
Being specific about community roles helps us make the project more inclusive. It also gives us a method to _codify_ policies and procedures because we can associate them with individual roles people can play in the Open Org community.
So we've made some decisions about how contributors get read/write [access to community repositories][17], and how certain kinds of contributors can nominate people to "level up" in the community.
We've also established a new kind of contributor, the "Maintainer," which gives Ambassadors expressing interest in leading and maintaining community-driven projects a way to show ownership and initiative around a particular project the community is working on. In short: It opens an important new contribution pathway and helps existing community members play an even more influential role in the Open Organization project.
And we're also recognizing the regular flux and flow that marks an open community like ours. Just as people join communities, they also leave. Everyone's interests and passions change over time, and sometimes what brought them to your community is no longer meaningful to them. And that's okay.
Sometimes, people will stay involved in a community because they feel a sense of belonging or status, even if they're not interested in doing the work anymore. And they carry with them important project history and context, which no one wants to see evaporate. So we're also adding another community role, the Open Organization Ambassador Emeritus. By giving community members emeritus status, we give Ambassadors the freedom to move on to something else without "kicking them out" of the project altogether.
All in all, defining and documenting roles and responsibilities will help our community attract contribution because it clearly explains _what getting involved means_ and the _benefits of doing so_.
### Next steps
We've come a long way. But there's more to be done.
The next bit of work we'd like to tackle is [developing a code of conduct][18]. Want to share your experience in this area? [Why not join us and help out?][19]
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/20/7/evolving-project-governance
作者:[Bryan Behrenshausen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/bbehrens
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/govt_two%20way.png?itok=8dlG2Dpl (A two way street sign)
[2]: http://theopenorganization.org
[3]: https://opensource.com/article/20/5/open-source-governance
[4]: https://opensource.com/open-organization/17/2/5-elements-teams-organized
[5]: https://opensource.com/open-organization/18/4/new-governance-model-research
[6]: https://github.com/open-organization/governance/commits/master
[7]: https://github.com/open-organization/governance/blob/master/project-and-community-description.md
[8]: https://opensource.com/users/jhibbets
[9]: https://opensource.com/users/bbehrens
[10]: https://github.com/open-organization/open-org-definition
[11]: https://opensource.com/users/laurahilliger
[12]: https://opensource.com/open-organization/15/9/learn-from-co-ops
[13]: https://opensource.com/open-organization/17/11/transparency-collaboration-basefarm
[14]: https://opensource.com/open-organization/19/4/education-culture-agile
[15]: https://github.com/open-organization/governance/wiki
[16]: https://github.com/open-organization/governance/wiki/Community-Roles
[17]: https://github.com/open-organization
[18]: https://opensource.com/life/14/5/codes-of-conduct-open-source-communities
[19]: https://github.com/open-organization/governance/issues/9


@ -1,136 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (chunibyo-wly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (One CI/CD pipeline per product to rule them all)
[#]: via: (https://opensource.com/article/19/7/cicd-pipeline-rule-them-all)
[#]: author: (Willy-Peter Schaub https://opensource.com/users/wpschaub/users/bclaster/users/matt-micene/users/barkerd427)
One CI/CD pipeline per product to rule them all
======
Is the idea of a unified continuous integration and delivery pipeline a
pipe dream?
![An intersection of pipes.][1]
When I joined the cloud ops team, responsible for cloud operations and engineering process streamlining, at WorkSafeBC, I shared my dream for one instrumented pipeline, with one continuous integration build and continuous deliveries for every product.
According to Lukas Klose, [flow][2] (within the context of software engineering) is "the state of when a system produces value at a steady and predictable rate." I think it is one of the greatest challenges and opportunities, especially in the complex domain of emergent solutions. We strive towards a continuous and incremental delivery model with consistent, efficient, and quality solutions, building the right things and delighting our users. We must find ways to break down our systems into smaller pieces that are valuable on their own, enabling teams to deliver value incrementally. This requires a change of mindset for both business and engineering.
### Continuous integration and delivery (CI/CD) pipeline
The CI/CD pipeline is a DevOps practice for delivering code changes more often, consistently, and reliably. It enables agile teams to increase _deployment frequency_ and decrease _lead time for change_, _change-failure rate_, and _mean time to recovery_ key performance indicators (KPIs), thereby improving _quality_ and delivering _value_ faster. The only prerequisites are a solid development process, a mindset for quality and accountability for features from ideation to deprecation, and a comprehensive pipeline (as illustrated below).
![Prerequisites for a solid development process][3]
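As a rough illustration of the KPIs named above, here is a minimal Python sketch, with invented field names and sample data, that computes change-failure rate and mean time to recovery from a list of deployment records:

```python
from datetime import datetime, timedelta

def change_failure_rate(deployments):
    """Fraction of deployments that caused a production failure."""
    if not deployments:
        return 0.0
    return sum(1 for d in deployments if d["failed"]) / len(deployments)

def mean_time_to_recovery(deployments):
    """Average time from a failed deployment to its recovery."""
    gaps = [d["recovered_at"] - d["deployed_at"]
            for d in deployments if d["failed"]]
    return sum(gaps, timedelta(0)) / len(gaps) if gaps else timedelta(0)

# Invented sample records: two of four deployments failed.
deploys = [
    {"failed": False, "deployed_at": datetime(2019, 7, 1, 9, 0)},
    {"failed": True,  "deployed_at": datetime(2019, 7, 2, 9, 0),
     "recovered_at": datetime(2019, 7, 2, 11, 0)},
    {"failed": False, "deployed_at": datetime(2019, 7, 3, 9, 0)},
    {"failed": True,  "deployed_at": datetime(2019, 7, 4, 9, 0),
     "recovered_at": datetime(2019, 7, 4, 10, 0)},
]
```

With these records, the change-failure rate is 0.5 and the mean time to recovery is 90 minutes; tracking the trend of such numbers per pipeline is what makes the KPIs actionable.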
It streamlines the engineering process and products to stabilize infrastructure environments; optimize flow; and create consistent, repeatable, and automated tasks. This enables us to turn complex tasks into complicated tasks, as outlined by Dave Snowden's [Cynefin Sensemaking][4] model, reducing maintenance costs and increasing quality and reliability.
Part of streamlining our flow is to minimize waste for the [wasteful practice types][5] Muri (overloaded), Mura (variation), and Muda (waste).
* **Muri:** avoid over-engineering, features that do not link to business value, and excessive documentation
* **Mura:** improve approval and validation processes (e.g., security signoffs); drive the [shift-left][6] initiative to push unit testing, security vulnerability scanning, and code quality inspection; and improve risk assessment
* **Muda:** avoid waste such as technical debt, bugs, and upfront, detailed documentation
It appears that 80% of the focus and intention is on products that provide an integrated and collaborative engineering system that can take an idea and plan, develop, test, and monitor your solutions. However, a successful transformation and engineering system is only 5% about products, 15% about process, and 80% about people.
There are many products at our disposal. For example, Azure DevOps offers rich support for continuous integration (CI), continuous delivery (CD), extensibility, and integration with open source and commercial off-the-shelf (COTS) software as a service (SaaS) solutions such as Stryker, SonarQube, WhiteSource, Jenkins, and Octopus. For engineers, it is always a temptation to focus on products, but remember that they are only 5% of our journey.
![5% about products, 15% about process, 80% about people][7]
The biggest challenge is breaking down a process based on decades of rules, regulations, and frustrating areas of comfort: "_It is how we have always done it; why change?_" 
The friction between people in development and operation results in a variety of fragmented, duplicated, and incessant integration and delivery pipelines. Development wants access to everything, to iterate continuously, to enable users, and to release continuously and fast. Operations wants to lock down everything to protect the business and users and drive quality. This inadvertently and often entails processes and governance that are hard to automate, which results in slower-than-expected release cycles.
Let us explore the pipeline with snippets from a recent whiteboard discussion.
The variation of pipelines is difficult and costly to support; the inconsistency of versioning and traceability complicates live-site incidents; and the continuous streamlining of the development process and pipelines is a challenge.
![Improving quality and visibility of pipelines][8]
I advocate a few principles that enable one universal pipeline per product:
* Automate everything automatable
* Build once
* Maintain continuous integration and delivery
* Maintain continuous streamlining and improvement
* Maintain one build definition
* Maintain one release pipeline definition
* Scan for vulnerabilities early and often, and _fail fast_
* Test early and often, and _fail fast_
* Maintain traceability and observability of releases
If I poke the hornet's nest, however, the most important principle is to _keep it simple_. If you cannot explain the reason (_what_, _why_) and the process (_how_) of your pipelines, you do not understand your engineering process. Most of us are not looking for the best, ultramodern, and revolutionary pipeline—we need one that is functional, valuable, and an enabler for engineering. Tackle the 80%—the culture, people, and their mindset—first. Ask your CI/CD knights in shining armor, with their TLA (two/three-lettered acronym) symbols on their shield, to join the might of practical and empirical engineering.
### Unified pipeline
Let us walk through one of our design practice whiteboard sessions.
![CI build/CD release pipeline][9]
Define one CI/CD pipeline with one build definition per application that is used to trigger _pull-request pre-merge validation_ and _continuous integration_ builds. Generate a _release_ build with debug information and upload it to the [Symbol Server][10]. This enables developers to debug locally and remotely in production without having to worry about which build and symbols they need to load—the symbol server performs that magic for us.
![Breaking down the CI build pipeline][11]
Perform as many validations as possible in the build—_shift left_—allowing feature teams to fail fast, continuously raise the overall product quality, and include invaluable evidence for the reviewers with every pull request. Do you prefer a pull request with a gazillion commits? Or a pull request with a couple of commits and supporting evidence such as security vulnerabilities, test coverage, code quality, and [Stryker][12] mutant remnants? Personally, I vote for the latter.
![Breaking down the CD release pipeline][13]
Do not use build transformation to generate multiple, environment-specific builds. Create one build and perform release-time _transformation_, _tokenization_, and/or XML/JSON _value replacement_. In other words, _shift-right_ the environment-specific configuration.
![Shift-right the environment-specific configuration][14]
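A minimal Python sketch of the release-time transformation described above, assuming an invented `#{Token}#` placeholder syntax: one config template ships in the single build artifact, and environment-specific values are substituted only when the artifact is released to an environment.

```python
import re

# One config template ships inside the single build artifact.
TEMPLATE = '{"db": "#{DbConnection}#", "logLevel": "#{LogLevel}#"}'

# Environment-specific values are applied at release time, not build time.
ENVIRONMENTS = {
    "dev":  {"DbConnection": "db-dev:5432",  "LogLevel": "debug"},
    "prod": {"DbConnection": "db-prod:5432", "LogLevel": "warning"},
}

def render(template, env):
    """Replace each #{Token}# placeholder with the target environment's value."""
    values = ENVIRONMENTS[env]
    return re.sub(r"#\{(\w+)\}#", lambda m: values[m.group(1)], template)
```

The same artifact is promoted unchanged from stage to stage; only `render` runs per environment, so what was tested in QA is byte-for-byte what reaches production, apart from the substituted values.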
Securely store release configuration data and make it available to both Dev and Ops teams based on the level of _trust_ and _sensitivity_ of the data. Use the open source Key Manager, Azure Key Vault, AWS Key Management Service, or one of many other products—remember, there are many hammers in your toolkit!
![Dev-QA-production pipeline][15]
Use _groups_ instead of _users_ to move approver management from multiple stages across multiple pipelines to simple group membership.
![Move approver management to simple group membership][16]
Instead of duplicating pipelines to give teams access to their _areas of interest_, create one pipeline and grant access to _specific stages_ of the delivery environments.
![Pipeline with access to specific delivery stages][17]
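The group-based access in the last two paragraphs can be sketched as a simple membership check; all group, stage, and user names here are invented for illustration:

```python
# One pipeline; each stage grants approval rights to groups, not users.
STAGE_APPROVER_GROUPS = {
    "dev":  {"feature-team", "release-managers"},
    "qa":   {"qa-approvers", "release-managers"},
    "prod": {"release-managers"},
}

GROUP_MEMBERS = {
    "feature-team":     {"alice", "bob"},
    "qa-approvers":     {"carol"},
    "release-managers": {"dana"},
}

def can_approve(user, stage):
    """A user may approve a stage if any group they belong to is granted it."""
    user_groups = {g for g, members in GROUP_MEMBERS.items() if user in members}
    return bool(user_groups & STAGE_APPROVER_GROUPS[stage])
```

Adding or removing an approver is now a single group-membership change instead of an edit to every stage of every pipeline.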
Last, but not least, embrace pull requests to help raise insight and transparency into your codebase, improve the overall quality, collaborate, and release pre-validation builds into selected environments; e.g., the Dev environment.
Here is a more formal view of the whole whiteboard sketch.
![The full pipeline][18]
So, what are your thoughts and learnings with CI/CD pipelines? Is my dream of _one pipeline to rule them all_ a pipe dream?
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/cicd-pipeline-rule-them-all
作者:[Willy-Peter Schaub][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/chunibyo-wly)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/wpschaub/users/bclaster/users/matt-micene/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://continuingstudies.sauder.ubc.ca/courses/agile-delivery-methods/ii861
[3]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-2.png (Prerequisites for a solid development process)
[4]: https://en.wikipedia.org/wiki/Cynefin_framework
[5]: https://www.lean.org/lexicon/muda-mura-muri
[6]: https://en.wikipedia.org/wiki/Shift_left_testing
[7]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-3.png (5% about products, 15% about process, 80% about people)
[8]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-4_0.png (Improving quality and visibility of pipelines)
[9]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-5_0.png (CI build/CD release pipeline)
[10]: https://en.wikipedia.org/wiki/Microsoft_Symbol_Server
[11]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-6.png (Breaking down the CI build pipeline)
[12]: https://stryker-mutator.io/
[13]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-7.png (Breaking down the CD release pipeline)
[14]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-8.png (Shift-right the environment-specific configuration)
[15]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-9.png (Dev-QA-production pipeline)
[16]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-10.png (Move approver management to simple group membership)
[17]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-11.png (Pipeline with access to specific delivery stages)
[18]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-12.png (The full pipeline)


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (jrglinux)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (guevaraya)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source has room for everyone)
[#]: via: (https://opensource.com/article/20/4/interview-Megan-Byrd-Sanicki)
[#]: author: (Jay Barber https://opensource.com/users/jaybarber)
Open source has room for everyone
======
Learn how Megan Byrd-Sanicki, 2020 Women in Open Source Community Award
winner, brings people together.
![Dandelion held out over water][1]
"Growing up, I was a bit of a field marshal," Megan Byrd-Sanicki, 2020 [Women in Open Source Community Award][2] winner, says with a smile. "I was always the one pulling classmates together. 'We're going to play a game. Come on, everyone, I'll teach you the rules.' I'd also have an eye to the sidelines, trying to identify who wasn't being included and how I could draw them in."
![Photo by Megan Sanicki, Used with permission][3]
That drive to bring people together and set up a structure for them to excel carries through much of her career and community work. "I look back on who I was in second-grade gym class and have to admit that it's still who I am today."
Megan has been active in open source for a decade, first as Executive Director of the [Drupal Association][4], and now as the Manager of Research and Operations for Google's Open Source Program Office. "I'm fortunate in my current position because it offers a view into Google's more than 2000 open source projects with different objectives, different governance structures, and different strategies. It's been just a phenomenal learning opportunity." Megan was also recently elected to the [Open Source Initiative][5] Board of Directors, where she strives to strengthen the leadership in open source that the organization offers to projects and businesses around the globe.
### Lessons from the basement steps
Far from being set on technology, Megan originally thought she'd go into business. Sitting on the basement steps, listening to her father make sales calls, she knew his entire product line by age 16, but she also internalized other lessons.
"I learned from him that doing business means solving problems and helping people," Megan says. "And I've kept that front-of-mind throughout my career. In some ways, I'm not surprised by this path; it's a natural extension of who I am, but it's also taken me places I would never have dreamed possible."
Open source isn't just a career for Megan; she also uses the same strategies in her community involvement. "Right now, I'm working with a great group of engineers, data scientists, and epidemiologists at [Covid Act Now][6]. The team members are volunteering their expertise, collaborating openly to provide data modeling to public officials so that they can make informed decisions as quickly as possible."
She's also active in [FOSS Responders][7], a group focused on shining a light on open source projects and community members affected by COVID-19-related event cancellations. "In times of turmoil, it can be difficult for projects to find the help they need. We help organizations and individuals who need assistance aggregate and amplify their requests." An important component of the organization is administering the [FOSS Responders Fund][7], a mechanism to capture some of the open source funding requests that may fall through the cracks otherwise.
### Engaging people in a changing world
The twin themes that influence Megan's community engagement are a clear commitment to the principles of open source and a drive to bring people together. "When people have dreams, things they're actively trying to accomplish, it creates a shared sense of purpose and a strong 'why.' People engage easily around why. I know I do," Megan says when asked what drives her in these efforts.
"Whether helping raise funds for Drupal's mission or enabling open source projects to become more sustainable, there's a real human impact. I get really passionate about the butterfly effect that results from helping people meet their goals and realize their dreams and visions."
As open source becomes a larger and larger part of the technology space, Megan is hopeful for the future. "The exciting thing is that the story isn't done. As a community, we're still figuring things out," she says. "There's so much we need to learn about open source, and it can evolve in so many ways, while the landscape changes around us. We need to have the right conversations and figure out how to evolve together, ensuring there's a place at the table for everyone."
In her words, it's possible to hear those same lessons learned from listening to her father's business calls—doing business is about solving problems and helping people. "Helping more people understand how to use and contribute to open source to solve problems is really rewarding. Whether it is to drive innovation, accelerate velocity, or achieve business goals, there are lots of ways to gain value from open source."
### Own your awesome
When asked what advice she has for other women wanting to engage with the open source community, Megan lights up. "Remember that open source has room for everyone. It can be daunting, but in my experience, people want to help. Ask for help when you need it, but also be clear on where you can contribute, how you can contribute, and what your needs are."
She also recognizes that among all the voices in open source, a lack of centralized leadership can sometimes be felt, but she cautions against looking at it as a privileged role, reserved for only a few. "Be the leader you need. When there's a void in leadership, each individual can fill that void for themselves. Every contributor to open source is a leader, whether they're leading others, leading the community, or just leading themselves. Don't wait to be given permission and own your awesome."
The open source journey for Megan has been just that: a trek where her path wasn't always clear. She's never shied away from adventure or run from uncertainty, though. "I look at life as this beautiful tapestry that you're weaving, but day to day, you only get to see the threads in the back. If you could see the full picture, you'd realize that you've contributed to this wonderful work in countless ways just by doing your best every day."
_Also read Jay Barber's [interview with Netha Hussain][8], who won the 2020 Women in Open Source Academic Award._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/interview-Megan-Byrd-Sanicki
作者:[Jay Barber][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jaybarber
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water)
[2]: https://www.redhat.com/en/about/women-in-open-source
[3]: https://opensource.com/sites/default/files/uploads/megan_sanicki_headshot_small_0.png (Photo by Megan Sanicki, Used with permission)
[4]: https://www.drupal.org/association
[5]: https://opensource.org/
[6]: https://www.covidactnow.org/
[7]: https://fossresponders.com/
[8]: https://opensource.com/article/20/4/interview-Netha-Hussain

View File

@ -1,113 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Disable Dock on Ubuntu 20.04 and Gain More Screen Space)
[#]: via: (https://itsfoss.com/disable-ubuntu-dock/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Disable Dock on Ubuntu 20.04 and Gain More Screen Space
======
The launcher on the left side has become the identity of [Ubuntu][1] desktop. It was introduced with [Unity desktop][2], and even [when Ubuntu switched to GNOME][3], it forked the Dash to Dock extension to create a similar dock on [GNOME][4] as well.
Personally, I find it handy for quickly accessing the frequently used applications. But not everyone wants it to take some extra space on the screen.
Starting with [Ubuntu 20.04][5], you can easily disable this dock. Let me show you how to do that graphically and via command line in this quick tutorial.
![][6]
### Disable Ubuntu dock with Extensions app
One of the [main features of Ubuntu 20.04][7] was the introduction of Extensions to manage GNOME extensions on your system. Just look for it in the GNOME menu (press Windows key and start typing):
![Look for Extensions app in the menu][8]
Don't have the Extensions app?
If you don't have it installed already, you should install the GNOME Shell Extensions package. The Extensions GUI app is part of this package.
```
sudo apt install gnome-shell-extensions
```
This is only valid for [GNOME 3.36][9] or higher version available in Ubuntu 20.04 and higher versions.
Start the extensions app and you should see Ubuntu Dock under the Built-in extensions section. You just have to toggle the button off to disable the dock.
![Disable Ubuntu Dock][10]
The change is immediate: you'll see the dock disappear right away.
You can bring it back the same way. Just toggle it on and it will reappear immediately.
So easy to hide the dock in Ubuntu 20.04, isn't it?
### Alternative Method: Disable Ubuntu dock via command line
If you are a terminal enthusiast and prefer to do things in the terminal, I have good news for you. You can disable the Ubuntu dock from command line.
Open a terminal using Ctrl+Alt+T. You probably already know that [keyboard shortcut in Ubuntu][11].
In the terminal, use the following command to list all the available GNOME extensions:
```
gnome-extensions list
```
This will show you an output similar to this:
![List GNOME Extensions][12]
The default Ubuntu dock extension is ubuntu-dock@ubuntu.com. You can disable it using this command:
```
gnome-extensions disable ubuntu-dock@ubuntu.com
```
There will be no output message displayed on the screen, but you'll notice that the launcher (dock) disappears from the left side.
If you want, you can enable it again using the same command as above, but with the enable option this time:
```
gnome-extensions enable ubuntu-dock@ubuntu.com
```
**Conclusion**
There are ways to disable the dock in Ubuntu 18.04 as well. However, it may lead to unwarranted situations if you try to remove it in 18.04: removing the dock package also removes the ubuntu-desktop package, and you may end up with a system with broken functionality, such as no application menu.
This is the reason why I won't recommend removing it on Ubuntu 18.04.
It's good that Ubuntu 20.04 gives you a way to hide the taskbar. Users get more freedom and more screen space. Speaking of more screen space, did you know that you can [remove the top title bar from Firefox and gain more screen space][14]?
I am wondering: how do you prefer your Ubuntu desktop? With the dock, without the dock, or without GNOME?
--------------------------------------------------------------------------------
via: https://itsfoss.com/disable-ubuntu-dock/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://ubuntu.com/
[2]: https://itsfoss.com/keeping-ubuntu-unity-alive/
[3]: https://itsfoss.com/ubuntu-unity-shutdown/
[4]: https://www.gnome.org/
[5]: https://itsfoss.com/download-ubuntu-20-04/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/disable-dock-in-ubuntu.png?ssl=1
[7]: https://itsfoss.com/ubuntu-20-04-release-features/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/GNOME-extensions-app-ubuntu.jpg?ssl=1
[9]: https://itsfoss.com/gnome-3-36-release/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/disable-ubuntu-dock.png?ssl=1
[11]: https://itsfoss.com/ubuntu-shortcuts/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/list-gnome-extensions.png?ssl=1
[14]: https://itsfoss.com/remove-title-bar-firefox/

View File

@ -1,211 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to stress test your Linux system)
[#]: via: (https://www.networkworld.com/article/3563334/how-to-stress-test-your-linux-system.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to stress test your Linux system
======
Stressing your Linux servers can be a good idea if you'd like to see how well they function when they're loaded down. In this post, we'll look at some tools that can help you add stress and gauge the results.
DigitalSoul / Getty Images / Linux
Why would you ever want to stress your Linux system? Because sometimes you might want to know how a system will behave when it's under a lot of pressure due to a large number of running processes, heavy network traffic, excessive memory use, and so on. This kind of testing can help to ensure that a system is ready to "go public".
If you need to predict how long applications might take to respond and what, if any, processes might fail or run slowly under a heavy load, doing the stress testing up front can be a very good idea.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
Fortunately for those who need to be able to predict how a Linux system will react under stress, there are some helpful techniques you can employ and tools that you can use to make the process easier. In this post, we examine a few options.
### Do it yourself loops
This first technique involves running some loops on the command line and watching how they affect the system. This technique burdens the CPUs by greatly increasing the load. The results can easily be seen using the **uptime** or similar commands.
In the command below, we kick off four endless loops. You can increase the number of loops by adding more numbers or by using a **bash** range expression like **{1..6}** in place of "1 2 3 4".
```
for i in 1 2 3 4; do while : ; do : ; done & done
```
Typed on the command line, this command will start four endless loops in the background.
```
$ for i in 1 2 3 4; do while : ; do : ; done & done
[1] 205012
[2] 205013
[3] 205014
[4] 205015
```
In this case, jobs 1-4 were kicked off. Both the job numbers and process IDs are displayed.
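The brace-expansion variant mentioned above works the same way. Here is a small self-contained sketch (the count and the cleanup at the end are just for illustration) that starts six loops, reports how many are running, and then stops them all:

```
# Start six busy loops using brace expansion instead of listing "1 2 3 4"
for i in {1..6}; do while : ; do : ; done & done

# Count the background jobs, then stop them all
count=$(jobs -p | wc -l)
echo "busy loops running: $count"
kill $(jobs -p)
wait 2>/dev/null || true
```

The `kill %1 %2 …` job-spec form works too; `kill $(jobs -p)` just saves you from typing each job number.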
To observe the effect on load averages, use a command like the one shown below. In this case, the **uptime** command is run every 30 seconds:
```
$ while true; do uptime; sleep 30; done
```
If you intend to run tests like this periodically, you can put the loop command into a script:
```
#!/bin/bash
while true
do
uptime
sleep 30
done
```
In the output, you can see how the load averages increase and then start going down again once the loops have been ended.
```
11:25:34 up 5 days, 17:27, 2 users, load average: 0.15, 0.14, 0.08
11:26:04 up 5 days, 17:27, 2 users, load average: 0.09, 0.12, 0.08
11:26:34 up 5 days, 17:28, 2 users, load average: 1.42, 0.43, 0.18
11:27:04 up 5 days, 17:28, 2 users, load average: 2.50, 0.79, 0.31
11:27:34 up 5 days, 17:29, 2 users, load average: 3.09, 1.10, 0.43
11:28:04 up 5 days, 17:29, 2 users, load average: 3.45, 1.38, 0.54
11:28:34 up 5 days, 17:30, 2 users, load average: 3.67, 1.63, 0.66
11:29:04 up 5 days, 17:30, 2 users, load average: 3.80, 1.86, 0.76
11:29:34 up 5 days, 17:31, 2 users, load average: 3.88, 2.06, 0.87
11:30:04 up 5 days, 17:31, 2 users, load average: 3.93, 2.25, 0.97
11:30:34 up 5 days, 17:32, 2 users, load average: 3.64, 2.35, 1.04 <== loops
11:31:04 up 5 days, 17:32, 2 users, load average: 2.20, 2.13, 1.01 stopped
11:31:34 up 5 days, 17:33, 2 users, load average: 1.40, 1.94, 0.98
```
Because the loads shown represent averages over 1, 5 and 15 minutes, the values will take a while to go back to what is likely normal for the system.
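Incidentally, those three averages come straight from the kernel. If you only want the raw numbers (say, for logging from a script rather than parsing **uptime** output), reading /proc/loadavg is a handy sketch:

```
# /proc/loadavg holds the 1-, 5- and 15-minute load averages,
# followed by run-queue counts and the most recently created PID
read one five fifteen rest < /proc/loadavg
echo "1m=$one 5m=$five 15m=$fifteen"
```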
To stop the loops, issue a **kill** command like the one below, assuming the job numbers are 1-4, as shown earlier in this post. If you're unsure, use the **jobs** command to verify the job IDs.
```
$ kill %1 %2 %3 %4
```
### Specialized tools for adding stress
Another way to create system stress involves using a tool that was specifically built to stress the system for you. One of these is called “stress” and can stress the system in a number of ways. The **stress** tool is a workload generator that provides CPU, memory and disk I/O stress tests.
With the **\--cpu** option, the **stress** command uses a square-root function to force the CPUs to work hard. The higher the number of CPUs specified, the faster the loads will ramp up.
A second **watch-it** script (**watch-it-2**) can be used to gauge the effect on system memory usage. Note that it uses the **free** command to see the effect of the stressing.
```
$ cat watch-it-2
#!/bin/bash
while true
do
free
sleep 30
done
```
Kicking off and observing the stress:
```
$ stress --cpu 2
$ ./watch-it
13:09:14 up 5 days, 19:10, 2 users, load average: 0.00, 0.00, 0.00
13:09:44 up 5 days, 19:11, 2 users, load average: 0.68, 0.16, 0.05
13:10:14 up 5 days, 19:11, 2 users, load average: 1.20, 0.34, 0.12
13:10:44 up 5 days, 19:12, 2 users, load average: 1.52, 0.50, 0.18
13:11:14 up 5 days, 19:12, 2 users, load average: 1.71, 0.64, 0.24
13:11:44 up 5 days, 19:13, 2 users, load average: 1.83, 0.77, 0.30
```
The more CPUs specified on the command line, the faster the load will ramp up.
```
$ stress --cpu 4
$ ./watch-it
13:47:49 up 5 days, 19:49, 2 users, load average: 0.00, 0.00, 0.00
13:48:19 up 5 days, 19:49, 2 users, load average: 1.58, 0.38, 0.13
13:48:49 up 5 days, 19:50, 2 users, load average: 2.61, 0.75, 0.26
13:49:19 up 5 days, 19:50, 2 users, load average: 3.16, 1.06, 0.38
13:49:49 up 5 days, 19:51, 2 users, load average: 3.49, 1.34, 0.50
13:50:19 up 5 days, 19:51, 2 users, load average: 3.69, 1.60, 0.61
```
The **stress** command can also stress the system by adding I/O and memory load with its **\--io** (input/output) and **\--vm** (memory) options.
In this next example, this command for adding memory stress is run, and then the **watch-it-2** script is started:
```
$ stress --vm 2
$ watch-it-2
total used free shared buff/cache available
Mem: 6087064 662160 2519164 8868 2905740 5117548
Swap: 2097148 0 2097148
total used free shared buff/cache available
Mem: 6087064 803464 2377832 8864 2905768 4976248
Swap: 2097148 0 2097148
total used free shared buff/cache available
Mem: 6087064 968512 2212772 8864 2905780 4811200
Swap: 2097148 0 2097148
```
Another option for **stress** is to use the **\--io** option to add input/output activity to the system. In this case, you would use a command like this:
```
$ stress --io 4
```
You could then observe the stressed IO using **iotop**. Note that **iotop** requires root privilege.
###### before
```
$ sudo iotop -o
Total DISK READ: 0.00 B/s | Total DISK WRITE: 19.36 K/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 27.10 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
269308 be/4 root 0.00 B/s 0.00 B/s 0.00 % 1.24 % [kworker~fficient]
283 be/3 root 0.00 B/s 19.36 K/s 0.00 % 0.26 % [jbd2/sda1-8]
```
###### after
```
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
270983 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 51.45 % stress --io 4
270984 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 51.36 % stress --io 4
270985 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 50.95 % stress --io 4
270982 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 50.80 % stress --io 4
269308 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.09 % [kworker~fficient]
```
**Stress** is just one of a number of tools for adding stress to a system. Another and newer tool, **stress-ng**, will be covered in a future post.
### Wrap-Up
Various tools for stress-testing a system will help you anticipate how systems will respond in real world situations in which they are subjected to increased traffic and computing demands.
While this post has shown ways to create and measure various types of stress, the ultimate benefit is how stress testing helps you determine how well your system or application responds to it.
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3563334/how-to-stress-test-your-linux-system.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to crop images in GIMP [Quick Tip])
[#]: via: (https://itsfoss.com/crop-images-gimp/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
How to crop images in GIMP [Quick Tip]
======
There are many reasons you may want to crop an image in [GIMP][1]. You may want to remove useless borders or information to improve your image, or you may want the focus of the final image to be a specific detail for example.
In this tutorial, I will demonstrate how to crop an image in GIMP quickly and without compromising precision. Let's see how.
### How to crop images in GIMP
![][2]
#### Method 1
Cropping is just an operation to trim the image down to a smaller region than the original one. The procedure to crop an image is straightforward.
You can get to the Crop Tool through the Tools palette like this:
![Use Crop Tool for cropping images in GIMP][3]
You can also access the crop tool through the menus:
**Tools → Transform Tools → Crop**
Once the tool is activated, you'll notice that your mouse cursor on the canvas changes to indicate that the Crop Tool is in use.
Now you can left-click anywhere on your image canvas and drag the mouse to create the cropping boundaries. You don't have to worry about precision at this point, as you will be able to modify the final selection before actually cropping.
![Crop Selection][4]
At this point, hovering your mouse cursor over any of the four corners of the selection will change the cursor and highlight that region. This allows you to fine-tune the selection for cropping. You can click and drag any side or corner to move that portion of the selection.
Once the region is good enough to be cropped, you can just press the “**Enter**” key on your keyboard to crop.
If at any time you'd like to start over or decide not to crop at all, you can press the "**Esc**" key on your keyboard.
#### Method 2
Another way to crop an image is to make a selection first, using the **Rectangle Select Tool**.
**Tools → Selection Tools → Rectangle Select**
![][5]
You can then highlight a selection the same way as the **Crop Tool**, and adjust the selection as well. Once you have a selection you like, you can crop the image to fit that selection through
**Image → Crop to Selection**
![][6]
#### Conclusion
Precisely cropping an image is a fundamental skill for a GIMP user. Choose whichever method better fits your needs and explore its potential.
If you have any questions about the procedure, please let me know in the comments below. If you are “craving” more [GIMP tutorials][7], make sure to subscribe on your favorite social media platforms!
--------------------------------------------------------------------------------
via: https://itsfoss.com/crop-images-gimp/
作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.gimp.org/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Crop-images-in-GIMP.png?ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Crop-tool.png?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Crop-selection.jpg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/select-1.gif?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/crop.gif?ssl=1
[7]: https://itsfoss.com/tag/gimp-tips/

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (entr: rerun your build when files change)
[#]: via: (https://jvns.ca/blog/2020/06/28/entr/)
[#]: author: (Julia Evans https://jvns.ca/)
entr: rerun your build when files change
======
This is going to be a pretty quick post. I found out about [`entr`][1] relatively recently and I felt like WHY DID NOBODY TELL ME ABOUT THIS BEFORE?!?! So I'm telling you about it in case you're in the same boat as I was.
There's a great explanation of the tool with lots of examples on [entr's website][1].
The summary is in the headline: `entr` is a command line tool that lets you run an arbitrary command every time you change any of a set of specified files. You pass it the list of files to watch on stdin, like this:
```
git ls-files | entr bash my-build-script.sh
```
or
```
find . -name '*.rs' | entr cargo test
```
or whatever you want really.
### quick feedback is amazing
Like possibly every single programmer in the universe, I find it Very Annoying to have to manually rerun my build / tests every time I make a change to my code.
A lot of tools (like hugo and flask) have a built in system to automatically rebuild when you change your files, which is great!
But often I have some hacked together custom build process that I wrote myself (like `bash build.sh`), and `entr` lets me have a magical build experience where I get instant feedback on whether my change fixed the weird bug with just one line of bash. Hooray!
### restart a server (`entr -r`)
Okay, but what if you're running a server, and the server needs to be restarted every time you change a file? entr's got you: if you pass `-r`, then
```
git ls-files | entr -r python my-server.py
```
### clear the screen (`entr -c`)
Another neat flag is `-c`, which lets you clear the screen before rerunning the command, so that you don't get distracted/confused by the previous build's output.
### use it with `git ls-files`
Usually the set of files I want to track is about the same list of files I have in git, so `git ls-files` is a natural thing to pipe to `entr`.
I have a project right now where sometimes I have files that I've just created that aren't in git just yet. So what if you want to include untracked files? Here's a little bash incantation I put together that does this:
```
{ git ls-files; git ls-files . --exclude-standard --others; } | entr your-build-script
```
There's probably a way to do this with just one git command, but I don't know what it is.
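(Aside: it turns out there is one. `git ls-files` takes `--cached` for tracked files plus `--others --exclude-standard` for untracked, non-ignored files, so the two calls can be collapsed. A quick sketch in a throwaway repo, where `your-build-script` is the placeholder from above:)

```
# set up a throwaway repo with one tracked and one untracked file
tmp=$(mktemp -d) && cd "$tmp" && git init -q
echo x > tracked.txt && git add tracked.txt
echo y > untracked.txt

# --cached lists tracked files, --others adds untracked ones,
# and --exclude-standard honors your ignore rules
git ls-files --cached --others --exclude-standard

# so the entr pipeline collapses to:
#   git ls-files --cached --others --exclude-standard | entr your-build-script
```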
### restart every time a new file is added: `entr -d`
The other problem with this `git ls-files` thing is that sometimes I add a new file, and of course it's not in git yet. entr has a nice feature for this: if you pass `-d`, then entr will exit whenever you add a new file in any of the directories it's tracking.
I'm using this paired with a little while loop that restarts `entr` so it picks up the new files, like this:
```
while true
do
{ git ls-files; git ls-files . --exclude-standard --others; } | entr -d your-build-script
done
```
### how entr works on Linux: inotify
On Linux, entr works using `inotify` (a system for tracking filesystem events like file changes). If you strace it, you'll see an `inotify_add_watch` system call for each file you ask it to watch, like this:
```
inotify_add_watch(3, "static/stylesheets/screen.css", IN_ATTRIB|IN_CLOSE_WRITE|IN_CREATE|IN_DELETE_SELF|IN_MOVE_SELF) = 1152
```
### thats all!
I hope this helps a few people learn about `entr`!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2020/06/28/entr/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: http://eradman.com/entrproject/

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,113 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using TensorFlow.js and Node-RED with image recognition applications)
[#]: via: (https://www.linux.com/news/using-tensorflow-js-and-node-red-with-image-recognition-applications/)
[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/kazuhito-yokoi/)
Using TensorFlow.js and Node-RED with image recognition applications
======
_This Linux Foundation Platinum Sponsor-Contributed article from [Hitachi][1] is about how to use **TensorFlow.js** and **Node-RED** for use with image recognition applications._
### Using TensorFlow.js and Node-RED
TensorFlow.js is a JavaScript implementation of the [TensorFlow open source machine learning platform][2]. By using **TensorFlow.js**, learning and inference processing can be executed in real-time on the browser or the server-side with Node.js. [Node-RED][3] is a visual programming tool mainly developed for IoT applications. 
According to [a recent InfoQ article on 2020 JavaScript web development trends][4], **TensorFlow.js** is classified as “Early Majority”, and **Node-RED** is classified as “Early Adopters” in their adoption cycles. And they are becoming increasingly popular with open source software developers.
![Image: InfoQ][5]
In this article, we'll take a look at what you can do with these two trending open source software tools in combination.
### Creating a sample image recognition flow with Node-RED
Our objective will be to create a flow within **Node-RED** to recognize an object in an image, as depicted in the screenshot below.
![Flow to be created in Node-RED][6]
This flow can be observed after you upload a file from a browser using the yellow node component. The bottom left of the user interface displays the uploaded image in the “Original image” node. In the orange “Image recognition” node, the **TensorFlow.js** trained model is used to analyze what is in the uploaded image (an aircraft). Finally, the green “Output result” node in the upper right corner outputs the result to the debug tab on the right. Additionally, an image annotated with an orange square is displayed under the “Image with annotation” node, making it easy to see which part of the image has been recognized.
In the following sections, we will explain the steps for creating this flow. For this demo, Node-RED can run in the local environment (in this case, a Raspberry Pi) and also in a cloud environment — it will work regardless of platform choice. For our tests, Google Chrome was chosen for use with the Node-RED web user interface.
### Installing a TensorFlow.js node
The **Node-RED** flow library has several TensorFlow.js-enabled nodes. One of these is [node-red-contrib-tensorflow][7], which contains the trained models. 
We'll begin by installing the **TensorFlow.js** node in **Node-RED**. To install the node, go to the top-right menu of the flow editor. Click **“Manage Palette”** -> go to the **“Palette”** tab -> select the **“Install”** tab. After that, enter “**node-red-contrib-tensorflow**” in the search keyword field.
![Installing a TensorFlow.js node][8]
As shown in the image above, the TensorFlow.js node to be used is displayed in the search results. Click the “install” button to install the TensorFlow.js node. Once the installation is complete, orange **TensorFlow.js** nodes will appear in the Analysis category of the left side palette. 
![Analysis palette][9]
Each **TensorFlow.js** node is described in the following table. These are all image recognition nodes; as the table shows, some of them can also generate annotated image data, and some can run offline, which is necessary for edge analytics.
**#** | **Name** | **Description** | **Annotated Image** | **Offline Use**
---|---|---|---|---
1 | cocossd | A node that returns the name of the object in the image | YES | MAY
2 | handpose | A node that estimates the positions of fingers and joints from a hand image | NONE | CAN'T
3 | mobilenet | A node that returns the name of the object in the image | NONE | MAY
4 | posenet | A node that estimates the positions of arms, head, and legs from the image of a person | YES | MAY
 
In addition, the following nodes, which are required to work with image data in Node-RED, should be installed in the same way.
**node-red-contrib-browser-utils**: A node that uploads image files and audio files from the flow editor
**node-red-contrib-image-output**: A node that displays an image on the flow editor
After installing **node-red-contrib-browser-utils**, you should see the file-inject node, microphone node, and camera node in the input category. Also, once you have installed **node-red-contrib-image-output**, you should see the image node in the output category.
### Creating a flow
Now that we have the necessary nodes, let's create the flow.
From the palette on the left, place a yellow file inject node, an orange **cocossd** node, and a green debug node (displayed as **msg.payload** when placed in the workspace), and connect the ports of each node with “wires”.
To check the image data flowing through the wire, place two image nodes (named image preview when placed on the workspace) under the flow. To output the image data from the file inject node and debug node respectively, connect to the output port, as shown in the illustration.
![Completed Node-RED flow][10]
Only the image preview node on the right side needs its settings changed, since it must display a different image data variable. To change the settings, double-click the image preview node to open the node properties screen. On the node properties screen, the image data stored in **msg.payload** is displayed by default. By changing this to **msg.annotatedInput**, as shown in the screenshot below, the image preview node will display the annotated image.
![Image properties][11]
Give each node an appropriate name, press the red deploy button on the upper right, and then click the button on the left side of the file inject node to upload the sample image file of the airport from your PC.
![The recognized object in Node-RED][6]
As shown, an image with orange annotation on the aircraft is displayed under the “Image with annotation” node. Also, you can see that the debug tab on the right side correctly displayed “airplane”. 
Feel free to try this with images you have at your disposal and experiment with them to see if they can be recognized correctly.
*About the author: Kazuhito Yokoi is an Engineer at Hitachi's OSS Solution Center, located in Yokohama, Japan.*
--------------------------------------------------------------------------------
via: https://www.linux.com/news/using-tensorflow-js-and-node-red-with-image-recognition-applications/
作者:[Linux.com Editorial Staff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/kazuhito-yokoi/
[b]: https://github.com/lujun9972
[1]: http://www.hitachi.co.jp/
[2]: https://www.tensorflow.org/overview
[3]: https://nodered.org/
[4]: https://www.infoq.com/articles/javascript-web-development-trends-2020/
[5]: https://www.linux.com/wp-content/uploads/2020/06/image1_infoq.jpg
[6]: https://www.linux.com/wp-content/uploads/2020/06/image2_flow.png
[7]: https://flows.nodered.org/node/node-red-contrib-tensorflow
[8]: https://www.linux.com/wp-content/uploads/2020/06/image3_installation.png
[9]: https://www.linux.com/wp-content/uploads/2020/06/image4_palette.png
[10]: https://www.linux.com/wp-content/uploads/2020/06/image5_flow.png
[11]: https://www.linux.com/wp-content/uploads/2020/06/image6_property.png


@@ -0,0 +1,72 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Whats New in Harbor 2.0)
[#]: via: (https://www.linux.com/audience/developers/whats-new-in-harbor-2-0/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
What's New in Harbor 2.0
======
[Harbor][1] is an open-source cloud native registry project that stores, signs, and scans content. Harbor was created by a team of engineers at VMware China. The project was contributed to the CNCF for wider adoption and contribution. Recently the project announced its 2.0 release. [Swapnil Bhartiya, the founder of TFiR.io][2], sat down with Michael Michael, Harbor maintainer and VMware's Director of Product Management, to talk about Harbor, the community, and the latest release.
Here is a lightly edited transcript of the interview:
**Swapnil Bhartiya: Let's assume that you and I are stuck in an elevator and I suddenly ask you, “What is Harbor?” So please, explain what it is.**
Michael Michael: Hopefully you're not stuck in the elevator for long; but Harbor essentially is an open source cloud-native registry. Think of this as a repository where you can store and serve all of your cloud-native assets, your container images, your Helm charts, and everything else you need to basically build cloud native applications. And then, on top of that, some very good policy engines that allow you to enforce compliance, make sure the images that you're serving are free from vulnerabilities, and make sure that you have all the guardrails in place so an operator can manage this registry and deliver it to their developers in a self-service way.
**Swapnil Bhartiya: Harbor came out of VMware China. So I'm also curious: what was the problem that the team saw at that point? Because there were a lot of projects doing something similar, so what did the team see as unique that led to Harbor being created?**
Michael Michael: So essentially the need there was, there wasn't really a good way for an enterprise to have a hosted registry that has all of the enterprise capabilities they were looking for, while at the same time being able to have full control over the registry. A lot of the cloud providers have their own registry implementation, there's Docker Hub out there, or you can go and purchase something at a very expensive price point. But if you're looking for an open source solution that gives you end-to-end registry capabilities, like your developers can push images and pull images, and then your operators can go and put in a policy that says, “Hey, I want to allow this development team to create a project, but not using more than a terabyte of storage,” none of those solutions had that. So there was a need, a business need, here to develop a registry. And on top of that, we realized that it wasn't just us that had this need; there were a lot of users and enterprises out there in the cloud native ecosystem.
**Swapnil Bhartiya: The project has been out for a while and, based on what you just told me, I'm curious what kind of community the project has built around itself and how it has evolved. We will also talk about the new 2.0 release, but before that, I want to talk about the evolution of the project and the community around it.**
Michael Michael: The project has evolved fairly well over the years; we have increased our contributors. The contribution statistics that the CNCF is collecting show that we're growing our community. We now have maintainers in the project from multiple organizations, and there are actually three organizations that have more than one maintainer on the project. So it's kind of showing you that the ecosystem has picked up. We are adding more and more functionality into Harbor, and we're also making Harbor pluggable. So there are areas of Harbor where we're saying, “Hey, here's the default experience with Harbor, but if you want to extend the experience based on the needs of your users, go ahead and do that, and here's an easy way to implement an interface and do that.” That has really increased the popularity of Harbor. That means two things: we can give you a batteries-included version of Harbor from the community, and then we'll give you the option to extend that to fit the needs of your organization.
And more importantly, if you have made investments in other tooling, you can plug and play Harbor in that. When I say other tooling, I mean things like CI/CD systems; those systems are primarily driving the development life cycle. So for example, you go from source code to container image to something that's stored in a registry like Harbor. The engine that drives the pipeline, that workflow, in a lot of ways is a CI/CD engine. So how do you integrate Harbor well with such systems? We've made that a reality now, and that has made Harbor easier to put in an organization and get it adopted with existing standards and existing investments.
**Swapnil Bhartiya: Now let's talk about the recently announced 2.0. Talk about some of the core features and functionalities that you are excited about in this release.**
Michael Michael: Absolutely, there are like three or four features that really, really excite me. A long time coming is the support for OCI. The OCI is the Open Container Initiative, and essentially it's creating a standardized way to describe what an image looks like. And in Harbor 2.0 we are able to announce that we have full OCI support in Harbor. What does that mean for users? In previous releases of Harbor you could only put into Harbor two types of artifacts: a container image and a Helm chart. That satisfies a huge number of the use cases for customers, but it's not enough. In this new cloud native ecosystem, there are additional things that as a developer, as an operator, as a Kubernetes administrator, you might want to push into a repository like Harbor and have them also take advantage of the policy engine that Harbor provides.
To give you a few examples: CNAB, the cloud native application bundle. You could have OPA files, you could have Singularity images and other OCI-compliant files. So now Harbor tells you, “Hey, do you have any file type out there? If it's OCI compliant, you can push it to Harbor, you can pull it from Harbor.” And then you can add things like quotas and retention policies and immutability policies and replication policies on top of that. The thing about that now is, just by adding a few more types of supported artifacts into Harbor, those types immediately get the full benefit of Harbor in terms of our entire policy engine and the compliance that we offer to administrators of Harbor.
**Swapnil Bhartiya: What does OCI compliance mean for users? Because by being compliant, you have to be more strict about what you can and cannot do. So can you talk about that? And also, how does that affect the existing users; do they have to worry about something, or does it not really matter?**
Michael Michael: Existing users shouldn't have to worry about this; there's full backward compatibility, so they can still push their container images, which are OCI compliant. And if you were using a Helm chart before, you can still push it into ChartMuseum, which is a key component of Harbor, but you can now also push a Helm chart as an OCI file. So for existing users, not much difference: backward compatibility, we still support them. The users are our brothers here; we're not going to forget them. But what it means now is, actually, it's not more strict, this is a lot more open. If you're developing artifacts that are OCI compliant, then they're following the standard way of describing an image and the standard way of actually executing an image at run time; now Kubernetes is also OCI compliant at run time. Then you're getting the benefits of both worlds. You get Harbor as the repository where you can store your images, and you also get a run time engine that's OCI compliant that could potentially execute them. A really great benefit here for the users.
A couple of other features that Harbor 2.0 brings are super, super exciting. The first one is the introduction of Trivy by Aqua Security as the batteries-included built-in scanner in Harbor. Previously, we used Clair as our built-in scanner, and with the Harbor 1.10 release that came out in December 2019, we introduced what we call a pluggable framework. Think of this as a way that security vendors like Aqua and Anchore can come in and create their own implementation of a security scanner to do static analysis on top of images that are deployed in Harbor.
So we still included Clair as a built-in scanner, and then we added additional extension points. Our community and our users love Trivy that much: it has the ability to do static analysis on top of multiple operating systems and multiple application package managers, and it's very well aligned with the vision that we have from a security standpoint in Harbor. So now we added Trivy as the built-in scanner in Harbor; we ship with it now. A great, great achievement, and kudos to the Aqua team for delivering Trivy as an open source project.
**Swapnil Bhartiya: That's the question I was going to ask, so I'll ask it again: what does it mean for users who were using Clair?**
Michael Michael: If you were using Clair before and you want to continue using Clair, by all means; we're going to continue updating Clair, and Clair is already included in Harbor. There are no changes in the experience. However, you may be thinking that Trivy is a better scanner for you, and by the way, you can use them side by side so you can compare the scanning results from each scanner. If Trivy is a better option for you, we enabled you to make that choice. Now, the way Harbor works is that you have a concept of multitenancy, and we isolate a lot of the settings, the policies, and the organization of images on a per-project basis. So what does that mean? You can actually go into Harbor, define a project, and say that for this project you want Clair to be the built-in scanner.
And then Clair will scan all the files in that project. And you can use a second project and say, well, I now want Trivy to be the scanner for this project. And then Trivy will scan your images. And if you have the same set of images, you can compare them and see which scanner works best based on your needs as an organization and as a user. This is phenomenal, right? We give users choice and we give them all the data, but ultimately they have to make the decision on what is the best scanner for them to use based on their scenarios, the type of application images and containers that they use, and the type of libraries they use in those containers.
**Swapnil Bhartiya: Excellent. Before we wrap this up, what kind of roadmap do you have for Harbor? Of course, it's an open source project, so there's no such thing as a fixed date for the next release. But when we look at 2020, what are the major challenges that you want to address? What are the problems you want to solve, and what does the basic roadmap look like?**
Michael Michael: Absolutely. I think that one of the things that we've been trying to do as a maintainer team for Harbor is to create some themes around each release, to kind of put a blueprint down in terms of what it is that we're trying to achieve, and then identify the features that make sense in that theme. And we're not coming up with this in a vacuum: we're talking to users, we're talking to other companies; we had KubeCon events in the past where we had presentations and individuals came to us asking sets of questions. We have existing users that give us feedback. When we gathered all of that, one of the things that we came up with as the next theme for our release is what we call image distribution. So we have three key features that we're trying to tackle in that area.
The first one is: how can Harbor act as a proxy cache? This enables organizations that are deploying Kubernetes environments at the edge, where they want a local Harbor instance to proxy or mirror images from the mothership, like your main data center, and where networking is at a premium. Maybe some of the Kubernetes nodes are not even connected to the network, and they want to be able to pull images from Harbor while Harbor pulls the images from the upstream data center. A very, very important feature. Continuing down the path of image distribution, we're integrating Harbor with both Dragonfly by Alibaba and Project Kraken by Uber to facilitate peer-to-peer distribution mechanisms for your container images. So how can we efficiently distribute images at the edge, in multiple data centers, in branch offices that don't have a good network or a thick network pipe between them? And how can Harbor make sure that the right images land at the right place? Big, big features that we're trying to work on with the community. And obviously we're not doing this alone; we're working with both the Kraken and Dragonfly communities to achieve that.
And last, the next feature that we have is what we call garbage collection without downtime. Traditionally, we do garbage collection, and this is the process where you get to reclaim the files and layers of, basically, container images that are no longer in use.
Think of an organization that pushes and pulls thousands of images every day; they re-tag them, they create new versions. Sometimes you end up with layers that are no longer used. Traditionally, in order for those layers to be reclaimed and the storage freed by the system, the registry needs to be locked down, as in nobody can be pulling or pushing images to it. In Harbor 2.0 we actually made a significant advancement where we track all the layers and the metadata of images in our own database, rather than depending on another tool or product to do it. So this actually paves a road so that in the future we could do garbage collection with zero downtime, where Harbor can identify all the layers that are no longer in use and go reclaim them. And then that will have zero adverse impact or downtime for the users who are pushing and pulling content. Huge, huge features, and that's the kind of thing we're working on for the future.
**Swapnil Bhartiya: Awesome. Thank you, Michael, for explaining things in detail and talking about Harbor. I look forward to talking to you again. Thank you.**
Michael Michael: Absolutely. Thank you so much for the opportunity.
--------------------------------------------------------------------------------
via: https://www.linux.com/audience/developers/whats-new-in-harbor-2-0/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
[1]: https://goharbor.io/
[2]: https://www.tfir.io/author/arnieswap/#:~:text=Swapnil%20Bhartiya%20Swapnil%20Bhartiya%20is%20the%20Founder%20and,audience%20for%20enterprise%20open%20source%20and%20emerging%20technologies.


@@ -0,0 +1,210 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (13 Things To Do After Installing Linux Mint 20)
[#]: via: (https://itsfoss.com/things-to-do-after-installing-linux-mint-20/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
13 Things To Do After Installing Linux Mint 20
======
Linux Mint is easily one of the [best Linux distributions][1] out there, and especially considering the features of [Linux Mint 20][2], I'm sure you will agree with that.
In case you missed our coverage, [Linux Mint 20 is finally available to download][3].
Of course, if you've been using Linux Mint for a while, you probably know what's best for you. But, for new users, there are a few things that you need to do after installing Linux Mint 20 to make your experience better than ever.
### Recommended things to do after installing Linux Mint 20
In this article, I'm going to list some of them to help you improve your Linux Mint 20 experience.
#### 1\. Perform a System Update
![][4]
The first thing you should do right after installation is check for system updates using the update manager, as shown in the image above.
Why? Because you need to build the local cache of available software. It is also a good idea to install all the available software updates.
If you prefer to use the terminal, simply type the following command to perform a system update:
```
sudo apt update && sudo apt upgrade -y
```
#### 2\. Use Timeshift to Create System Snapshots
![][5]
It's always useful to have system snapshots if you want to quickly restore your system state after an accidental change or maybe after a bad update.
Hence, it's super important to configure and create system snapshots using Timeshift if you want the ability to have a backup of your system state from time to time.
You can follow our detailed guide on [using Timeshift][6], if you didn't know it already.
#### 3\. Install Useful Software
Even though you have a bunch of useful pre-installed applications on Linux Mint 20, you probably need to install some essential apps that do not come baked in.
You can simply utilize the software manager or the synaptic package manager to find and install software that you need.
For starters, you can follow our list of [essential Linux apps][7] if you want to explore a variety of tools.
Here's a list of my favorite software that I'd want you to try:
* [VLC media player][8] for video
* [FreeFileSync][9] to sync files
* [Flameshot][10] for screenshots
* [Stacer][11] to optimize and monitor system
* [ActivityWatch][12] to track your screen time and stay productive
#### 4\. Customize the Themes and Icons
![][13]
Of course, this isn't something technically essential unless you want to change the look and feel of Linux Mint 20.
But, it's very [easy to change the theme and icons in Linux Mint][14] 20 without installing anything extra.
You get the option to customize the look in the welcome screen itself. In either case, you just need to head on to “**Themes**” and start customizing.
![][15]
To do that, you can search for it or find it inside the System Settings as shown in the screenshot above.
Depending on what desktop environment you are on, you can also take a look at some of the [best icon themes][16] available.
#### 5\. Enable Redshift to protect your eyes
![][17]
You can search for “[Redshift][18]” on Linux Mint and launch it to start protecting your eyes at night. As you can see in the screenshot above, it will automatically adjust the color temperature of the screen depending on the time.
You may want to enable the autostart option so that it launches automatically when you restart the computer. It may not be the same as the night light feature on [Ubuntu 20.04 LTS][19], but it's good enough if you don't need custom schedules or the ability to tweak the color temperature.
#### 6\. Enable snap (if needed)
Even though Ubuntu is pushing to use Snap more than ever, the Linux Mint team is against it. Hence, it forbids APT from installing snapd by default.

So, you won't have support for snap out of the box. However, sooner or later, you'll realize that some software is packaged only in Snap format. In such cases, you'll have to enable snap support on Mint. Linux Mint 20 blocks the snapd package through the `/etc/apt/preferences.d/nosnap.pref` file, so remove that file before installing:

```
sudo rm /etc/apt/preferences.d/nosnap.pref
sudo apt update
sudo apt install snapd
```
Once you do that, you can follow our guide to know more about [installing and using snaps on Linux][20].
#### 7\. Learn to use Flatpak
By default, Linux Mint comes with support for Flatpak. So, no matter whether you hate using snap or simply prefer to use Flatpak, it's good to have it baked in.
Now, all you have to do is follow our guide on [using Flatpak on Linux][21] to get started!
#### 8\. Clean or Optimize Your System
It's always good to optimize or clean up your system to get rid of unnecessary junk files occupying storage space.
You can quickly remove unwanted packages from your system by typing this in your terminal:
```
sudo apt autoremove
```
In addition to this, you can also follow some of our [tips to free up space on Linux Mint][22].
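If you are not sure where the space is actually going before you clean up, a quick shell one-liner like the following can help. This is generic shell, not a Mint-specific tool, and `/var/log` is only an example target; point it at whatever directory you suspect:

```shell
# List the ten largest first-level entries under a directory,
# largest first. Substitute your home directory or wherever you
# suspect the junk lives.
du -sh /var/log/* 2>/dev/null | sort -rh | head -n 10
```

`sort -rh` sorts the human-readable sizes (`K`, `M`, `G`) that `du -sh` prints, so the biggest offenders end up at the top.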
#### 9\. Using Warpinator to send/receive files across the network
Warpinator is a new addition to Linux Mint 20 to give you the ability to share files across multiple computers connected to a network. Here's how it looks:
![][23]
You can just search for it in the menu and get started!
#### 10\. Using the driver manager
![Driver Manager][24]
The driver manager is an important place to look if you're using Wi-Fi devices that need a driver, NVIDIA or AMD graphics, or other devices that need drivers, if applicable.
You just need to look for the driver manager and launch it. It should detect any proprietary drivers in use, or you can also utilize a DVD to install the driver using the driver manager.
#### 11\. Set up a Firewall
![][25]
For the most part, you might have already secured your home connection. But, if you want to have some specific firewall settings on Linux Mint, you can do that by searching for “Firewall” in the menu.
As you can observe in the screenshot above, you get the ability to have different profiles for home, business, and public. You just need to add the rules and define what is allowed and what's not allowed to access the Internet.
You may read our detailed guide on [using UFW for configuring a firewall][26].
#### 12\. Learn to Manage Startup Apps
If you're an experienced user, you probably know this already. But, new users often forget to manage their startup applications and eventually, the system boot time gets affected.
You just need to search for “**Startup Applications**” from the menu, and you can launch it to find something like this:
![][27]
You can simply toggle the ones that you want to disable, add a delay timer, or remove them completely from the list of startup applications.
#### 13\. Install Essential Apps For Gaming
Of course, if you're into gaming, you might want to read our article for [Gaming on Linux][28] to explore all the options.
But, for starters, you can try installing [GameHub][29], [Steam][30], and [Lutris][31] to play some games.
**Wrapping Up**
That's it, folks! For the most part, you should be good to go if you follow the points above after installing Linux Mint 20 to make the best out of it.
I'm sure there are more things you can do. I'd like to know what you prefer to do right after installing Linux Mint 20. Let me know your thoughts in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/things-to-do-after-installing-linux-mint-20/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/best-linux-distributions/
[2]: https://itsfoss.com/linux-mint-20/
[3]: https://itsfoss.com/linux-mint-20-download/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-system-update.png?ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/07/snapshot-linux-mint-timeshift.jpeg?ssl=1
[6]: https://itsfoss.com/backup-restore-linux-timeshift/
[7]: https://itsfoss.com/essential-linux-applications/
[8]: https://www.videolan.org/vlc/
[9]: https://itsfoss.com/freefilesync/
[10]: https://itsfoss.com/flameshot/
[11]: https://itsfoss.com/optimize-ubuntu-stacer/
[12]: https://itsfoss.com/activitywatch/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-theme.png?ssl=1
[14]: https://itsfoss.com/install-icon-linux-mint/
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-system-settings.png?ssl=1
[16]: https://itsfoss.com/best-icon-themes-ubuntu-16-04/
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-redshift-1.png?ssl=1
[18]: https://itsfoss.com/install-redshift-linux-mint/
[19]: https://itsfoss.com/ubuntu-20-04-release-features/
[20]: https://itsfoss.com/install-snap-linux/
[21]: https://itsfoss.com/flatpak-guide/
[22]: https://itsfoss.com/free-up-space-ubuntu-linux/
[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/mint-20-warpinator-1.png?ssl=1
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2013/12/Additional-Driver-Linux-Mint-16.png?ssl=1
[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-firewall.png?ssl=1
[26]: https://itsfoss.com/set-up-firewall-gufw/
[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/linux-mint-20-startup-applications.png?ssl=1
[28]: https://itsfoss.com/linux-gaming-guide/
[29]: https://itsfoss.com/gamehub/
[30]: https://store.steampowered.com
[31]: https://lutris.net


@@ -0,0 +1,228 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Back up your phone's storage with this Linux utility)
[#]: via: (https://opensource.com/article/20/7/gphoto2-linux)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Back up your phone's storage with this Linux utility
======
Take as many shots as you want; gphoto2 makes transferring photos from
your device to your Linux computer quick and easy.
![A person looking at a phone][1]
One of the great failings of mobile devices is how difficult it can be to transfer data from your device to your computer. Mobile devices have a long history of this. Early mobiles, like Pilot and Handspring PDA devices, required special synchronization software (which you had to do religiously for fear of your device running out of batteries and losing all of your data forever). Old iPods required a platform-specific interface. Modern mobile devices default to sending your data to an online account so you can download it again on your computer.
Good news—if you're running Linux, you can probably interface with your mobile device using the `gphoto2` command. Originally developed as a way to communicate with digital cameras back when a digital camera was just a camera, `gphoto2` can talk to many different kinds of mobile devices now. Don't let the name fool you, either. It can handle all types of files, not just photos. Better yet, it's scriptable, flexible, and a lot more powerful than most GUI interfaces.
If you've ever struggled with finding a comfortable way to sync your data between your computer and mobile, take a look at `gphoto2`.
### Install gPhoto2
Chances are your Linux system already has libgphoto2 installed, because it's a key library for interfacing with mobile devices. You may have to install the command `gphoto2`, however, which is probably available from your repository.
On Fedora or RHEL:
```
$ sudo dnf install gphoto2
```
On Debian or Ubuntu:
```
$ sudo apt install gphoto2
```
### Verify compatibility
To verify that your mobile device is supported, use the `--list-cameras` option piped through `less`:
```
$ gphoto2 --list-cameras | less
```
Or you can pipe it through `grep` to search for a term. For example, if you have a Samsung Galaxy, then use `grep` with case sensitivity turned off with the `-i` switch:
```
$ gphoto2 --list-cameras | grep -i galaxy
  "Samsung Galaxy models (MTP)"
  "Samsung Galaxy models (MTP+ADB)"
  "Samsung Galaxy models Kies mode"
```
This confirms that Samsung Galaxy devices are supported through MTP and MTP with ADB.
If you can't find your device listed, you can still try using `gphoto2` on the off chance that your device is actually something on the list masquerading as a different brand.
### Find your mobile device
To use gPhoto2, you first have to have a mobile device plugged into your computer, set to MTP mode, and you probably need to give your computer permission to interact with it. This usually requires physical interaction with your device, specifically pressing a button in the UI to permit its filesystem to be accessed by the computer it's just been attached to.
![Screenshot of allow access message][2]
If you don't give your computer access to your mobile, then gPhoto2 detects your device, but it isn't able to interact with it.
To ensure your computer detects the device you've attached, use the `--auto-detect` option:
```
$ gphoto2 --auto-detect
Model                       Port
---------------------------------------
Samsung Galaxy models (MTP) usb:002,010
```
If your device isn't detected, check your cables first, and then check that your device is configured to interface over MTP or ADB, or whatever protocol gPhoto2 supports for your device, as shown in the output of `--list-cameras`.
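Because the `--auto-detect` output is a fixed table, it is also easy to script around. Here is a minimal sketch that pulls the port of the first detected device out of that output; the sample output is inlined so the snippet is self-contained, but in practice you would pipe the real `gphoto2 --auto-detect` output into `awk`:

```shell
# Sample of what `gphoto2 --auto-detect` prints (inlined for illustration).
sample='Model                       Port
----------------------------------------
Samsung Galaxy models (MTP) usb:002,010'

# Skip the two header lines, then print the last field (the port) of
# the first device row and stop.
port=$(printf '%s\n' "$sample" | awk 'NR > 2 { print $NF; exit }')
echo "$port"   # usb:002,010
```

A port value extracted this way can then be passed to other gPhoto2 invocations that need to address a specific device.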
### Query your device for features
With modern devices, there's usually a plethora of potential features, but not all features are supported. You can find out for sure with the `--abilities` option, which I find rather intuitive.
```
$ gphoto2 --abilities
Abilities for camera            : Samsung Galaxy models (MTP)
Serial port support             : no
USB support                     : yes
Capture choices                 : Capture not supported by driver
Configuration support           : no
Delete selected files on camera : yes
Delete all files on camera      : no
File preview (thumbnail) support: no
File upload support             : yes
```
There's no need to specify what device you're querying as long as you only have one device attached. If you have attached more than one device that gPhoto2 can interact with, though, you can specify the device by port, camera model, or usbid.
### Interacting with your device
If your device supports capture, then you can grab media through your camera from your computer. For instance, to capture an image:
```
$ gphoto2 --capture-image
```
To capture an image and immediately transfer it to the computer you're on:
```
$ gphoto2 --capture-image-and-download
```
You can also capture video and sound. If you have more than one camera attached, you can specify which device you want to use by port, camera model, or usbid:
```
$ gphoto2 --camera "Samsung Galaxy models (MTP)" \
  --capture-image-and-download
```
### Files and folders
To interact with files on your device intelligently, you need to understand the structure of the filesystem being exposed to gPhoto2.
You can view available folders with the `--list-folders` option:
```
$ gphoto2 --list-folders
There are 2 folders in folder '/'.                                            
 - store_00010001
 - store_00020002
There are 0 folders in folder '/store_00010001'.
There are 0 folders in folder '/store_00020002'.
```
Each of these folders represents a storage destination on the device. In this example, `store_00010001` is the internal storage and `store_00020002` is an SD card. Your device may be structured differently.
### Getting files
Now that you know the folder layout of your device, you can ingest photos from your device. There are many different options you can use, depending on what you want to take from the device.
You can get a specific file, providing you know the full path:
```
$ gphoto2 --get-file IMG_0001.jpg --folder /store_00010001/myphotos
```
You can get all files at once:
```
$ gphoto2 --get-all-files --folder /store_00010001/myfiles
```
You can get just audio files:
```
$ gphoto2 --get-all-audio-data --folder /store_00010001/mysounds
```
There are other options, too, and most of them depend on what your device, and the protocol you're using, support.
### Uploading files
Now that you know your potential target folders, you can upload files from your computer to your device. For example, assuming there's a file called `example.epub` in your current directory, you can send the file to your device with the `--upload-file` option combined with the `--folder` option to specify which storage location you want to upload to:
```
$ gphoto2 --upload-file example.epub \
    --folder store_00010001
```
You can make a directory on your device, should you prefer to upload several files to a consolidated location:
```
$ gphoto2 --mkdir books \
    --folder store_00010001
$ gphoto2 --upload-file *.epub \
    --folder store_00010001/books
```
### Listing files
To see files uploaded to your device, use the `--list-files` option:
```
$ gphoto2 --list-files --folder /store_00010001
There is 1 file in folder '/store_00010001'
#1     example.epub 17713 KB application/x-unknown
$ gphoto2 --list-files --folder /store_00010001/books
There are 2 files in folder '/store_00010001/books'
#1    example0.epub 17713 KB application/x-unknown
#2    example1.epub 12264 KB application/x-unknown
[...]
```
### Exploring your options
Much of gPhoto2's power depends on your device, so your experience will be different than anyone else's. There are many operations listed in `gphoto2 --help` for you to explore. Use gPhoto2 and never struggle with transferring files from your device to your computer ever again!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/gphoto2-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://opensource.com/sites/default/files/uploads/gphoto2-mtp-allow.jpg (Screenshot of allow access message)

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Customizing Bash)
[#]: via: (https://fedoramagazine.org/customizing-bash/)
[#]: author: (Stephen Snow https://fedoramagazine.org/author/jakfrost/)
Customizing Bash
======
![][1]
The outermost layer of your operating system, the part you interact with, is called the [shell][2]. Fedora comes with several preinstalled shells. Shells can be either graphical or text-based. In documentation, you will often see the acronyms GUI (Graphical User Interface) and CLI (Command-Line Interface) used to distinguish between graphical and text-based shells/interfaces. Other [GUI][3] and [CLI][4] shells can be used, but [GNOME][5] is Fedora's default GUI and [Bash][6] is its default CLI.
The remainder of this article will cover recommended dotfile practices for the Bash CLI.
### Bash overview
From the Bash reference manual:
> At its base, a shell is simply a macro processor that executes commands. The term macro processor means functionality where text and symbols are expanded to create larger expressions.
>
> Reference Documentation for Bash
> Edition 5.0, for Bash Version 5.0.
> May 2019
In addition to helping the user start and interact with other programs, the Bash shell also includes several built-in commands and keywords. Bash's built-in functionality is extensive enough that it is considered a [high-level programming language][7] in its own right. Several of Bash's keywords and operators resemble those of [the C programming language][8].
Bash can be invoked in either interactive or non-interactive mode. Bash's interactive mode is the typical terminal/command-line interface that most people are familiar with. [GNOME Terminal][9], by default, launches Bash in interactive mode. An example of when Bash runs in non-interactive mode is when commands and data are [piped][10] to it from a file or shell script. Other modes Bash can operate in include: login, non-login, remote, POSIX, unix sh, restricted, and with a different UID/GID than the user. Various combinations of these modes are possible. For example, interactive+restricted+POSIX or non-interactive+non-login+remote. Which startup files Bash will process depends on the combination of modes that are requested when it is invoked. Understanding these modes of operation is necessary when modifying the startup files.
According to the Bash reference manual, Bash …
> 1\. Reads its input from a file …, from a string supplied as an argument to the -c invocation option …, or from the user's terminal.
>
> 2\. Breaks the input into words and operators, obeying [its] quoting rules. … These tokens are separated by metacharacters. Alias expansion is performed by this step.
>
> 3\. Parses the tokens into simple and compound commands.
>
> 4\. Performs the various shell expansions …, breaking the expanded tokens into lists of filenames … and commands and arguments.
>
> 5\. Performs any necessary redirections … and removes the redirection operators and their operands from the argument list.
>
> 6\. Executes the command.
>
> 7\. Optionally waits for the command to complete and collects its exit status.
>
> Reference Documentation for Bash
> Edition 5.0, for Bash Version 5.0.
> May 2019
When a user starts a terminal emulator to access the command line, an interactive shell session is started. GNOME Terminal, by default, launches the user's shell in non-login mode. Whether GNOME Terminal launches the shell in login or non-login mode can be configured under _Edit_ → _Preferences_ → _Profiles_ → _Command_. Login mode can also be requested by passing the _\--login_ flag to Bash on startup. Also note that Bash's _login_ and _non-interactive_ modes are not exclusive. It is possible to run Bash in both _login_ and _non-interactive_ mode at the same time.
### Invoking Bash
Unless it is passed the _\--noprofile_ flag, a Bash login shell will read and execute the commands found in certain initialization files. The first of those files is _/etc/profile_ if it exists, followed by one of _~/.bash_profile_, _~/.bash_login_, or _~/.profile_, searched in that order. When the user exits the login shell, or if the script calls the _exit_ built-in in the case of a non-interactive login shell, Bash will read and execute the commands found in _~/.bash_logout_ followed by _/etc/bash_logout_ if it exists. The file _/etc/profile_ will normally source _/etc/bashrc_, reading and executing commands found there, then search through _/etc/profile.d_ for any files with a _.sh_ extension to read and execute. As well, the file _~/.bash_profile_ will normally source the file _~/.bashrc_. Both _/etc/bashrc_ and _~/.bashrc_ have checks to prevent double sourcing.
An interactive shell that is not a login shell will source the _~/.bashrc_ file when it is first invoked. This is the usual type of shell a user will enter when opening a terminal on Fedora. When Bash is started in non-interactive mode, as it is when running a shell script, it will look for the _BASH_ENV_ variable in the environment. If it is found, Bash will expand the value and use the expanded value as the name of a file to read and execute. Bash behaves just as if the following command were executed:
```
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
```
It is important to note that the value of the _PATH_ variable is not used to search for the filename.
### Important user-specific dotfiles
Bash's best-known user dotfile is _~/.bashrc_. Most user customization is done by editing this file. "Most user customization" may be a stretch, since there are reasons to modify all of the mentioned files, as well as other files that have not been mentioned. Bash's environment is designed to be highly customizable in order to suit the needs of many different users with many different tastes.
![][11]
When a Bash login shell exits cleanly, _~/.bash_logout_ and then _/etc/bash_logout_ will be called if they exist. The next diagram is a sequence diagram showing the process Bash follows when being invoked as an interactive shell. The below sequence is followed, for example, when the user opens a terminal emulator from their desktop environment.
![][12]
Armed with the knowledge of how Bash behaves under different invocation methods, it becomes apparent that there are only a few typical invocation methods to be most concerned with. These are the non-interactive and interactive login shell, and the non-interactive and interactive non-login shell. If global environment customizations are needed, then the desired settings should be placed in a uniquely-named file with a _.sh_ extension (_custom.sh_, for example) and that file should be placed in the _/etc/profile.d_ directory.
The non-interactive, non-login invocation method needs special attention. This invocation method causes Bash to check the _BASH_ENV_ variable. If this variable is defined, the file it references will be sourced. Note that the values stored in the _PATH_ environment variable are not utilized when processing _BASH_ENV_. So it must contain the full path to the file to be sourced. For example, if someone wanted the settings from their _~/.bashrc_ file to be available to shell scripts they run non-interactively, they could place something like the following in a file named _/etc/profile.d/custom.sh_:
```
# custom.sh
.
.
.
#If Fedora Workstation
BASH_ENV="/home/username/.bashrc"
.
.
.
#If Fedora Silverblue Workstation
BASH_ENV="/var/home/username/.bashrc"
export BASH_ENV
```
The above profile drop-in script will cause the user's _~/.bashrc_ file to be sourced just before every shell script is executed.
Users typically customize their system environment so that it will better fit their work habits and preferences. An example of the sort of customization that a user can make is an alias. Commands frequently run with the same set of starting parameters are good candidates for aliases. Some example aliases are provided in the _~/.bashrc_ file shown below.
```
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
.
.
.
# User specific aliases and functions
alias ls='ls -hF --color=auto'
alias la='ls -ahF --color=auto'
# make the dir command work kinda like in windows (long format)
alias dir='ls --color=auto --format=long'
# make grep highlight results using color
alias grep='grep --color=auto'
```
Aliases are a way to customize various commands on your system. They can make commands more convenient to use and reduce your keystrokes. Per-user aliases are often configured in the user's _~/.bashrc_ file.
If you find you are looking back through your command-line history a lot, you may want to configure your history settings. Per-user history options can also be configured in _~/.bashrc_. For example, if you have a habit of using multiple terminals at once, you might want to enable the _histappend_ option. Bash-specific shell options that are [boolean][13] in nature (take either _on_ or _off_ as a value) are typically enabled or disabled using the _shopt_ built-in command. Bash settings that take a more complex value (for example, _HISTTIMEFORMAT_) tend to be configured by assigning the value to an environment variable. Customizing Bash with both shell options and environment variables is demonstrated below.
```
# Configure Bash History
# Expand dir env vars on tab and set histappend
shopt -s direxpand histappend
# - ignoreboth = ignorespace and ignoredup
HISTCONTROL='ignoreboth'
# Controls the format of the time in output of `history`
HISTTIMEFORMAT="[%F %T] "
# Infinite history
# NB: on newer bash, anything < 0 is the supported way, but on CentOS/RHEL
# at least, only this works
HISTSIZE=
HISTFILESIZE=
# or for those of us on newer Bash
HISTSIZE=-1
HISTFILESIZE=-1
```
The _direxpand_ option shown in the example above will cause Bash to replace directory names with the results of word expansion when performing filename completion. This will change the contents of the readline editing buffer, so what you typed is masked by what the completion expands it to.
The _HISTCONTROL_ variable is used to enable or disable some filtering options for the command history. Duplicate lines, lines with leading blank spaces, or both can be filtered from the command history by configuring this setting. To quote Dusty Mabe, the engineer I got the tip from:
> _ignoredup_ makes history not log duplicate entries (if you are running a command over and over). _ignorespace_ ignores entries with a space in the front, which is useful if you are setting an environment variable with a secret or running a command with a secret that you don't want logged to disk. _ignoreboth_ does both.
>
> Dusty Mabe, Red Hat Principal Software Engineer, June 19, 2020
For users who do a lot of work on the command line, Bash has the _CDPATH_ environment variable. If _CDPATH_ is configured with a list of directories to search, the _cd_ command, when provided a relative path as its first argument, will check all the listed directories in order for a matching subdirectory and change to the first one found.
```
# .bash_profile
# set CDPATH
CDPATH="/var/home/username/favdir1:/var/home/username/favdir2:/var/home/username/favdir3"
# or could look like this
CDPATH="/:~:/var:~/favdir1:~/favdir2:~/favdir3"
export CDPATH
```
_CDPATH_ should be updated the same way _PATH_ is typically updated: by referencing itself on the right-hand side of the assignment to preserve the previous values.
```
# .bash_profile
# set CDPATH
CDPATH="/var/home/username/favdir1:/var/home/username/favdir2:/var/home/username/favdir3"
# or could look like this
CDPATH="/:~:/var:~/favdir1:~/favdir2:~/favdir3"
CDPATH="$CDPATH:~/favdir4:~/favdir5"
export CDPATH
```
_PATH_ is another very important variable. It is the search path for commands on the system. Be aware that some applications require that their own directories be included in the _PATH_ variable to function properly. As with _CDPATH_, appending new values to _PATH_ can be done by referencing the old values on the right hand side of the assignment. If you want to prepend the new values instead, simply place the old values (_$PATH_) at the end of the list. Note that on Fedora, the list values are separated with the colon character (**:**).
```
# .bash_profile
# Add PATH values to the PATH Environment Variable
PATH="$PATH:~/bin:~:/usr/bin:/bin:~/jdk-13.0.2:~/apache-maven-3.6.3"
export PATH
```
The command prompt is another popular candidate for customization. The command prompt has seven customizable parameters:
> **PROMPT_COMMAND** If set, the value is executed as a command prior to issuing each primary prompt ($PS1).
>
> **PROMPT_DIRTRIM** If set to a number greater than zero, the value is used as the number of trailing directory components to retain when expanding the \w and \W prompt string escapes. Characters removed are replaced with an ellipsis.
>
> **PS0** The value of this parameter is expanded like _PS1_ and displayed by interactive shells after reading a command and before the command is executed.
>
> **PS1** The primary prompt string. The default value is **\s-\v\$** . …
>
> **PS2** The secondary prompt string. The default is _**>**_. _PS2_ is expanded in the same way as _PS1_ before being displayed.
>
> **PS3** The value of this parameter is used as the prompt for the _select_ command. If this variable is not set, the _select_ command prompts with **#?**
>
> **PS4** The value of this parameter is expanded like _PS1_ and the expanded value is the prompt printed before the command line is echoed when the _-x_ option is set. The first character of the expanded value is replicated multiple times, as necessary, to indicate multiple levels of indirection. The default is _**+**_ .
>
> Reference Documentation for Bash
> Edition 5.0, for Bash Version 5.0.
> May 2019
An entire article could be devoted to this one aspect of Bash. There are copious quantities of information and examples available. Some example dotfiles, including prompt reconfiguration, are provided in a repository linked at the end of this article. Feel free to use and experiment with the examples provided in the repository.
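As a hedged starting point, the fragment below shows what a small prompt customization in _~/.bashrc_ might look like. The specific values are illustrative examples, not recommendations taken from the repository mentioned later:

```shell
# Illustrative prompt settings for ~/.bashrc; the values here are
# examples -- adjust them to taste.
PROMPT_DIRTRIM=2        # trim \w to the last two directory components
PS1='[\u@\h \w]\$ '     # user, host, trimmed working directory
PS2='> '                # continuation prompt for multi-line commands
```

After saving the change, open a new terminal or run `source ~/.bashrc` to see the new prompt.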
### Conclusion
Now that you are armed with a little knowledge about how Bash works, feel free to modify your Bash dotfiles to suit your own needs and preferences. Pretty up your prompt. Go nuts making aliases. Or otherwise make your computer truly yours. Examine the content of _/etc/profile_, _/etc/bashrc_, and _/etc/profile.d/_ for inspiration.
Some comments about terminal emulators are fitting here. There are ways to set up your favorite terminal to behave exactly as you want. You may have already realized this, but often this modification is done with a … wait for it … dotfile in the user's home directory. The terminal emulator can also be started as a login session, and some people always use login sessions. How you use your terminal, and your computer, will have a bearing on how you modify (or not) your dotfiles.
If you're curious about what type of session you are in at the command line, the following script can help you determine that.
```
#!/bin/bash
case "$-" in
(*i*) echo This shell is interactive ;;
(*) echo This shell is not interactive ;;
esac
```
Place the above in a file, mark it executable, and run it to see what type of shell you are in. _$-_ is a variable in Bash that contains the letter **i** when the shell is interactive. Alternatively, you could just echo the $- variable and inspect the output for the presence of the **i** flag:
```
$ echo $-
```
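The same idea extends to login detection. A sketch combining both checks, using the _shopt_ built-in to query the read-only _login_shell_ option:

```shell
#!/bin/bash
# Report both properties of the current shell: interactivity and login status.
case "$-" in
  (*i*) echo "interactive" ;;
  (*)   echo "non-interactive" ;;
esac
# shopt -q exits 0 when the named option is set, non-zero otherwise.
if shopt -q login_shell; then
  echo "login shell"
else
  echo "non-login shell"
fi
```

Run directly as a script, it will typically report a non-interactive, non-login shell; invoking it with `bash --login` demonstrates the login path.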
### Reference information
The below references can be consulted for more information and examples. The Bash man page is also a great source of information. Note that your local man page is guaranteed to document the features of the version of Bash you are running, whereas information found online can sometimes be either too old (outdated) or too new (not yet available on your system).
<https://opensource.com/tags/command-line>
<https://opensource.com/downloads/bash-cheat-sheet>
You will have to enter a valid email address at the above site, or sign up, to download from it.
<https://opensource.com/article/19/12/bash-script-template>
Community members who provided contributions to this article in the form of example dotfiles, tips, and other script files:
  * Micah Abbott, Principal Quality Engineer
  * John Lebon, Principal Software Engineer
  * Dusty Mabe, Principal Software Engineer
  * Colin Walters, Senior Principal Software Engineer
A repository of example dotfiles and scripts can be found here:
<https://github.com/TheOneandOnlyJakfrost/bash-article-repo>
Please carefully review the information provided in the above repository. Some of it may be outdated. There are many examples of not only dotfiles for Bash, but also custom scripts and pet container setups for development. I recommend starting with John Lebon's dotfiles. They are some of the most detailed I have seen and contain very good descriptions throughout. Enjoy!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/customizing-bash/
作者:[Stephen Snow][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/jakfrost/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/05/bashenvironment-816x346.png
[2]: https://en.wikipedia.org/wiki/Shell_(computing)
[3]: https://fedoramagazine.org/fedoras-gaggle-of-desktops/
[4]: https://en.wikipedia.org/wiki/Comparison_of_command_shells
[5]: https://en.wikipedia.org/wiki/GNOME
[6]: https://en.wikipedia.org/wiki/Bash_(Unix_shell)
[7]: https://en.wikipedia.org/wiki/High-level_programming_language
[8]: https://en.wikipedia.org/wiki/C_(programming_language)
[9]: https://en.wikipedia.org/wiki/GNOME_Terminal
[10]: https://en.wikipedia.org/wiki/Pipeline_(Unix)
[11]: https://fedoramagazine.org/wp-content/uploads/2020/06/bash-initialization-1-1024x711.png
[12]: https://fedoramagazine.org/wp-content/uploads/2020/06/bash-initialization-2-1024x544.png
[13]: https://en.wikipedia.org/wiki/Boolean_data_type

[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to install Java on a Mac)
[#]: via: (https://opensource.com/article/20/7/install-java-mac)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
How to install Java on a Mac
======
macOS users can run the open source release of Java as well as newer
frameworks for cloud-native development.
![Coffee and laptop][1]
In late May, [Java][2] celebrated its 25th anniversary, and to commemorate the occasion, developers around the world used the hashtag [#MovedByJava][3] to share their achievements, memories, and insights with the programming language.
> My timeline:
>
> * 1999 Started learning Java
> * 2007 Created [@grailsframework][4]
> * 2008 Cofounded G2One
> * 2009 Acquired by SpringSource
> * 2015 Joined [@ObjectComputing][5]
> * 2018 Created [@micronautfw][6] / won [@groundbreakers][7] award
> * 2019 Became [@Java_Champions][8]
>
> Thank u [@java][9]![#MovedByJava][10]
>
> — Graeme Rocher (@graemerocher) [May 21, 2020][11]
Over the years, many technologies and trends have contributed to the Java stack's development, deployment, and ability to run multiple applications on standard application servers. Building container images for [Kubernetes][12] enables Java developers to package and deploy [microservices][13] in multiple cloud environments rather than running several application servers on virtual machines.
![Timeline of technology contributions to Java][14]
(Daniel Oh, [CC BY-SA 4.0][15])
With these technologies, the Java application stack has been optimized to run larger heaps and highly dynamic frameworks that can make decisions at runtime. Unfortunately, those efforts weren't good enough to make Java the preferred programming language for developers to implement cloud-native Java applications for serverless and event-driven platforms. Other languages filled in the space, particularly JavaScript, Python, and Go, with Rust and WebAssembly offering new alternatives.
Despite this competition, [cloud-native Java][16] is making an impact on cloud-centric software development. Luckily, new Java frameworks (e.g., [Quarkus][17], [Micronaut][18], and [Helidon][19]) have recently broken through the challenges by offering smaller applications that compile faster and are designed with distributed systems in mind.
### How to install Java on macOS
This future for Java development starts with more people installing and using Java. So I will walk through installing and getting started with the Java development environment on macOS. (If you are running Linux, please see Seth Kenlon's article [_How to install Java on Linux_][20].)
#### Install OpenJDK from a Brew repository
Homebrew is the de-facto standard package manager for macOS. If you haven't installed it yet, Matthew Broberg's [_Introduction to Homebrew_][21] walks you through the steps.
Once you have Homebrew on your Mac, use the `brew` command to install [OpenJDK][22], which is the open source way to write Java applications:
```
$ brew cask install java
```
In just a few minutes, you will see:
```
🍺 java was successfully installed!
```
Confirm that OpenJDK installed correctly with `$ java -version`:
```
$ java -version
openjdk version "14.0.1" 2020-04-14
OpenJDK Runtime Environment (build 14.0.1+7)
OpenJDK 64-Bit Server VM (build 14.0.1+7, mixed mode, sharing)
```
The output confirms OpenJDK 14 (the latest version, as of this writing) is installed.
#### Install OpenJDK from a binary
If you are not a fan of package management and prefer managing Java yourself, there's always the option to download and install it manually.
I found a download link to the latest version on the OpenJDK homepage. Download the OpenJDK 14 binary:
```
$ wget https://download.java.net/java/GA/jdk14.0.1/664493ef4a6946b186ff29eb326336a2/7/GPL/openjdk-14.0.1_osx-x64_bin.tar.gz
```
Move to the directory where you downloaded the binary file and extract it:
```
$ tar -xf openjdk-14.0.1_osx-x64_bin.tar.gz
```
Next, add Java to your PATH:
```
$ export PATH=$PWD/jdk-14.0.1.jdk/Contents/Home/bin:$PATH
```
Also, add this `export` line to your dotfiles (`.bash_profile` or `.zshrc`, depending on which shell you are running) so the setting persists across sessions. You can learn more about configuring the `$PATH` variable in [_How to set your $PATH variable in Linux_][23].
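A hedged sketch of what that dotfile addition might look like. The JDK path below matches the archive extracted above and is an assumption; adjust it to wherever you unpacked the JDK (`JAVA_HOME` is a common convention, not something the extraction sets for you):

```shell
# Append to ~/.bash_profile or ~/.zshrc; the JDK location is an assumption
# based on extracting the archive into your home directory.
export JAVA_HOME="$HOME/jdk-14.0.1.jdk/Contents/Home"
export PATH="$JAVA_HOME/bin:$PATH"
```

Defining `JAVA_HOME` separately also helps tools like Maven and Gradle find the JDK.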
Finally, verify your OpenJDK 14 installation:
```
$ java -version
openjdk version "14.0.1" 2020-04-14
OpenJDK Runtime Environment (build 14.0.1+7)
OpenJDK 64-Bit Server VM (build 14.0.1+7, mixed mode, sharing)
```
### Write your first Java microservice on a Mac
Now you are ready to develop a cloud-native Java application with the OpenJDK stack on macOS. In this how-to, you'll create a new Java project on [Quarkus][17] that exposes a REST API using dependency injection.
You will need [Maven][24], a popular Java dependency manager, to start. [Install][25] it from Maven's website or using Homebrew with `brew install maven`.
Execute the following Maven commands to configure a Quarkus project and create a simple web app:
```
$ mvn io.quarkus:quarkus-maven-plugin:1.5.1.Final:create \
    -DprojectGroupId=com.example \
    -DprojectArtifactId=getting-started \
    -DclassName="com.example.GreetingResource" \
    -Dpath="/hello"
$ cd getting-started
```
Run the application:
```
$ ./mvnw quarkus:dev
```
You will see this output when the application starts:
```
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2020-06-13 00:03:06,413 INFO  [io.quarkus] (Quarkus Main Thread) getting-started 1.0-SNAPSHOT on JVM (powered by Quarkus 1.5.1.Final) started in 1.125s. Listening on: http://0.0.0.0:8080
2020-06-13 00:03:06,416 INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2020-06-13 00:03:06,416 INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, resteasy]
```
Access the REST endpoint using the `curl` command:
```
$ curl -w "\n" http://localhost:8080/hello
hello
```
Congratulations! You have quickly gone from not even having Java installed to building your first web application using Maven and Quarkus.
### What to do next with Java
Java is a mature programming language that continues to grow in popularity through new frameworks designed for cloud-native application development.
If you are on the path toward building that future, you may be interested in more practical Quarkus development lessons or other modern frameworks. No matter what you're building, the next step is configuring your text editor. Read my tutorial on [_Writing Java with Quarkus in VS Code_][26], then explore what else you can do.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/install-java-mac
作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://opensource.com/resources/java
[3]: https://twitter.com/search?q=%23MovedByJava&src=typed_query
[4]: https://twitter.com/grailsframework?ref_src=twsrc%5Etfw
[5]: https://twitter.com/ObjectComputing?ref_src=twsrc%5Etfw
[6]: https://twitter.com/micronautfw?ref_src=twsrc%5Etfw
[7]: https://twitter.com/groundbreakers?ref_src=twsrc%5Etfw
[8]: https://twitter.com/Java_Champions?ref_src=twsrc%5Etfw
[9]: https://twitter.com/java?ref_src=twsrc%5Etfw
[10]: https://twitter.com/hashtag/MovedByJava?src=hash&ref_src=twsrc%5Etfw
[11]: https://twitter.com/graemerocher/status/1263484918157410304?ref_src=twsrc%5Etfw
[12]: https://opensource.com/resources/what-is-kubernetes
[13]: https://opensource.com/resources/what-are-microservices
[14]: https://opensource.com/sites/default/files/uploads/javatimeline.png (Timeline of technology contributions to Java)
[15]: https://creativecommons.org/licenses/by-sa/4.0/
[16]: https://opensource.com/article/20/1/cloud-native-java
[17]: https://quarkus.io/
[18]: https://micronaut.io/
[19]: https://helidon.io/#/
[20]: https://opensource.com/article/19/11/install-java-linux
[21]: https://opensource.com/article/20/6/homebrew-mac
[22]: https://openjdk.java.net/
[23]: https://opensource.com/article/17/6/set-path-linux
[24]: https://maven.apache.org/index.html
[25]: https://maven.apache.org/install.html
[26]: https://opensource.com/article/20/4/java-quarkus-vs-code

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Install a Kubernetes load balancer on your Raspberry Pi homelab with MetalLB)
[#]: via: (https://opensource.com/article/20/7/homelab-metallb)
[#]: author: (Chris Collins https://opensource.com/users/clcollins)
Install a Kubernetes load balancer on your Raspberry Pi homelab with MetalLB
======
Assign real IPs from your home network to services running in your
cluster and access them from other hosts on your network.
![Science lab with beakers][1]
Kubernetes is designed to integrate with major cloud providers' load balancers to provide public IP addresses and direct traffic into a cluster. Some professional network equipment manufacturers also offer controllers to integrate their physical load-balancing products into Kubernetes installations in private data centers. For an enthusiast running a Kubernetes cluster at home, however, neither of these solutions is very helpful.
Kubernetes does not have a built-in network load-balancer implementation. A bare-metal cluster, such as a [Kubernetes cluster installed on Raspberry Pis for a private-cloud homelab][2], or really any cluster deployed outside a public cloud and lacking expensive professional hardware, needs another solution. [MetalLB][3] fulfills this niche, both for enthusiasts and large-scale deployments.
MetalLB is a network load balancer and can expose cluster services on a dedicated IP address on the network, allowing external clients to connect to services inside the Kubernetes cluster. It does this via either [layer 2 (data link)][4] using [Address Resolution Protocol][5] (ARP) or [layer 4 (transport)][6] using [Border Gateway Protocol][7] (BGP).
While Kubernetes does have something called [Ingress][8], which allows HTTP and HTTPS traffic to be exposed outside the cluster, it supports _only_ HTTP or HTTPS traffic, while MetalLB can support any network traffic. It is more of an apples-to-oranges comparison, however, because MetalLB provides resolution of an unassigned IP address to a particular cluster node and assigns that IP to a Service, while Ingress uses a specific IP address and internally routes HTTP or HTTPS traffic to a Service or Services based on routing rules.
MetalLB can be set up in just a few steps, works especially well in private homelab clusters, and within Kubernetes clusters, it behaves the same as public cloud load-balancer integrations. This is great for education purposes (i.e., learning how the technology works) and makes it easier to "lift-and-shift" workloads between on-premises and cloud environments.
### ARP vs. BGP
As mentioned, MetalLB works via either ARP or BGP to resolve IP addresses to specific hosts. In simplified terms, this means when a client attempts to connect to a specific IP, it will ask "which host has this IP?" and the response will point it to the correct host (i.e., the host's MAC address).
With ARP, the request is broadcast to the entire network, and a host that knows which MAC address has that IP address responds to the request; in this case, MetalLB's answer directs the client to the correct node.
With BGP, each "peer" maintains a table of routing information for the IPs and hosts it knows about, and it advertises this information to its peers. When configured for BGP, MetalLB peers each of the nodes in the cluster with the network's router, allowing the router to direct clients to the correct host.
In both instances, once the traffic has arrived at a host, Kubernetes takes over directing the traffic to the correct pods.
For the following exercise, you'll use ARP. Consumer-grade routers don't (at least easily) support BGP, and even higher-end consumer or professional routers that do support BGP can be difficult to set up. ARP, especially in a small home network, can be just as useful and requires no configuration on the network to work. It is considerably easier to implement.
### Install MetalLB
Installing MetalLB is straightforward. Download or copy two manifests from [MetalLB's GitHub repository][9] and apply them to Kubernetes. These two manifests create the namespace MetalLB's components will be deployed to and the components themselves: the MetalLB controller, a "speaker" daemonset, and service accounts.
#### Install the components
After you create the components, you generate a random secret to allow encrypted communication between the speakers (i.e., the components that "speak" the protocol to make services reachable).
(Note: These steps are also available on MetalLB's website.)
The two manifests with the required MetalLB components are:
* <https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml>
* <https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml>
They can be downloaded and applied to the Kubernetes cluster using the `kubectl apply` command, either locally or directly from the web:
```
# Verify the contents of the files, then download and pipe them to kubectl with curl
# (output omitted)
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
```
After applying the manifests, create a random Kubernetes secret for the speakers to use for encrypted communications:
```
# Create a secret for encrypted speaker communications
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
```
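The argument to `openssl rand -base64 128` is a byte count: the command emits 128 random bytes, base64-encoded. A quick sanity check of that claim, assuming `openssl` is installed locally:

```shell
# 128 random bytes base64-encode to 4 * ceil(128/3) = 172 characters;
# openssl wraps its output, so strip newlines before counting.
key=$(openssl rand -base64 128)
printf '%s' "$key" | tr -d '\n' | wc -c
```

Any sufficiently random string works as the memberlist secret; `openssl` is just a convenient generator.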
Completing the steps above will create and start all the MetalLB components, but they will not do anything until they are configured. To configure MetalLB, create a configMap that describes the pool of IP addresses the load balancer will use.
#### Configure the address pools
MetalLB needs one last bit of setup: a configMap with details of the addresses it can assign to the Kubernetes Service LoadBalancers. However, there is a small consideration. The addresses in use do not need to be bound to specific hosts in the network, but they must be free for MetalLB to use and not be assigned to other hosts.
In my home network, IP addresses are assigned by the DHCP server my router is running. This DHCP server should not attempt to assign the addresses that MetalLB will use. Most consumer routers allow you to decide how large your subnet will be and can be configured to assign only a subset of IPs in that subnet to hosts via DHCP.
In my network, I am using the subnet `192.168.2.0/24`, and I decided to give half the IPs to MetalLB. The first half of the subnet consists of IP addresses from `192.168.2.1` to `192.168.2.126` and can be represented as the `192.168.2.0/25` subnet. The second half can similarly be represented as the `192.168.2.128/25` subnet. Each half contains 126 usable IPs, more than enough for the hosts and Kubernetes services. Make sure to decide on subnets appropriate to your own network and configure your router and MetalLB appropriately.
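If you want to sanity-check the subnet arithmetic for your own network, the usable-host count for a given prefix length is quick to compute in the shell:

```shell
# Usable host IPs in a subnet: 2^(32 - prefix) addresses, minus the
# network and broadcast addresses.
prefix=25
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "A /$prefix subnet has $hosts usable host IPs"
```

For a `/25`, this prints 126, matching the split described above.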
After configuring the router to ignore addresses in the `192.168.2.128/25` subnet (or whatever subnet you are using), create a configMap to tell MetalLB to use that pool of addresses:
```
# Create the config map
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: address-pool-1
      protocol: layer2
      addresses:
      - 192.168.2.128/25
EOF
```
The example configMap above uses [CIDR][10] notation, but the list of addresses can also be specified as a range:
```
addresses:
 - 192.168.2.128-192.168.2.254
```
Once the configMap is created, MetalLB will be active. Time to try it out!
### Test MetalLB
You can test the new MetalLB configuration by creating an example web service, and you can use one from a [previous article][2] in this series: Kube Verify. Use the same image to test that MetalLB is working as expected: `quay.io/clcollins/kube-verify:01`. This image contains an Nginx server listening for requests on port 8080. You can [view the Containerfile][11] used to create the image. If you want, you can instead build your own container image from the Containerfile and use that for testing.
If you previously created a Kubernetes cluster on Raspberry Pis, you may already have a Kube Verify service running and can [skip to the section][12] on creating a LoadBalancer-type of service.
#### If you need to create a kube-verify namespace
If you do not already have a `kube-verify` namespace, create one with the `kubectl` command:
```
# Create a new namespace
$ kubectl create namespace kube-verify
# List the namespaces
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   63m
kube-node-lease   Active   63m
kube-public       Active   63m
kube-system       Active   63m
metallb-system    Active   21m
kube-verify       Active   19s
```
With the namespace created, create a deployment in that namespace:
```
# Create a new deployment
$ cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-verify
  namespace: kube-verify
  labels:
    app: kube-verify
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kube-verify
  template:
    metadata:
      labels:
        app: kube-verify
    spec:
      containers:
      - name: nginx
        image: quay.io/clcollins/kube-verify:01
        ports:
        - containerPort: 8080
EOF
deployment.apps/kube-verify created
```
#### Create a LoadBalancer-type Kubernetes service
Now expose the deployment by creating a LoadBalancer-type Kubernetes service. If you already have a service named `kube-verify`, this will replace that one:
```
# Create a LoadBalancer service for the kube-verify deployment
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kube-verify
  namespace: kube-verify
spec:
  selector:
    app: kube-verify
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
EOF
```
You could accomplish the same thing with the `kubectl expose` command:
```
kubectl expose deployment kube-verify -n kube-verify --type=LoadBalancer --target-port=8080 --port=80
```
MetalLB is listening for services of type LoadBalancer and immediately assigns an external IP (an IP chosen from the range you selected when you set up MetalLB). View the new service and the external IP address MetalLB assigned to it with the `kubectl get service` command:
```
# View the new kube-verify service
$ kubectl get service kube-verify -n kube-verify
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kube-verify   LoadBalancer   10.105.28.147   192.168.2.129   80:31491/TCP   4m14s
# Look at the details of the kube-verify service
$ kubectl describe service kube-verify -n kube-verify
Name:                     kube-verify
Namespace:                kube-verify
Labels:                   app=kube-verify
Annotations:              <none>
Selector:                 app=kube-verify
Type:                     LoadBalancer
IP:                       10.105.28.147
LoadBalancer Ingress:     192.168.2.129
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31491/TCP
Endpoints:                10.244.1.50:8080,10.244.1.51:8080,10.244.2.36:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age    From                Message
  ----    ------        ----   ----                -------
  Normal  IPAllocated   5m55s  metallb-controller  Assigned IP "192.168.2.129"
  Normal  nodeAssigned  5m55s  metallb-speaker     announcing from node "gooseberry"
```
In the output from the `kubectl describe` command, note the events at the bottom, where MetalLB has assigned an IP address (yours will vary) and is "announcing" the assignment from one of the nodes in your cluster (again, yours will vary). It also describes the port, the external port you can access the service from (80), the target port inside the container (port 8080), and a node port through which the traffic will route (31491). The end result is that the Nginx server running in the pods of the `kube-verify` service is accessible from the load-balanced IP, on port 80, from anywhere on your home network.
For example, on my network, the service was exposed on `http://192.168.2.129:80`, and I can `curl` that IP from my laptop on the same network:
```
# Verify that you receive a response from Nginx on the load-balanced IP
$ curl 192.168.2.129
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
  <title>Test Page for the HTTP Server on Fedora</title>
(further output omitted)
```
### MetalLB FTW
MetalLB is a great load balancer for a home Kubernetes cluster. It allows you to assign real IPs from your home network to services running in your cluster and access them from other hosts on your home network. These services can even be exposed outside the network by port-forwarding traffic through your home router (but please be careful with this!). MetalLB easily replicates cloud-provider-like behavior at home on bare-metal computers, Raspberry Pi-based clusters, and even virtual machines, making it easy to "lift-and-shift" workloads to the cloud or just familiarize yourself with how they work. Best of all, MetalLB is easy and convenient and makes accessing the services running in your cluster a breeze.
Have you used MetalLB, or do you use another load-balancer solution? Are you primarily using Nginx or HAProxy Ingress? Let me know in the comments!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/homelab-metallb
作者:[Chris Collins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clcollins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/science_experiment_beaker_lab.png?itok=plKWRhlU (Science lab with beakers)
[2]: https://opensource.com/article/20/6/kubernetes-raspberry-pi
[3]: https://metallb.universe.tf/
[4]: https://en.wikipedia.org/wiki/Data_link_layer
[5]: https://en.wikipedia.org/wiki/Address_Resolution_Protocol
[6]: https://en.wikipedia.org/wiki/Transport_layer
[7]: https://en.wikipedia.org/wiki/Border_Gateway_Protocol
[8]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[9]: https://github.com/metallb/metallb
[10]: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
[11]: https://github.com/clcollins/homelabCloudInit/blob/master/simpleCloudInitService/data/Containerfile
[12]: tmp.A4L9yD76e5#loadbalancer


@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 best practices for managing Git repos)
[#]: via: (https://opensource.com/article/20/7/git-repos-best-practices)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
6 best practices for managing Git repos
======
Resist the urge to add things in Git that will make it harder to manage;
here's what to do instead.
![Working from home at a laptop][1]
Having access to source code makes it possible to analyze the security and safety of applications. But if nobody actually looks at the code, the issues won't get caught, and even when people are actively looking at code, there's usually quite a lot to look at. Fortunately, GitHub has an active security team, and recently, they [revealed a Trojan that had been committed into several Git repositories][2], having snuck past even the repo owners. While we can't control how other people manage their own repositories, we can learn from their mistakes. To that end, this article reviews some of the best practices when it comes to adding files to your own repositories.
### Know your repo
![Git repository terminal][3]
This is arguably Rule Zero for a secure Git repository. As a project maintainer, whether you started it yourself or you've adopted it from someone else, it's your job to know the contents of your own repository. You might not have a memorized list of every file in your codebase, but you need to know the basic components of what you're managing. Should a stray file appear after a few dozen merges, you'll be able to spot it easily because you won't know what it's for, and you'll need to inspect it to refresh your memory. When that happens, review the file and make sure you understand exactly why it's necessary.
### Ban binary blobs
![Git binary check command in terminal][4]
Git is meant for text, whether it's C or Python or Java written in plain text, or JSON, YAML, XML, Markdown, HTML, or something similar. Git isn't ideal for binary files.
It's the difference between this:
```
$ cat hello.txt
This is plain text.
It's readable by humans and machines alike.
Git knows how to version this.
$ git diff hello.txt
diff --git a/hello.txt b/hello.txt
index f227cc3..0d85b44 100644
--- a/hello.txt
+++ b/hello.txt
@@ -1,2 +1,3 @@
 This is plain text.
+It's readable by humans and machines alike.
 Git knows how to version this.
```
and this:
```
$ git diff pixel.png
diff --git a/pixel.png b/pixel.png
index 563235a..7aab7bc 100644
Binary files a/pixel.png and b/pixel.png differ
$ cat pixel.png
<EFBFBD>PNG
IHDR7n<EFBFBD>$gAMA<4D><41>
              <20>abKGD݊<44>tIME<4D>
                          -2R<32><52>
IDA<EFBFBD>c`<60>!<21>3%tEXtdate:create2020-06-11T11:45:04+12:00<30><30>r.%tEXtdate:modify2020-06-11T11:45:04+12:00<30><30>ʒIEND<4E>B`<60>
```
The data in a binary file can't be parsed in the same way plain text can be parsed, so if anything is changed in a binary file, the whole thing must be rewritten. The only difference between one version and the other is everything, which adds up quickly.
Worse still, binary data can't be reasonably audited by you, the Git repository maintainer. That's a violation of Rule Zero: know what's in your repository.
In addition to the usual [POSIX][5] tools, you can detect binaries using `git diff`. When you try to diff a binary file using the `--numstat` option, Git returns a null result:
```
$ git diff --numstat /dev/null pixel.png | tee
-     -   /dev/null => pixel.png
$ git diff --numstat /dev/null list.txt | tee
5788  0   /dev/null => list.txt
```
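If you want a quick standalone check without involving Git, GNU grep can classify files the same way: its `-I` flag treats binary files (anything containing a NUL byte early in the file) as if they contain no matches. A small sketch, assuming GNU grep; the sample file names are made up for the demo:

```shell
# Create one text sample and one sample containing a NUL byte
printf 'hello, plain text\n' > sample.txt
printf 'PNG\x00\x01\x02' > sample.bin

# With -I, grep reports no match for binary files, so the exit
# status tells text (0) and binary (1) apart.
classify() {
  if grep -qI . "$1"; then
    echo "$1: text"
  else
    echo "$1: binary"
  fi
}

classify sample.txt
classify sample.bin
```

Running it prints `sample.txt: text` and `sample.bin: binary`.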
If you're considering committing binary blobs to your repository, stop and think about it first. If it's binary, it was generated by something. Is there a good reason not to generate them at build time instead of committing them to your repo? Should you decide it does make sense to commit binary data, make sure you identify, in a README file or similar, where the binary files are, why they're binary, and what the protocol is for updating them. Updates must be performed sparingly, because, for every change you commit to a binary blob, the storage space for that blob effectively doubles.
### Keep third-party libraries third-party
Third-party libraries are no exception to this rule. While it's one of the many benefits of open source that you can freely re-use and re-distribute code you didn't write, there are many good reasons not to house a third-party library in your own repository. First of all, you can't exactly vouch for a third party, unless you've reviewed all of its code (and future merges) yourself. Secondly, when you copy third-party libraries into your Git repo, it splinters focus away from the true upstream source. Someone confident in the library is technically only confident in the master copy of the library, not in a copy lying around in a random repo. If you need to lock into a specific version of a library, either provide developers with a reasonable URL for the release your project needs or else use a [Git submodule][6].
### Resist a blind git add
![Git manual add command in terminal][7]
If your project is compiled, resist the urge to use `git add .` (where `.` is either the current directory or the path to a specific folder) as an easy way to add anything and everything new. This is especially important if you're not manually compiling your project, but are using an IDE to manage your project for you. It can be extremely difficult to track what's gotten added to your repository when an IDE manages your project, so it's important to only add what you've actually written and not any new object that pops up in your project folder.
If you do use `git add .`, review what's in staging before you push. If you see an unfamiliar object in your project folder when you do a `git status`, find out where it came from and why it's still in your project directory after you've run a `make clean` or equivalent command. It's a rare build artifact that won't regenerate during compilation, so think twice before committing it.
### Use Git ignore
![Git ignore command in terminal][8]
Many of the conveniences built for programmers are also very noisy. The typical directory for any project, whether programming, artistic, or otherwise, is littered with hidden files, metadata, and leftover artifacts. You can try to ignore these objects, but the more noise there is in your `git status`, the more likely you are to miss something.
You can have Git filter out this noise for you by maintaining a good gitignore file. Because that's a common requirement for anyone using Git, there are a few starter gitignore files available. [Github.com/github/gitignore][9] offers several purpose-built gitignore files you can download and place into your own project, and [Gitlab.com][10] integrated gitignore templates into the repo creation workflow several years ago. Use these to help you build a reasonable gitignore policy for your project, and stick to it.
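As a starting point, a minimal gitignore sketch for a project mixing compiled code and Python might look like this (the purpose-built templates linked above are far more thorough):

```
# Build output and compiled objects
*.o
*.pyc
__pycache__/
build/
dist/

# Editor and OS noise
*.swp
.DS_Store
```

Each line is a glob pattern; a trailing slash restricts the match to directories.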
### Review merge requests
![Git merge request][11]
When you get a merge or pull request or a patch file through email, don't just test it to make sure it works. It's your job to read new code coming into your codebase and to understand how it produces the result it does. If you disagree with the implementation, or worse, you don't comprehend the implementation, send a message back to the person submitting it and ask for clarification. It's not a social faux pas to question code looking to become a permanent fixture in your repository, but it's a breach of your social contract with your users to not know what you merge into the code they'll be using.
### Git responsible
Good software security in open source is a community effort. Don't encourage poor Git practices in your repositories, and don't overlook a security threat in repositories you clone. Git is powerful, but it's still just a computer program, so be the human in the equation and keep everyone safe.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/git-repos-best-practices
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/wfh_work_home_laptop_work.png?itok=VFwToeMy (Working from home at a laptop)
[2]: https://securitylab.github.com/research/octopus-scanner-malware-open-source-supply-chain/
[3]: https://opensource.com/sites/default/files/uploads/git_repo.png (Git repository )
[4]: https://opensource.com/sites/default/files/uploads/git-binary-check.jpg (Git binary check)
[5]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[6]: https://git-scm.com/book/en/v2/Git-Tools-Submodules
[7]: https://opensource.com/sites/default/files/uploads/git-cola-manual-add.jpg (Git manual add)
[8]: https://opensource.com/sites/default/files/uploads/git-ignore.jpg (Git ignore)
[9]: https://github.com/github/gitignore
[10]: https://about.gitlab.com/releases/2016/05/22/gitlab-8-8-released
[11]: https://opensource.com/sites/default/files/uploads/git_merge_request.png (Git merge request)


@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Customizing my Linux terminal with tmux and Git)
[#]: via: (https://opensource.com/article/20/7/tmux-git)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Customizing my Linux terminal with tmux and Git
======
Set up your console so you always know where you are and what to do
next.
![woman on laptop sitting at the window][1]
I use GNOME Terminal, mostly because it is my distribution's default. But what happens inside my terminal is far from "default." Before I get into how I customize it, here is what it looks like:
![Moshe Zadka's terminal][2]
(Moshe Zadka, [CC BY-SA 4.0][3])
### Start at the bottom
I use [tmux][4], a terminal multiplexer technology, to manage my terminal experience.
At the bottom of the image above, you can see my green tmux bar. The `[3]` at the bottom indicates this terminal is the third one: each terminal runs its own tmux session. (I created a new one to make the font larger, so it's easier to see in this screenshot; this is the only difference between this terminal and my real ones.)
The prompt also looks funny, right? With so much information jammed into the prompt, I like to stick in a newline so that if I want to do impromptu shell programming or write a five-step pipeline, I can do it without having things spill over. The trade-off is that simple sequences of commands—touch this, copy that, move this—scroll off my screen faster.
The last thing on the line with the content is [Aleph null][5], the smallest [infinite cardinality][6]. I like it when it is obvious where a content line ends, and when I realized that both Aleph and subscript 0 are Unicode characters, I could not resist the temptation to make Aleph null part of my prompt. (Math nerds, unite!)
Before that is my username; this is moderately useful since I use the same [dotfiles][7] (stored in Git) on multiple machines with different usernames.
Before my username is the last component of the directory I am in. The full path is often too long and useless, but the current directory is invaluable for someone, like me, who constantly forgets what he's working on. Before that is the name of the machine. All my machines are named after TV shows that I like. My older laptop is `mcgyver`.
The first bit in the prompt is the bit I like the most: one letter that lets me know the Git status of the directory. It is `G` if the directory is "(not in) Git," `K` if the directory is "OK" and nothing needs to be done, `!` if there are files unknown to Git that must be added or ignored, `C` if I need to commit, `U` if there is no upstream, and `P` if an upstream exists, but I have not pushed. This scheme is not based on the current status but describes the _next action_ I need to do. (To review Git terminology, give [this article][8] a read.)
This terminal functionality is accomplished with an interesting Python utility. It runs `python -m howsit` (after I installed [howsit][9] in a dedicated virtual environment).
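For illustration, here is a rough shell sketch of the same next-action idea. The real howsit utility is Python and knows more states; this version only distinguishes `G`, `!`, `C`, and `K`:

```shell
# Print a one-letter "next action" hint for the current directory:
# G = not in Git, ! = untracked files to add or ignore,
# C = changes to commit, K = OK, nothing to do.
git_next_action() {
  git rev-parse --is-inside-work-tree >/dev/null 2>&1 || { echo "G"; return; }
  if git status --porcelain 2>/dev/null | grep -q '^??'; then
    echo "!"
    return
  fi
  if ! git diff --quiet 2>/dev/null || ! git diff --cached --quiet 2>/dev/null; then
    echo "C"
    return
  fi
  echo "K"
}
```

The function's output could be dropped into `PS1` the same way the `howsit` call is, via command substitution.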
You can see the rendering in the image above, but for completeness, here is my PS1:
```
[$(~/.virtualenvs/howsit/bin/python -m howsit)]\h:\W \u ℵ₀  
$
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/tmux-git
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/sites/default/files/uploads/terminal-tmux-moshezadka.png (Moshe Zadka's terminal)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/article/20/1/tmux-console
[5]: https://simple.wikipedia.org/wiki/Aleph_null#:~:text=Aleph%20null%20(also%20Aleph%20naught,series%20of%20infinite%20cardinal%20numbers.
[6]: https://gizmodo.com/a-brief-introduction-to-infinity-5809689
[7]: https://opensource.com/article/19/3/move-your-dotfiles-version-control
[8]: https://opensource.com/article/19/2/git-terminology
[9]: https://pypi.org/project/howsit/


@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ex-Solus Dev is Now Creating a Truly Modern Linux Distribution Called Serpent Linux)
[#]: via: (https://itsfoss.com/serpent-os-announcement/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ex-Solus Dev is Now Creating a Truly Modern Linux Distribution Called Serpent Linux
======
[Ikey Doherty][1], the developer who once created the independent Linux distribution Solus has announced his new project: Serpent OS.
[Serpent OS][2] is a Linux distribution that DOES NOT want to be categorized as “lightweight, user-friendly, privacy-focused Linux desktop distribution”.
Instead, Serpent OS has “different goals from the mainstream offering”. How? Read on.
### Serpent OS: The making of a “truly modern” Linux distribution
![][3]
Serpent takes a distro-first, compatibility-later approach. This lets them make some really bold decisions.
Ikey says that this project will not tolerate negative actors holding Linux back. For example, NVIDIA's lack of accelerated Wayland support on its GPUs will not be tolerated, and NVIDIA's proprietary drivers will be blacklisted from the distribution.
Here's a proposed plan for the Serpent Linux project (taken from [their website][4]):
  * No more usr/bin split
* 100% clang-built throughout (including kernel)
* musl as libc, relying on compiler optimisations instead of inline asm
* libc++ instead of libstdc++
  * LLVM's binutils variants (lld, as, etc.)
* Mixed source/binary distribution
* Moving away from x86_64-generic baseline to newer CPUs, including Intel and AMD specific optimisations
* Capability based subscriptions in package manager (Hardware/ user choice / etc)
* `UEFI` only. No more legacy boot.
* Completely open source, down to the bootstrap / rebuild scripts
* Seriously optimised for serious workloads.
* Third party applications reliant on containers only. No compat-hacks
* Wayland-only. X11 compatibility via containers will be investigated
* Fully stateless with management tools and upstreaming of patches
Ikey boldly claims that Serpent Linux is not Serpent GNU/Linux because it is not going to be dependent on a GNU toolchain or runtime.
Development of the Serpent OS project starts by the end of July. There is no definite timeline for the final stable release.
### Too high claims? But Ikey has done it in the past
You may doubt if Serpent OS will see the light of the day and if it would be able to keep all the promises it made.
But Ikey Doherty has done it in the past. If I remember correctly, he first created SolusOS based on Debian. He discontinued the [Debian-based SolusOS][5] in 2013 before it even reached the beta stage.
He then went on to create [evolve OS][6] from scratch instead of using another distribution as a base. Due to some naming copyright issues, the project name was changed to Solus (yes, the same old name). [Ikey quit the Solus project in 2018][7], and other devs now handle the project.
Solus is an independent Linux distribution that gave us the beautiful Budgie desktop environment.
Ikey has done it in the past (with the help of other developers, of course). He should be able to pull this one off as well.
**Yay or Nay?**
What do you think of this Serpent Linux? Do you think it is time for developers to take a bold stand and develop the operating system with the future in mind rather than holding on to the past? Do share your views.
--------------------------------------------------------------------------------
via: https://itsfoss.com/serpent-os-announcement/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ikey-doherty-serpent-interview/
[2]: https://www.serpentos.com/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/serpent-linux.png?ssl=1
[4]: https://www.serpentos.com/about/
[5]: https://distrowatch.com/table.php?distribution=solusos
[6]: https://itsfoss.com/beta-evolve-os-released/
[7]: https://itsfoss.com/ikey-leaves-solus/


@ -0,0 +1,100 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create LEGO designs in Blender with this plugin)
[#]: via: (https://opensource.com/article/20/7/lego-blender-bricker)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Create LEGO designs in Blender with this plugin
======
Convert your 3D digital models into LEGO designs with Bricker
![Open Lego CAD][1]
I use [LEGO CAD][2] to document some of my own creations (or "MOCs," as custom sets are called in some digital LEGO communities). The advantage of computer-aided design (CAD) is precision. When you use CAD to build something in virtual space, you can reasonably expect that it can be built in the real world. While the LEGO CAD applications I use don't have simulated physics to verify the structural integrity of my designs, I do lay every brick in the software to mimic a model I've made in real life.
LEGO bricks aren't just raw materials for design, though. They're also an aesthetic, as evident from LEGO-themed video games and movies. If you're less concerned with precision, but you still want the look of LEGO bricks, there's a great plugin for Blender called [Bricker][3] that can convert your 3D models into LEGO models with the click of a button.
### Install Bricker
You can buy Bricker for $65 USD from [BlenderMarket][4], and it's licensed under the GPLv3. Paying for it helps fund development and support.
To install Bricker, launch Blender, click the **Edit** menu, and select **Preferences**. In the Preferences pane, click the **Add-ons** tab on the left. 
![Installing an add-on in Blender][5]
(Seth Kenlon, [CC BY-SA 4.0][6])
Start typing "Bricker" in the search box in the upper-right of the **Add-ons** pane, click the **Install** button, and select the Bricker ZIP file when prompted.
### Convert a 3D model to LEGO bricks
Whether you have the universal starting point of a plain, gray cube, an elaborate model of your own creation, or something you've downloaded from a Blender model hub, you can give Bricker a try right after installation.
First, click on the model you want to convert into a LEGO model. With your model selected, press the **N** key on your keyboard to open the **Properties** panel. Click the **Bricker** properties tab, and click the **New Brick Model** button.
![Bricker properties][7]
(Seth Kenlon, [CC BY-SA 4.0][6])
Now that you've added the model to Bricker, click the new **Brickify Object** button in the Bricker panel.
The default settings render a pretty blocky model, with mostly 2x10 bricks, no plates, and not much detail.
![Blocks in Bricker][8]
(Seth Kenlon, [CC BY-SA 4.0][6])
But there are plenty of options in the Bricker plugin for you to customize, and they show up in the Bricker **Properties** panel once you brickify a model.
![Bricker settings][9]
(Seth Kenlon, [CC BY-SA 4.0][6])
The most important settings in the **Model Settings** panel are:
* **Brick height** sets the height of each brick in the model. A larger setting produces a less detailed model because fewer bricks are used for the sculpt.
* **Split model** makes every rendered brick an object you can move in Blender. Without this enabled, your model looks like lots of bricks but acts as if they are all glued together.
* **Brick types** controls whether your sculpture is made of bricks, plates, both bricks and plates, tiles, and so on.
* **Max size** sets the maximum size for bricks and plates in your sculpture.
* **Legal bricks only** ensures that all the bricks are based on real ones. For instance, enabling this prevents it from generating a 3x7 brick or a 2x11 plate because there are no such pieces in the LEGO catalog (or at least not in the [LDraw Parts][10] library).
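To make the idea concrete, here is a hypothetical sketch of what a "legal bricks only" check amounts to: a candidate brick footprint is kept only if it appears in a catalog of real part sizes. The catalog below is a small, illustrative subset, not Bricker's actual data or code.

```python
# Hypothetical "legal bricks only" check: keep a footprint only if it
# exists in a catalog of real LEGO part sizes (illustrative subset).
LEGAL_FOOTPRINTS = {
    (1, 1), (1, 2), (1, 3), (1, 4), (1, 6), (1, 8),
    (2, 2), (2, 3), (2, 4), (2, 6), (2, 8), (2, 10),
}

def is_legal(width: int, depth: int) -> bool:
    """Return True if a width x depth brick exists in the catalog."""
    # Orientation doesn't matter, so normalize to (small, large).
    return tuple(sorted((width, depth))) in LEGAL_FOOTPRINTS

print(is_legal(2, 10))  # a real 2x10 brick exists
print(is_legal(3, 7))   # no such piece in the catalog
```

With a filter like this, a generator can discard footprints such as 3x7 that no physical piece matches, which is exactly the behavior the setting describes.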
In the **Detailing** panel, you can control whether the undersides of the bricks are flat (which isn't very realistic, but "cheaper" to render) or detailed to mimic the underside of an actual LEGO piece.
After changing a setting, you must click the **Update Model** button, near the top of the Bricker property panel, to re-render your sculpture.
![Red dragon model in Bricker][11]
(Seth Kenlon, [CC BY-SA 4.0][6])
### Brickify your designs
Bricker is a fun stylistic plugin for Blender. While it probably won't be your go-to tool for designing real LEGO sets, it's a great way to sculpt, draw, and animate with virtual LEGO. If you've been putting off your LEGO stop-motion movie magnum opus, now's the time to get started in the virtual world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/lego-blender-bricker
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open-lego.tiff_.png?itok=mQglOhW_ (Open Lego CAD)
[2]: https://opensource.com/article/20/6/open-source-virtual-lego
[3]: https://github.com/bblanimation/bricker
[4]: https://www.blendermarket.com/products/bricker/docs
[5]: https://opensource.com/sites/default/files/uploads/bricker-install.jpg (Installing an add-on in Blender)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/bricker-properties.jpg (Bricker properties)
[8]: https://opensource.com/sites/default/files/uploads/bricker-blocky.jpg (Blocks in Bricker)
[9]: https://opensource.com/sites/default/files/uploads/bricker-adjust.jpg (Bricker settings)
[10]: https://www.ldraw.org/parts/official-parts.html
[11]: https://opensource.com/sites/default/files/uploads/red-dragon-bricker.jpg (Red dragon model in Bricker)


@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Create a Pareto Diagram [80/20 Rule] in LibreOffice Calc)
[#]: via: (https://itsfoss.com/pareto-chart-libreoffice/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
How to Create a Pareto Diagram [80/20 Rule] in LibreOffice Calc
======
_**Brief: In this LibreOffice tip, you'll learn to create the famous Pareto chart in Calc.**_
The [Pareto Principle][1], also known as the 80/20 Rule, the Law of the Vital Few, and the Principle of Factor Sparsity, states that 80% of effects arise from 20% of the causes, or, in layman's terms, that 20% of your actions/activities will account for 80% of your results/outcomes.
Although the original observation is related to [economics][2], it can be widely adopted and used across all aspects of business, economics, mathematics, and processes. In computer science, the Pareto principle can be used in [software optimization][3].
Let me show you how to create a Pareto diagram in the [LibreOffice][4] spreadsheet tool, Calc.
### Creating Pareto diagram in LibreOffice Calc
![][5]
To be able to create a Pareto diagram, you need these three basic elements:
* The factors, ranked by the magnitude of their contribution
* The factors expressed numerically
* The cumulative-percent-of-total effect of the ranked factors
First, enter the data in a spreadsheet. Now let's get started!
#### Step 1: Sort the data
Select all rows from the first to the last, and in the **Data** tab, click the **Sort** option. In the **Sort Criteria** tab, choose **Sort key 1** and change the entry to **Number of Errors**, or whichever name you chose. Make sure to tick **Descending**, and finally click **OK**.
![][6]
#### Step 2: Create the Cumulative Percentage values
To calculate the cumulative percent of a total, you will need one formula for the first cell (C5) and a different formula for cells C6 and below.
**Generic formula for the first cell**
```
=amount/total
```
**In the example shown, the formula in C5 is:** =B5/$B$15
**Generic formula for the remaining cells**:
```
=(amount/total)+previous cell result
```
**In the example shown, the formula in C6 is:** =(B6/$B$15)+C5
By dragging the fill handle down, you will get the correct formulas for the remaining cells.
![][7]
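The spreadsheet logic above can also be mirrored in a few lines of code, which is handy for sanity-checking the formulas. This is only an illustrative sketch; the error counts below are made-up sample data, not the figures from the article's screenshots.

```python
# Cumulative percent of total, mirroring the Calc formulas:
#   first cell:      =amount/total
#   remaining cells: =(amount/total) + previous cell
errors = [45, 25, 12, 8, 5, 3, 2]  # already sorted descending, as in Step 1

total = sum(errors)
cumulative = []
running = 0.0
for amount in errors:
    running += amount / total  # add this factor's share to the running total
    cumulative.append(running)

print([round(c, 2) for c in cumulative])  # the last value is always 1.0
```

Whatever the data, the final cumulative value must equal 1.0 (100%), which is a quick way to verify your fill-handle formulas are correct.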
#### Step 3: Create the Pareto diagram
To create the chart go to **Insert** tab and then click on the **Chart** option.
In the upcoming Chart Wizard choose the chart type **Column and Line** with **Number of lines** set to 1 and click Next.
![][8]
Select the correct data range **$A$4:$C$14** by either using your mouse in the data range selector or by entering it manually. Leave the settings **Data series in columns**, **First row as label**, **First column as label** and click Next.
![][9]
The following Data Series window should have everything filled in correctly, click Next.
![][10]
In the last window enter titles and remove the legend:
* Title: Pareto chart
* X axis: Error Type
* Y axis: Number of Errors
* Untick **Display legend**
* click **Finish**.
![][11]
And this is the result:
![][12]
If the red line appears without any value, select it, then right-click > Format Data Series > Align Data Series to Secondary y-Axis > Click OK.
#### Step 4: Fine tune the chart
The range of the secondary y-axis is set to **0 - 120** by default; it needs to top out at **100**.
Double-click on the secondary y-axis. In the **Scale** tab, untick **Automatic** and enter **100** as the maximum value. Then click **OK**.
![][13]
All done!
![][14]
**Conclusion**
Using a Pareto chart to analyze problems in a business project lets you focus your efforts on the areas offering the greatest improvement potential.
This is one of the many real-life scenarios where I have used LibreOffice instead of other proprietary office software. I hope to share more LibreOffice tutorials on It's FOSS. Meanwhile, you can [learn these rather hidden LibreOffice tips][15].
Which LibreOffice functionality do you use the most? Let us know in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/pareto-chart-libreoffice/
作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://betterexplained.com/articles/understanding-the-pareto-principle-the-8020-rule/
[2]: https://en.wikipedia.org/wiki/Pareto_principle#In_economics
[3]: https://en.wikipedia.org/wiki/Program_optimization#Bottlenecks
[4]: https://www.libreoffice.org/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/pareto-libreoffice.png?ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/1.-sort-the-data.png?ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/2.-cumulative-percent.png?ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/3.chart_.png?ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/5.data-range.png?ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/6.data-series.png?fit=800%2C381&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/7.chart-elements.png?fit=800%2C381&ssl=1
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/8.Pareto-chart.png?ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/9.fine-tune.png?ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/10.final_.png?ssl=1
[15]: https://itsfoss.com/libreoffice-tips/


@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What does a scrum master do?)
[#]: via: (https://opensource.com/article/20/7/scrum-master)
[#]: author: (Taz Brown https://opensource.com/users/heronthecli)
What does a scrum master do?
======
Scrum master is a career path that open source enthusiasts should
consider. Here's what a day in the life of one looks like.
![Digital images of a computer desktop][1]
Turning a love of open source communities into a career is possible, and there are plenty of directions you can take. The path I'm on these days is as a scrum master.
[Scrum][2] is a framework in which software development teams deliver working software in increments of 30 days or less called "sprints." There are three roles: scrum master, product owner, and development team. A scrum master is a facilitator, coach, teacher/mentor, and servant/leader that guides the development team through executing the scrum framework correctly.
The scrum master heads up the daily scrum, sprint planning, sprint review, and sprint retrospectives. As a scrum master, you also remove impediments and help the team to become self-organizing and empowered to create, innovate, and make decisions for themselves as a unit. As a scrum master with a full list of responsibilities, I appreciate a well-organized daily schedule—here's what mine looks like.
### A day in the life of a scrum master
5:00am—Wake up and go to the gym for at least 45 minutes, but given the fact that we are working from home now, I will either walk/run on the treadmill or jump rope for 30 minutes
5:45am—Shower and get dressed
6:15am—Have some breakfast and make coffee or espresso
6:40am—Drive into work (or go for a walk around the neighborhood when working remotely)
7:00am—I get to work and clean up the team room before the team gets in (for remote work, I clean up our team wiki)
8:00am—Read emails and respond to priority emails from the team, team manager, or agile coach
8:45am—Get a caffeine refill from the break room
9:00am—Check in with the team and review the team's scrum board in Jira (or another [open source alternative][3]) just to see if there are any patterns of behavior I might need to address. Modify the team's impediment board if any impediments have been removed.
10:00am—Daily scrum (time-boxed for 15 minutes)
10:15am—Discuss parking lot items following the scrum
11:00am—Meet with the team's manager/leadership, or facilitate a community of practice or brown bag lunch around topics such as effective engineering practices
12:00pm—Lunch meeting or coffee with a product owner
1:00pm—Lunch (30 minutes is more than enough time for me)
1:30pm—Possible tasks include facilitating a backlog refinement event leading up to sprint planning, sprint review/demo, or sprint retrospective
2:30pm—Meet with test automation or DevSecOps team
3:00pm—Facilitate a team-building workshop
4:00pm—Final check-ins with the team and then answer final emails
4:30pm—Update the team's scrum journal
5:00pm—Lay out my to-do list for the next day
![Scrum team room setup with sticky notes][4]
I had been in traditional IT for many years prior to becoming a scrum master. I eventually decided that I could use other skills such as my business experience and management experience to work with software development and DevOps teams to create high-performing teams.
Software/DevOps teams use scrum to deliver software incrementally, yet faster and with a high level of quality and sustainability. To me, it was a great decision. Being a scrum master is also about removing impediments. I coach the team on how to solve their own problems, but if it becomes necessary, I will step in and help resolve the issues.
The scrum master role is fun, exciting, and fulfilling, but also pressure-filled and stressful at times. But ultimately, it is worth it to me, as I get to see my teams grow and not only deliver best-in-class software but become better people.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/scrum-master
作者:[Taz Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/heronthecli
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
[2]: https://opensource.com/resources/scrum
[3]: https://opensource.com/business/16/2/top-issue-support-and-bug-tracking-tools
[4]: https://opensource.com/sites/default/files/uploads/scrummaster_cropped.jpg (Scrum team room setup with sticky notes )


@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Purisms Ultra-Secure Linux Machine is Now Available in a New Size)
[#]: via: (https://itsfoss.com/purism-librem-14/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
Purism's Ultra-Secure Linux Machine is Now Available in a New Size
======
[Purism][1] is well-known for its privacy- and security-focused hardware and software utilizing open source technologies, not to forget the latest [Purism Librem Mini][2].
After good success with its Librem 15 and 13 series laptops, Purism has unveiled the Librem 14.
![][3]
The Librem 14 looks like a perfect laptop for an open source enthusiast who's concerned about security and privacy.
In this article, we will talk about the Librem 14's specifications, pricing, and availability.
### Librem 14: Overview
Similar to other variants in the series, the Librem 14 offers all the essential security features, like the **hardware kill switches** to disable the webcam/microphone, and its secure PureBoot boot firmware.
![][4]
The Librem series is one of the rare [few laptop lines that come preloaded with Linux][5]. Purism uses its own custom distribution called [PureOS][6]. If you're curious, you can also browse its [source code][7].
Here's what [Purism mentions][8] as the key highlight of the Librem 14 laptop:
> The most distinctive feature of the Librem 14 is the new 14″ 1080p IPS matte display which, due to the smaller bezel, fits within the same footprint as the Librem 13.
Even though that's not something mind-blowing, it is good to see that they've made the laptop fit within the same footprint as its predecessor.
It's a great decision targeted at users who do not want a lot of changes with their laptop upgrade or who appreciate the laptop's compact dimensions.
### Librem 14: Specifications
![][9]
Along with the key highlight, Purism's Librem 14 offers an impressive set of specifications. Here's what you get:
* Intel Core i7-10710U (Comet Lake)
* 14″ Matte (1920×1080) Display
* Intel UHD Graphics
* RAM: Up to 32 GB DDR4 at 2133 MHz
* 2 SATA + NVMe-capable M.2 slots
* 1 HDMI Port (4K capable @60Hz max)
* USB Type-C Video Out (4K capable)
* 3.5 mm audio jack
* Gigabit Ethernet Adapter with Integrated RJ45 Connector
* Atheros 802.11n w/ Two Antennas
* USB-C Power Delivery Port
* Weight: 1.4 kg
It's slightly disappointing to see Intel chipsets in 2020 — but considering the presence of PureBoot and other features that the Librem 14 offers, an Intel-powered secure laptop makes sense.
Nevertheless, it's good to see them include a USB Type-C video-out port. Without dedicated graphics, it may not be a great deal for power users, but it should get a lot of work done.
Also, it's worth noting that Purism offers [anti-interdiction services][10] to detect tampering during shipment for high-risk customers. Of course, that wouldn't prevent tampering — but it'll help you know about it.
![][11]
### Librem 14: Pricing &amp; Availability
For now, the Librem 14 laptop is available for pre-order at an early-bird base price of **$1199** ($300 off its regular price) that features 8 GB of RAM and 250 GB of M.2 SATA storage.
Depending on what you prefer, the price might go up to **$3,693.00** for the maxed-out configuration with anti-interdiction services included.
You can expect the orders to start shipping in early Q4 2020.
[Pre-Order Librem 14][12]
What do you think about Purism's Librem 14 laptop? Feel free to let me know your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/purism-librem-14/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://puri.sm/
[2]: https://itsfoss.com/purism-librem-mini/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/librem-14.jpg?ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/Hardware-Kill-Switches-librem.jpg?ssl=1
[5]: https://itsfoss.com/get-linux-laptops/
[6]: https://itsfoss.com/pureos-convergence/
[7]: http://repo.pureos.net/pureos/pool/main/
[8]: https://puri.sm/posts/purism-launches-librem-14-successor-to-security-focused-librem-13-product-line/
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/librem-14-1.png?ssl=1
[10]: https://puri.sm/posts/anti-interdiction-services/
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/librem14-monitors-1-cropped.png?ssl=1
[12]: https://puri.sm/products/librem-14/


@ -1,141 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (nophDog)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to know if you're ready to switch from Mac to Linux)
[#]: via: (https://opensource.com/article/20/6/mac-to-linux)
[#]: author: (Marko Saric https://opensource.com/users/markosaric)
如何确定你已经准备好从 Mac 切换到 Linux
======
在开源软件的帮助下,你可以用 Linux 完成绝大部分你在 Mac 上能做的事情。
![Digital images of a computer desktop][1]
我[从 Mac 切换到 Linux][2] 已经两年了。在 Linux 之前,我用的一直是 Apple 系统,而且 2018 年我安装第一个发行版时,还只是一个纯粹的新手。
最近,我只用 Linux我可以用它完成任何任务。浏览网页、观看 Netflix 影片、写作以及编辑[博客][3],甚至还在上面跑我的[开源网页分析项目][4]。
我可不是开发者!普通人无法轻松玩转 Linux 的日子早已一去不复返了。
最近关于 Mac 的讨论越来越多,许多人已经在考虑从 Mac 切换到 Linux。我打算分享一些切换过程中的经验帮助其它新手也能实现无痛转移。
### 你该不该换?
在换系统之前,最好想清楚,因为有时候 Linux 可能跟你预期不一样。如果你仍希望跟 Apple Watch 无缝配对、可以用 FaceTime 给朋友打电话、或者你想打开 iMovie 看视频,那最好还是不要换了。这些都是 Apple 的专利软件,你只能在 Apple 的『围墙花园』里面使用。如果离不开 Apple 的生态系统,那么 Linux 可能不太适合你。
我对 Apple 生态没有太多挂念,我不用 iPhone所以跟手机的协作没那么必要。我也不用 iCloud、FaceTime当然也包括 Siri。我早就对开源充满兴趣只是一直没有行动。
### 检查你的必备软件清单
我还在使用 Mac 的时候,就已经开始探索开源软件,我发现大部分在 Mac 上使用的软件,在 Linux 也可以运行。
怀念用火狐浏览网页吗?在 Linux 上它也可以运行。想用 VLC 看视频?它也有 Linux 版本。喜欢用 Audacity 录制、编辑音频?它正在 Linux 上等着你呢。你用 OBS Studio 直播?在 Linux 直接下载安装吧。一直用 Telegram 跟朋友和家人保持联系吗Linux 上当然少不了它。
此外Linux 不仅仅意味着开源软件。大部分(也可能所有)你最喜欢的非 Apple 专利软件,都能在 Linux 见到它们的身影。Spotify、Slack、Zoom、Stream、Discord、Skype、Chrome 以及很多闭源软件,都可以使用。而且,在你 Mac 浏览器里面运行的任何东西,同样能够运行在 Linux 浏览器。
你能在 Linux 找到你的必备软件,或者更好的替代品吗?请再三确认,做到有备无患。用你最常用的搜索引擎,在网上检索一下。搜索『软件名 + Linux』 或者 『软件名 + Linux 替代品』,然后再去 [Flathub][5] 网站查看你能在 Linux 用 Flatpak 安装的专利软件有哪些。
### 请牢记Linux 不等于 Mac
如果你希望能够从 Mac 轻松转移到 Linux我相信有一点很重要你需要保持包容的思想以及愿意学习新操作系统的心态。Linux 并不等于 Mac所以你需要给自己一些时间去接触并了解它。
如果你想让 Linux 用起来、看起来跟你习惯的 macOS 一模一样,那么 Linux 可能也不适合你。尽管你可以通过各种方法把 Linux 桌面环境打造得跟 macOS 相似,但我觉得要想成功转移到 Linux最好的办法是接受它。
试试新的工作流,该怎么用就怎么用。不要总想着把 Linux 变成其它东西。你会跟我一样,像享受 Mac 一样享受 Linux甚至能有更好的体验感。
还记得你第一次使用 Mac 吧;你肯定花了不少时间去习惯它的用法。那么请给 Linux 同样多的时间和关怀。
### 选择一个 Linux 发行版
有别于 Windows 和 macOSLinux 并不是一个单一的操作系统。不同的 Linux 操作系统被称作发行版,开始使用 Linux 之后,我尝试过好几个不同的发行版。我也用过不同的桌面环境(即图形界面)。在美观度、易用性、工作流以及集成软件上,它们有很大差异。
尽管作为 Mac 的替代品,被提及最多的是 [ElementaryOS][6] 和 [Pop!_OS][7],但我仍建议从 [Fedora Workstation][8] 开始,理由如下:
- 使用 [Fedora Media Writer][9],容易安装
- 开箱即支持你所有的硬件
- 支持最新的 Linux 软件
- 运行原生无改动的 GNOME 桌面环境
- 有大型开发团队以及一个庞大的社区
在我看来,从易用性、连贯性、流畅性和来自 macOS 用户的用户体验来看,[GNOME][10] 是最好用的桌面环境。在 Linux 世界,它拥有大量开发资源和用户基数,所以你的使用体验会很好。
Fedora 可以为你打开一扇 Linux 的大门,当你适应之后,就可以开始探索各个发行版、桌面环境,甚至窗口管理器之类的玩意了。
### 熟悉 GNOME
GNOME 是 Fedora 和许多其它 Linux 发行版的默认桌面环境。它最近[升级到了 GNOME 3.36][11],带来了 Mac 用户会欣赏的现代设计。
一定要做好心理准备Linux、Fedora Workstation 和 GNOME 并不是 Apple 和 macOS。GNOME 非常干净、简单、现代,新颖。它不会分散你的注意力。它没有桌面图标。没有可见的 dock 栏。窗口上甚至没有最小化、最大化按钮。但是不要慌张。如果你去尝试,它会证明这是你用过最好、最有生产力的操作系统。
GNOME 不会给你带来困扰。启动之后,你唯一能看到的东西只有顶栏和背景图片。顶栏由这几样东西组成:左边是 **Activities**,中间是时间和日期(这也是你的通知中心),右边是网络、蓝牙、VPN、声音、亮度、电池等托盘图标。
#### 为什么 GNOME 像 Mac
你会注意到一些跟 macOS 的相似性,例如窗口吸附,空格预览(用起来跟 Quick Look 一模一样)。
如果你把鼠标光标移动到左上角,点击顶栏的 **Activities**,或者按下键盘上的 **Super** 键(也就是 Apple 键),你会看到 **Activities Overview**。Activities Overview 有点像 macOS 系统上 Mission Control 和 Spotlight Search 的结合体。它会在屏幕中间展示已打开软件和窗口的概览。在左手边,你可以看到 dock 栏,上面有你打开的软件和常用软件,所有打开的软件下面会有一个指示标志。在右手边,你可以看到不同的工作区。
在顶栏中间,有一个搜索框。只要你开始输入,焦点就会转移到搜索框。它能搜索你已经安装的软件和文件内容,在软件中心搜索指定的软件,作计算器,向你展示时间或者天气,当然它能做的还有很多。它就像 Spotlight 一样。只需要开始输入你要搜索的内容,然后按下 **Enter** 打开软件或者文件。
你还能看到一列已安装的软件(很像 Mac 上的 Launchpad点击 dock 栏的 **Show Applications** 图标,或者按 **Super + A** 即可打开。
总体来说Linux 是一个轻量级的系统,即使在很老的硬件上也能跑得很顺畅,跟 macOS 比起来仅仅占用很少的磁盘空间。并且不像 macOS你可以删除任何你不想要或不需要的预装软件。
#### 自定义你的 GNOME 设置
浏览一下 GNOME 设置,熟悉它的选项,做一些更改,让它用起来更舒服。下面是一些我装好 GNOME 必须做的事情。
- 在 **Mouse Touchpad** 中,我禁用 natural scrolling、启用 tap-to-click。
- 在 **Display** 中,我打开了 night light它会在晚上让屏幕颜色变暖减少眼睛疲劳。
- 我安装 [**GNOME Tweaks**][12],因为它可以更改额外的设置选项。
- 在 Tweaks 中,我启用了 **Over-Amplification** 选项,这样就能获得更高的音量。
- 在 Tweaks 中,相比默认的亮色主题,我更喜欢 **Adwaita Dark** 主题。
#### 习惯使用键盘操作
GNOME 是一个以键盘为中心的操作系统,所以尽量多使用键盘。在 GNOME 设置的**键盘快捷键**部分,你可以找到很多不同的快捷键。
你也可以自定义键盘快捷键,来形成自己的工作流。我把 **Super** 键的组合设置成打开我必用的软件。比如说,**Super + B** 打开我的浏览器,**Super + F** 打开文件,**Super + T** 打开终端。我还把 **Ctrl + Q** 设置成关闭窗口。
我使用 **Super + Tab** 切换软件,**Super + H** 隐藏一个窗口,**F11** 全屏打开软件,**Super + Left arrow** 把窗口吸附到屏幕左边,**Super + Right arrow** 把窗口吸附到屏幕右边,等等。
### 在 Mac 上尝试 Linux 之后再做决定
在完全安装 Linux 之前,先在你的 Mac 上尝试 Fedora。从 [Fedora 官网][9]下载 ISO 镜像,用 [Etcher][13] 把 ISO 镜像写入 USB然后在启动时按住 **Option** 键,这样你就可以在 live 模式下尝试了。
现在你已经可以随意探索 live 模式的 Fedora Workstation而不用在你的 Mac 上安装任何东西。试试不一样的东西,能否正常工作:能不能连接 WiFi触控板是否正常有没有声音等等。
也记得花时间尝试 GNOME测试一下我上面提到的各种功能打开一些安装好的软件。如果一切看起来都还不错你也喜欢这样的 Fedora Workstation 和 GNOME并且很肯定这就是你想要的那么把它安装到你的 Mac 上吧。
尽情探索 Linux 世界吧!
* * *
_This article is an update of [How I switched from macOS to Linux after 15 years of Apple][2] originally published on Marko Saric's website._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/6/mac-to-linux
作者:[Marko Saric][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/nophDog)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/markosaric
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_browser_web_desktop.png?itok=Bw8ykZMA (Digital images of a computer desktop)
[2]: https://markosaric.com/linux/
[3]: https://markosaric.com/how-start-blog/
[4]: https://plausible.io/open-source-website-analytics
[5]: https://flathub.org/apps
[6]: https://opensource.com/article/20/2/macbook-linux-elementary
[7]: https://support.system76.com/articles/pop-basics/
[8]: https://getfedora.org/
[9]: https://getfedora.org/en/workstation/download/
[10]: https://www.gnome.org/
[11]: https://www.gnome.org/news/2020/03/gnome-3-36-released/
[12]: https://wiki.gnome.org/Apps/Tweaks
[13]: https://www.balena.io/etcher/


@ -0,0 +1,135 @@
[#]: collector: (lujun9972)
[#]: translator: (chunibyo-wly)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (One CI/CD pipeline per product to rule them all)
[#]: via: (https://opensource.com/article/19/7/cicd-pipeline-rule-them-all)
[#]: author: (Willy-Peter Schaub https://opensource.com/users/wpschaub/users/bclaster/users/matt-micene/users/barkerd427)
使用一条 CI/CD 流水线管理所有的产品
======
统一的持续集成与持续交付流水线的构想只是一个梦想吗?
![An intersection of pipes.][1]
当我加入 WorkSafeBC 负责云端运维和工程流程优化的云端运维团队时,我分享了我的梦想:一条可以对任意产品进行持续集成和持续交付的流水线。
根据 Lukas Klose 的说法,[flow][2](在软件工程范畴内)是“软件系统以稳定和可预测的状态创造价值”的一种状态。我认为这是我们遇到的最大的挑战和机遇之一,特别是在解决紧急问题这样的复杂领域。我力求通过一种持续、高效和优质的解决方案,提供一种持续交付模式,并且构建正确的事物,让我们的用户感到满意。
### 持续集成和持续交付的 (CI/CD) 流水线
CI/CD 流水线是一种以更高的频率、一致且可靠地交付代码变更的 DevOps 实践。它可以帮助敏捷开发团队提高**部署频率**,缩短**变更前置时间**,降低**变更失败率**,缩短**故障恢复时间**等关键绩效指标KPI从而提高质量并实现更快的交付。唯一的先决条件就是坚实的开发基础、对需求从构想到废弃的责任心,和一条全面的流水线(如下图所示)。
![Prerequisites for a solid development process][3]
它简化了工程流程和产品,以稳定基础架构环境、优化工作流程、创建一致的、可重复的、自动化的任务。正如 Dave Snowden 的 [Cynefin Sensemaking][4] 模型所说的那样,这样就允许我们将“复杂”的问题转化为“繁杂”的问题,降低了维护成本,提高了质量和可靠性。
精简流程的一部分工作是最大程度地减少三种[浪费类型][5]Muri过载、Mura变异和 Muda浪费
* **Muri过载** 避免过度工程化、与业务价值不相关的功能以及过多的文档。
* **Mura变异** 改善确认和审批流程(比如,安全审核);推动测试[左移][6],提前进行安全漏洞扫描与代码质量检查;并且改进风险评估。
* **Muda浪费** 避免浪费,比如技术债务、bug 或者前期过于详细的文档等。
看起来 80% 的重点都集中在提供一种可以持续集成和协作的工程产品上,这些系统可以帮助你构思创意,并对解决方案进行计划、开发、测试和监控。然而,一个成功的转型和工程系统是由 5% 的产品、15% 的流程和 80% 的人员构成的。
可供我们使用的产品有很多。比如Azure DevOps 为持续集成CI、持续交付CD和可扩展性提供了丰富的支持并可以集成 Stryker、SonarQube、WhiteSource、Jenkins 和 Octopus 等开源软件以及商用现成COTS软件即服务SaaS产品。
![5% about products, 15% about process, 80% about people][7]
最大的挑战是打破数十年来的规则、规定和已经步入舒适区的流程:“*我们一直都是这样做的;为什么需要改变呢?*”
开发和运维人员之间的摩擦导致了各种支离破碎、重复、不间断的集成和交付流水线。开发人员希望能访问所有东西,以便持续迭代,让用户使用并持续快速发布。运维人员希望将所有东西锁起来,以保护业务、用户和品质。这些矛盾在不经意间使自动化流程变得难以实现,进而导致发布周期晚于预期。
让我们使用最近的白板讨论中的片段来探索流水线。
想要支持流水线的变化是一项困难且花费巨大的工作,版本控制和可追溯性更使这个问题雪上加霜,因此不断精简开发流程和流水线是一项挑战。
![Improving quality and visibility of pipelines][8]
我主张以下这些原则,使每个产品都能使用通用流水线:
* 使一切自动化
* 一次构建
* 保持持续集成和持续交付
* 保持持续精简和改进
* 保持一个定义构建
* 保持一个发布的流水线定义
* 尽早、频繁地扫描漏洞,并且*快速失败*
* 尽早、频繁地进行测试,并且*快速失败*
* 保持已发布版本的可追踪和监控
但是,如果要我从中挑选,最重要的原则就是*保持简单*。如果你说不清楚为什么要流水线化,你或许还不了解自己的软件过程。我们大多数人想要的并不是最好的、超现代的、具有革命意义的流水线,我们仅仅需要一条功能强大、有价值、能适配不同工程的流水线。首先需要解决的是那 80% —— 文化、人员和他们的心态。
### 统一流水线
让我们逐步完成我们的白板会议实践。
![CI build/CD release pipeline][9]
每个应用使用一套构建定义来定义一条 CI/CD 流水线,用来触发*拉取请求前的验证*与*持续集成*的构建。生成一个带有调试信息的*发布*构建,并且将其上传到[符号服务器][10]。这使开发者们可以在本地和远程生产环境进行调试,而不需要考虑构建了什么或者需要加载哪些符号,符号服务器为我们施展了这样的魔法。
![Breaking down the CI build pipeline][11]
在构建过程中进行尽可能多的验证(*测试左移*),让开发新特性的团队可以经常失败、不断提高整体的产品质量,并且可以为代码审核员提供每个拉取请求的宝贵证据。你喜欢有大量提交的拉取请求吗?还是一个只有少量提交、并附有漏洞检查、测试覆盖率、代码质量检查和 [Stryker][12] 突变残余证据的拉取请求?
![Breaking down the CD release pipeline][13]
不要通过转换构建去生成多个特定于环境的构建。要做到一次构建,通过*发布时转换*、*标记化*和 XML/JSON 的*值替换*来适配各个环境。换句话说,将特定于环境的配置*右移*。
![Shift-right the environment-specific configuration][14]
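“标记化”与“值替换”可以用一个极简的草图来说明:构建产物中只包含占位符,发布到每个环境时再替换为该环境的值。这只是一个假设性的示意,占位符格式与 URL 均为虚构,并非某个具体发布工具的实现。

```python
import re

def detokenize(template: str, values: dict) -> str:
    """把 __NAME__ 形式的标记替换为对应环境的值(缺少值则报错)。"""
    def repl(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"缺少标记值:{name}")
        return values[name]
    return re.sub(r"__([A-Z_]+)__", repl, template)

# 同一份构建产物(模板),在发布到不同环境时被替换为不同的值
config_template = '{"apiUrl": "__API_URL__", "logLevel": "__LOG_LEVEL__"}'
dev = detokenize(config_template,
                 {"API_URL": "https://dev.example.com", "LOG_LEVEL": "debug"})
prod = detokenize(config_template,
                  {"API_URL": "https://prod.example.com", "LOG_LEVEL": "warn"})
print(dev)
print(prod)
```

这样只需一次构建;各环境的差异全部集中在发布阶段的一小份配置里,这正是“配置右移”的含义。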
安全地存储发布配置数据,并且基于*信任*和*敏感度*,使其对开发和运维人员都可用。使用开源的密钥管理工具、Azure 密钥保管库、AWS 密钥管理服务或者其他产品,记住你的工具箱中有很多方便的工具。
![Dev-QA-production pipeline][15]
使用*用户组*而不是*用户*来管理审批权限,把审批人管理从多条流水线的多个阶段转移到简单的用户组成员资格上。
![Move approver management to simple group membership][16]
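“用组成员资格而非逐个用户来管理审批”可以用下面这个假设性的 Python 草图说明,其中的组名与用户名均为虚构示例,并非某个具体 CI/CD 产品的实现。

```python
# 审批权限由“阶段 -> 审批组成员”的映射决定,
# 增删审批人时只需维护组成员,而不必修改每条流水线的每个阶段。
APPROVER_GROUPS = {
    "qa": {"alice", "bob"},
    "production": {"alice", "carol"},
}

def can_approve(user: str, stage: str) -> bool:
    """只要用户属于某阶段对应的审批组,即可审批该阶段。"""
    return user in APPROVER_GROUPS.get(stage, set())

print(can_approve("bob", "qa"))          # True
print(can_approve("bob", "production"))  # False
```

这种设计让同一条流水线可以服务多个团队:流水线定义保持唯一,审批边界完全由组成员资格表达。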
创建一条流水线,并且赋予其对特定交付阶段的访问权限,而不是通过复制流水线让团队进入他们感兴趣的环境。
![Pipeline with access to specific delivery stages][17]
最后但同样重要的是,拥抱拉取请求,以帮助提高对代码仓库的洞察力和透明度,增进整体质量与协作,并将预验证构建发布到经过筛选的环境(比如开发环境)。
这是整个白板更正式的视图。
![The full pipeline][18]
那么,你对 CI/CD 流水线有什么想法和经验?用*一条流水线管理一切*也是你的梦想吗?
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/cicd-pipeline-rule-them-all
作者:[Willy-Peter Schaub][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/chunibyo-wly)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/wpschaub/users/bclaster/users/matt-micene/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://continuingstudies.sauder.ubc.ca/courses/agile-delivery-methods/ii861
[3]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-2.png (Prerequisites for a solid development process)
[4]: https://en.wikipedia.org/wiki/Cynefin_framework
[5]: https://www.lean.org/lexicon/muda-mura-muri
[6]: https://en.wikipedia.org/wiki/Shift_left_testing
[7]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-3.png (5% about products, 15% about process, 80% about people)
[8]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-4_0.png (Improving quality and visibility of pipelines)
[9]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-5_0.png (CI build/CD release pipeline)
[10]: https://en.wikipedia.org/wiki/Microsoft_Symbol_Server
[11]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-6.png (Breaking down the CI build pipeline)
[12]: https://stryker-mutator.io/
[13]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-7.png (Breaking down the CD release pipeline)
[14]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-8.png (Shift-right the environment-specific configuration)
[15]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-9.png (Dev-QA-production pipeline)
[16]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-10.png (Move approver management to simple group membership)
[17]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-11.png (Pipeline with access to specific delivery stages)
[18]: https://opensource.com/sites/default/files/uploads/devops_pipeline_pipe-12.png (The full pipeline)


@ -0,0 +1,74 @@
[#]: collector: (lujun9972)
[#]: translator: ( guevaraya)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source has room for everyone)
[#]: via: (https://opensource.com/article/20/4/interview-Megan-Byrd-Sanicki)
[#]: author: (Jay Barber https://opensource.com/users/jaybarber)
开源世界有每个人的一席之地
======
向 2020 年开源社区最佳女性奖获得者 Megan Byrd-Sanicki 学习如何让大家团结起来。
![蒲公英般浮于水面][1]
“在成长的过程中,我曾经是操场上的一个小小执法官,”2020 年[开源社区最佳女性奖][2]获得者 Megan Byrd-Sanicki 笑着说。“我以前经常把同学们聚在一起玩游戏:大家来吧,我们告诉你们规则。我也会注意观察,找出那些没有被带进来的人,并把他们拉进圈子。”
![Megan Sanicki 的照片, 已经许可使用][3]
把大家聚在一起、建立有凝聚力的组织的这种愿望,贯穿了她的职业生涯和社区工作。“回顾当年二年级体育课上的我,必须承认,我还是曾经的那个我。”
Megan 作为第一任 [Drupal 协会][4]执行主任,已经活跃于开源社区十年,现在是谷歌开源项目办公室的研发和运营主管。“我很幸运,能在这个职位上为谷歌 2000 多个目标不同、组织架构不同、战略不同的开源项目提供服务。这也是一个难得的学习机会。”Megan 最近还被推选为[开源代码促进会][5]的理事会成员,她致力于加强全球范围内开源项目和企业之间的协作。
### 地下室台阶上学到的知识
Megan 原以为她会从商远离循规的技术。坐在地下室台阶上耳濡目染父亲的销售电话到16岁时候就知道父亲的所有产品系列也熟悉了其他知识。
“我从父亲学到了做生意就是解决问题和帮助别人” Megan 说。“在我的职业生涯这个信念我始终放在第一位。在某些角度看选择这条路我并不觉得奇怪;这是我个人选择的自然延伸,并且这带给我去做梦都想不到的体验”
开源事业对 Megan 不仅仅是一个职业;她在她的社区活动中也使用同样的理念。“现在,我与一大群优秀的工程师,数据科学家以及流行病科学家工作在[冠状病毒在行动][6]。团队成员是义务提供他们的专业知识,开发合作的为政府公共人员提供数据建模,以便他们快速的做出有效的决策。”
她也活跃于 [FOSS 急救会][7],这是一个帮助受 COVID-19 疫情影响的开源项目和社区成员的组织。“在疫情期间,项目很难获得所需的帮助。我们可以帮助有需要的组织和个人扩散他们的求助信息。”该组织的一项重要工作是管理 [FOSS 急救基金][7],为那些否则可能难以为继的开源项目寻求资金支持。
### 在这不断变化的世界中一群可爱的人
促使 Megan 参与社区的有两个信念:对开放源码的明确承诺,以及把大家团结在一起的动力。“当人们怀有梦想并积极去实现它时,就会形成共同的信念和强烈的‘为什么’。人们很容易因为这个‘为什么’而聚在一起。我知道我就是这样的人,” 在被问到她如此努力的动力时,Megan 说。
“不管是帮助协会完成筹款使命,还是助力开源项目可持续发展,这些工作最终影响的都是人。能够帮助人们达成目标、实现梦想和愿景,也让我实实在在地收获了那种蝴蝶效应般扩散开来的热情。”
开源在技术领域所占的比重越来越大,Megan 对未来抱有很大希望。“令人兴奋的是,故事还没有结束。作为一个社区,我们仍然在努力解决问题,”她说:“关于开源,还有很多东西需要我们学习。外部环境在不断变化,开源也将以不同的方式进化。我们需要进行正确的对话,找出如何共同进化,确保每个人都有一席之地。”
这又回到了她经常在父亲的生意电话中听到的感悟:做生意就是解决问题并帮助别人。“帮助更多的人学习如何使用和贡献开源代码来解决问题,的确是一件有益的事情。不管是推动创新、提升效率、加快进度,还是实现业务目标,这些都是在从开源中获取价值。”
### 属于你的荣耀
当被问到想给其他有意参与开源社区的女性哪些建议时,Megan 兴奋地说:“请记住,在开源社区,人人都有一席之地。根据我的经验,带着明确的想法加入会很有帮助:既清楚你想从社区获得什么,也清楚你能为社区贡献什么,而你的贡献正是其他人所需要的。”
她也听到一些声音,说开源社区有时缺乏集中的领导,但她告诫大家,不要把社区领导视为只服务于少数人的特权角色。“做你期望看到的领导者。当社区领导角色空缺时,每个人都可以自己去填补。开源社区的每个贡献者都是领导者,不管是领导他人、领导社区,还是领导自己。不要被动等待别人赋予你属于你的权力和精彩。”
对 Megan 来说,开源之旅是一段前路并不总是清晰的心路之旅,但她从不逃避风险和未来的不确定性。“我把生命看作一幅你正在编织的美丽挂毯。日复一日,你只能看到一根根丝线;如果你能看到全景,就会意识到自己每天都在竭尽全力,以各种方式为这件伟大的作品添砖加瓦。”
_另请阅读 Jay Barber 对 2020 年度开源学术奖获得者 Netha Hussain 的[采访][8]。_
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/interview-Megan-Byrd-Sanicki
作者:[Jay Barber][a]
选题:[lujun9972][b]
译者:[guevaraya](https://github.com/guevaraya)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jaybarber
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water)
[2]: https://www.redhat.com/en/about/women-in-open-source
[3]: https://opensource.com/sites/default/files/uploads/megan_sanicki_headshot_small_0.png (Photo by Megan Sanicki, Used with permission)
[4]: https://www.drupal.org/association
[5]: https://opensource.org/
[6]: https://www.covidactnow.org/
[7]: https://fossresponders.com/
[8]: https://opensource.com/article/20/4/interview-Netha-Hussain

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Disable Dock on Ubuntu 20.04 and Gain More Screen Space)
[#]: via: (https://itsfoss.com/disable-ubuntu-dock/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
如何在 Ubuntu 20.04 上禁用 Dock 并获得更多屏幕空间
======
左侧的启动器已成为 [Ubuntu][1] 桌面的标志。它最初是在 [Unity 桌面][2]中引入的,即使在 [Ubuntu 切换到 GNOME][3] 之后也保留了下来:Ubuntu 复刻了 Dash to Dock 扩展,在 [GNOME][4] 上创建了一个类似的 dock。
就个人而言,我发现它对于快速访问常用应用非常方便。但并非所有人都希望它占用屏幕上的一些额外空间。
从 [Ubuntu 20.04][5] 开始,你可以轻松禁用 dock。在本教程中,让我向你展示如何通过图形界面和命令行两种方式来实现。
![][6]
### 通过扩展应用禁用 Ubuntu Dock
[Ubuntu 20.04 的主要功能][7]之一是引入了“扩展”(Extensions)应用,用于管理系统上的 GNOME 扩展。只需在 GNOME 菜单中查找它(按下 Windows 键并输入应用名):
![Look for Extensions app in the menu][8]
没有扩展应用?
如果尚未安装,你需要先安装 gnome-shell-extensions 软件包,“扩展”应用的图形界面就包含在这个软件包中:
```
sudo apt install gnome-shell-extensions
```
这仅在 [GNOME 3.36][9] 及更高版本(即 Ubuntu 20.04 或更高版本)中有效。
启动扩展应用,你应该会在“内置”扩展下看到 Ubuntu Dock。只需关闭对应的开关即可禁用 dock。
![Disable Ubuntu Dock][10]
更改是即时的,你会看到 dock 立即消失。
你可以用相同的方式恢复。只需打开它,它就会立即显示。
在 Ubuntu 20.04 中非常容易隐藏 dock不是吗
### 替代方法:通过命令行禁用 Ubuntu dock
如果你是终端爱好者,喜欢在终端中完成各种任务,那么我有一个好消息:你也可以从命令行禁用 Ubuntu dock。
使用 Ctrl+Alt+T 打开终端。你可能已经知道 [Ubuntu 中的键盘快捷键][11]。
在终端中,使用以下命令列出所有可用的 GNOME 扩展:
```
gnome-extensions list
```
这将显示类似于以下的输出:
![List GNOME Extensions][12]
默认的 Ubuntu dock 扩展是 `ubuntu-dock@ubuntu.com`。你可以使用以下命令将其禁用:
```
gnome-extensions disable ubuntu-dock@ubuntu.com
```
屏幕上不会显示任何输出,但是你会注意到启动器或 dock 从左侧消失了。
如果需要,你可以再次启用它,只需把上面命令中的 `disable` 换成 `enable`:
```
gnome-extensions enable ubuntu-dock@ubuntu.com
```
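如果你经常需要在启用和禁用之间来回切换,也可以把上面两条命令组合成一个小脚本。下面是一个示意(`toggle_dock` 这个函数名是笔者虚构的),它借助 `gnome-extensions list --enabled` 判断 dock 当前是否启用,再自动选择 `enable` 或 `disable`:

```shell
# toggle_dock:若 Ubuntu Dock 当前已启用则禁用它,反之则启用
# 依赖 gnome-extensions 命令(GNOME 3.34 及更高版本提供)
toggle_dock() {
    local ext="ubuntu-dock@ubuntu.com"
    if gnome-extensions list --enabled | grep -qx "$ext"; then
        gnome-extensions disable "$ext"
        echo "已禁用 $ext"
    else
        gnome-extensions enable "$ext"
        echo "已启用 $ext"
    fi
}
```

把这个函数放进 `~/.bashrc` 后,随时输入 `toggle_dock` 即可切换 dock 的显示。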
### 总结
在 Ubuntu 18.04 中也有禁用 dock 的方法,但如果你尝试在 18.04 中删除相应的软件包,可能会引发意想不到的后果:删除该软件包也会连带删除 ubuntu-desktop 包,最终可能导致系统出现问题,例如应用菜单消失。
这就是为什么我不建议在 Ubuntu 18.04 上删除它的原因。
好消息是 Ubuntu 20.04 提供了一种隐藏任务栏的方法。用户拥有更大的自由度和更多的屏幕空间。说到更多的屏幕空间,你是否知道可以[从 Firefox 移除顶部标题栏并获得更多的屏幕空间][14]
我想知道你喜欢怎样的 Ubuntu 桌面:有 dock、没有 dock,还是干脆不用 GNOME?
--------------------------------------------------------------------------------
via: https://itsfoss.com/disable-ubuntu-dock/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://ubuntu.com/
[2]: https://itsfoss.com/keeping-ubuntu-unity-alive/
[3]: https://itsfoss.com/ubuntu-unity-shutdown/
[4]: https://www.gnome.org/
[5]: https://itsfoss.com/download-ubuntu-20-04/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/disable-dock-in-ubuntu.png?ssl=1
[7]: https://itsfoss.com/ubuntu-20-04-release-features/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/GNOME-extensions-app-ubuntu.jpg?ssl=1
[9]: https://itsfoss.com/gnome-3-36-release/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/disable-ubuntu-dock.png?ssl=1
[11]: https://itsfoss.com/ubuntu-shortcuts/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/list-gnome-extensions.png?ssl=1
[13]: https://itsfoss.com/cdn-cgi/l/email-protection
[14]: https://itsfoss.com/remove-title-bar-firefox/

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to stress test your Linux system)
[#]: via: (https://www.networkworld.com/article/3563334/how-to-stress-test-your-linux-system.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
如何对你的 Linux 系统进行压力测试
======
> 如果你想了解 Linux 服务器在重压之下的表现如何,那么对它进行压力测试是个不错的主意。在这篇文章中,我们将介绍一些可以帮助你施加压力并衡量结果的工具。
![](https://images.idgesg.net/images/article/2020/06/stress-test2_linux-penguin-stress-ball_hand-squeezing_by-digitalsoul-getty-images_1136841639-100850120-large.jpg)
为什么你会想给你的 Linux 系统施加压力呢?因为有时你可能想知道,当一个系统由于大量运行的进程、繁重的网络流量、过多的内存使用等原因而承受很大的压力时,它的表现如何。这种压力测试可以帮助确保系统已经做好了 “上市” 的准备。
如果你需要预测应用程序可能需要多长时间才能做出反应,以及哪些(如果有的话)进程可能会在重负载下失败或运行缓慢,那么在前期进行压力测试是一个非常好的主意。
幸运的是,对于那些需要能够预测 Linux 系统在压力下的反应的人来说,你可以采用一些有用的技术和工具来使这个过程更容易。在这篇文章中,我们将研究其中的一些。
### 自己动手做个循环
第一种技术是在命令行上运行一些循环,观察它们对系统的影响。这种方式可以大大增加 CPU 的负荷。使用 `uptime` 或类似的命令可以很容易地看到结果。
在下面的命令中,我们启动了四个无尽循环。你可以通过增减数字,或使用 bash 的序列表达式(如 `{1..6}`)来代替 `1 2 3 4`,以改变循环的数量:
```
for i in 1 2 3 4; do while : ; do : ; done & done
```
在命令行上输入后,将在后台启动四个无尽循环:
```
$ for i in 1 2 3 4; do while : ; do : ; done & done
[1] 205012
[2] 205013
[3] 205014
[4] 205015
```
在这种情况下,发起了作业 1-4作业号和进程号会相应显示出来。
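上面提到的 bash 序列表达式值得单独演示一下:`{1..6}` 会在命令执行前展开为 `1 2 3 4 5 6`,因此可以用它控制启动的循环个数。下面用无害的 `printf` 代替无尽循环来演示展开效果:

```shell
# 演示 bash 花括号展开:{1..6} 展开为序列 1 2 3 4 5 6
for i in {1..6}; do
    printf '将启动第 %s 个循环\n' "$i"
done
```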
要观察对平均负载的影响,请使用如下所示的命令。在本例中,`uptime` 命令每 30 秒运行一次:
```
$ while true; do uptime; sleep 30; done
```
如果你打算定期运行这样的测试,你可以将循环命令放入脚本 `watch-it` 中。
```
#!/bin/bash
while true
do
uptime
sleep 30
done
```
在输出中,你可以看到平均负载是如何增加的,然后在循环结束后又开始下降。
```
11:25:34 up 5 days, 17:27, 2 users, load average: 0.15, 0.14, 0.08
11:26:04 up 5 days, 17:27, 2 users, load average: 0.09, 0.12, 0.08
11:26:34 up 5 days, 17:28, 2 users, load average: 1.42, 0.43, 0.18
11:27:04 up 5 days, 17:28, 2 users, load average: 2.50, 0.79, 0.31
11:27:34 up 5 days, 17:29, 2 users, load average: 3.09, 1.10, 0.43
11:28:04 up 5 days, 17:29, 2 users, load average: 3.45, 1.38, 0.54
11:28:34 up 5 days, 17:30, 2 users, load average: 3.67, 1.63, 0.66
11:29:04 up 5 days, 17:30, 2 users, load average: 3.80, 1.86, 0.76
11:29:34 up 5 days, 17:31, 2 users, load average: 3.88, 2.06, 0.87
11:30:04 up 5 days, 17:31, 2 users, load average: 3.93, 2.25, 0.97
11:30:34 up 5 days, 17:32, 2 users, load average: 3.64, 2.35, 1.04 <== 循环停止
11:31:04 up 5 days, 17:32, 2 users, load average: 2.20, 2.13, 1.01
11:31:34 up 5 days, 17:33, 2 users, load average: 1.40, 1.94, 0.98
```
因为所显示的负载分别代表了 1、5 和 15 分钟的平均值,所以这些值需要一段时间才能恢复到系统接近正常的状态。
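补充一点:`uptime` 显示的三个平均负载值直接取自 Linux 的 `/proc/loadavg` 文件的前三个字段。下面的示例先用一行样例数据演示解析方式(在真实系统上,把 `echo` 部分换成 `cat /proc/loadavg` 即可):

```shell
# /proc/loadavg 的前三个字段即 1、5、15 分钟的平均负载
echo "3.93 2.25 0.97 3/817 205015" |
    awk '{printf "1min=%s 5min=%s 15min=%s\n", $1, $2, $3}'
# 输出:1min=3.93 5min=2.25 15min=0.97
```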
要停止循环,请发出像下面这样的 `kill` 命令 —— 假设作业号是 1-4就像本篇文章前面显示的那样。如果你不确定可以使用 `jobs` 命令来确认作业号。
```
$ kill %1 %2 %3 %4
```
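如果后台作业较多,不想逐个写出作业号,也可以用 `jobs -p`(列出所有后台作业的进程号)一次性结束它们。下面是一个示意:

```shell
# 启动两个示例后台作业,然后用 jobs -p 一次性全部结束
sleep 60 &
sleep 60 &
echo "当前后台作业数:$(jobs -p | wc -l)"
kill $(jobs -p)
```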
### 增加压力的专用工具
另一种方法是使用专门为你制造系统压力的工具。其中一种叫做 `stress`(压力),可以以多种方式对系统进行压力测试。`stress` 工具是一个工作负载生成器,提供 CPU、内存和磁盘 I/O 压力测试。
在使用 `--cpu` 选项时,`stress` 命令使用平方根函数强制 CPU 努力工作。指定的 CPU 数量越多,负载上升的速度就越快。
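`stress` 利用平方根运算让 CPU 忙碌的思路,可以用一小段 awk 来粗略模拟(这只是笔者的示意,并非 `stress` 的真实实现)。为了能快速结束,这里只迭代十万次,而 `stress` 的每个 worker 是无限循环:

```shell
# 反复计算平方根来占用 CPU 的粗略模拟(有限次,便于演示)
awk 'BEGIN { for (i = 1; i <= 100000; i++) s += sqrt(i); printf "%.0f\n", s }'
```

把迭代上限调大(或在外面再套一层无限循环),就能持续占用一个 CPU 核心。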
下面第二个脚本(`watch-it-2`)可以用来衡量对系统内存使用的影响。请注意,它使用 `free` 命令来查看加压的效果。
```
$ cat watch-it-2
#!/bin/bash
while true
do
free
sleep 30
done
```
发起任务并观察压力:
```
$ stress --cpu 2
$ ./watch-it
13:09:14 up 5 days, 19:10, 2 users, load average: 0.00, 0.00, 0.00
13:09:44 up 5 days, 19:11, 2 users, load average: 0.68, 0.16, 0.05
13:10:14 up 5 days, 19:11, 2 users, load average: 1.20, 0.34, 0.12
13:10:44 up 5 days, 19:12, 2 users, load average: 1.52, 0.50, 0.18
13:11:14 up 5 days, 19:12, 2 users, load average: 1.71, 0.64, 0.24
13:11:44 up 5 days, 19:13, 2 users, load average: 1.83, 0.77, 0.30
```
在命令行中指定的 CPU 数量越多,负载增加得越快。
```
$ stress --cpu 4
$ ./watch-it
13:47:49 up 5 days, 19:49, 2 users, load average: 0.00, 0.00, 0.00
13:48:19 up 5 days, 19:49, 2 users, load average: 1.58, 0.38, 0.13
13:48:49 up 5 days, 19:50, 2 users, load average: 2.61, 0.75, 0.26
13:49:19 up 5 days, 19:50, 2 users, load average: 3.16, 1.06, 0.38
13:49:49 up 5 days, 19:51, 2 users, load average: 3.49, 1.34, 0.50
13:50:19 up 5 days, 19:51, 2 users, load average: 3.69, 1.60, 0.61
```
`stress` 命令也可以通过 `--io`(输入/输出)和 `--vm`(内存)选项增加 I/O 和内存的负载来给系统施加压力。
在接下来的这个例子中,运行这个增加内存压力的命令,然后启动 `watch-it-2` 脚本。
```
$ stress --vm 2
$ ./watch-it-2
total used free shared buff/cache available
Mem: 6087064 662160 2519164 8868 2905740 5117548
Swap: 2097148 0 2097148
total used free shared buff/cache available
Mem: 6087064 803464 2377832 8864 2905768 4976248
Swap: 2097148 0 2097148
total used free shared buff/cache available
Mem: 6087064 968512 2212772 8864 2905780 4811200
Swap: 2097148 0 2097148
```
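如果只想跟踪 `free` 输出中的某一个数值(比如第 7 列的“可用”内存),可以用 awk 提取对应字段。下面先用上面输出中的一行样例数据演示解析方式(在真实系统上可改用 `free | awk '/^Mem:/ ...'`):

```shell
# 从 free 的 Mem: 行提取第 7 个字段,即“可用(available)”内存,单位 KiB
echo "Mem: 6087064 968512 2212772 8864 2905780 4811200" |
    awk '{print "available:", $7, "KiB"}'
# 输出:available: 4811200 KiB
```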
`stress` 的另一个选项是使用 `--io` 选项为系统添加输入/输出活动。在这种情况下,你可以使用这样的命令:
```
$ stress --io 4
```
然后你可以使用 `iotop` 观察受压的 I/O。注意运行 `iotop` 需要 root 权限。
之前:
```
$ sudo iotop -o
Total DISK READ: 0.00 B/s | Total DISK WRITE: 19.36 K/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 27.10 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
269308 be/4 root 0.00 B/s 0.00 B/s 0.00 % 1.24 % [kworker~fficient]
283 be/3 root 0.00 B/s 19.36 K/s 0.00 % 0.26 % [jbd2/sda1-8]
```
之后:
```
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
270983 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 51.45 % stress --io 4
270984 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 51.36 % stress --io 4
270985 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 50.95 % stress --io 4
270982 be/4 shs 0.00 B/s 0.00 B/s 0.00 % 50.80 % stress --io 4
269308 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.09 % [kworker~fficient]
```
`stress` 只是给系统增加压力的若干工具之一。另一个较新的工具,`stress-ng`,将在以后的文章中介绍。
### 总结
用于系统压力测试的各种工具可以帮助你预测系统在真实世界的情况下如何响应,在这些情况下,它们受到增加的流量和计算需求。
虽然本文展示的是制造和测量各种压力的方法,但其最终的意义在于:帮助你了解系统或应用程序在面对这些压力时会作何反应。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3563334/how-to-stress-test-your-linux-system.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to crop images in GIMP [Quick Tip])
[#]: via: (https://itsfoss.com/crop-images-gimp/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
如何使用 GIMP 裁剪图像(快速技巧)
======
你可能有很多原因要在 [GIMP][1] 中裁剪图像。例如,你可能希望删除无用的边框或信息来改善图像,或者你可能希望将最终图像的焦点落在某个特定细节上。
在本教程中,我将演示如何在 GIMP 中快速裁剪图像而又不影响精度。让我们来看看。
### 如何在 GIMP 中裁剪图像
![][2]
#### 方法 1
裁剪只是一种将图像修整到比原始图像更小区域的操作。裁剪图像的过程很简单。
你可以通过“工具”面板访问“裁剪工具”,如下所示:
![Use Crop Tool for cropping images in GIMP][3]
你还可以通过菜单访问裁剪工具:
**工具 → 变换工具 → 裁剪**(Tools → Transform Tools → Crop)
激活该工具后,你会注意到画布上的鼠标光标将变化以指示正在使用“裁剪工具”。
现在,你可以在图像画布上的任意位置单击鼠标左键,并将鼠标拖到某个位置以创建裁剪边界。此时你不必担心精度,因为你可以在实际裁剪之前修改最终选择。
![Crop Selection][4]
此时,将鼠标光标悬停在所选内容的四个角上会更改鼠标光标并高亮显示该区域。现在,你可以微调裁剪的选区。你可以单击并拖动任何一侧或角落来移动部分选区。
选定完区域后,你只需按键盘上的“**回车**”键即可进行裁剪。
如果你想重新开始或者不裁剪,你可以按键盘上的 “**Esc**” 键。
#### 方法 2
裁剪图像的另一种方法是使用“矩形选择工具”进行选择。
**工具 → 选择工具 → 矩形选择**(Tools → Selection Tools → Rectangle Select)
![][5]
然后,你可以使用与“裁剪工具”相同的方式高亮选区,并调整选区。选择好后,可以通过以下方式裁剪图像来适应选区。
**图像 → 裁剪到选区**(Image → Crop to Selection)
![][6]
#### 总结
对于 GIMP 用户而言,精确裁剪图像可以视为一项基本功能。你可以选择哪种方法更适合你的需求并探索其潜力。
如果你在操作过程中有任何疑问,请在下面的评论中告诉我。如果你“渴望”更多 [GIMP 教程][7],请在你喜欢的社交媒体平台上关注我们!
--------------------------------------------------------------------------------
via: https://itsfoss.com/crop-images-gimp/
作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.gimp.org/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Crop-images-in-GIMP.png?ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/Crop-tool.png?ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/06/Crop-selection.jpg?ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/select-1.gif?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/06/crop.gif?ssl=1
[7]: https://itsfoss.com/tag/gimp-tips/