Merge pull request #10 from LCTT/master

update from LCTT
This commit is contained in:
北梦南歌 2022-05-01 22:05:55 +08:00 committed by GitHub
commit 885b3be588
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
117 changed files with 3542 additions and 3991 deletions


@@ -0,0 +1,242 @@
[#]: collector: (lujun9972)
[#]: translator: (TravinDreek)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14529-1.html)
[#]: subject: (9 Decentralized, P2P and Open Source Alternatives to Mainstream Social Media Platforms Like Twitter, Facebook, YouTube and Reddit)
[#]: via: (https://itsfoss.com/mainstream-social-media-alternaives/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
9 个去中心化、点对点、开源的主流社交媒体平台替代品
======
你多半知道Facebook 因可能从它的“端到端加密”的聊天服务 WhatsApp 那里共享用户数据而 [遭到抨击][1]。
这些有争议的隐私政策变化使无数人转而使用 WhatsApp 替代品。
注重隐私的人们,早就料到了会有这事。毕竟,[Facebook 可是花了 190 亿美元收购了 WhatsApp 这样的手机应用][2],而当时靠它还赚不到什么钱。现在Facebook 该回本了 —— 回那之前投进去的 190 亿美元的本。他们可能打算把你的数据共享给广告商,这样的话,你看到的广告就会更加个性化(侵入性)了。
要是你受够了 Facebook、Google、Twitter 等科技公司的“我说了算”的态度,那你应该试试一些社交媒体平台的替代品。
这些社交平台的替代品都是开源的,它们都用了点对点或区块链技术来实现去中心化,而且其中一些平台你还可以自己托管。
### 开源和去中心化的社交网络
![Image Credit: Datonel on DeviantArt][3]
先说句实话,这些替代平台的体验,可能会和你惯用平台的体验有所差异,但这些平台是不会侵犯你的隐私和言论自由的。这就是一种权衡。
#### 1、Minds
- 用于替代Facebook 和 YouTube
- 特点:代码开源、区块链
- 自托管:否
在 Minds 上,你可以发视频、博客、图片,并设置当前状态。你也能向群聊,或者直接向好友,安全地发送消息或者进行视频聊天。通过热门内容和话题,你可以发现你感兴趣的文章。
还不止这些。你还能通过做贡献来赚取代币,这些代币可以用来升级你的频道。创作者可以从粉丝那里直接得到美元、比特币和以太坊的支付。
> **[Minds][4]**
#### 2、Aether
- 用于替代Reddit
- 特点:开源、点对点
- 自托管:否
![][5]
Aether 是一个开源、点对点的平台,用于创建自我管理的社区,其审核记录可供查阅,版主由选举产生。
Aether 上的内容具有短暂性,只会留存六个月,除非有人把它保存下来。因为它是点对点的,所以不存在中心服务器。
Aether 有趣的一点在于它的民主社区。社区可以选举版主,也能投票弹劾版主。
> **[Aether][6]**
#### 3、Mastodon
- 用于替代Twitter
- 特点:开源、去中心化
- 自托管:是
![][7]
在自由开源软件爱好者中,[Mastodon][8] 已经很有名了。我们之前报道过 [Twitter 的开源替代品 Mastodon][9],并且 [你也可以在 Mastodon 上关注我们][10]。
Mastodon 并不像 Twitter 那样是一个单一网站,它是一个由数千个社区组成的网络,这些社区由不同的组织和个人运营,共同提供无缝的社交媒体体验。这被称之为“Fediverse”。
你可以托管自己的 Mastodon 实例,并选择将其连接到其他 Mastodon 实例,或者直接加入一个已有的 Mastodon 实例,比如说 [Mastodon Social][11]。
> **[Mastodon][8]**
#### 4、LBRY
- 用于替代YouTube
- 特点:开源、去中心化、区块链
- 自托管:否
![][12]
[LBRY][13] 的核心是一个基于区块链的去中心化协议。协议顶层,便是由其加密货币驱动的数字市场。
通过 LBRY创作者可以提供多种数字化内容例如影片、书籍和游戏。基本上它是作为 YouTube 的替代品而受到推崇的。你可以在 Odysee 上访问这个视频共享平台。
我们之前 [报道过 LBRY][14],你可以去读那篇文章了解详情。
> **[LBRY][15]**
#### 5、Pixelfed
- 用于替代Instagram
- 特点:去中心化、开源
- 自托管:是
![][31]
Pixelfed 和 Mastodon 使用了相同的底层开放协议,即 ActivityPub。
因此,你也可以通过 Pixelfed 与 Mastodon 的实例进行互动。我还没有试过,但从理论上讲,你应该可以做到这一点。你应该找到几个活跃的 Pixelfed 实例来注册。
如果你想控制你的数据和隐私Pixelfed 是 Instagram 的一个简单替代品。你可以控制你的图片的隐私,在平台上没有任何广告。
你可以得到与那个照片分享平台基本相同的功能。不过,它的时间线没有算法驱动,而是遵循时间顺序,也不会为了提供个性化体验而收集你的任何数据。
> **[Pixelfed][32]**
#### 6、PeerTube
- 用于替代YouTube
- 特点:去中心化、点对点
- 自托管:是
![][19]
PeerTube 由法国公司 Framasoft 开发它是一个去中心化的视频平台。PeerTube 使用了 [BitTorrent 协议][20] 以在用户之间共享宽带。
PeerTube 旨在抵制企业的垄断,它不依靠广告,并且也不会追踪你。不过要注意,你的 IP 地址在这里不是匿名的。
目前有许多 PeerTube 的实例,你可以在那里托管你的视频。有些实例需要付费,不过大多数都是免费的。
> **[PeerTube][21]**
#### 7、Diaspora
- 用于替代Facebook
- 特点:去中心化、开源
- 自托管:是
Diaspora 是最早的去中心化社交网络之一。最早可以追溯到 2010 年,当时 Diaspora 就作为 Facebook 的替代品而受到吹捧。最初几年,它确实得到了一些应得的关注,但它只在小众范围内得到了使用。
和 Mastodon 类似Diaspora 由许多“<ruby>豆荚<rt>pod</rt></ruby>” (节点服务器)组成。你可以在一个“豆荚”上注册,或者托管你自己的“豆荚”。科技公司无法拥有你的数据,只有你可以。
> **[Diaspora][22]**
#### 8、DTube
- 用于替代YouTube
- 特点:去中心化、区块链
- 自托管:否
![][23]
DTube 是一个基于区块链的去中心化 YouTube 复制品。之所以说它是 YouTube 复制品,是因为它的界面太像 YouTube 了。
DTube 像其他基于区块链的社交媒体一样,是由 DTube 币DTC驱动的。每当有人观看创作者的视频或者与之互动创作者就会获得 DTC。这些代币可以用于推广内容或者通过合作的加密货币交易所来提现。
> **[DTube][24]**
#### 9、Signal
- 用于替代WhatsApp、Facebook Messenger
- 特点:开源
- 自托管:否
![][25]
与端到端加密的 WhatsApp 聊天不同Signal 不会跟踪你,不会共享你的数据,也不会侵犯你的隐私。
[Signal 一举成名][26],是在它得到 Edward Snowden 的认可之时。而当 WhatsApp 开始与 Facebook 共享数据时Elon Musk 又发了关于 Signal 的推文,这便让 Signal 更受瞩目了。
Signal 使用了自己的开源 Signal 协议,以提供端到端加密的消息和通话服务。
> **[Signal][28]**
#### KARMA已终止
- 用于替代Instagram
- 特点:去中心化、区块链
- 自托管:否
![][16]
这也是一个基于区块链的社交网络,由加密货币驱动。
KARMA 是 Instagram 的一个复制品,它构建于开源区块链平台 [EOSIO][17] 之上。每当你的内容获得点赞和分享,你就会得到 KARMA 代币。你可以用这些代币来推广你的内容,或者通过一个合作的加密货币交易所,将其兑换为现实货币。
KARMA 只能在手机上使用,可以在 Play Store 及 App Store 上获取。
> **[KARMA][18]**
#### 还有别的吗?
还有一些其他的服务,它们虽然不是开源或者去中心化的,但也尊重你的隐私与言论自由。
* [MeWe][29]Facebook 替代品
* [Voice][30]NFT 为数字艺术家赋能
* [ProtonMail][33]Gmail 替代品
还有一个基于 Matrix 协议的 [Element 聊天工具][34],你也可以试试。
我知道,应该还有几个别的社交媒体平台的替代品。你也想分享一下吗?我可能会把它们加到列表中来。
要是你也得在这个列表中选一个平台,你想选哪个呢?
--------------------------------------------------------------------------------
via: https://itsfoss.com/mainstream-social-media-alternaives/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[Peaksol](https://github.com/TravinDreek)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://arstechnica.com/tech-policy/2021/01/whatsapp-users-must-share-their-data-with-facebook-or-stop-using-the-app/
[2]: https://money.cnn.com/2014/02/19/technology/social/facebook-whatsapp/index.html
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/1984-quote.png?resize=800%2C450&ssl=1
[4]: https://www.minds.com/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/aether-reddit-alternative.png?resize=800%2C600&ssl=1
[6]: https://getaether.net
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/mastodon.png?resize=800%2C623&ssl=1
[8]: https://joinmastodon.org/
[9]: https://itsfoss.com/mastodon-open-source-alternative-twitter/
[10]: https://mastodon.social/@itsfoss
[11]: https://mastodon.social
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/lbry-interface.jpg?resize=800%2C420&ssl=1
[13]: https://lbry.org
[14]: https://itsfoss.com/lbry/
[15]: https://lbry.tv/$/invite/@itsfoss:0
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/karma-app.jpg?resize=800%2C431&ssl=1
[17]: https://eos.io
[18]: https://karmaapp.io
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/peertube-federation-multiplicity.jpg?resize=600%2C341&ssl=1
[20]: https://www.slashroot.in/what-bittorrent-protocol-and-how-does-bittorrent-protocol-work
[21]: https://joinpeertube.org
[22]: https://diasporafoundation.org
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/dtube.jpg?resize=800%2C516&ssl=1
[24]: https://d.tube
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/12/signal-shot.jpg?resize=800%2C565&ssl=1
[26]: https://itsfoss.com/signal-messaging-app/
[27]: https://www.britannica.com/biography/Elon-Musk
[28]: https://www.signal.org
[29]: https://mewe.com
[30]: https://www.voice.com
[31]: https://itsfoss.com/wp-content/uploads/2022/04/pixelfed-decentralized.jpg
[32]: https://pixelfed.org/
[33]: https://itsfoss.com/recommends/protonmail/
[34]: https://itsfoss.com/element/


@@ -0,0 +1,170 @@
[#]: collector: (lujun9972)
[#]: translator: (hwlife)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14522-1.html)
[#]: subject: (6 tips for securing your WordPress website)
[#]: via: (https://opensource.com/article/20/4/wordpress-security)
[#]: author: (Lucy Carney https://opensource.com/users/lucy-carney)
保护 WordPress 网站的 6 个技巧
======
> 即使初学者也可以,并且应该采取这些步骤来保护他们的 WordPress 网站免受网络攻击。
![](https://img.linux.net.cn/data/attachment/album/202204/29/154648l33xt7xg6gk2nr8v.jpg)
WordPress 已经驱动了互联网上 30% 的网站,它是世界上增长最快的 <ruby>内容管理系统<rt>content management system</rt></ruby>CMS而且不难看出原因凭借大量可用的定制化代码和插件、一流的 <ruby>搜索引擎优化<rt>Search Engine Optimization</rt></ruby>SEO以及在博客界超高的美誉度WordPress 赢得了很高的知名度。
然而随着知名度而来的也有一些不太好的关注。WordPress 是入侵者、恶意软件和网络攻击的常见目标。事实上,在 2019 年被黑客攻击的 CMS 中WordPress [约占 90%][2]。
无论你是 WordPress 新用户或者有经验的开发者,这里有一些你可以采取的重要步骤来保护你的 WordPress 网站。以下 6 个关键技巧将帮助你起步。
### 1、选择可靠的托管主机
主机是所有网站无形的基础,没有它,你就不能在线发布你的网站。但是主机的作用远不止简单地托管你的网站,它还对你的网站速度、性能和安全负责。
第一件要做的事情就是检查主机在它的套餐中是否包含 SSL 安全协议。
无论你是运行一个小博客或是一个大的在线商店SSL 协议都是所有网站必需的安全功能。如果你正在进行线上交易,你还需要 [高级 SSL 数字证书][3] ,但是对大多数网站来说,基本免费的 SSL 证书就很好了。
其他需要注意的安全功能包括以下几种:
* 日常的自动离线网站备份
* 恶意软件和杀毒软件扫描和删除
* <ruby>分布式拒绝服务攻击<rt>Distributed denial of service</rt></ruby>DDoS防护
* 实时网络监控
* 高级防火墙保护
除了这些数字安全功能之外,你的主机供应商的 _物理_ 安全措施也值得考虑。这些包括用安全警卫、闭路监控和双因素认证或生物识别来限制对数据中心的访问。
### 2、使用安全插件
保护你的网站安全最有效且容易的方法之一是安装一个安全插件,比如 [Sucuri][4],它是一个 GPLv2 许可的开源软件。安全插件是非常重要的,因为它们能将安全管理自动化,这意味着你能够集中精力运行你的网站,而不是花大量的时间来与在线威胁作斗争。
这些插件能探测、阻止恶意攻击,并提醒你注意任何需要关注的问题。简言之,它们持续在后台运行,保护你的网站,这意味着你不必 7 天 24 小时保持清醒,与黑客、漏洞和其他数字垃圾斗争。
一个好的安全插件会免费提供所有必要的安全功能,但是一些高级功能需要付费订阅。举个例子,如果你想要解锁 [Sucuri 的网站防火墙][5],你就需要付费。开启 <ruby>网站应用防火墙<rt>web application firewall</rt></ruby>WAF可以阻挡常见的威胁并为你的网站添加一个额外的安全层所以在选择安全插件的时候寻找带有这个功能的插件是一个好主意。
### 3、选择值得信任的插件和主题
WordPress 的魅力在于它是开源的,所以任何人都能提供他们开发的主题和插件。但这也给挑选高质量的主题和插件带来了一些问题。
有一些免费的主题或插件设计较差,更糟糕的是,其中可能隐藏着恶意代码。
为了避免这种情况,请始终从可靠的来源获取免费主题和插件,比如 WordPress 主题库。阅读用户评论,并调查开发者是否构建过其他程序。
过时的或设计不良的主题和插件,可能为攻击者进入你的网站留下“后门”或漏洞,这就是为什么选择时要谨慎。此外,你还应该提防“破解版”的主题,也就是被黑客篡改过、非法销售的高级主题。你可能买到一个破解版主题,它看起来没什么问题,却会通过隐藏的恶意代码破坏你的网站。
为了避免破解版主题,不要被打折的价格所吸引,始终坚持可靠的主题商店,比如官方的 [WordPress 目录][6]。如果你在其它地方寻找,请坚持选择大型且值得信任的商店,比如自 2010 年就开始经营的主题和插件商店 [Themify][7]。Themify 确保它的所有 WordPress 主题通过了 <ruby>[谷歌移动友好][8]<rt>Google Mobile-Friendly</rt></ruby> 测试,并在 [GNU 通用公共许可证][9] 下开源。
### 4、运行定期更新
这是 WordPress 的基本规则:始终保持你的网站是最新的。然而,不是所有人都遵守这条规则,只有 [43% 的 WordPress 网站][10] 运行着最新版本。
问题是,当你的网站过期时,它在安全和性能修复方面就会落后,容易受到故障、漏洞、入侵和崩溃的影响。过期的网站无法像最新的网站那样修复漏洞,而攻击者能够分辨出哪些网站是过期的,这意味着他们可以据此搜索出最易受攻击的网站并袭击它们。
这就是为什么你始终要运行最新的 WordPress 版本的原因。为了保持网站安全处于最强的状态,你必须更新你的插件和主题,以及你的核心 WordPress 软件。
如果你选择的是托管式的 WordPress 套餐,你可能会发现供应商会为你检查并运行更新,请了解一下你的主机是否提供软件和插件更新。如果没有,你可以安装一个开源的插件管理器作为替代,比如 GPLv2 许可的 [Easy Updates Manager][11] 插件。
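如果你习惯命令行,也可以用 WP-CLI 手动运行这些更新。下面是一个示意(假设你的服务器上已经安装了 `wp` 命令,并在 WordPress 站点目录下执行):

```shell
# 示意:用 WP-CLI 手动更新核心、插件和主题
$ wp core update          # 更新 WordPress 核心
$ wp plugin update --all  # 更新所有插件
$ wp theme update --all   # 更新所有主题
```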
### 5、强化你的登录
除了通过仔细选择主题和安装安全插件来创建一个安全的 WordPress 网站外,你还需要防止未经授权的登录访问。
#### 密码保护
第一步,也是增强登录安全最简单的方法,就是更改你的密码,特别是当你还在使用 [容易猜到的短语][12],比如 “123456” 或 “qwerty” 的时候。
尝试使用长的密码短语而不是单个单词,这样更难被破解。最好的方式是把一系列你容易记住但互不相关的单词组合起来。
这里有一些其它的提示:
* 绝不要重复使用密码
* 密码不要包括像家庭成员的名字或者你喜欢的球队等明显的单词
* 不要和任何人分享你的登录信息
* 你的密码要包括大小写和数字来增加复杂程度
* 不要在任何地方写下或者存储你的登录信息
* 使用 [密码管理器][13]
#### 变更你的登录地址
将默认登录网址从标准格式 `yourdomain.com/wp-admin` 变更是一个好主意。这是因为黑客也知道这个缺省登录网址,所以不变更它会有被暴力破解的风险。
为避免这种情况,可以将登录网址变更为不同的网址。使用 GPLv2 许可的开源插件,比如 [WPS Hide Login][14],可以更加安全、快速和轻松地自定义登录地址。
#### 应用双因素认证
为了提供更多的保护,阻止未授权的登录和暴力破解,你应该添加双因素认证。这意味着即使有人 _确实_ 得到了你的登录信息,但是他们还需要一个直接发送到你的手机上的验证码,来获得对你的 WordPress 网站管理的权限。
添加双因素认证是非常容易的,只需要安装另一个插件,在 WordPress 插件目录搜索 “two-factor authentication” ,然后选择你要的插件。其中一个选择是 [Two Factor][15] ,这是一个流行的 GPLv2 许可的插件,已经有超过 10000 次安装。
#### 限制登录尝试
WordPress 默认允许你多次尝试登录,这是为了帮助你在记错密码时仍能登录。然而,这也帮助了那些试图未经授权访问 WordPress 网站并发布恶意代码的黑客。
为了应对暴力破解,安装一个插件来限制登录尝试,并设置你允许猜测的次数。
### 6、禁用文件编辑功能
这不是一个适合初学者的步骤,除非你是个自信的程序员,否则不要尝试它。并且一定要先备份你的网站。
话虽如此,如果你真的想保护你的 WordPress 网站,禁用文件编辑功能 _是_ 一个重要的措施。如果你不隐藏你的文件,就意味着任何人都可以从管理后台编辑你的主题和插件代码,一旦入侵者进入,那就危险了。
为了拒绝未授权的访问,转到你的 `.htaccess` 文件并输入:
```
<Files wp-config.php>
order allow,deny
deny from all
</Files>
```
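添加规则后,可以用 curl 简单验证它是否生效(这里的 example.com 是占位域名;前提是你的 Web 服务器启用了 `.htaccess` 覆盖):

```shell
# 若规则生效,响应状态应为 403 Forbidden
$ curl -I https://example.com/wp-config.php
```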
或者,要从你的 WordPress 管理后台直接移除主题和插件的编辑选项,可以编辑你的 `wp-config.php` 文件,添加:
```
define( 'DISALLOW_FILE_EDIT', true );
```
保存并重新加载这个文件,插件和主题编辑器将会从你的 WordPress 管理后台菜单中消失,阻止任何人编辑你的主题或者插件代码,包括你自己。如果你需要恢复访问你的主题和插件代码,只需要删除你添加在 `wp-config.php` 文件中的代码即可。
无论你是阻止未授权的访问,还是完全禁用文件编辑功能,采取行动保护你网站的代码都很重要。否则,不受欢迎的访问者很容易编辑你的文件并添加新代码。这意味着攻击者可以利用编辑器从你的 WordPress 站点获取数据,甚至利用你的网站对其他站点发起攻击。
更简单的隐藏文件的方式,是让 Sucuri 之类的安全插件来为你代劳。
### WordPress 安全概要
WordPress 是一个优秀的开源平台,初学者和开发者都应该享受它,而不用担心成为攻击的受害者。遗憾的是,这些威胁不会很快消失,所以保持网站的安全至关重要。
利用以上措施,你可以创建一个更加健壮、更安全的保护水平的 WordPress 站点,并给自己带来更好的使用体验。
保持安全是一个持续的任务,而不是一次性的检查清单,所以一定要定期重温这些步骤,并在建立和使用你的 CMS 时保持警惕。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/wordpress-security
作者:[Lucy Carney][a]
选题:[lujun9972][b]
译者:[hwlife](https://github.com/hwlife)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lucy-carney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://cyberforces.com/en/wordpress-most-hacked-cms
[3]: https://opensource.com/article/19/11/internet-security-tls-ssl-certificate-authority
[4]: https://wordpress.org/plugins/sucuri-scanner/
[5]: https://sucuri.net/website-firewall/
[6]: https://wordpress.org/themes/
[7]: https://themify.me/
[8]: https://developers.google.com/search/mobile-sites/
[9]: http://www.gnu.org/licenses/gpl.html
[10]: https://wordpress.org/about/stats/
[11]: https://wordpress.org/plugins/stops-core-theme-and-plugin-updates/
[12]: https://www.forbes.com/sites/kateoflahertyuk/2019/04/21/these-are-the-worlds-most-hacked-passwords-is-yours-on-the-list/#4f157c2f289c
[13]: https://opensource.com/article/16/12/password-managers
[14]: https://wordpress.org/plugins/wps-hide-login/
[15]: https://en-gb.wordpress.org/plugins/two-factor/


@@ -1,43 +1,40 @@
[#]: collector: (lujun9972)
[#]: translator: (hwlife)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14524-1.html)
[#]: subject: (Automate setup and delivery for virtual machines in the cloud)
[#]: via: (https://opensource.com/article/21/1/testcloud-virtual-machines)
[#]: author: (Sumantro Mukherjee https://opensource.com/users/sumantro)
在云自动化设置和交付虚拟机
======
> 通过使用 Testcloud 自动化设置过程并交付一个准备运行的虚拟机,在几分钟之内准备好一个云镜像。
![](https://img.linux.net.cn/data/attachment/album/202204/30/130336l2l1a77p7m8hwp28.jpg)
如果你是一个在云端使用 Fedora [qcow2 镜像][2] 的开发者或者爱好者,在一个镜像准备使用之前,你总是不得不做一大堆初始化设置。我对此深有体会,所以我很想找到一种使设置过程更加简单的方法。碰巧,整个 Fedora 质量保证团队也有同感,所以我们开发了 [Testcloud][3]
Testcloud 是一个可以轻松的在几分钟之内准备云镜像测试的工具。它用几个命令就可以在云端自动化设置并交付准备运行的虚拟机VM
Testcloud
1. 下载 qcow2 镜像
2. 用你选择的名称创建实例
3. 创建一个密码为 `passw0rd`,用户名为 `fedora` 的用户
4. 分配一个 IP 地址,以便于你之后用 SSH 登录到云端
5. 启动、停止、删除和列出一个实例
### 安装 Testcloud
要开始你的旅程,首先你必须安装 Testcloud 软件包。你可以通过终端或者“软件”应用来安装它。在这两种情况下,软件包的名字都是 `testcloud` 。用以下命令安装:
```
$ sudo dnf install testcloud -y
```
一旦安装完成,将你所需要的用户添加到 `testcloud` 用户组,这有助于 Testcloud 自动完成设置过程的剩余部分。执行这两个命令,添加你的用户到 `testcloud` 用户组,并通过提升组权限重启会话:
```
$ sudo usermod -a -G testcloud $USER
@@ -46,43 +43,39 @@ $ su - $USER
![添加用户到 testcloud 组][4]
(Sumantro Mukherjee, [CC BY-SA 4.0][5])
### 像老手一样玩转云镜像
一旦你的用户获得了所需的组权限,创建一个实例:
```
$ testcloud instance create <instance name> -u <url for qcow2 image>
```
或者,你可以使用 `fedora:latest/fedora:XX``XX` 是你的 Fedora 发行版本)来代替 完整的 URL 地址:
```
$ testcloud instance create <instance name> -u fedora:latest
```
这将返回你的虚拟机的 IP 地址:
```
$ testcloud instance create testcloud272593 -u https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2
[...]
INFO:Successfully booted instance testcloud272593
The IP of vm testcloud272593: 192.168.122.202
------------------------------------------------------------
To connect to the VM, use the following command (password is 'passw0rd'):
ssh fedora@192.168.122.202
------------------------------------------------------------
```
你可以用默认用户 `fedora` 登录,密码是 `passw0rd`(注意是零)。你可以使用 `ssh`、`virt-manager` 或者支持连接到 libvirt 虚拟机方式来连接到它
另一种创建 Fedora 云的方式是:
```
$ testcloud instance create testcloud193 -u fedora:33
 
WARNING:Not proceeding with backingstore cleanup because there are some testcloud instances running.
You can fix this by following command(s):
testcloud instance stop testcloud272593
@@ -93,11 +86,11 @@ DEBUG:Creating instance directories
DEBUG:creating seed image /var/lib/testcloud/instances/testcloud193/testcloud193-seed.img
INFO:Seed image generated successfully
INFO:Successfully booted instance testcloud193
The IP of vm testcloud193: 192.168.122.225
------------------------------------------------------------
To connect to the VM, use the following command (password is 'passw0rd'):
ssh fedora@192.168.122.225
------------------------------------------------------------
```
### 玩转实例
@@ -106,20 +99,18 @@ Testcloud 可以用来管理实例。这包括像列出镜像或者停止和启
要列出实例,使用 `list` 子命令:
```
$ testcloud instance list                
Name                            IP                      State    
------------------------------------------------------------
testcloud272593                 192.168.122.202         running    
testcloud193                    192.168.122.225         running    
testcloud252793                 192.168.122.146         shutoff    
testcloud93                     192.168.122.152         shutoff
```
要停止一个运行的实例:
```
$ testcloud instance stop testcloud193  
DEBUG:stop instance: testcloud193
@@ -128,7 +119,6 @@ DEBUG:stopping instance testcloud193.
要删除一个实例:
```
$ testcloud instance destroy testcloud193  
DEBUG:remove instance: testcloud193
@@ -139,7 +129,6 @@ DEBUG:removing instance /var/lib/testcloud/instances/testcloud193 from disk
要重启一个运行中的实例:
```
$ testcloud instance reboot testcloud93                                                                                        
DEBUG:stop instance: testcloud93
@@ -158,7 +147,7 @@ via: https://opensource.com/article/21/1/testcloud-virtual-machines
作者:[Sumantro Mukherjee][a]
选题:[lujun9972][b]
译者:[hwlife](https://github.com/hwlife)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -3,110 +3,110 @@
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14518-1.html"
LibreWolf vs Firefox谁是真的隐私英雄
======
![](https://img.linux.net.cn/data/attachment/album/202204/28/100907sefofznr9dgrxgxo.jpg)
Firefox 是最好的跨平台 [开源网页浏览器][1] 之一。
更不用说,它是那些基于 Chromium 的浏览器的唯一可行的替代品(也许?)
LibreWolf 是另一个有趣的选择,它最初是 Firefox 浏览器的一个复刻,试图比 Firefox 浏览器做得更好,以增强开箱即用的隐私/安全性。
但是,选择 LibreWolf 而不是 Firefox 真的有用吗?有哪些不同之处?让我们来看一看。
### 用户界面
鉴于 [LibreWolf][2] 是 Firefox 的一个复刻,其用户界面是相同的,只是有一些细微的变化。
![Firefox UI][3]
例如,它在书签菜单中没有到 Firefox 网站的链接,并且去除了 “<ruby>添加到 Pocket<rt>Add to Pocket</rt></ruby>” 按钮。
取而代之的是,你可以在地址栏的右边找到一个扩展的图标和下载管理器。
![LibreWolf UI][4]
是的,你不再需要前往菜单来访问下载的内容
如果你认为 Firefox 中的额外功能令人烦恼,那么 LibreWolf 应该是一种干净的体验。
### 搜索供应商
默认情况下Firefox 使用谷歌作为其搜索引擎,因为它们是官方合作伙伴,也就是说,谷歌付费成为默认搜索引擎。
![][5]
虽然你可以很轻松地将默认的搜索供应商改为 DuckDuckGo、Startpage 或其他任何东西,但默认的搜索供应商对大多数用户来说仍然很重要
而对于 LibreWolf它的默认的搜索引擎是 DuckDuckGo。众所周知它是最好的尊重隐私的搜索引擎之一。
![][6]
应该注意的是,注重隐私的搜索引擎在某些使用情况下可能不如谷歌好。因此,如果搜索引擎的选择对你来说并不是个问题Firefox 浏览器可以说是很好。
但是如果你想对自己的搜索历史保密LibreWolf 的默认搜索供应商肯定是一个更好的选择。
### 强化隐私
Mozilla Firefox 具有令人难以置信的可定制性。如果你想付出努力,你可以在 Firefox 上增强你的数字隐私。
然而,如果你想避免投入大量时间来调整 Firefox 的体验LibreWolf 可是一个不错的选择。
LibreWolf 具有一些开箱即用的最佳设置,以确保你摆脱网上的跟踪器,以获得安全的在线体验。
例如,它的默认带有 UBlock 内容拦截器,以消除跟踪你在线活动的跟踪器/脚本。默认的搜索引擎是 DuckDuckGo在一定程度上也有帮助。
![][7]
此外LibreWolf 还启用了 Firefox 增强跟踪保护的严格模式。换句话说,它可以积极地阻止跟踪器,这可能会导致一些网页不能像预期那样工作。
![][8]
虽然 LibreWolf 建议不要改变这些设置,但如果你发现在此设置下网页被破坏,你可以选择使用 Firefox。
Firefox 默认启用了基本保护,可以摆脱常见的追踪器,而不会破坏网页的用户体验。
除了这些设置外LibreWolf 还默认在退出时删除 Cookie 和网站数据。如果你想继续登录网站并迅速恢复你的浏览会话,这可能会很烦人
对于 Firefox它确实具有相同的选项但它默认情况下仍然是禁用的。因此,如果你想避免调整内置设置以获得方便的体验,你应该选择 Firefox。
![][9]
难怪 Firefox 仍然是 [Linux 最佳浏览器][10] 之一。相比增强隐私,大多数用户更喜欢方便,同时还能跨平台使用浏览器。
### 谷歌安全浏览
<ruby>谷歌安全浏览<rt>Google Safe Browsing</rt></ruby>”是一项有用的服务,可以警告、标记可疑网站的恶意活动。
大多数浏览器使用它来实现安全的用户体验。你不需要成为发现钓鱼/恶意软件网站的专家,谷歌安全浏览可以帮助你发现它们。
Mozilla Firefox 使用它的另一个名字<ruby>钓鱼保护<rt>Phishing Protection</rt></ruby>,它是默认启用的。
然而,在 LibreWolf 中,谷歌安全浏览服务默认是禁用的,以避免连接到谷歌服务。你可以启用它,但它不是用户通常在设置浏览器时会注意到的东西。
![][11]
因此,如果你在避免恶意网站方面需要更多帮助Firefox 应该是一个很好的开箱即用的解决方案。如果你对这些很清楚,你可以使用 LibreWolf并在需要时启用该设置。
### 附加功能
LibreWolf 可以摆脱 Firefox 上的任何附加产品。
例如,默认情况下LibreWolf 与 Mozilla 服务器没有任何连接。这也意味着 LibreWolf 摆脱了遥测。它所反映的一些变化包括:
* LibreWolf 中没有同步/登录功能。
* 没有 “添加到 Pocket” 的按钮
* 你不会在扩展页面上加载 Mozilla 的附加组件/主题。
![][12]
如果你想使用 Mozilla 帐户来同步你的历史记录/书签和浏览器数据Firefox 是最好的选择。如果你需要,还有 Firefox VPN。
![][13]
@@ -126,15 +126,15 @@ Firefox 可用于 Android 和 iOS并且适用于各种屏幕尺寸和设备
相比之下Mozilla 基金会是一个更大的组织,并且一直在树立非凡的榜样来促进可定制性、隐私和安全性。
Firefox 会比 LibreWolf 更快地收到更新,如果你担心浏览器的安全性,这是一个重要方面。
Firefox 属于一个大组织并没有严重的缺点,但是 Mozilla 为其用户提出的未来可能会有一些你可能不喜欢的决定(或变化)。
但是LibreWolf 作为一个社区项目,会优先考虑用户偏好。
### 总结
如果方便是你在意的,你需要同步/登录账户功能、Mozilla 的特定功能以及基本的隐私保护Mozilla Firefox 应该更适合你。
如果你不想要开箱即用的云同步功能、附加功能和以隐私为中心的核心设置LibreWolf 将是完美的解决方案。
@@ -151,7 +151,7 @@ via: https://itsfoss.com/librewolf-vs-firefox/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -3,25 +3,24 @@
[#]: author: "Gaurav Kamathe https://opensource.com/users/gkamathe"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14525-1.html"
我最喜欢的 Go 构建选项
======
> 这些方便的 Go 构建选项可以帮助你更好地理解 Go 的编译过程。
![](https://img.linux.net.cn/data/attachment/album/202204/30/172121exam5k8vx45kzk7p.jpg)
学习一门新的编程语言最令人欣慰的部分之一,就是最终运行一个可执行文件,并获得预期的输出。当我开始学习 Go 这门编程语言时,我先是阅读一些示例程序来熟悉语法,然后是尝试写一些小的测试程序。随着时间的推移,这种方法帮助我熟悉了编译和构建程序的过程。
Go 的构建选项提供了更好地控制构建过程的方法。它们还可以提供额外的信息,帮助把这个过程分成更小的部分。在这篇文章中,我将演示我所使用的一些选项。注意:我使用的<ruby>构建<rt>build</rt></ruby>”和“<ruby>编译<rt>compile</rt></ruby>这两个词是同一个意思。
### 开始使用 Go
我使用的 Go 版本是 1.16.7。但是,这里给出的命令应该也能在最新的版本上运行。如果你没有安装 Go你可以从 [Go 官网][2] 上下载它,并按照说明进行安装。你可以通过打开一个命令提示符,并键入下面的命令来验证你所安装的版本:
```
$ go version
@@ -33,7 +32,7 @@ $ go version
go version go1.16.7 linux/amd64
```
### 基本的 Go 程序的编译和执行方法
我将从一个在屏幕上简单打印 “Hello World” 的 Go 程序示例开始,就像下面这样:
@@ -79,7 +78,7 @@ hello.go
### 更多细节
上面的命令就像一阵风一样,一下子就运行完了我的程序。然而,如果你想知道 Go 在编译这些程序的过程中做了什么Go 提供了一个 `-x` 选项,它可以打印出 Go 为产生这个可执行文件所做的一切。
简单看一下你就会发现Go 在 `/tmp` 内创建了一个临时工作目录,并生成了可执行文件,然后把它移到了 Go 源程序所在的当前目录。
@@ -104,7 +103,8 @@ nDueg0kBjIygx25rYwbK/W-eJaGIOdPEWgwC6o546 \
mv $WORK/b001/exe/a.out hello
rm -r $WORK/b001/
```
这有助于解决在程序运行后却在当前目录下没有生成可执行文件的谜团。使用 `-x` 显示可执行文件确实在 `/tmp` 工作目录下创建并被执行了。然而,与 `build` 命令不同的是,可执行文件并没有移动到当前目录,这使得看起来没有可执行文件被创建。
```
$ go run -x hello.go
@@ -189,9 +189,9 @@ searching for runtime.a in /usr/lib/golang/pkg/linux_amd64/runtime.a
现在我已经解释了 Go 程序的编译过程,接下来,我将演示 Go 如何通过在实际的 `build` 命令之前提供 `GOOS``GOARCH` 这两个环境变量,来允许你构建针对不同硬件架构和操作系统的可执行文件。
这有什么用呢?举个例子,你会发现为 ARMarch64架构制作的可执行文件不能在英特尔x86_64架构上运行而且会产生一个 Exec 格式错误。
下面的这些选项使得生跨平台的二进制文件变得小菜一碟:
```
$ GOOS=linux GOARCH=arm64 go build hello.go
@@ -206,11 +206,11 @@ $ uname -m
x86_64
```
你可以阅读我之前的博文,以更多了解我在 [使用 Go 进行交叉编译][3] 方面的经验。
### 查看底层汇编指令
源代码并不会直接转换为可执行文件,尽管它生成了一种中间汇编格式,然后最终被组装为可执行文件。在 Go 中,这被映射为一种中间汇编格式,而不是底层硬件汇编指令。
要查看这个中间汇编格式,请在使用 `build` 命令时,提供 `-gcflags` 选项,后面跟着 `-S`。这个命令将会显示使用到的汇编指令:
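作为示意(假设你已经安装了 Go 工具链),完整的命令形式如下:

```shell
# 打印编译 hello.go 时生成的中间汇编指令清单
# 输出中会包含类似 TEXT main.main(SB) 的指令
$ go build -gcflags="-S" hello.go
```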
@@ -246,7 +246,7 @@ TEXT main.main(SB) /test/hello.go
### 分离二进制文件以减少其大小
Go 的二进制文件通常比较大。例如, 一个简单的 Hello World 程序将会产生一个 1.9M 大小的二进制文件。
```
$ go build hello.go
@@ -287,7 +287,7 @@ via: https://opensource.com/article/22/4/go-build-options
作者:[Gaurav Kamathe][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@@ -0,0 +1,254 @@
[#]: subject: "3-2-1 Backup plan with Fedora ARM server"
[#]: via: "https://fedoramagazine.org/3-2-1-backup-plan-with-fedora-arm-server/"
[#]: author: "Hanku Lee https://fedoramagazine.org/author/hankuoffroad/"
[#]: collector: "lujun9972"
[#]: translator: "hwlife"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14519-1.html"
使用 Fedora ARM 服务器来做 3-2-1 备份计划
======
![][1]
Fedora 服务器版操作系统可以运行在类似树莓派的单板计算机SBC上。这篇文章面向的是这样的用户想要充分利用实体服务器系统并使用 Cockpit 之类的内置工具进行数据备份和个人数据恢复。文中描述了备份的 3 个阶段。
### 必要的准备
想要使用本指南,你所需要的是一个运行着的 Fedora Linux 工作站和以下的项目:
* 你应该阅读、理解和实践 Fedora 文档中 [服务器安装][4] 和 [管理][5] 的要求
* 一块用来测试 Fedora Linux 的 SBC单板计算机。在这里查看 [硬件需求][6]
* [Fedora ARM 服务器][7]原始镜像和 ARM 镜像安装器
* SD 存储卡64 GB / Class 10和 SSD 设备两选一
* 以太网 / DHCP 预留 IP 地址或者静态 IP 地址
* 提供了 ssh 密钥的 Linux 客户端工作站
* 选择云存储服务
* 有额外可用的 Linux 工作站
对于这套环境,在写这篇文章的时候,由于成本和可用性的原因,我选择树莓派 3B+/4B+ (其中一个用来热切换)。当使用 Cockpit 远程连接树莓派服务器时,你可以将树莓派放到路由器附近以便设置。
### 加强服务器的安全
在 SBC 完成服务器的安装和管理后,用 firewalld 加强服务器的安全是一个好的做法。
一旦服务器上线,在连接存储设备到服务器之前,你必须先设置好防火墙。firewalld 是基于区域的防火墙。在依照 Fedora 文档完成安装和管理指南之后,使用名为 `FedoraServer` 的预定义区域。
#### firewalld 里的富规则
<ruby>富规则<rt>rich rule</rt></ruby> 用来阻止或者允许一个特定的 IP 地址或者地址段。下面这条规则只接受来自(客户端工作站的)注册 IP 地址的 SSH 连接,并断开其它的连接。可以在 Cockpit 终端,或者在客户端工作站上通过 ssh 连接到服务器的终端里,运行该命令。
```
firewall-cmd --add-rich-rule='rule family=ipv4 source address=<registered_ip_address>/24 service name=ssh log prefix="SSH Logs" level="notice" accept'
```
#### 拒绝所有主机的 ping 请求
使用这个命令来设置 icmp 拒绝,并且不允许 ping 请求:
```
firewall-cmd --add-rich-rule='rule protocol value=icmp reject'
```
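注意,`--add-rich-rule` 添加的是运行时规则,重载或重启防火墙后会丢失。下面是一个示意,展示如何用 `--permanent` 把规则持久化,并重载后确认:

```shell
# 持久化上述 icmp 拒绝规则(运行时规则在 reload 后会丢失)
$ firewall-cmd --permanent --add-rich-rule='rule protocol value=icmp reject'
$ firewall-cmd --reload
# 确认规则已生效
$ firewall-cmd --list-rich-rules
```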
要进行其它防火墙控制,比如管理端口和区域,请查阅以下链接。请注意,错误配置防火墙可能会造成安全漏洞,使系统受到攻击。
> **[在 Cockpit 中管理防火墙][8]**
> **[firewalld 规则][9]**
### 配置文件服务器的存储
下一步是连接存储设备到 SBC然后使用 Cockpit 对新插入的存储设备进行分区。使用 Cockpit 的图形化服务器管理界面,管理一个家庭实验室(可以是一台或者多台服务器)比以前更加简单。Fedora Linux 服务器版标配了 Cockpit。
在这个阶段,一个通过 SBC 的 USB 插口接电的 SSD 设备无需额外电源供给就可以工作。
* 将存储设备连接到 SBC 的 USB 接口
* 运行之后(按上面的“必要的准备”所设置的那样),然后在你的客户端工作站浏览器上访问 **机器的 IP 地址:9090**
* 登录进 Cockpit 之后,点击 Cockpit 页面顶部的“<ruby>打开管理访问权限<rt>Turn on administrative access</rt></ruby>
* 点击左边面板的 “<ruby>存储<rt>Storage</rt></ruby>” 按钮
* 选择下面显示的 “<ruby>驱动器<rt>Drives</rt></ruby>”,然后分区并格式化一个空白的存储设备
![Cockpit Storage management][10]
* 在选定的存储设备这个界面上,创建一个新的分区表,或者格式化并创建新的分区。当初始化磁盘的时候,在 “<ruby>分区<rt>Partitioning</rt></ruby>” 类型选项上,选择 “GPT 分区表”
* 选择一个文件系统类型,这里选择 “EXT4” 。这对于一个限制 I/O 能力(比如 USB 2.0 接口)和限制带宽(小于 200MB/s的设备是适合的
![Create a partition in Cockpit][11]
* 要在设备上创建单个占据整个存储空间的分区,指定它的挂载点,比如 `/media`,然后点击 “<ruby>确定<rt>Ok</rt></ruby>”。
* 点击 “<ruby>创建分区<rt>Create partition</rt></ruby>”,创建一个挂载点为 `/media` 的新分区。
### 创建备份和恢复备份
备份方案很少是一刀切的。你需要做出一些选择,比如:数据备份到哪里、备份数据的步骤、验证与自动化,以及如何恢复已备份的数据。
![Backup workflow version 1.0][12]
#### 备份 1. 用 rsync 从客户端远程同步到文件服务器(树莓派)
这个传输用到的命令是:
```
rsync -azP ~/source syncuser@host1:/destination
```
参数:
- `-a`/`--archive`:归档
- `-z`/`--compress`:压缩
- `-P`/`--progress`:显示进度
要使用更多的选项运行 `rsync`,可以设置以下的选项:
- `--inplace`:直接替换来更新目标文档
- `--append`:追加数据到较短的文档中
在将文档备份到存储空间之前,对源端文档进行重复数据删除和压缩,是减少备份数据量最有效的方式。
每天工作结束,我会手动运行这个。一旦我设置了云备份工作流,自动化脚本是一个优势。
关于 `rsync` 的详细信息,请在 [这里][13] 访问 Fedora 杂志的文章。
#### 备份 2. 使用 rysnc 从文件服务器远程同步到主要的云存储上
选择云存储时要考虑的因素:
* 成本:上传、存储空间和下载费用
* 支持 `rsync`、`sftp`
* 数据冗余RAID 10 或者运行中的数据中心冗余计划)
* 快照
符合这些云存储标准之一的就是 Hetzner 托管的 Nextcloud [存储盒子][14]。你不会受到供应商限制,可以自由切换而没有退出惩罚。
##### 在文件服务器上生成 SSH 密钥并创建授权密钥文件
使用 `ssh-keygen` 命令为文件服务器和云存储生成一对新的 SSH 密钥对。
```
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key . . .
```
插入要求的 SSH 公钥到新的本地授权密钥文件中。
```
cat .ssh/id_rsa.pub >> storagebox_authorized_keys
```
##### 传输密钥文件到云存储
下一步就是上传生成了的授权密钥文件到存储盒子。要做这些,先用 700 权限创建 `.ssh` 目录,然后用 SSH 公钥创建授权文件并赋予 600 权限。运行以下命令。
```
echo -e "mkdir .ssh \n chmod 700 .ssh \n put storagebox_authorized_keys .ssh/authorized_keys \n chmod 600 .ssh/authorized_keys" | sftp <username>@<username>.your-storagebox.de
```
##### 通过 ssh 使用 rsync
使用 `rsync` 同步你的文件目录当前状态到存储盒子。
```
rsync --progress -e 'ssh -p23' --recursive <local_directory> <username>@<username>.your-storagebox.de:<target_directory>
```
这个过程被叫做推送操作,因为它 “推送” 本地系统的一个目录到一个远程的系统中去。
##### 从云存储中恢复目录
要从存储盒子恢复目录,把命令中的两个目录位置对调即可:
```
rsync --progress -e 'ssh -p23' --recursive <username>@<username>.your-storagebox.de:<remote_directory> <local_directory>
```
#### 备份 3. 客户端备份到第二个云储存
[Deja Dup][15] 是 Fedora 软件仓库中为 Fedora 工作站提供快速备份解决方案的工具。它拥有 GPG 加密、计划任务、文件包含(哪个目录要备份)等功能。
![Backing up to the secondary cloud][16]
![Restoring files from cloud storage][17]
### 归档个人数据
不是所有数据都需要 3-2-1 备份策略,个人数据共享就属于这种情况。我将一台拥有 1TB 硬盘的笔记本电脑作为我个人数据(家庭照片)的归档设备。
转到设置中的 “<ruby>共享<rt>Sharing</rt></ruby>” (在我的例子中是 GNOME 文件管理器)并切换滑块以启用共享。
![][18]
打开 “<ruby>文件共享<rt>file sharing</rt></ruby>”,“<ruby>网络<rt>Networks</rt></ruby>” 和 “<ruby>需要的密码<rt>Required password</rt></ruby>”,允许你使用 WebDAV 协议在你的本地网络上分享你的公共文件夹给其它的工作站。
![][19]
### 准备回滚选项
未经测试的备份并不比完全没有备份好多少。我在家庭实验室环境中使用“热切换”方法,以应对频繁断电或液体损坏之类的情况。当然,我的方案远没有达到企业 IT 中灾难恢复计划或自动故障转移的水平。
* 定期运行文件恢复操作
* 备份 ssh/GPG 密钥到一个额外的存储设备中
* 复制一个 Fedora ARM 服务器的原始镜像到一个 SD 卡中
* 在主云存储中保持全备份的快照
* 自动化备份过程,以尽量减少人为错误或疏忽
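其中“定期运行文件恢复操作”这一条可以部分自动化:把文件恢复到临时目录后,用校验和对比源文件与恢复出来的文件是否一致。下面是一个最小示意(目录与文件均为演示用):

```
# 模拟一次恢复:从“备份”复制回文件,然后比对 SHA-256 校验和
mkdir -p /tmp/verify_src /tmp/verify_restore
echo "important data" > /tmp/verify_src/a.txt
cp /tmp/verify_src/a.txt /tmp/verify_restore/a.txt

src_sum=$(sha256sum /tmp/verify_src/a.txt | awk '{print $1}')
restore_sum=$(sha256sum /tmp/verify_restore/a.txt | awk '{print $1}')

if [ "$src_sum" = "$restore_sum" ]; then
    echo "restore verified"
else
    echo "restore MISMATCH" >&2
fi
```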
### 使用 Cockpit 追踪活动并解决问题
当你的项目成长时,你管理的服务器数量也在增长。在 Cockpit 中跟踪活动和警告可以减轻你的管理负担。你可以通过 Cockpit 图形界面的三种方式来实现这一点。
#### SELinux 菜单
怎样诊断网络问题,找到日志并在 Cockpit 中解决问题:
* 去 SELinux 中检查日志
* 检查 “<ruby>解决方案详细信息<rt>solution details</rt></ruby>”
* 当必要时,选择 “<ruby>应用这个方案<rt>Apply this solution</rt></ruby>”
* 如果必要,查看自动化脚本并运行它
![SELinux logs][20]
#### 网络或者存储日志
服务器日志会跟踪 CPU 负载、内存使用、网络活动、存储性能等与系统日志相关联的详细指标。这些日志会组织在网络面板或存储面板中显示。
![Storage logs in Cockpit][21]
#### 软件更新
Cockpit 可以按照预设的时间和频率帮助进行安全更新。需要时,你也可以手动运行所有更新。
![Software updates][22]
恭喜你在 Fedora ARM 服务器版本上搭建了一个文件/备份服务器。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/3-2-1-backup-plan-with-fedora-arm-server/
作者:[Hanku Lee][a]
选题:[lujun9972][b]
译者:[hwlife](https://github.com/hwlife)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/hankuoffroad/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2022/04/3-2-1_backup-816x345.jpg
[2]: https://unsplash.com/@markuswinkler?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/computer-backup?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://docs.fedoraproject.org/en-US/fedora-server/server-installation-sbc/
[5]: https://docs.fedoraproject.org/en-US/fedora-server/sysadmin-postinstall/
[6]: https://docs.fedoraproject.org/en-US/quick-docs/raspberry-pi/
[7]: https://arm.fedoraproject.org/
[8]: https://fedoramagazine.org/managing-network-interfaces-and-firewalld-in-cockpit/
[9]: https://www.redhat.com/sysadmin/firewalld-rules-and-scenarios
[10]: https://fedoramagazine.org/wp-content/uploads/2022/03/Screenshot-from-2022-03-29-22-05-00b-1024x576.png
[11]: https://fedoramagazine.org/wp-content/uploads/2022/03/Screenshot-from-2022-03-29-22-03-36a.png
[12]: https://fedoramagazine.org/wp-content/uploads/2022/04/Backups3-1-1024x525.jpg
[13]: https://fedoramagazine.org/copying-large-files-with-rsync-and-some-misconceptions/
[14]: https://docs.hetzner.com/robot/storage-box/
[15]: https://fedoramagazine.org/easy-backups-with-deja-dup/
[16]: https://fedoramagazine.org/wp-content/uploads/2022/03/Screenshot-from-2022-03-29-22-47-30.png
[17]: https://fedoramagazine.org/wp-content/uploads/2022/03/Screenshot-from-2022-03-29-22-41-57.png
[18]: https://fedoramagazine.org/wp-content/uploads/2022/04/Screenshot-from-2022-04-14-20-48-49-1024x733.png
[19]: https://fedoramagazine.org/wp-content/uploads/2022/04/Screenshot-from-2022-04-14-20-51-18st.png
[20]: https://fedoramagazine.org/wp-content/uploads/2022/04/Screenshot-from-2022-04-02-11-24-30b-1024x441.png
[21]: https://fedoramagazine.org/wp-content/uploads/2022/04/Screenshot-from-2022-04-04-21-47-06SL-1024x259.png
[22]: https://fedoramagazine.org/wp-content/uploads/2022/04/Screenshot-from-2022-04-04-21-35-42b.png


[#]: subject: "Nushell: Cross-platform Shell That Gives You More Clarity on Error Messages"
[#]: via: "https://itsfoss.com/nushell/"
[#]: author: "Marco Carmona https://itsfoss.com/author/marco/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14526-1.html"
Nushell一个让你更清楚地了解错误信息的跨平台 Shell
======
![](https://img.linux.net.cn/data/attachment/album/202204/30/181450b5r4m5jb77llrfru.jpg)
> Nushell 是一个独特的 Shell它提供易于阅读的错误信息以及跨平台支持。在这里可以了解到更多关于它的信息。
即使你对使用终端不感兴趣Linux 终端也常常能让一些繁重的工作变得更轻松,还能帮你修复一些东西。因此可以说,如果你知道自己在做什么Linux 终端是相当强大的。
这确实没错!但当你看到错误消息时,就说明出了问题。如果你没有足够的使用经验,可能不知道该如何解决。
虽然这些错误信息会尽力向你说明问题所在,但不是每个用户都能轻易看懂该如何修复。对于初学者来说,这通常需要做一些研究。但如果错误信息更清晰一些,用户就能更快地解决问题。
不仅仅是错误信息,比如你在终端浏览文件时看到的输出结构,也算不上漂亮。
![Terminal listing several files][1]
**你明白我的意思吗?** 当然,当你有更多不同类型的文件时,这可能变得更加复杂。而且,你无法从基本的 `ls` 命令的输出中了解到文件的权限、组等。
这就是 Nushell 试图解决的问题。
### Nushell一个默认提供用户友好输出的 Shell
![Nushell example screenshot][2]
Nushell 也被称为 Nu它的理念和灵感来自于 [PowerShell][3]、函数式编程语言和现代 [CLI][4] 工具等项目。
让我给你举个例子,想象一下你只想让你的输出列出你的主目录内类型为文件的项目,包括隐藏文件。那么,要实现这一点,只要输入下面的命令就可以了:
```
ls -a | where type == 'file'
```
![Listing only files with Nushell][5]
观察一下,它的语法是多么清晰和简单。现在想象一下,用 Nushell 查找进程的名称、ID、状态以及 CPU 或内存消耗是多么容易。**这是它魔法的一部分!**
它会尽力把你输入的命令的输出,以精心组织、用户友好的方式呈现出来。
### Nushell 的特点
![Error messages in Nu, one of its primary highlights][6]
根据现有的官方信息,它的一些最受欢迎的功能包括:
* **在任何操作系统上都能通过管道进行控制。** Nu 可以在 Linux、macOS 和 Windows 上工作。换句话说,它是一个灵活、具有现代气息的跨平台 shell。
* **一切都是数据。** Nu 管道使用结构化数据,所以你可以安全地选择、过滤和排序,每次都是同样的方式。
* **强大的插件。** 使用强大的插件系统,很容易扩展 Nu 的功能。
* **易于阅读的错误信息。** Nu 操作的是类型化的数据,所以它可以捕捉到其他 shell 所没有的错误。当错误发生时Nu 会告诉你确切的位置和原因。
* 清晰的 IDE 支持。
你可以看看它的 [官方文档][7],以全面了解它的功能和用法。
### 在你的系统中安装 Nushell
不幸的是,如果你是一个像我一样的 Ubuntu 用户,你将找不到安装 Nushell 的 APT 仓库。但是,你可以按照它在 [GitHub][8] 上的说明,通过安装所需的依赖项来构建它。
幸运的是,有一种方法可以在任何发行版上安装它,即使用 Homebrew。到它的官方网站去了解更多的安装选项。
> **[Nushell][9]**
你可以参考我们关于 [在 Linux 上安装和使用 Homebrew 包管理器][10] 的教程。当你在 Linux 上成功设置了它,你需要输入以下命令来安装 Nushell
```
brew install nushell
```
![Installing nushell with Homebrew][11]
当这个过程完成后,只要输入 `nu` 就可以启动 Nushell shell。**这就完成了!**
> 如果你想把 Nushell 设置为你的默认 shell你可以用命令 `chsh` 来做,但是记住,它仍然在开发阶段,这就是为什么我们不推荐它用于日常使用。
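上面提到的 `chsh` 做法大致如下(这里只演示查路径,不实际切换;注释中的 Homebrew 路径是常见默认值,以你的实际安装为准):

```
# 找到 nu 的安装路径;若尚未安装,用一个占位路径演示
NU_PATH=$(command -v nu || echo "/home/linuxbrew/.linuxbrew/bin/nu")
echo "nu path: $NU_PATH"

# 真正切换前需要两步(此处注释掉,避免误操作):
# 1. 确保该路径已登记: echo "$NU_PATH" | sudo tee -a /etc/shells
# 2. 设为登录 shell    chsh -s "$NU_PATH"
```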
然而,在你决定尝试之前,你可以在其网站或 [GitHub 页面][8] 上了解关于它的更多信息。
你对这个有趣的 shell 有什么看法?请在下面的评论中告诉我你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/nushell/
作者:[Marco Carmona][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/marco/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2022/04/Terminal-with-several-files-800x477.png
[2]: https://itsfoss.com/wp-content/uploads/2022/04/Nushell-example-800x475.jpg
[3]: https://itsfoss.com/microsoft-open-sources-powershell/
[4]: https://itsfoss.com/gui-cli-tui/
[5]: https://itsfoss.com/wp-content/uploads/2022/04/Listing-only-files-with-nushell-800x246.png
[6]: https://itsfoss.com/wp-content/uploads/2022/04/Error-messages-in-Nu-800x259.png
[7]: https://www.nushell.sh/book/
[8]: https://github.com/nushell/nushell
[9]: https://www.nushell.sh/
[10]: https://itsfoss.com/homebrew-linux/
[11]: https://itsfoss.com/wp-content/uploads/2022/04/Installing-nushell-with-brew-800x470.png


[#]: author: "Navendu Pottekkat https://www.opensourceforu.com/author/navendu-pottekkat/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14521-1.html"
如何把开源作为一份职业
======
> 你是否对开源充满热情,却不知道如何在这个领域开始一段职业生涯?那么,这篇文章就是为你准备的。
![](https://img.linux.net.cn/data/attachment/album/202204/29/083647jjbfm44j4pt774ft.jpg)
你知道吗80% 的维护者认为招募新的贡献者是一个挑战92% 的雇主认为很难雇用到开源人才。而另一方面52% 的开发者希望为开源做出贡献33% 的人不知道从哪里开始31% 的人认为自己不够熟练。公共数据显示,社会对具有开源技能的人有很大的需求。因此,让我们看看如何才能够把开源作为一份职业,以填补这个供需之间的差距吧!
### 掌握一个技能
开源旅程的起点仅仅是你擅长的某个技能罢了。许多开发者会在空闲时间从事开源工作,他们在不熟练的领域投入精力并把这些技能引入到技术领域里来。像机器学习ML、云原生和大数据分析这样的技能是很受欢迎的因为许多项目都围绕着它们而进行。
开发者必须不断尝试直到找到自己感兴趣的东西为止。例如当我开始在开源领域工作时我选择了移动用户界面UI和 Web 开发(包括前端和后端)方面的工作。这个选择并不简单,我花了很多时间来弄清楚我想从事什么。因此,重要的是要遵循你的兴趣,通过学习和建立项目来探索不同的领域。很多时候,理论教程可能不如建立实际项目更有帮助。掌握技能的唯一方法是将所学的东西应用到实际项目中。
无论项目的内容如何,只要它是活跃的,就会产生很大的价值。但请记住,一旦它开源了,你千万不要被大家的反应所左右。并且记住,无论你是为一个应用程序建立一个 UI还是仅仅记录一个适当的注释、资源或 URL 的列表,你的工作都可以对开源用户有很大帮助。
在很大程度上学习不同的工具有助于建立开源项目。因此学习关于版本控制系统、Git、GitHub 和 GitLab大多数项目都在它们上面的一切是很重要的。由于互联网上已经有足够的教程我只收集了一些可以在 `navendu.me/osidays` 上找到的。你需要通过撰写文档和公开自己学到的内容,来“公开学习”才行。
### 打造一份职业
你可以通过三种方式在开源领域建立一职业。
#### 构建、扩展你自己的开源项目,并让它盈利
如果你想要建立一个自己的项目,发现并解决问题是一个很好的经验法则。记下别人可能面临的问题,一个项目需求就这样产生了。你的项目的市场规模只能通过试验和错误来估计。对于既没有太多资金的、也没有太多经验个人贡献者来说,社交媒体、博客、帖子和会议上的话,都会在很大程度上有助于接触到用户。这些平台可以为你的开源项目带来巨大的流量。
资金在几乎所有的商业模式中都起着重要作用。Mozilla 基金会依靠自愿捐款来资助其项目。MariaDB 采用了延迟开放源代码的商业模式。IBM 的许多开源项目遵循开放核心的商业模式,即项目的核心部分是开源的,而周围的附加部分是闭源的和专有的。红帽公司不出售代码,而是出售专业服务,如支持、工具和围绕项目的技术援助。这些商业模式的例子可以被采用,以此来建立一个项目,将它开源,并使其盈利。
> “即使你不是维护者,也要做维护者的工作。”
#### 在一个以开源商业模式建立项目的公司工作
为贡献者和维护者社区的一份子,参与会谈和参加会议将有助于你为项目做出贡献。你可以根据引导来完成第一次贡献,但它不一定得是代码。一个大的代码库可能看起来很吓人,但关键是要从小的地方着手。找到一个问题并解决它,这将有助于你了解贡献流程、代码库和项目设置等。
非代码的贡献也是有价值的。擅长写作的人可以通过撰写文档,或者为社交媒体写作来贡献。擅长设计的人,可以设计一个模板、一个颜色方案,或者也可以致力于创造一个更好的用户界面。与资深工程师相比,新人发现错误的概率很高。他们可以测试、确认并报告他们的用户体验,从而提升项目质量。另一个领域是新手引导,很多开源项目将导师和新手联系起来,并帮助后者做出重要贡献。还有一个选择是成为组织者或社区管理员,这意味着你将承担起项目经理的角色,确保功能完全按照预期交付,路线图得到遵循,贡献者得到照顾。大多数开源项目缺乏适当的管理,因为工程师们都不喜欢做这一类工作。
社会上有很多实习项目可以帮助你赚钱比如谷歌的编程之夏GSoC和 Linux 基金会的导师制(在这里,被指导者有津贴,可以根据需要全职或兼职工作)。如果你能很好地发展你的技能,你可以在你实习的公司获得一个全职的职位。例如,如果你在红帽公司的一个项目中工作,你有机会被全职雇用,因为你在那里已经有了知名度。
作为一个组织,你可以通过像 Open Collective、Patreon 和 GitHub Sponsors 这样的平台来筹集资金,让人们为你的项目捐款。像 Linux 基金会和 Mozilla 基金会这样的开源巨头也提供资金来支持项目。GitHub 已经给 15 个印度贡献者的项目提供了资助。
我曾经花了三个月时间建立了一个开源项目。这个项目后来被 《Product Hunt》 和 《JS Weekly》报道,还在上过 GitHub 趋势榜排名第一的位置。正是这个项目让我走上了开源事业的道路。
*本文由 Sharon Abhignya Katta 转录并策划*
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/04/how-to-build-a-career-in-open-source
作者:[Navendu Pottekkat][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14531-1.html"
实测 Linux Mint 升级工具
======
![](https://img.linux.net.cn/data/attachment/album/202205/01/170956n81yky0l8vbvy1n8.jpg)
> 我们通过实际升级测试了 Linux Mint 升级工具mintupgrade GUI。这是我们的发现。
这个工具正在开发中,可能包含错误,除非你想实验一下,否则请不要在你的日常中使用它。
### Linux Mint 升级工具
Linux Mint 团队 [宣布][1],他们建立了一个新的工具来升级 Linux Mint 的要版本。它被称为 “mintupgrade2”。它目前正在开发中计划用于升级主要版本。例如Linux Mint 20 到 21而不是小版本的升级。
虽然你可以使用标准的 `apt` 命令来升级版本然而Mint 团队认为主要版本的升级是很棘手的。新用户很难顺利升级,因为它涉及到终端和一套复杂的命令步骤。
此外,GUI 是 mintupgrade 程序的一个带有附加功能的封装,它带来了一套系统前检查和一键修复的升级过程。
此外,这个图形用户界面是对 mintupgrade 程序的封装,并带有一些附加功能,它带来了一套系统前检查和一键修复的升级过程。
除此之外mintupgrade 还会检查你是否连接到电源、系统是否是最新的、磁盘空间的可用性等等
为了让大家了解它的外观和工作情况,我们使用 LMDE 4 设置了一个测试平台做了个测试。
但在这之前,让我快速介绍一下它的功能:
* 完全由 GUI 驱动的升级过程
* 多语言支持
* 升级前检查:系统备份、电源、磁盘空间、删除的软件包列表
* 可配置
* 提醒你来自上一个版本的孤儿软件包
* 给你修复问题的选项
### 它是如何工作的
当我们通过命令 `mintupgrade` 运行这个 Mint 升级工具时,这个图形用户界面程序友好的欢迎屏幕是一个很好的起点,它开启了升级过程,然后它自己开始进行一系列的检查。
![Starting the upgrade process][2]
除此之外,当它在你的系统中发现一些问题时,它会停下来并给你足够的细节。当你点击修复后,它就可以再次恢复进程。
不止如此。如果由于网络或互联网或任何其他问题而中断,它也可以恢复升级过程。
在我们的测试过程中,该工具在我们的测试系统中发现了以下错误,只需点击一下就能修复它们。
#### 如何获得这个升级工具
使用下面的命令,该工具的安装很简单。但正如该团队所建议的,它现在处于 BETA 状态,所以不要用它来进行正式场合的升级。
```
sudo apt update
sudo apt install mintupgrade
```
### 结束语
最后,我认为这是 Linux Mint 团队的最好的工具之一。正如你在上面看到的,它自己处理了许多错误。我所做的只是点击“修复”按钮。而这个工具足够聪明,能够理解所有的故障点,并负责补救。
[mintupgrade 工具][9] 将在 Linux Mint 21 “Vanessa” 发布前发布,大约在 2022 年第三季度末或第四季度初。
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/04/mint-upgrade-tool/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


[#]: subject: "Shortwave 3.0 is Here With UI Upgrades, Private Stations, and More Improvements"
[#]: via: "https://news.itsfoss.com/shortwave-3-0-release/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lkxed"
[#]: translator: "lkxed"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14528-1.html"
Shortwave 3.0 发布:用户界面更新、私人电台以及诸多改进
======
> Shortwave 3.0 带来了急需的用户界面改进、添加私人电台的功能以及诸多升级。
![Shortave 3.0][1]
Shortwave 是 GNOME 上的一个流行的网络广播播放器。它默认提供了很多电台,总计超过 25000 个,所有这些电台都可以分组组织、进行搜索,还可以投射到其他设备(如 Chromecast上。
Shortwave 3.0 将这些功能提升至一个全新的水平,有一些相当大的变化。让我们来看看有哪些新功能吧!
### Shortwave 3.0 新功能
主要是引入了 Libadwaita除此之外Shortwave 3.0 还包括以下更新:
* 支持 GNOME 42 的深色模式
* 支持将私人电台添加到库中
* 支持将电台数据保存到磁盘上
* 改进了搜索结果的排序
#### 用户界面的变化
![图源Felix Häcker][2]
在过去的几个月里,许多应用程序都在向 [Libadwaita][3] 过渡。由于其流畅的视觉效果、集成的开发工作流程以及与 GNOME 的整合,它已经迅速成为所有新应用程序的必备工具。
最新一个升级到 Libadwaita 的应用程序是 Shortwave。因此它现在有了一个自适应的用户界面这对类似于 [PinePhone][4] 的 Linux 手机可能很有用。
![][5]
此外,它现在采用了更现代的 Adwaita 设计,我非常喜欢。
随着用户界面的改进,它也支持新的 GNOME 42 的深色模式。下面是它的外观。
![Shortwave 3.0 深色模式][6]
#### 保存电台数据
![][7]
一个有用的新功能是支持将电台数据保存到磁盘上,而无需每次从服务器上接收。
因此,即使一个电台从服务器(`radio-browser.info`)上删除,它也会保留在应用程序中,并有消息通知用户这一变化。
#### 添加私人电台
![][8]
以前,你必须依赖 [radio-browser.info][9] 库中的可用电台。
现在,你可以从内部网络添加你的私人电台,或者通过 API 密钥添加一个独家/付费流。
![][10]
#### 其他变化
除了上面列出的那些Shortwave 3.0 还有一些其他的改进:
* 显示电台比特率信息,这也可以作为一个排序选项。
* 在搜索页面上新增了一个按钮,可以对搜索结果进行排序。
* 大幅度修改了电台对话框,显示信息更加清晰。
* 在歌曲变化时更新桌面通知,而不是为每首歌曲生成新的单独通知。
* 即使 `radio-browser.info` 处于离线/不可用状态Shortwave 也可以正常使用。
### 总结
![][11]
总的来说Shortwave 3.0 是一个很棒的版本,它既改善了用户体验,又增加了新功能。
如果你想安装它,你可以到它的 [Flathub][12] 页面查看安装指南,或者直接在你的终端键入以下命令。
```
flatpak install flathub de.haeckerfelix.Shortwave
```
如果你还没有设置 Flatpak你也可以参考我们的 [Flatpak 指南][13]。
你尝试过 Shortwave 3.0 了吗?请在下面的评论中分享你的使用体验吧!
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/shortwave-3-0-release/
作者:[Jacob Crume][a]
选题:[lkxed][b]
译者:[lkxed](https://github.com/lkxed)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lkxed
[1]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave-3-0.jpg
[2]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave3.0.png
[3]: https://news.itsfoss.com/gnome-libadwaita-library/
[4]: https://news.itsfoss.com/pinephone-review/
[5]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave-3-responsive.jpg
[6]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave-3-dark-mode.jpg
[7]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave-station-data.png
[8]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave-3-create.png
[9]: https://www.radio-browser.info/
[10]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave-3-private-station.png
[11]: https://news.itsfoss.com/wp-content/uploads/2022/04/shortwave-3-0.mp4
[12]: https://flathub.org/
[13]: https://itsfoss.com/flatpak-guide/


[#]: subject: "Kubuntu 22.04 LTS Arrives with KDE Plasma 5.24"
[#]: via: "https://news.itsfoss.com/kubuntu-22-04-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kubuntu 22.04 LTS Arrives with KDE Plasma 5.24
======
Ubuntu 22.04 LTS has finally been released. So, you can expect all of its official flavors to offer the latest and greatest soon after that.
And, the KDE flavour, i.e., Kubuntu 22.04 LTS is now also available to download!
You should expect all the [feature additions of Ubuntu 22.04 LTS][1] and specially tailored improvements for the Kubuntu 22.04 release.
Let me briefly emphasize the key changes.
### Kubuntu 22.04 LTS: What's New?
The primary highlight of the release is KDE Plasma 5.24 LTS.
In addition to that, you should also notice updates to KDE apps and other pre-installed applications. Let's take a look:
#### KDE Plasma 5.24
![][2]
KDE Plasma 5.24 features an updated breeze theme, and several visual improvements. It is a long-term supported version that should get updates until Plasma 6 releases.
You had the option to use KDE Neon or Arch Linux, or a few other distros to experience KDE Plasma 5.24. Finally, Kubuntu is here as a mainstream distro featuring the latest and greatest from KDE.
You can learn more about [KDE Plasma 5.24][3] changes in our previous coverage.
If you are moving away from GNOME-based Ubuntu, you may want to check out our article on [KDE Plasma vs GNOME][4] to get some insights before making the switch.
#### Default Browser as Firefox Snap
Firefox 99 snap is the default browser. In case you're curious, Mozilla is working with Canonical to quickly push updates and conveniently maintain the browser using the snap package.
So, it only makes sense to keep it as the default. But, you can always install the deb package if you do not prefer using Snap.
#### GNOME-like Overview
With KDE Plasma 5.24, you get a feature similar to GNOME's Activities overview.
You can browse through your virtual desktops and windows at a glance with its help.
To access it, you can press the Super key + W, and here's how it should look:
![][5]
#### Improvements to Discover
![][6]
The software center for KDE, i.e., Discover, has received some upgrades. One of the neat additions is the ability to prevent removing anything that is critical to the system's operation.
You should notice a warning when you try to remove something that could break the system.
If you prefer Flatpak, you can now open locally downloaded Flatpak packages and install them using the Flatpak repository URI.
#### Fingerprint Support
KDE Plasma 5.24 finally adds the support for fingerprint authentication. You can conveniently add up to 10 fingerprints and use them to unlock or authenticate something.
#### Other Improvements
![][7]
In addition to the major upgrades, you should expect subtle changes across the platform along with app updates that include:
* LibreOffice 7.3
* KDE Gear 21.12
* Dolphin
To explore more about the changes, you can refer to the [official release notes][8].
### Download Kubuntu 22.04 LTS
You can head to the official download page to get the latest ISO or choose to wait to receive the upgrade prompt.
[Kubuntu 22.04 LTS][9]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/kubuntu-22-04-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/ubuntu-22-04-release-features/
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY3NSIgd2lkdGg9IjEyMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[3]: https://news.itsfoss.com/kde-plasma-5-24-lts-release/
[4]: https://itsfoss.com/kde-vs-gnome/
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY3OSIgd2lkdGg9IjEyMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU5MSIgd2lkdGg9IjgzOSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[7]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ1NyIgd2lkdGg9IjgxMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[8]: https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes/Kubuntu
[9]: https://cdimage.ubuntu.com/kubuntu/releases/22.04/release/


[#]: subject: "Ubuntu Budgie 22.04 LTS Released: Fast, Elegant, And More Feature-Filled Than Ever"
[#]: via: "https://news.itsfoss.com/ubuntu-budgie-22-04-release/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu Budgie 22.04 LTS Released: Fast, Elegant, And More Feature-Filled Than Ever
======
Since its initial release in 2016, I've been an admirer of Ubuntu Budgie. With its sleek visuals, fluid animations, and solid Ubuntu base, it covers all my needs.
Although it is relatively new compared to other Ubuntu flavors, it has already managed to gain a significant following.
Now, with Ubuntu Budgie 22.04 LTS, this level of polish has been brought to a whole new level.
### What's New?
![][1]
As to be expected, there are vast numbers of improvements compared to the previous 20.04 LTS release. Some changes include:
* The newly released [Budgie 10.6][2]
* budgie-applications-menu-applet improvements
* New hot corners “delay” and “pressure” options
* Panel spacing improvements
* RISCV64 support
* Theme updates from upstream
* New Chrome OS-like layout
You can expect updates for Ubuntu Budgie 22.04 up until 2025.
#### Budgie 10.6 Desktop
![][3]
Ubuntu Budgie 22.04 now ships with Budgie 10.6. As I highlighted in [our coverage][2] of the release, it is the first one under its new organization. In short, Joshua Strobl left the Solus distribution but still wanted to work on Budgie.
![][4]
To facilitate this, he forked the Budgie repository and formed the **Buddies of Budgie** organization, which now maintains and develops budgie.
Due to all of these changes, Budgie 10.6 wasn't a massive release, although it did bring a new notification system, alongside some minor code reformatting. As a result, other Budgie components can now use the notification system, opening up some interesting future options.
![][5]
Of course, Ubuntu Budgie 22.04 inherits these changes, and it will be interesting to see what they do with them in future releases.
#### App Menu Improvements
![][6]
Ubuntu Budgie 22.04 also has a few improvements to the application menu. One of my favorite features of the app menu is its fast search, which has now been improved. This comes in the form of the availability of the app context menu in the search results.
Additionally, the menu now also supports the non-default GPU flag from .desktop files. This is particularly useful for laptop users, as it allows simpler apps to use significantly less power. This is extended with a new context menu option, which allows users to easily change this option in the GUI.
Finally, the categories should now be more inclusive, meaning, fewer apps end up in the “**Other**” category.
As a laptop user, I have really been enjoying the extra battery life these changes have afforded me, and Im sure many of you will appreciate these changes too.
#### RISC-V Support
This improvement is extremely exciting. For those of you that are unaware, RISC-V is a fully open-source CPU architecture (think x86 and ARM) that is slowly becoming quite popular. As such, it is only a matter of time before we find these CPUs making their way into desktops and laptops (like what we have seen with ARM chips).
For this, Ubuntu Budgie now has support for this exciting architecture, although there is no pre-built disk image available. Instead, Ubuntu Server must be installed first, with the user then installing the `ubuntu-budgie-desktop` package.
Although the installation process is not for new users, this addition paves the way for support for the next generation of CPUs.
#### Theme Updates
![][7]
Another major highlight in Ubuntu Budgie 22.04 is the numerous theme updates. Firstly, the beautiful Arc theme has been updated with GTK 4 support, ready for apps transitioning from GTK 3. Of course, this doesn't affect Libadwaita apps; you can read why in our [in-depth coverage][8].
![][9]
The WhiteSur GTK and icon themes have been updated for macOS fans, with support for more apps and more accurate visuals.
Finally, the default Pocillo theme has been updated with more app icons.
You can learn more about the technical changes in the [official release notes][10].
### Getting Ubuntu Budgie 22.04
For the first time, you might want to try Ubuntu Budgie 22.04 on a virtual machine to see how it goes.
If you already have Ubuntu Budgie installed, upgrading is extremely simple. Simply copy the following command into the terminal, and follow the on-screen prompts. You may have to wait for a few days if the upgrade isn't available yet.
```
sudo do-release-upgrade -d -f DistUpgradeViewGtk3
```
For new users, just download the ISO file from the button below.
[Get Ubuntu Budgie 22.04 LTS][11]
Overall, Ubuntu Budgie 22.04 is a great upgrade to an already awesome distro. Between the updated Budgie version, RISC-V support, and theme updates, Ubuntu Budgie continues to be a competitive alternative to other Ubuntu flavors.
_What are your thoughts on this release? Feel free to share them in the comments section below._
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-budgie-22-04-release/
作者:[Jacob Crume][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lujun9972
[1]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY4MSIgd2lkdGg9IjEzNDciIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[2]: https://news.itsfoss.com/budgie-10-6-release/
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY3NSIgd2lkdGg9IjEyMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjI3NyIgd2lkdGg9IjQ5OCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjUxNiIgd2lkdGg9IjEwMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY4MSIgd2lkdGg9IjEzNTUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[7]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjUxMiIgd2lkdGg9IjgwMCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[8]: https://news.itsfoss.com/gnome-libadwaita-library/
[9]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjUxNyIgd2lkdGg9IjEwMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[10]: https://ubuntubudgie.org/2022/03/ubuntu-budgie-22-04-lts-release-notes/
[11]: https://ubuntubudgie.org/downloads/


[#]: subject: "Ubuntu MATE 22.04 LTS Brings in a New Yaru Theme, MATE Desktop 1.26.1, and More Improvements"
[#]: via: "https://news.itsfoss.com/ubuntu-mate-22-04-release/"
[#]: author: "Rishabh Moharir https://news.itsfoss.com/author/rishabh/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Ubuntu MATE 22.04 LTS Brings in a New Yaru Theme, MATE Desktop 1.26.1, and More Improvements
======
Ubuntu 22.04 LTS is an exciting release already.
While other Ubuntu-based distros should be getting ready to offer their latest versions based on Ubuntu 22.04 LTS, the LTS releases of Ubuntu's official flavors have also landed.
Here, we focus on **Ubuntu MATE 22.04 LTS**, featuring the latest MATE Desktop 1.26.1.
As usual, Ubuntu MATE 22.04 LTS puts great effort on top of the improvements from upstream.
This time, it has received significant updates, especially to the look/feel.
### Ubuntu MATE 22.04: What's New?
Ubuntu MATE 22.04 brings quite a number of changes to the appearance, apart from necessary package updates and bug fixes.
Let's take a look at the major highlights.
#### 1\. Improved Yaru Theme Support
Ubuntu's popular and community-backed default theme takes center stage in this release. MATE's compatibility with the theme has been significantly improved since it was first included in Ubuntu MATE 21.04.
Now, all the Yaru color accents have been shipped, along with a new MATE-focused “Chelsea Cucumber” theme. The stock and the legacy Ambiant/Radiant themes have been removed.
![Source: Ubuntu MATE][1]
When upgrading to the new version, an automatic settings migration takes care of selecting a relevant Yaru MATE theme if your old theme is no longer supported.
MATE's default window manager, Metacity (or Marco, its MATE fork), has also been updated to provide a uniform look and feel. This applies to third-party compositors too.
#### 2\. Improved Panels
Expanding on the Yaru compatibility, the Appearance Control Center now accurately switches the color scheme for apps. The panel now comes in both light and dark mode, along with a host of panel icons to Yaru.
The theming brings support to Plank and Pluma too.
![][2]
![][3]
Source: Ubuntu MATE
#### 3\. Updated Ayatana Indicators
Ubuntu MATE 20.10 users must be familiar with Ayatana Indicators. For those unaware, they are basically a fork of Ubuntu indicators and created to be used across many distros and desktop environments.
![Source: Ubuntu MATE][4]
Ayatana Indicators 22.2.0 has been included in this new release. This means there's less RAM and CPU usage, leading to improved battery performance and backward compatibility with Ubuntu Indicators.
#### 4\. MATE Tweak
One of the most popular features of Ubuntu MATE is the presence of MATE Tweak which allows users to select their preferred desktop layout ranging from Cupertino (macOS) to Redmond (Windows 10).
Users should now expect an improved layout switching and restoring for custom layouts.
![Source: Ubuntu MATE][5]
Also, those who make use of third-party compositors should expect better support. Do note that support for Compton has been dropped (since it's not maintained anymore) and support for picom has been added instead.
Lastly, the mate-netbook layout has been removed due to conflicting issues with client-side decorated windows.
#### 5\. MATE HUD
MATE HUD is a custom HUD that makes use of rofi to run menu commands.
It has been updated to support the latest version of rofi, as a new theme engine has been introduced. Also, the window now sports rounded corners, depending on the current GTK theme.
Users can also add their own rofi themes by pasting them in `~/.local/share/rofi/themes`.
#### 6\. New Apps and Packages
Users will be pleased to know that support for PPA, Snap, AppImage, and Flatpak has been enabled by default.
The snap-desktop-integration package has also been included to improve the user's session and automatically install snapped themes.
GNOME users will particularly be pleased to see three new GNOME apps—Maps, Clocks, and Weather.
![Source: Ubuntu MATE][6]
Not to forget, you should also expect the latest [Firefox 99.0][7], Celluloid 0.20, Evolution 3.44, LibreOffice 7.3.2.1, and Blueman 2.2.4, among other package updates.
#### 7\. Linux Kernel 5.15 LTS and Mate Desktop 1.26.1
You can expect the latest Linux Kernel 5.15 LTS release with Ubuntu MATE 22.04.
And, the desktop environment is now the latest MATE 1.26.1, which mostly includes performance and bug fixes on top of improvements to Mate Desktop 1.26.0.
#### 8\. Lighter ISO Size
Ubuntu MATE's ISO size has been reduced to a decent 2.7 GB. So, what's the takeaway?
All of the legacy themes and icons have been removed, so it's safe to say that Ubuntu MATE's theming system has completely transitioned to upstream Yaru. Furthermore, three snap-based applications have also been discarded.
Moreover, the proprietary NVIDIA graphics drivers can no longer be found in the default install.
Fret not: a checkbox can be marked during the installation process to install third-party drivers, including the NVIDIA drivers, just like some other Ubuntu-based distros. There's even a minimal installation checkbox as well.
#### Other Changes
There have been several bug fixes and other additions as well.
* Bugs related to Plank, Brisk Menu, and a screen reader for visually-impaired users have been fixed.
* Addition of 3 beautiful AI-generated wallpapers.
* An updated Welcome screen with newer software.
* Support for indicators for battery-powered gaming peripherals.
* A new image for the Raspberry Pi.
You can have a look at the [official release notes][8] for detailed technical information.
### Download Ubuntu MATE 22.04
Ubuntu MATE 22.04 LTS is now available to download. You can grab the ISO using the button below.
[Ubuntu MATE 22.04 LTS][9]
### Closing Thoughts
Ubuntu MATE 22.04 is a feature-rich release with its main focus on visuals. The community-driven Ubuntu MATE now looks like an even more compelling alternative to Canonical's Ubuntu, especially with the move to Yaru-only theming.
What do you think about Ubuntu MATE 22.04? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/ubuntu-mate-22-04-release/
作者:[Rishabh Moharir][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/rishabh/
[b]: https://github.com/lujun9972
[1]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjYxNCIgd2lkdGg9IjEwMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjI3IiB3aWR0aD0iNTMzIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZlcnNpb249IjEuMSIvPg==
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjI4IiB3aWR0aD0iNTM2IiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZlcnNpb249IjEuMSIvPg==
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjUzNSIgd2lkdGg9IjgxNSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjczMiIgd2lkdGg9IjgyMiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU3NiIgd2lkdGg9IjEwMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[7]: https://news.itsfoss.com/firefox-99-release/
[8]: https://ubuntu-mate.org/blog/ubuntu-mate-jammy-jellyfish-release-notes/
[9]: https://ubuntu-mate.org/download/amd64/

[#]: subject: "Xubuntu 22.04 LTS Releases with Updated Theme, Whisker Menu 2.7.1, and Other Upgrades"
[#]: via: "https://news.itsfoss.com/xubuntu-22-04-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Xubuntu 22.04 LTS Releases with Updated Theme, Whisker Menu 2.7.1, and Other Upgrades
======
Xubuntu is one of the most loved Ubuntu flavours featuring the Xfce desktop environment.
If you were looking to install the latest [Long-Term release][1] Ubuntu-based distro that is light on system resources, Xubuntu 22.04 should be a good pick.
Xubuntu 22.04 LTS includes a visual refresh and some package updates. Here, I shall highlight the key changes with this release.
### Xubuntu 22.04 LTS: What's New?
Xubuntu 22.04 comes packed with new wallpapers, app upgrades, and more.
Some of the significant refinements include:
#### Theme Updates
![][2]
Greybird, the default theme for Xubuntu, has introduced initial support for GTK 4 to blend in well with modern GTK applications.
It also brings back the Accessibility and Compact window manager themes.
![][3]
The elementary-xfce theme adds new icons and improves on the existing ones for a cleaner Xubuntu desktop experience.
#### New Wallpapers
Xubuntu features new default wallpapers along with six new additions from the community wallpaper contest.
![][4]
The new collection looks absolutely lovely.
#### Application Stack Updates
You should notice newer GNOME 42 applications, GTK 3.24.33, and other subsystem updates like NetworkManager 1.36, Mesa 22, PulseAudio 16, etc.
#### Xfce App Updates
![][5]
A range of Xfce applications have been updated with Xubuntu 22.04, some of the major ones include:
* **Mousepad 0.5.8**: A text editor with new features, including session backup/restore and plugin support with new plugins.
* **Ristretto 0.12.2**: An image viewer with improved thumbnail support and performance improvements.
* **Whisker Menu Plugin 2.7.1**: More customization options.
#### Firefox Snap
Just like Ubuntu 22.04, Xubuntu 22.04 includes Firefox as a snap package.
The snap package offers a more secure, sandboxed experience and will get quicker updates directly from Mozilla.
### Other Improvements
You should expect numerous bug fixes and performance improvements along with the essential enhancements.
There are more package updates that include:
* Thunderbird 91
* LibreOffice 7.3.2
* Blueman 2.2.4
* GNOME Disk Usage Analyzer 41.0
* MATE Calculator 1.26.0
* Thunar plugins
You can learn more about the changes in its [official announcement post][6].
### Download Xubuntu 22.04
You can head to the link in the button below to get the latest ISO available. For an upgrade, you may want to wait for a few days.
[Xubuntu 22.04 LTS][7]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/xubuntu-22-04-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/long-term-support-lts/
[2]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU3NiIgd2lkdGg9IjEwMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU2MSIgd2lkdGg9IjEzNDciIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjYzNiIgd2lkdGg9Ijc4OSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU4NCIgd2lkdGg9Ijc4MyIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[6]: https://wiki.xubuntu.org/releases/22.04/release-notes#major_updates
[7]: https://cdimage.ubuntu.com/xubuntu/releases/22.04/release/

[#]: subject: "Lubuntu 22.04 LTS Releases with Calamares Installer, LXQt 0.17.0, & Featherpad 1.0.1"
[#]: via: "https://news.itsfoss.com/lubuntu-22-04-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Lubuntu 22.04 LTS Releases with Calamares Installer, LXQt 0.17.0, & Featherpad 1.0.1
======
Looking for a lightweight Ubuntu distro for your computer as an alternative to GNOME-powered Ubuntu 22.04 LTS?
Lubuntu 22.04 LTS is here as one such option. In fact, you can also take a look at [Xubuntu 22.04 LTS][1], if you are exploring lightweight alternatives.
Here, we focus on the most exciting changes with Lubuntu 22.04 LTS.
### Lubuntu 22.04 LTS: What's New?
Lubuntu 22.04 LTS comes packed with LXQt 0.17.0, updated applications, and some improvements.
It is a [Long-Term Release][2] version. So, you can expect updates for Lubuntu 22.04 LTS until 2025, i.e., **three-year support**, which is usual for [official Ubuntu flavours][3].
Some of the significant changes include:
#### New Wallpaper
Everyone loves a new wallpaper. With various Ubuntu flavours like [MATE][4], [Kubuntu][5], and [Budgie][6] releases, you have got plenty of new wallpapers.
With Lubuntu 22.04, there is an interesting wallpaper that you can find on [Unsplash][7]:
![][8]
#### LXQt 0.17.0 (Can Upgrade to LXQt 1.1.0)
While LXQt 0.17.0 includes essential improvements, it would have been more exciting to see the recent [LXQt 1.1.0][9]. Considering it was released last week, it did not make the cut in this release.
However, they could have gone with LXQt 1.0.0 at least.
![][10]
LXQt 0.17.0 includes improvements to the session behavior for non-LXQt apps, power manager updates, an auto-hide feature added to the LXQt panel, and further adjustments.
For details on LXQt 0.17.0, you can refer to its [official release notes][11].
The LXQt project team on Twitter [mentioned][12] that it is possible to easily install LXQt 1.1.0 on Lubuntu. I tried it on a virtual machine, so you can give it a try if you like.
![][13]
All you have to do is add the following repository and then upgrade the system:
```
sudo add-apt-repository ppa:severusseptimius/lxqt
sudo apt update
sudo apt upgrade
```
I did not have any issues with it. However, I am uncertain if Lubuntu recommends it yet (or they may push an update soon).
#### Firefox as Snap
Like it or not, every Ubuntu flavour now includes Firefox as a snap package, since Mozilla will be focusing on the snap for faster updates and maintenance.
On the bright side, you should experience enhanced security thanks to its sandboxing.
The release notes mention that the browser can be slower to start for the first time after boot, especially in the live environment. But you shouldn't have trouble with subsequent runs.
#### Package Updates
With Lubuntu 22.04 LTS, you get upgrades to various applications that include:
* VLC 3.0.16
* Featherpad 1.0.1
* LibreOffice 7.3.2
Along with the applications, you also get an update to the Discover Software Center for a smoother experience with managing/installing software.
#### Calamares Installer
Lubuntu 22.04 LTS utilizes the Calamares installer in place of Ubiquity, the installer favored by most of the other Ubuntu flavours.
You get the swapfile size set to 512 MB by default. But, you can opt for no swap, if you like.
#### Dropping Trojita/k3b/fcitx
Some of the popular packages like Trojita (mail client), k3b, and fcitx are no longer present in Lubuntu 22.04 LTS.
So, if you are upgrading, you can manually install them using their [official guide][14].
To explore more about the changes in Lubuntu 22.04 LTS, you can refer to the [official release notes][15].
### Download Lubuntu 22.04 LTS
_If you are upgrading from Lubuntu 20.04 LTS with LXQt, note that this new version uses a different Openbox settings configuration file._
_If you have customized `~/.config/openbox/lxqt-rc.xml`, you will want to copy that file to `~/.config/openbox/rc.xml`. This change does not impact new installations or upgrades from 21.10._
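As a hedged sketch, that migration can be done with a guard so nothing happens on systems that never had the old file:

```
# Migrate a customized Openbox config from Lubuntu 20.04's LXQt naming
# to the file name used by 22.04. Only copies if the old file exists.
OLD="$HOME/.config/openbox/lxqt-rc.xml"
NEW="$HOME/.config/openbox/rc.xml"

if [ -f "$OLD" ]; then
    cp "$OLD" "$NEW"
    echo "Copied $OLD to $NEW"
else
    echo "No customized $OLD found; nothing to do"
fi
```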
To perform a fresh installation, you can get the latest ISO from the official website (direct link/torrent available).
[Lubuntu 22.04 LTS][16]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/lubuntu-22-04-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/xubuntu-22-04-release/
[2]: https://itsfoss.com/long-term-support-lts/
[3]: https://itsfoss.com/which-ubuntu-install/
[4]: https://news.itsfoss.com/ubuntu-mate-22-04-release/
[5]: https://news.itsfoss.com/kubuntu-22-04-release/
[6]: https://news.itsfoss.com/ubuntu-budgie-22-04-release/
[7]: https://unsplash.com/photos/bviex5lwf3s
[8]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU3NiIgd2lkdGg9IjEwMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[9]: https://news.itsfoss.com/lxqt-1-1-0-release/
[10]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY3NSIgd2lkdGg9IjEyMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[11]: https://lxqt-project.org/release/2021/04/16/lxqt-0-17-0/
[12]: https://twitter.com/lxqt_project/status/1517432593020563458?s=20&t=WVsqRk8b83pSE5_TfZ3n4Q
[13]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjUwOSIgd2lkdGg9IjcxNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[14]: https://discourse.lubuntu.me/t/dropping-trojita-k3b-fcitx-from-lubuntu-jammy-22-04-seed/3044
[15]: https://discourse.lubuntu.me/t/lubuntu-22-04-lts-jammy-jellyfish-release-notes/3179
[16]: https://lubuntu.me/downloads/

[#]: subject: "Pop!_OS 22.04 LTS Arrives with Automatic Updates, GNOME 42, and PipeWire"
[#]: via: "https://news.itsfoss.com/pop-os-22-04-release/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Pop!_OS 22.04 LTS Arrives with Automatic Updates, GNOME 42, and PipeWire
======
The next LTS upgrade for Pop!_OS is finally here.
Pop!_OS 22.04 is based on Ubuntu 22.04 LTS with [Linux Kernel 5.16.19][1] at the time of launch.
Let us take a brief look at what it has to offer.
### Pop!_OS 22.04 LTS: What's New?
Visually, it does not feature any massive changes. However, it does feature **GNOME 42** as its base for its COSMIC Desktop.
So, you should expect to see some [GNOME 42 features][2] available with Pop!_OS 22.04.
In addition to the desktop environment, there are a few new feature additions. I shall mention the key highlights as you read on.
#### Automatic Updates
With Pop!_OS 22.04 LTS, you can easily update/upgrade the system from the OS Upgrade & Recovery panel via the System settings.
![][3]
You can also schedule the day/time to process the upgrade. The scheduled automatic updates include support for Debian, Flatpak, and Nix packages as well. Surely, this should help save time by eliminating the need for checking updates every other day.
By default, you will be notified of the available updates on a weekly basis. But, you can tweak the settings to get notified every day or each month. The notifications will be turned off if you have automatic updates enabled.
![][4]
And, yes, the automatic updates are disabled by default. You will have to set it up if you require it.
#### New Support Panel
![][5]
Unlike other distributions, Pop!_OS tries to take a mainstream approach to providing technical support for users running the operating system.
To that end, they have added a new support panel in the settings, where you can quickly access the documentation, join the community support, and create log files. These options make support more accessible and encourage users to refer to the recommended sources for quick help.
#### New Screenshot Tool
![][6]
Thanks to GNOME 42, you get the brand new screenshot UI with the ability to record the screen as well.
With Pop!_OS 22.04 LTS, I found the positioning of the UI a bit weird. And, it looks like they tweaked it a bit for a transparent look with slightly different icons.
#### Dark vs Light Backgrounds
![][7]
Unlike other distributions, you won't find a new wallpaper collection to cater to the light/dark variants.
However, when you choose the light/dark theme, the wallpaper will change to the default light/dark variant available.
Do note that the light/dark mode can have independent backgrounds (you can change them) and isn't locked to the default.
#### Improvements to the Pop!_Shop
The Pop!_Shop has received some significant performance upgrades along with subtle changes to offer a good user experience even with small window sizes.
![][8]
You will also find a new “**Recently Updated**” section to highlight the latest updated applications.
This should come in handy to find applications that have been recently updated with new features/fixes.
#### Improvements to the Launcher
![][9]
With Pop!_OS, you do not need a launcher like [Ulauncher][10] to quickly access the settings/apps.
This is because you already get a built-in launcher that works pretty well.
With Pop!_OS 22.04 LTS, the launcher has also received some upgrades where you can also access settings for desktop options, background, appearance, dock, and workspaces.
#### PipeWire for Audio
With Pop!_OS 22.04 LTS, the default for audio processing will use PipeWire instead of PulseAudio.
This should not pose a problem for hardware compatible with PulseAudio, and it potentially opens the door to better quality audio and more customization.
#### Linux Kernel
![][11]
It is a no-brainer that Pop!_OS regularly updates the Linux Kernel offered. However, you will find [Linux Kernel 5.16.19][1] with all the hardware improvements out-of-the-box.
#### Other Improvements
![][12]
There are some additional essential improvements that can be useful for a variety of users. Some of them include:
* Better Multi-Monitor support
* Fixed layout on HiDPI displays
* Improved performance
* Pop!_OS Upgrade will only activate when you are checking or performing upgrades.
  * When updating Debian packages, you get the ability to resume the process if interrupted.
* Added support for laptop privacy screens.
* RDP by default for remote desktop use.
* New default profile icon.
* Workspace improvements.
* Icons are now SVG-based instead of PNG-based.
### Download Pop!_OS 22.04 LTS
If you are already running Pop!_OS, you should be able to easily upgrade it to Pop!_OS 22.04 in some time.
In either case, you can download the latest Nvidia/Intel/AMD ISO from the [official site][13].
[Pop!_OS 22.04 LTS][13]
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/pop-os-22-04-release/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/linux-kernel-5-16/
[2]: https://news.itsfoss.com/gnome-42-features/
[3]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjYxNCIgd2lkdGg9IjkyOCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[4]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjI2MSIgd2lkdGg9IjUyOCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[5]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY1NyIgd2lkdGg9IjEwMjUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQ1NyIgd2lkdGg9IjYxNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[7]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjUyNyIgd2lkdGg9Ijg5OCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[8]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjY3NyIgd2lkdGg9Ijk4MiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[9]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU1NCIgd2lkdGg9Ijc5OSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[10]: https://itsfoss.com/ulauncher/
[11]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjU4MSIgd2lkdGg9Ijg2OCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[12]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjMzOCIgd2lkdGg9IjEyMDAiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+
[13]: https://pop.system76.com/

[#]: subject: "Everything You Need to Know About Mozilla and Meta (Facebook) Working Together"
[#]: via: "https://news.itsfoss.com/mozilla-meta-facebook/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "sthwhl"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Everything You Need to Know About Mozilla and Meta (Facebook) Working Together
======
I'm sure it is easy to make several assumptions about the story just from the headline.
_Why?_
Well, it is **Facebook**, after all.
Even if it is “**Meta**” now, that does not change the fact that they were involved in some of the worst privacy practices ever.
If you think about it, Facebook isn't exactly a privacy-focused social media platform (even though I still use it for certain use-cases).
_With so much to complain about, how did a privacy-focused company like “Mozilla” end up working with Meta (Facebook)?_
Surprisingly, Mozilla made several remarks about Facebooks bad privacy practices in the past.
Not to forget, Mozilla Firefox was one of the first web browsers to prevent companies like Facebook from tracking users thanks to [total cookie protection][1] and some other technologies.
Furthermore, they recently started a study collaborating with **The Markup** to analyze the type of information Facebook collects.
So, why are they working with Facebook now?
### Privacy-Preserving Attribution Using IPA
Mozilla revealed in a [blog post][2] that it has been working with a team from Meta on a new proposal about a privacy-respecting attribution.
Attribution in advertising lets the advertisers/marketers know if their ad campaigns are performing as expected.
And, Mozilla plans to introduce **Interoperable Private Attribution** (or IPA) to give advertisers the ability to check insights while making the advertising privacy-friendly.
### How Does IPA Aim to Make Advertising Privacy-Friendly?
Mozilla is utilizing its expertise with its existing privacy-preserving telemetry technology, [Prio][3].
While that sounds promising, how does IPA work?
As described in the blog post, Mozilla says that IPA offers two privacy-preserving features:
  * It uses Multi-Party Computation (MPC) to prevent any single entity (browser, advertisers, or websites) from learning about user behavior.
  * Instead of individual results that link back to track/profile users, IPA is an aggregated system that does not link anything back to individual users.
Technically, they plan to use “match keys” that are different from cookies but can be used across different browsers/devices to be able to generate useful reports.
These match keys will help produce summary statistics about the ad interaction events (whether it is clicked, seen, and if it made a conversion).
As per the proposal, the match keys would be writable but not readable, making it a critical component of the privacy properties in IPA.
### Is This Useful?
Taking a good look at its [proposal][4], it is safe to say that it sounds promising.
Considering ad revenue is still the major fuel for most businesses, it only makes sense to make it privacy-friendly and less intrusive.
The result could simply bring back the good old days, when users weren't worried about advertising but simply curious about the ads they saw.
Unlike [Googles FLoC][5], this can create a win-win scenario for both advertisers and the users as well.
### How Does Meta Fit in the Picture?
![][6]
I am really not sure about this.
I have no intention of making ill-informed remarks about the technology proposed by Mozilla, collaborating with Meta.
On the other hand, I can't be confident about it, considering they chose “Meta” to collaborate with on something that is meant to improve the advertising industry without harming user privacy.
### Mozilla, What Are You Hiding?
I'm not stirring up controversy (or a wild theory).
But a transparent, privacy-respecting company just decided to collaborate with a company that isn't really known for privacy?
Isn't it obvious that the team at Mozilla already knows this?
And they still decided to go ahead with it, without any transparent public communication on their social media channels.
Yes, they did publish the blog post, but it wasn't promoted, even though it is an important proposal affecting almost every industry on the web.
_Is it safe to assume that Mozilla no longer cares about its userbase with this move?_
_It's totally up for discussion in the comments down below!_
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/mozilla-meta-facebook/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[sthwhl](https://github.com/sthwhl)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/firefox-86-release/
[2]: https://blog.mozilla.org/en/mozilla/privacy-preserving-attribution-for-advertising/
[3]: https://crypto.stanford.edu/prio/
[4]: https://docs.google.com/document/d/1KpdSKD8-Rn0bWPTu4UtK54ks0yv2j22pA5SrAD9av4s/edit
[5]: https://techcrunch.com/2022/01/25/google-kills-off-floc-replaces-it-with-topics/
[6]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQzOSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=

[#]: subject: "Documentation Isn't Just Another Aspect of Open Source Development"
[#]: via: "https://www.opensourceforu.com/2022/04/documentation-isnt-just-another-aspect-of-open-source-development/"
[#]: author: "Harsh Bardhan Mishra https://www.opensourceforu.com/author/harsh-bardhan-mishra/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Documentation Isn't Just Another Aspect of Open Source Development
======
Some projects live on, while some die a premature death and the difference between the two often lies in the documentation. Meticulous, smart documentation can give your project the boost it needs. Here is why you should consider documentation a primary effort, on par with development, and the right way to go about it!
![Importance of documentation][1]
Often, developers simply assume that code is self-documenting and doesn't need any extra documentation. This overconfidence can cost the project a lot. Insufficient or bad documentation can kill your project. Without proper documentation in place, users won't be able to understand the purpose or the proper workflow of the project. That could lead to apprehension about adopting your open source product.
**Work on it, right from day one**
Documentation should never be a secondary effort; it should be a primary task, on par with code development and management. Documentation acts as the definitive source of truth amid the wide redistribution of content in the form of community threads, Stack Overflow posts, and Quora answers. It should fulfil the needs of contributors who would like to refer to the actual resource, and provide the necessary references to support engineers. It should also communicate essential plans to the stakeholders. Good documentation ensures continuous improvement and development of a product.
When releasing a software product, we must ship not only the code but good documentation as well. This brings us to one of the most important concepts that most open source projects with well-maintained documentation follow: documentation as code.
**Documentation as code**
Today, documentation is no longer stored in Microsoft Word or PDF files. The new norm is version-controlled documentation, wherein all the docs are kept under a version control system and released continuously. This concept was popularised by Read the Docs and has now become an essential part of the content strategy for most documentation teams.
Tools like Bugzilla and GitHub Issues can be used to track the documentation work that is pending, and take feedback from maintainers and users to validate the release of the documents. External reviews can be used to validate the documentation piece, and to continuously publish it. This ensures that not only code, but the documentation as well, is continuously improved and released quickly.
Keep in mind that no two pieces of documentation will ever be the same if they don't follow a standardised practice. That can lead to a mess, making it hard to fetch the right information.
How exactly do we classify something as messy? When most of the documentation pieces don't follow standard practices, it leads to inconsistency, and hence a big mess! So how do you declutter messy open source documents?
**Decluttering messy open source documentation**
It is important to follow a documentation style guide. A style guide is a collection of guidelines for creating and presenting content. Whether you're a standalone writer or part of a large docs team, it helps to keep a consistent style, voice, and tone throughout your documentation.
There are several popular style guides available, such as the Red Hat style guide, Google documentation style guide, and Apple style guide. To choose one, first start by defining your requirements. If your requirements do not differ much from other open source projects, you can follow a readily-available style guide, or adapt the same style guide for your own purpose with a few changes here and there. Most of the grammar-related guidelines and content rules may be the same, but overall terminology can vary.
You will also want to automate the adoption of these style guides within your projects. For this, you can use Vale, a linter that runs on your local machine and in a continuous integration (CI) service to help ensure your documentation strictly follows the style guide.
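As a rough sketch of what that setup involves (the exact settings below are assumptions based on Vale's built-in style, not a recommendation from the article):

```
# Minimal sketch of a Vale configuration for a docs repository.
# "styles" as the StylesPath and the built-in "Vale" style are assumptions;
# adapt them to whichever style guide you adopt.
cat > .vale.ini <<'EOF'
StylesPath = styles
MinAlertLevel = suggestion

[*.md]
BasedOnStyles = Vale
EOF

# In CI you would then lint the docs with something like:
#   vale docs/
cat .vale.ini
```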
A project's documentation typically falls into a few categories:
*Reference guides:* These might include some basic references to get started with, or docs on contributing to the project.
*User facing documentation:* This is the most essential part that documents the usability of the project. Without any user-facing documentation most people will be lost as to how to go about working with the project.
*Developers documentation:* This aims to support development teams as they continuously make new progress in the project. It should also provide a good pathway to internal development efforts and make sure that the features are communicated well to the stakeholders.
*Community content:* This includes essential blogs, videos and external content that aim to support community members who would like to refer to the same for a better understanding of the project.
By using the style guide, the overall premise of the documentation will be conveyed to the users in a single tone. But since these documents are prepared by a team of technical writers, there can be conflicting writing styles, as these vary from person to person. So how do you standardise the documentation?
**Standardising documentation**
There are many approaches to standardising documentation. The first, obviously, is to create predefined templates that can be used for a variety of purposes: documenting new features, reporting bugs and issues, and updating the change log to reflect what has been added.
If you follow a Git-based workflow, try to develop a standard process for publishing your documentation. The most common workflow is to fork the repository where the documentation is published, add your changes on a local branch, push those changes, open a pull request, and ask for reviews on it. A further benefit of standardising your documents is a better feedback and review process.
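As a concrete sketch of that workflow (the repository, branch and file names are purely illustrative, with a local bare repository standing in for the hosted docs repo):

```shell
# Work in a scratch directory; a local bare repo stands in for the docs host.
set -e
workdir=$(mktemp -d) && cd "$workdir"
git init -q --bare upstream.git

# "Fork"/clone the docs repo and work on a local branch.
git clone -q upstream.git docs-fork && cd docs-fork
git checkout -qb fix-install-guide

# Make a documentation change and commit it.
mkdir -p docs && echo '## Install' > docs/install.md
git add docs/install.md
git -c user.name=Writer -c user.email=writer@example.com \
    commit -qm 'docs: clarify install steps'

# Push the branch; on a real host you would now open a pull request.
git push -q -u origin fix-install-guide
git rev-parse --abbrev-ref HEAD   # shows the working branch
```

Keeping every change on its own reviewable branch is what makes the standardised feedback loop possible.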
**Feedback and automated reviews**
Standardisation allows you to gather user feedback and generate automated reviews, which can be taken into consideration when improving the project and the documentation. With this feedback, you can also evaluate whether the information being shared makes sense to users. Having a proper feedback mechanism in place through documentation platforms like GitBook helps verify whether the documentation is useful.
Always try to seek out subject matter expert (SME) feedback on the documentation. These SMEs can be stakeholders, developers, engineers, or even external contributors. You can also use automated tests and CI to verify if your documentation is following a style guide or not.
**Crowdsourced documentation efforts**
If you are looking to open source your documentation, perhaps the best way to get started is to provide a quick start guide. The guide can be as simple as CONTRIBUTING.md; basically, a file showing how a person can set up the project and contribute to or use it.
Always try to develop user-centric documentation that signifies the purpose of each project, and build learning courses to help new contributors. Before writing, ask yourself the following questions:
* What is the goal of this documentation?
* What is the message that needs to be given?
* What action would you like the user to take after this?
* What are the values that I share with the reader?
* Am I concise and consistent in my writing efforts?
**Defining a consistent content strategy**

A consistent content strategy helps to ensure a long-term vision for the documentation efforts and the project infrastructure. This can revolve around two main things:
a. *Resources:* Project docs, case studies and whitepapers, project architecture
b. *Branded content:* Blogs and guest posts, news and community stories, learning courses
Every open source project should have proper documentation stating the functionality it can provide to users, so that they can opt for the most suitable solution. Proper documentation that communicates the right information also allows other developers to put in their efforts to further enhance and improve the project. Simple though it sounds, documentation can only succeed if done right. And your project, in turn, can only succeed if your documentation is right, so never underestimate its purpose or process!
Curated By: Laveesh Kocher
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/04/documentation-isnt-just-another-aspect-of-open-source-development/
作者:[Harsh Bardhan Mishra][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/harsh-bardhan-mishra/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/03/Importance-of-documentation-696x477.jpg

[#]: subject: "10 Reasons to Run Linux in Virtual Machines"
[#]: via: "https://itsfoss.com/why-linux-virtual-machine/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
10 Reasons to Run Linux in Virtual Machines
======
You can run any operating system as a virtual machine to test things out or for a particular use case.
When it comes to Linux, it is usually a better performer as a virtual machine when compared to other operating systems. Even if you hesitate to install Linux on bare metal, you can try setting up a virtual machine that could run as you would expect on a physical machine.
Of course, we don't rule out the possibility of running Linux distros in a VM even when using Linux as your host OS.
Moreover, you get numerous benefits when running Linux on virtual machines. Here, I shall cover all of them.
### Things to Keep in Mind Before Running Linux as a Virtual Machine
It is worth noting that running Linux in a virtual machine is not a daunting task, but there are a few pointers you should keep in mind.
Some of them include:
* The virtual machine performance will depend on your host system. If you do not have enough system resources to allocate, the virtual machine experience will not be pleasant.
* Certain features only work well with bare metal (hardware acceleration, graphics drivers, etc.)
* You should not expect intensive disk I/O tasks to work well, like testing games.
* The user experience with Linux virtual machines varies with the program you use. For instance, you can try VMware, VirtualBox, GNOME Boxes, and Hyper-V.
In addition to all these tips, you should also make a list of your requirements before choosing a virtual machine program to run Linux.
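One item worth adding to that list: check whether your CPU exposes hardware virtualization at all, since most VM programs rely on it. On a Linux host, a quick (illustrative) check is:

```shell
# A non-zero count means the CPU advertises hardware virtualization:
# 'vmx' is Intel VT-x, 'svm' is AMD-V. grep -c exits non-zero on no match,
# so '|| true' keeps the check from aborting scripts run under 'set -e'.
grep -E -c 'vmx|svm' /proc/cpuinfo 2>/dev/null || true
```

If this prints `0`, virtualization may still just be disabled in your firmware settings.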
### Here Are 10 Benefits of Running Linux on Virtual Machines
While there are perks to using a Linux VM, you should consider the current opportunities available on your host OS. For instance, you may want to [install Linux using WSL on Windows][1] if you do not require a GUI desktop.
Once you are sure that you need a VM, heres why you should proceed with it:
#### 1. Easy Setup
![easy setup linux vm][2]
Compared to the traditional installation process on bare metal, setting up a virtual machine is often easier.
For Ubuntu-based distros, programs like VMware offer an **Easy Install** option where you have to type in the required fields for username and password; the rest will proceed without needing additional inputs. You do not need to select a partition, bootloader, or advanced configurations.
In some cases, you can also use prebuilt images offered by Linux distributions for a specific virtual program, where you need to open it to access the system. Think of it as a portable VM image ready to launch wherever you need it.
For example, you can check out how you can use [VirtualBox to install Arch Linux][3].
You may still need to configure things when installing other distros, but there are options where you need minimal effort.
#### 2. Does Not Affect the Host OS
![isolated linux vm][4]
With a virtual machine, you get the freedom to do anything you want, and it is because you get an isolated system.
Usually, if you do not know what you're doing with a Linux system, you could easily end up with a messed-up configuration.
So, if you set up a VM, you can quickly try whatever you want without worrying about affecting the host OS. In other words, your system will not be impacted by any changes to the VM because it's entirely isolated.
Hence, a VM is the best way to test any of your ambitious or destructive changes that you may want to perform on bare metal.
#### 3. Resource Sharing
![sharing resources linux vm][5]
If you have ample free system resources, you can put the spare capacity to use by running a virtual machine for other tasks. For instance, if you want a private browsing experience without leaving any traces on your host, a VM can help.
It can be a far-fetched example, but it is just one of the ideas. In that way, you get to use the resources fully without much hassle.
Also, in a dual-boot scenario, where you [install Linux alongside Windows][6] on separate disks or [install Windows after Linux][7], your resources are dedicated to whichever OS you booted into.
With a VM, however, you can use Linux without locking up your resources; the host and guest share them only for as long as your tasks need, which can be more convenient.
#### 4. Multi-Tasking
![multitasking linux vm][8]
With the help of resource-sharing, you can easily multi-task.
For instance, you need to switch back and forth between a dual-boot setup to access Windows and Linux.
But, with a virtual machine, you can almost eliminate the need for [dual-booting Linux][9] and multi-task with two operating systems seamlessly.
Of course, you need to ensure that you have the required amount of system resources and external hardware (like dual monitors) to effectively use it. Nevertheless, the potential to multi-task increases with a Linux VM in place.
#### 5. Facilitates Software Testing
With virtualization, you get the freedom to test software on Linux distros by instantly creating various situations.
For instance, you can test different software versions simultaneously on multiple Linux VMs. There can be more use-cases, such as testing a software development build, early build of a Linux distro, etc.
#### 6. Great for Development
![development linux vm][10]
When you want to learn to code or just get involved in developing something, you want an environment free from any conflicts and errors.
So, a Linux VM is the perfect place to install new packages from scratch without worrying about conflicts with existing ones. For instance, you can [install and set up Flutter][11] to test things on Ubuntu.
If you mess up the system, you can quickly delete the VM and spin up a new one to learn from your mistakes.
You get a perfect isolated environment for development work and testing with a Linux VM.
#### 7. Learning or Research
Linux is something to explore. While you could use it for basic computing tasks, there's so much more that you can do with it.
You can learn how to customize the user interface, try some [popular desktop environments][12], install [various essential apps][13], and take control of your system without worrying about it.
If anything goes wrong, you create a new Linux VM. Of course, it is not just for general-purpose usage, but aspiring system administrators can also take this opportunity to test what they learn.
#### 8. Easy to Clone or Migrate
Virtual machines, in general, are easy to clone and migrate. With a Linux VM, as long as the virtual program is supported on another system or host OS, you can easily migrate it without any special requirements.
If you need to clone an existing virtual machine for any reason, that is pretty easy too, and it should take a couple of clicks to get it done.
#### 9. Try Variety of Distros
![distros linux vm][14]
Of course, with hundreds of Linux distros available, you can try all kinds of distros by creating a Linux virtual machine.
You may consider this a part of learning/research, but I believe trying out different distros before installing one on your system is a massive task without virtual machines.
#### 10. Debugging
Whether it is for fun or serious research, debugging is relatively more straightforward in an isolated environment provided by the Linux VM.
You get the freedom to try various troubleshooting methods without thinking about the outcome. Also, you do not need root access to your host OS (if it's Linux) to access the system configuration/files in the VM.
### Wrapping Up
If you are not an experienced user or depend on a different host OS, you can benefit from installing Linux using a virtual machine.
A Linux VM should be beneficial for development, learning, experimenting, or any other special use cases.
Have you used Linux on a virtual machine? What do you use it for? Let me know in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/why-linux-virtual-machine/
作者:[Ankush Das][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lkxed
[1]: https://itsfoss.com/install-bash-on-windows/
[2]: https://itsfoss.com/wp-content/uploads/2022/04/easy-setup-linux-vm.jpg
[3]: https://itsfoss.com/install-arch-linux-virtualbox/
[4]: https://itsfoss.com/wp-content/uploads/2022/04/isolated-linux-vm.jpg
[5]: https://itsfoss.com/wp-content/uploads/2022/04/sharing-resources-linux-vm.jpg
[6]: https://itsfoss.com/dual-boot-hdd-ssd/
[7]: https://itsfoss.com/install-windows-after-ubuntu-dual-boot/
[8]: https://itsfoss.com/wp-content/uploads/2022/04/multitasking-linux-vm.jpg
[9]: https://itsfoss.com/dual-boot-fedora-windows/
[10]: https://itsfoss.com/wp-content/uploads/2022/04/development-linux-vm.jpg
[11]: https://itsfoss.com/install-flutter-linux/
[12]: https://itsfoss.com/best-linux-desktop-environments/
[13]: https://itsfoss.com/essential-linux-applications/
[14]: https://itsfoss.com/wp-content/uploads/2022/04/distros-linux-vm.jpg

[#]: subject: "Elon Musks Plan To Open Source The Twitter Algorithm Has Flaws"
[#]: via: "https://www.opensourceforu.com/2022/04/elon-musks-plan-to-open-source-the-twitter-algorithm-has-flaws/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Elon Musk's Plan To Open Source The Twitter Algorithm Has Flaws
======
![twitter][1]
Elon Musk made his aspirations for Twitter obvious just hours after Twitter said it had accepted his takeover offer. Musk listed the major changes he intends to make in a press release, including opening up the algorithms that govern what users see in their feeds.
Musk's desire to open source Twitter's algorithms stems from his long-standing concern about the platform's potential for political repression, but it's unlikely that doing so will have the desired effect. Experts worry that it may instead bring a slew of unexpected issues.
Although Musk has a deep dislike for authority, his ambition for algorithmic openness coincides with the wishes of legislators all around the world. In recent years, numerous governments have used this principle as a cornerstone of their efforts to combat Big Tech.
Melanie Dawes, the chief executive of Ofcom, the UK communications regulator, has stated that social media firms would be required to explain how their code operates. In addition, the European Union's newly passed Digital Services Act, approved on April 23, would oblige platforms to provide more openness. In February 2022, Democratic senators in the United States submitted legislation for an Algorithmic Accountability Act. Their goal is to increase transparency and supervision of the algorithms that regulate our timelines and news feeds, as well as other aspects of our lives.
Allowing competitors to see and adapt Twitter's algorithm potentially means that someone could simply take the source code and offer a rebranded version. Vast sections of the internet run on open source software, one of the most renowned pieces being OpenSSL, a security toolkit used by large swaths of the web, which suffered the serious Heartbleed vulnerability in 2014.
There are also open source social networks already in existence. Mastodon, a microblogging network created in response to worries about Twitter's dominant position, allows users to inspect its code, which is available on the GitHub software repository.
However, reading the code behind an algorithm does not always tell you how it works, and it doesn't give the typical individual much insight into the corporate structures and processes that go into its development.
“It's a bit like trying to understand ancient creatures with genetic material alone,” says Jonathan Gray, a senior lecturer in critical infrastructure studies at King's College London. “It tells us more than nothing, but it would be a stretch to say we know about how they live.”
Twitter is likewise not controlled by a single algorithm. “Some of them will determine what people see on their timelines in terms of trends, content, or suggested followers,” says Catherine Flick, a researcher at De Montfort University in the United Kingdom who studies computing and social responsibility. The algorithms that regulate what information shows up in users' timelines will be the ones people are most interested in, but even those won't be very useful without the training data.
Cobbe believes that the hazards outweigh the advantages. Because the computer code doesn't reveal how algorithms were developed or evaluated, what elements or considerations went into them, or what was prioritised during the process, open sourcing it may not make a significant difference to Twitter's transparency. In the meantime, it may pose severe security hazards.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/04/elon-musks-plan-to-open-source-the-twitter-algorithm-has-flaws/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/twiiter-696x392.jpg

[#]: collector: (lujun9972)
[#]: translator: (Starryi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Watching activity on Linux with watch and tail commands)
[#]: via: (https://www.networkworld.com/article/3529891/watching-activity-on-linux-with-watch-and-tail-commands.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Watching activity on Linux with watch and tail commands
======
The watch and tail commands can help monitor activity on Linux systems. This post looks at some helpful ways to use these commands.
Loops7 / Getty Images
The **watch** and **tail** commands provide some interesting options for examining activity on a Linux system in an ongoing manner.
That is, instead of just asking a question and getting an answer (like asking **who** and getting a list of currently logged in users), you can get **watch** to provide you with a display showing who is logged in along with updates as users come and go.
With **tail**, you can display the bottoms of files and see content as it is added. This kind of monitoring is often very helpful and requires less effort than running commands periodically.
### Using watch
One of the simplest examples of using **watch** is to use the command **watch who**. You should see a list showing who is logged in along with when they logged in and where they logged in from. Notice that the default is to update the display every two seconds (top left) and that the date and time (upper right) updates itself at that interval. The list of users will grow and shrink as users log in and out.
```
$ watch who
```
This command will display a list of logins like this:
```
Every 2.0s: who dragonfly: Thu Feb 27 10:52:00 2020
nemo pts/0 2020-02-27 08:07 (192.168.0.11)
shs pts/1 2020-02-27 10:58 (192.168.0.5)
```
You can change the interval to get less frequent updates by adding a **-n** option (e.g., -n 10) to select a different number of seconds between updates.
```
$ watch -n 10 who
```
The new interval will be displayed and the time shown will change less frequently, aligning itself with the selected interval.
```
Every 10.0s: who dragonfly: Thu Feb 27 11:05:47 2020
nemo pts/0 2020-02-27 08:07 (192.168.0.11)
shs pts/1 2020-02-27 10:58 (192.168.0.5)
```
If you prefer to see only the command's output and not the heading (the top 2 lines), you can omit those lines by adding the **-t** (no title) option.
```
$ watch -t who
```
Your display will then look like this:
```
nemo pts/0 2020-02-27 08:07 (192.168.0.11)
shs pts/1 2020-02-27 10:58 (192.168.0.5)
```
If every time the watched command runs, its output is the same, only the title line (if not omitted) will change. The rest of the displayed information will stay the same.
If you want your **watch** command to exit as soon as the output of the command that it is watching changes, you can use a **-g** (think of this as the "go away") option. You might choose to do this if, for example, you are simply waiting for others to start logging into the system.
You can also highlight changes in the displayed output using the **-d** (differences) option. The highlighting will only last for one interval (2 seconds by default), but can help to draw your attention to the changes.
Here's a more complex example of using the **watch** command to display services that are listening for connections and the ports they are using. While the output isn't likely to change, it would alert you to any new service starting up or one going down.
```
$ watch 'sudo lsof -i -P -n | grep LISTEN'
```
Notice that the command being run needs to be enclosed in quotes to ensure that the **watch** command doesn't send its output to the grep command.
Using the **watch -h** command will provide you with a list of the command's options.
```
$ watch -h
Usage:
watch [options] command
Options:
-b, --beep beep if command has a non-zero exit
-c, --color interpret ANSI color and style sequences
-d, --differences[=<permanent>]
highlight changes between updates
-e, --errexit exit if command has a non-zero exit
-g, --chgexit exit when output from command changes
-n, --interval <secs> seconds to wait between updates
-p, --precise attempt run command in precise intervals
-t, --no-title turn off header
-x, --exec pass command to exec instead of "sh -c"
-h, --help display this help and exit
-v, --version output version information and exit
```
### Using tail -f
The **tail -f** command has something in common with **watch**. It will display both the bottom of a file and additional content as it is added. Instead of having to run a **tail** command again and again, you run one command and get a continuously updated view of the file. For example, you could watch a system log with a command like this:
```
$ tail -f /var/log/syslog
```
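To get a quick, non-interactive feel for what **tail** selects (the file name below is just an illustration):

```shell
# Build a small scratch log, then show only its last lines.
printf 'line 1\nline 2\nline 3\n' > /tmp/demo.log
tail -n 2 /tmp/demo.log            # prints: line 2, line 3
echo 'line 4' >> /tmp/demo.log     # with -f, this would appear immediately
tail -n 1 /tmp/demo.log            # prints: line 4
```

With **-f** instead of **-n**, the command would stay running and show `line 4` the moment it was appended.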
Some files, like **/var/log/wtmp**, don't lend themselves to this type of handling because they're not formatted as normal text files, but you could get a similar result by combining **watch** and **tail** like this:
```
$ watch -n 60 'who /var/log/wtmp | tail -5'
```
This command will display the most recent 5 logins regardless of how many of the users are still logged in. If another login occurs, a line will be added and the top line removed.
```
Every 60.0s: who /var/log/wtmp | tail -5 dragonfly: Thu Feb 27 12:46:07 2020
shs pts/0 2020-02-27 08:07 (192.168.0.5)
nemo pts/1 2020-02-27 08:26 (192.168.0.5)
shs pts/1 2020-02-27 10:58 (192.168.0.5)
nemo pts/1 2020-02-27 11:34 (192.168.0.5)
dory pts/1 2020-02-27 12:14 (192.168.0.5)
```
Both the **watch** and **tail -f** commands can provide auto-updating views of information that you might at times want to monitor, making the task of monitoring quite a bit easier whether you're monitoring processes, logins or system resources.
Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3529891/watching-activity-on-linux-with-watch-and-tail-commands.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world

[#]: collector: (lujun9972)
[#]: translator: (hwlife)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 tips for securing your WordPress website)
[#]: via: (https://opensource.com/article/20/4/wordpress-security)
[#]: author: (Lucy Carney https://opensource.com/users/lucy-carney)
6 tips for securing your WordPress website
======
Even beginners can—and should—take these steps to protect their
WordPress sites against cyberattacks.
![A lock on the side of a building][1]
Already powering over 30% of the internet, WordPress is the fastest-growing content management system (CMS) in the world—and it's not hard to see why. With tons of customization available through coding and plugins, top-notch SEO, and a supreme reputation for blogging, WordPress has certainly earned its popularity.
However, with popularity comes other, less appealing attention. WordPress is a common target for intruders, malware, and cyberattacks—in fact, WordPress accounted for around [90% of hacked CMS platforms][2] in 2019.
Whether you're a first-time WordPress user or an experienced developer, there are important steps you can take to protect your WordPress website. The following six key tips will get you started.
### 1\. Choose reliable hosting
Hosting is the unseen foundation of all websites—without it, you can't publish your site online. But hosting does much more than simply host your site. It's also responsible for site speed, performance, and security.
The first thing to do is to check if a host includes SSL security in its plans.
SSL is an essential security feature for all websites, whether you're running a small blog or a large online store. You'll need a more [advanced SSL certificate][3] if you're accepting payments, but for most sites, the basic free SSL should be fine.
Other security features to look out for include:
* Frequent, automatic offsite backups
* Malware and antivirus scanning and removal
* Distributed denial of service (DDoS) protection
* Real-time network monitoring
* Advanced firewall protection
In addition to these digital security features, it's worth thinking about your hosting provider's _physical_ security measures as well. These include limiting access to data centers with security guards, CCTV, and two-factor or biometric authentication.
### 2\. Use security plugins
One of the best—and easiest—ways of protecting your website's security is to install a security plugin, such as [Sucuri][4], which is an open source, GPLv2 licensed project. Security plugins are vitally important because they automate security, which means you can focus on running your site rather than committing all your time to fighting off online threats.
These plugins detect and block malicious attacks and alert you about any issues that require your attention. In short, they constantly work in the background to protect your site, meaning you don't have to stay awake 24/7 to fight off hackers, bugs, and other digital nasties.
A good security plugin will provide all the essential security features you need for free, but some advanced features require a paid subscription. For example, you'll need to pay if you want to unlock [Sucuri's website firewall][5]. Enabling a web application firewall (WAF) blocks common threats and adds an extra layer of security to your site, so it's a good idea to look for this feature when choosing a security plugin.
### 3\. Choose trustworthy plugins and themes
The joy of WordPress is that it is open source, so anyone and everyone can pitch in with themes and plugins that they've developed. This can also pose problems when it comes to picking a high-quality theme or plugin.
It pays to be cautious when picking a free theme or plugin, as some are poorly designed—or worse, may hide malicious code.
To avoid this, always source free themes and plugins from reputable sources, such as the WordPress library. Always read reviews and research the developer to see if they've built any other programs.
Outdated or poorly designed themes and plugins can leave "backdoors" open for attackers or bugs to get into your site, which is why it pays to be careful in your choices. However, you should also be wary of nulled or cracked themes. These are premium themes that have been compromised by hackers and are for sale illegally. You might buy a nulled theme believing that it's all above-board—only to have your site damaged by hidden malicious code.
To avoid nulled themes, don't get drawn in by discounted prices, and always stick to reputable stores, such as the official [WordPress directory][6]. If you're looking elsewhere, stick to large and trusted stores, such as [Themify][7], a theme and plugin store that has been running since 2010. Themify ensures all its WordPress themes pass the [Google Mobile-Friendly][8] test and are open source under the [GNU General Public License][9].
### 4\. Run regular updates
It's a fundamental WordPress rule: _always keep your site up to date._ However, it's a rule not everyone sticks to—in fact, only [43% of WordPress sites][10] are running the latest version.
The problem is that when your site becomes outdated, it becomes susceptible to glitches, bugs, intrusions, and crashes because it falls behind on security and performance fixes. Outdated sites can't fix bugs the same way as updated sites can, and attackers can tell which sites are outdated. This means they can search for the most vulnerable sites and attack accordingly.
This is why you should always run your site on the latest version of WordPress. And in order to keep your security at its strongest, you must update your plugins and themes as well as your core WordPress software.
If you choose a managed WordPress hosting plan, you might find that your provider will check and run updates for you—be clear whether your host offers software _and_ plugin updates. If not, you can install an open source plugin manager, such as the GPLv2-licensed [Easy Updates Manager plugin][11], as an alternative.
### 5\. Strengthen your logins
Aside from creating a secure WordPress website through carefully choosing your theme and installing security plugins, you also need to safeguard against unauthorized access through logins.
#### Password protection
The first and simplest way to strengthen your login security is to change your password—especially if you're using an [easily guessed phrase][12] such as "123456" or "qwerty."
Instead, try to use a long passphrase rather than a password, as they are harder to crack. The best way is to use a series of unrelated words strung together that you find easy to remember.
Here are some other tips:
* Never reuse passwords
* Don't include obvious words such as family members' names or your favorite football team
* Never share your login details with anyone
* Include capitals and numbers to add complexity to your passphrase
* Don't write down or store your login details anywhere
* Use a [password manager][13]
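As a toy sketch of the passphrase idea (the word list here is tiny and purely illustrative; a real generator should draw from a list of thousands of words):

```shell
# Pick four words at random and join them with hyphens.
printf '%s\n' correct horse battery staple orbit walrus copper violin \
  | shuf -n 4 | paste -sd '-'
```

A result like `walrus-orbit-copper-staple` is far harder to brute-force than a short password, yet much easier to remember.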
#### Change your login URL
It's a good idea to change your default login web address from the standard format: yourdomain.com/wp-admin. This is because hackers know this is the default URL, so you risk brute-force attacks by not changing it.
To avoid this, change the URL to something different. Use an open source plugin such as the GPLv2-licensed [WPS Hide Login][14] for safe, quick, and easy customization.
#### Apply two-factor authentication
For extra protection against unauthorized logins and brute-force attacks, you should add two-factor authentication. This means that even if someone _does_ get access to your login details, they'll need a code that's sent directly to your phone to gain access to your WordPress site's admin.
Adding two-factor authentication is pretty easy. Simply install yet another plugin—this time, search the WordPress Plugin Directory for "two-factor authentication," and select the plugin you want. One option is [Two Factor][15], a popular GPLv2 licensed project that has over 10,000 active installations.
#### Limit login attempts
WordPress tries to be helpful by letting you guess your login details as many times as you like. However, this is also helpful to hackers trying to gain unauthorized access to your WordPress site to release malicious code.
To combat brute-force attacks, install a plugin that limits login attempts and set how many guesses you want to allow.
### 6\. Disable file editing
This isn't such a beginner-friendly step, so don't attempt it unless you're a confident coder—and always back up your site first!
That said, disabling file editing _is_ an important measure if you're really serious about protecting your WordPress website. If you don't hide your files, it means anyone can edit your theme and plugin code straight from the admin area—which is dangerous if an intruder gets in.
To deny unauthorized access, add the following to your site's **.htaccess** file (this is an Apache directive that protects **wp-config.php**, so it belongs in **.htaccess** rather than in **wp-config.php** itself):

```
<Files wp-config.php>
order allow,deny
deny from all
</Files>
```
Or, to remove the theme and plugin editing options from your WordPress admin area completely, edit your **wp-config.php** file by adding:
```
define( 'DISALLOW_FILE_EDIT', true );
```
Once you've saved and reloaded the file, the plugin and theme editors will disappear from your menus within the WordPress admin area, stopping anyone from editing your theme or plugin code—including you. Should you need to restore access to your theme and plugin code, just delete the code you added to your **wp-config.php** file when you disabled editing.
Whether you block unauthorized access or totally disable file editing, it's important to take action to protect your site's code. Otherwise, it's easy for unwelcome visitors to edit your files and add new code. This means an attacker could use the editor to gather data from your WordPress site or even use your site to launch attacks on others.
For an easier way of hiding your files, you can use a security plugin that will do it for you, such as Sucuri.
### WordPress security recap
WordPress is an excellent open source platform that should be enjoyed by beginners and developers alike without the fear of becoming a victim of an attack. Sadly, these threats aren't going anywhere anytime soon, so it's vital to stay on top of your site's security.
Using the measures outlined above, you can create a stronger, more secure level of protection for your WordPress site and ensure a much more enjoyable experience for yourself.
Staying secure is an ongoing commitment rather than a one-time checklist, so be sure to revisit these steps regularly and stay alert when building and using your CMS.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/wordpress-security
作者:[Lucy Carney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/lucy-carney
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://cyberforces.com/en/wordpress-most-hacked-cms
[3]: https://opensource.com/article/19/11/internet-security-tls-ssl-certificate-authority
[4]: https://wordpress.org/plugins/sucuri-scanner/
[5]: https://sucuri.net/website-firewall/
[6]: https://wordpress.org/themes/
[7]: https://themify.me/
[8]: https://developers.google.com/search/mobile-sites/
[9]: http://www.gnu.org/licenses/gpl.html
[10]: https://wordpress.org/about/stats/
[11]: https://wordpress.org/plugins/stops-core-theme-and-plugin-updates/
[12]: https://www.forbes.com/sites/kateoflahertyuk/2019/04/21/these-are-the-worlds-most-hacked-passwords-is-yours-on-the-list/#4f157c2f289c
[13]: https://opensource.com/article/16/12/password-managers
[14]: https://wordpress.org/plugins/wps-hide-login/
[15]: https://en-gb.wordpress.org/plugins/two-factor/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source live streaming with Open Broadcaster Software)
[#]: via: (https://opensource.com/article/20/4/open-source-live-stream)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Open source live streaming with Open Broadcaster Software
======
If you have something to say, a skill to teach, or just something fun to
share, broadcast it to the world with OBS.
![An old-fashioned video camera][1]
If you have a talent you want to share with the world, whether it's making your favorite sourdough bread or speedrunning through a level of your favorite video game, live streaming is the modern show-and-tell. It's a powerful way to tell the world about your hobby through a medium once reserved for exclusive and expensive TV studios. Not only is the medium available to anyone with a relatively good internet connection, but the most popular software to make it happen is open source.
[OBS][2] (Open Broadcaster Software) is a cross-platform application that serves as a control center for your live stream. A _stream_, strictly speaking, means _progressive and coherent data_. The data in a stream can be audio, video, graphics, text, or anything else you can represent as digital data. OBS is programmed to accept data as input, combine streams together (technically referred to as _mixing_) into one product, and then broadcast it.
![OBS flowchart][3]
A _broadcast_ is data that can be received by some target. If you're live streaming, your primary target is a streaming service that can host your stream, so other people can find it in a web browser or media player. A live stream is a live event, so people have to "tune in" to your stream when it's happening, or else they miss it. However, you can also target your own hard drive so you can record a presentation and then post it on the internet later for people to watch at their leisure.
### Installing OBS
To install OBS on Windows or macOS, download an installer package from [OBS's website][2].
To install OBS on Linux, either install it with your package manager (such as **dnf**, **zypper**, or **apt**) or [install it as a Flatpak][4].
### Join a streaming service
In order to live stream, you must have a stream broker. That is, you need a central location on the internet for your stream to be delivered, so your viewers can get to what you're broadcasting. There are a few popular streaming services online, like YouTube and Twitch. You can also [set up your own video streaming server][5] using open source software.
Regardless of which option you choose, before you begin streaming, you must have a destination for your stream. If you do use a streaming service, you must obtain a _streaming key_. A streaming key is a hash value (it usually looks something like **2ae2fad4e33c3a89c21**) that is private and unique to you. You use this key to authenticate yourself through your streaming software. Without it, the streaming service can't know you are who you say you are and won't let you broadcast over your user account.
![Streaming key][6]
* In Twitch, your **Primary Stream Key** is available in the **Channel** panel of your **Creator Dashboard**.
* On YouTube, you must enable live streaming by verifying your account. Once you've done that, your **Stream Key** is in the **Other Features** menu option of your **Channel Dashboard**.
* If you're using your own server, there's no maze-like GUI to navigate. You just [create your own streaming key][7].
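If you are generating a key for your own server, any sufficiently long random value will do. As a sketch (using `/dev/urandom` via `od`; `openssl rand -hex 20` is an equivalent alternative):

```shell
# Generate a 40-character random hex string to use as a private
# stream key. Treat it like a password: anyone who has it can
# broadcast over your channel.
stream_key=$(od -An -N20 -tx1 /dev/urandom | tr -d ' \n')
echo "$stream_key"
```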
### Enter your streaming key
Once you have a streaming key, launch OBS and go to the **File** > **Settings** menu.
In the **Settings** window, click on the **Stream** category in the left column. Set the **Service** to your stream service (Custom, Twitch, YouTube, etc.), and enter your stream key. Click the **OK** button in the bottom right to save your changes.
### Create sources
In OBS, _sources_ represent any input signal you want to stream. By default, sources are listed at the bottom of the OBS window.
![OBS sources][8]
This might be a webcam, a microphone, an audio stream (such as the sound of a video game you're playing), a screen capture of your computer (a "screencast"), a slideshow you want to present, an image, and so on. Before you start streaming, you should define all the sources you plan on using for your stream. This means you have to do a little pre-production and consider what you anticipate for your show. Any camera you have set up must be defined as a source in OBS. Any extra media you plan on cutting to during your show must be defined as a source. Any sound effects or background music must be defined as a source.
Not all sources "happen" at once. By adding media to your **Sources** panel in OBS, you're just assembling the raw components for your stream. Once you make devices and data available to OBS, you can create your **Scenes**.
#### Setting up audio
Computers have seemingly dozens of ways to route audio. Here's the workflow to follow when setting up sound for your stream:
1. Check your cables: verify that your microphone is plugged in.
2. Go to your computer's sound control panel and set the input to whatever microphone you want OBS to treat as the main microphone. This might be a gaming headset or a boom mic or a desktop podcasting mic or a Bluetooth device or a fancy audio interface with XLR ports. Whatever it is, make sure your computer "hears" your main sound input.
3. In OBS, create a source for your main microphone and name it something obvious (e.g., boom mic, master sound, or mic).
4. Do a test. Make sure OBS "hears" your microphone by referring to the audio-level monitors at the bottom of the OBS window. If it's not responding to the input you believe you've set as input, check your cables, check your computer sound control panel, and check OBS.
I've seen more people panic over audio sources than any other issue when streaming, and we've _all_ made the same dumb mistakes (several times each, probably!) when attempting to set a microphone for a live stream or videoconference call. Breathe deep, check your cables, check your inputs and outputs, and [get comfortable with audio][9]. It'll pay off in the end.
### Create scenes
A **Scene** in OBS is a screen layout and consists of one or more sources.
![Scenes in OBS][10]
For instance, you might create a scene called **Master shot** that shows you sitting at your desk in front of your computer or at the kitchen counter ready to mix ingredients together. The source could be a webcam mounted on a tripod a meter or two in front of you. Because you want to cut to a detail shot, you might create a second scene called **Close-up**, which uses the computer screen and audio as one input source and your microphone as another source, so you can narrate as you demonstrate what you're doing. If you're doing a baking show, you might want to mount a second webcam above the counter, so you can cut to an overhead shot of ingredients being mixed. Here, your source is a different webcam but probably the same microphone (to avoid making changes in the audio).
A _scene_, in other words, is a lot like a _shot_ in traditional production vernacular, but it can be the combination of many shots. The fun thing about OBS is that you can mix and match a lot of different sources together, so when you're adding a **Scene**, you can resize and position different sources to achieve picture-in-picture, or split-screen, or any other effect you might want. It's common in video game "let's play" streams to have the video game in full-screen, with the player inset in the lower right or left. Or, if you're recording a panel or a multi-player game like D&D, you might have several cameras covering several players in a _Brady Bunch_ grid.
The possibilities are endless. During streaming, you can cut from one scene to another as needed. This is intended to be a dynamic system, so you can change scenes depending on what the viewer needs to see at any given moment.
Generally, you want to have some preset scenes before you start to stream. Even if you have a friend willing to do video mixing as you stream, you always want a safe scene to fall back to, so take time beforehand to set up at least a master shot that shows you doing whatever it is you're doing. If all else fails, at least you'll have your main shot you can safely and reliably cut to.
### Transitions
When switching from one scene to another, OBS uses a transition. Once you have more than one scene, you can configure what kind of transition it uses in the **Transitions** panel. Simple transitions are usually best. By default, OBS uses a subtle crossfade, but you can experiment with others as you see fit.
### Go live
To start streaming, do your vocal exercises, find your motivation, and press the **Start Streaming** button.
![Start streaming in OBS][11]
As long as you've set up your streaming service correctly, you're on the air (or on the wires, anyway).
If you're the talent (the person in front of the camera), it might be easiest to have someone control OBS during streaming. But if that's not possible, you can control it yourself as long as you've practiced a little in advance. If you're screencasting, it helps to have a two-monitor setup so you can control OBS without it being on screen.
### Streaming for success
Many of us take streaming for granted now that the internet exists and can broadcast media created by _anyone_. It's a hugely powerful means of communication, and we're all responsible for making the most of it.
If you have something positive to say, a skill to teach, words of encouragement, or just something fun that you want to share, and you feel like you want to broadcast to the world, then take the time to learn OBS. You might not get a million viewers, but independent media is a vital part of [free culture][12]. The world can always use empowering and positive open source voices, and yours may be one of the most important of all.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/4/open-source-live-stream
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_film.png?itok=aElrLLrw (An old-fashioned video camera)
[2]: http://obsproject.com
[3]: https://opensource.com/sites/default/files/obs-flowchart.jpg (OBS flowchart)
[4]: https://flatpak.org/setup
[5]: https://opensource.com/article/19/1/basic-live-video-streaming-server
[6]: https://opensource.com/sites/default/files/twitch-key.jpg (Streaming key)
[7]: https://opensource.com/article/19/1/basic-live-video-streaming-server#obs
[8]: https://opensource.com/sites/default/files/uploads/obs-sources.jpg (OBS sources)
[9]: https://opensource.com/article/17/1/linux-plays-sound
[10]: https://opensource.com/sites/default/files/uploads/obs-scenes.jpg (Scenes in OBS)
[11]: https://opensource.com/sites/default/files/uploads/obs-stream-start.jpg (Start streaming in OBS)
[12]: https://opensource.com/article/18/1/creative-commons-real-world

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using the systemctl command to manage systemd units)
[#]: via: (https://opensource.com/article/20/5/systemd-units)
[#]: author: (David Both https://opensource.com/users/dboth)
Using the systemctl command to manage systemd units
======
Units are the basis of everything in systemd.
![woman on laptop sitting at the window][1]
In the first two articles in this series, I explored the Linux systemd startup sequence. In the [first article][2], I looked at systemd's functions and architecture and the controversy around its role as a replacement for the old SystemV init program and startup scripts. And in the [second article][3], I examined two important systemd tools, systemctl and journalctl, and explained how to switch from one target to another and to change the default target.
In this third article, I'll look at systemd units in more detail and how to use the systemctl command to explore and manage units. I'll also explain how to stop and disable units and how to create a new systemd mount unit to mount a new filesystem and enable it to initiate during startup.
### Preparation
All of the experiments in this article should be done as the root user (unless otherwise specified). Some of the commands that simply list various systemd units can be performed by non-root users, but the commands that make changes cannot. Make sure to do all of these experiments only on non-production hosts or virtual machines (VMs).
One of these experiments requires the sysstat package, so install it before you move on. For Fedora and other Red Hat-based distributions you can install sysstat with:
```
dnf -y install sysstat
```
The sysstat RPM installs several statistical tools that can be used for problem determination. One is [System Activity Report][4] (SAR), which records many system performance data points at regular intervals (every 10 minutes by default). Rather than run as a daemon in the background, the sysstat package installs two systemd timers. One timer runs every 10 minutes to collect data, and the other runs once a day to aggregate the daily data. In this article, I will look briefly at these timers but wait to explain how to create a timer in a future article.
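For a feel of what such a timer looks like before the later article covers them in depth, here is a minimal sketch of a timer unit in the style of **sysstat-collect** (the names and exact contents are hypothetical; what your distribution ships may differ):

```
# my-collect.timer -- hypothetical sketch of a 10-minute timer.
# When it fires, systemd starts the matching my-collect.service unit.
[Unit]
Description=Run data collection every 10 minutes

[Timer]
# systemd calendar syntax: at minute 0, 10, 20, ... of every hour
OnCalendar=*:00/10

[Install]
WantedBy=timers.target
```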
### systemd suite
The fact is, systemd is more than just one program. It is a large suite of programs all designed to work together to manage nearly every aspect of a running Linux system. A full exposition of systemd would take a book on its own. Most of us do not need to understand all of the details about how all of systemd's components fit together, so I will focus on the programs and components that enable you to manage various Linux services and deal with log files and journals.
### Practical structure
The structure of systemd—outside of its executable files—is contained in its many configuration files. Although these files have different names and identifier extensions, they are all called "unit" files. Units are the basis of everything in systemd.
Unit files are ASCII plain-text files that are accessible to and can be created or modified by a sysadmin. There are a number of unit file types, and each has its own man page. Figure 1 lists some of these unit file types by their filename extensions and a short description of each.
systemd unit | Description
---|---
.automount | The **.automount** units are used to implement on-demand (i.e., plug and play) and mounting of filesystem units in parallel during startup.
.device | The **.device** unit files define hardware and virtual devices that are exposed to the sysadmin in the **/dev** directory. Not all devices have unit files; typically, block devices such as hard drives, network devices, and some others have unit files.
.mount | The **.mount** unit defines a mount point on the Linux filesystem directory structure.
.scope | The **.scope** unit defines and manages a set of system processes. This unit is not configured using unit files, rather it is created programmatically. Per the **systemd.scope** man page, “The main purpose of scope units is grouping worker processes of a system service for organization and for managing resources.”
.service | The **.service** unit files define processes that are managed by systemd. These include services such as crond, cups (Common Unix Printing System), iptables, multiple logical volume management (LVM) services, NetworkManager, and more.
.slice | The **.slice** unit defines a “slice,” which is a conceptual division of system resources that are related to a group of processes. You can think of all system resources as a pie and this subset of resources as a “slice” out of that pie.
.socket | The **.socket** units define interprocess communication sockets, such as network sockets.
.swap | The **.swap** units define swap devices or files.
.target | The **.target** units group other unit files to define startup synchronization points, runlevels, and services. Target units define the services and other units that must be active in order to start successfully.
.timer | The **.timer** unit defines timers that can initiate program execution at specified times.
### systemctl
I looked at systemd's startup functions in the [second article][3], and here I'll explore its service management functions a bit further. systemd provides the **systemctl** command that is used to start and stop services, configure them to launch (or not) at system startup, and monitor the current status of running services.
In a terminal session as the root user, ensure that root's home directory ( **~** ) is the [PWD][5]. To begin looking at units in various ways, list all of the loaded and active systemd units. systemctl automatically pipes its [stdout][6] data stream through the **less** pager, so you don't have to:
```
[root@testvm1 ~]# systemctl
UNIT                                       LOAD   ACTIVE SUB       DESCRIPTION              
proc-sys-fs-binfmt_misc.automount          loaded active running   Arbitrary Executable File>
sys-devices-pci0000:00-0000:00:01.1-ata7-host6-target6:0:0-6:0:0:0-block-sr0.device loaded a>
sys-devices-pci0000:00-0000:00:03.0-net-enp0s3.device loaded active plugged   82540EM Gigabi>
sys-devices-pci0000:00-0000:00:05.0-sound-card0.device loaded active plugged   82801AA AC'97>
sys-devices-pci0000:00-0000:00:08.0-net-enp0s8.device loaded active plugged   82540EM Gigabi>
sys-devices-pci0000:00-0000:00:0d.0-ata1-host0-target0:0:0-0:0:0:0-block-sda-sda1.device loa>
sys-devices-pci0000:00-0000:00:0d.0-ata1-host0-target0:0:0-0:0:0:0-block-sda-sda2.device loa>
<snip removed lots of lines of data from here>
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
206 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
```
As you scroll through the data in your terminal session, look for some specific things. The first section lists devices such as hard drives, sound cards, network interface cards, and TTY devices. Another section shows the filesystem mount points. Other sections include various services and a list of all loaded and active targets.
The sysstat timers at the bottom of the output are used to collect and generate daily system activity summaries for SAR. SAR is a very useful problem-solving tool. (You can learn more about it in Chapter 13 of my book [_Using and Administering Linux: Volume 1, Zero to SysAdmin: Getting Started_][7].)
Near the very bottom, three lines describe the meanings of the statuses (loaded, active, and sub). Press **q** to exit the pager.
Use the following command (as suggested in the last line of the output above) to see all the units that are installed, whether or not they are loaded. I won't reproduce the output here, because you can scroll through it on your own. The systemctl program has an excellent tab-completion facility that makes it easy to enter complex commands without needing to memorize all the options:
```
[root@testvm1 ~]# systemctl list-unit-files
```
You can see that some units are disabled. Table 1 in the man page for systemctl lists and provides short descriptions of the entries you might see in this listing. Use the **-t** (type) option to view just the timer units:
```
[root@testvm1 ~]# systemctl list-unit-files -t timer
UNIT FILE                    STATE  
chrony-dnssrv@.timer         disabled
dnf-makecache.timer          enabled
fstrim.timer                 disabled
logrotate.timer              disabled
logwatch.timer               disabled
mdadm-last-resort@.timer     static  
mlocate-updatedb.timer       enabled
sysstat-collect.timer        enabled
sysstat-summary.timer        enabled
systemd-tmpfiles-clean.timer static  
unbound-anchor.timer         enabled
```
You could do the same thing with this alternative, which provides considerably more detail:
```
[root@testvm1 ~]# systemctl list-timers
Thu 2020-04-16 09:06:20 EDT  3min 59s left n/a                          n/a           systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Thu 2020-04-16 10:02:01 EDT  59min left    Thu 2020-04-16 09:01:32 EDT  49s ago       dnf-makecache.timer          dnf-makecache.service
Thu 2020-04-16 13:00:00 EDT  3h 57min left n/a                          n/a           sysstat-collect.timer        sysstat-collect.service
Fri 2020-04-17 00:00:00 EDT  14h left      Thu 2020-04-16 12:51:37 EDT  3h 49min left mlocate-updatedb.timer       mlocate-updatedb.service
Fri 2020-04-17 00:00:00 EDT  14h left      Thu 2020-04-16 12:51:37 EDT  3h 49min left unbound-anchor.timer         unbound-anchor.service
Fri 2020-04-17 00:07:00 EDT  15h left      n/a                          n/a           sysstat-summary.timer        sysstat-summary.service
6 timers listed.
Pass --all to see loaded but inactive timers, too.
[root@testvm1 ~]#
```
Although there is no option to do systemctl list-mounts, you can list the mount point unit files:
```
[root@testvm1 ~]# systemctl list-unit-files -t mount
UNIT FILE                     STATE    
-.mount                       generated
boot.mount                    generated
dev-hugepages.mount           static  
dev-mqueue.mount              static  
home.mount                    generated
proc-fs-nfsd.mount            static  
proc-sys-fs-binfmt_misc.mount disabled
run-vmblock\x2dfuse.mount     disabled
sys-fs-fuse-connections.mount static  
sys-kernel-config.mount       static  
sys-kernel-debug.mount        static  
tmp.mount                     generated
usr.mount                     generated
var-lib-nfs-rpc_pipefs.mount  static  
var.mount                     generated
15 unit files listed.
[root@testvm1 ~]#
```
The STATE column in this data stream is interesting and requires a bit of explanation. The "generated" states indicate that the mount unit was generated on the fly during startup using the information in **/etc/fstab**. The program that generates these mount units is **/lib/systemd/system-generators/systemd-fstab-generator**, along with other tools that generate a number of other unit types. The "static" mount units are for filesystems like **/proc** and **/sys**, and the files for these are located in the **/usr/lib/systemd/system** directory.
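To illustrate what the generator produces, an **/etc/fstab** line such as `/dev/sdb1 /stuff ext4 defaults 0 0` corresponds roughly to a mount unit like this sketch (device and mount point are hypothetical; the real generated unit also carries generator metadata, and the unit's filename must be derived from its mount point—here, **stuff.mount** for **/stuff**):

```
# stuff.mount -- hypothetical sketch of a generated mount unit
[Unit]
Description=Mount /stuff

[Mount]
What=/dev/sdb1
Where=/stuff
Type=ext4
Options=defaults
```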
Now, look at the service units. This command will show all services installed on the host, whether or not they are active:
```
[root@testvm1 ~]# systemctl --all -t service
```
The bottom of this listing of service units displays 166 as the total number of loaded units on my host. Your number will probably differ.
Unit files do not have a filename extension (such as **.unit**) to help identify them, so you can generalize that most configuration files that belong to systemd are unit files of one type or another. The few remaining files are mostly **.conf** files located in **/etc/systemd**.
Unit files are stored in the **/usr/lib/systemd** directory and its subdirectories, while the **/etc/systemd/** directory and its subdirectories contain symbolic links to the unit files necessary to the local configuration of this host.
To explore this, make **/etc/systemd** the PWD and list its contents. Then make **/etc/systemd/system** the PWD and list its contents, and list the contents of at least a couple of the current PWD's subdirectories.
Take a look at the **default.target** file, which determines which runlevel target the system will boot to. In the second article in this series, I explained how to change the default target from the GUI (**graphical.target**) to the command-line only (**multi-user.target**) target. The **default.target** file on my test VM is simply a symlink to **/usr/lib/systemd/system/graphical.target**.
Take a few minutes to examine the contents of the **/etc/systemd/system/default.target** file:
```
[root@testvm1 system]# cat default.target
#  SPDX-License-Identifier: LGPL-2.1+
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes
```
Note that this requires the **multi-user.target**; the **graphical.target** cannot start if the **multi-user.target** is not already up and running. It also says it "wants" the **display-manager.service** unit. A "want" does not need to be fulfilled in order for the unit to start successfully. If the "want" cannot be fulfilled, it will be ignored by systemd, and the rest of the target will start regardless.
The subdirectories in **/etc/systemd/system** are lists of wants for various targets. Take a few minutes to explore the files and their contents in the **/etc/systemd/system/graphical.target.wants** directory.
The **systemd.unit** man page contains a lot of good information about unit files, their structure, the sections they can be divided into, and the options that can be used. It also lists many of the unit types, all of which have their own man pages. If you want to interpret a unit file, this would be a good place to start.
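To tie the sections together, here is a minimal, hypothetical **.service** unit showing the three sections most service units contain (the name and command are illustrative only):

```
# hello.service -- minimal hypothetical service unit
[Unit]
Description=Example one-shot service
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/echo "hello"

[Install]
# "systemctl enable" links this unit into multi-user.target's wants
WantedBy=multi-user.target
```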
### Service units
A Fedora installation usually installs and enables services that particular hosts do not need for normal operation. Conversely, sometimes it doesn't include services that need to be installed, enabled, and started. Services that are not needed for the Linux host to function as desired, but which are installed and possibly running, represent a security risk and should—at minimum—be stopped and disabled and—at best—should be uninstalled.
The systemctl command is used to manage systemd units, including services, targets, mounts, and more. Take a closer look at the list of services to identify services that will never be used:
```
[root@testvm1 ~]# systemctl --all -t service
UNIT                           LOAD      ACTIVE SUB        DESCRIPTION                            
<snip>
chronyd.service                loaded    active running    NTP client/server                      
crond.service                  loaded    active running    Command Scheduler                      
cups.service                   loaded    active running    CUPS Scheduler                          
dbus-daemon.service            loaded    active running    D-Bus System Message Bus                
<snip>
● ip6tables.service           not-found inactive dead     ip6tables.service                  
● ipset.service               not-found inactive dead     ipset.service                      
● iptables.service            not-found inactive dead     iptables.service                    
<snip>
firewalld.service              loaded    active   running  firewalld - dynamic firewall daemon
<snip>
● ntpd.service                not-found inactive dead     ntpd.service                        
● ntpdate.service             not-found inactive dead     ntpdate.service                    
pcscd.service                  loaded    active   running  PC/SC Smart Card Daemon
```
I have pruned out most of the output from the command to save space. The services that show "loaded active running" are obvious. The "not-found" services are ones that systemd is aware of but are not installed on the Linux host. If you want to run those services, you must install the packages that contain them.
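If you want to go beyond eyeballing the listing, the "not-found" units can also be filtered out mechanically. The sketch below runs against a canned sample of the output above so that it is reproducible anywhere; on a live host, you would feed the pipeline from **systemctl --all -t service --no-legend --plain** instead:

```
# Filter a (canned, sample) service listing for units in the
# "not-found" LOAD state -- field 2 of each line.
printf '%s\n' \
  'chronyd.service loaded active running NTP client/server' \
  'ip6tables.service not-found inactive dead ip6tables.service' \
  'ntpd.service not-found inactive dead ntpd.service' |
awk '$2 == "not-found" { print $1 }'
```

This prints only `ip6tables.service` and `ntpd.service`, the units systemd knows about but that are not installed.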
Note the **pcscd.service** unit. This is the PC/SC smart-card daemon, whose function is to communicate with smart-card readers. Many Linux hosts, including VMs, have no smart-card reader, yet the service is loaded and taking up memory and CPU resources. You can stop this service and disable it so that it will not restart on the next boot. First, check its status:
```
[root@testvm1 ~]# systemctl status pcscd.service
● pcscd.service - PC/SC Smart Card Daemon
   Loaded: loaded (/usr/lib/systemd/system/pcscd.service; indirect; vendor preset: disabled)
   Active: active (running) since Fri 2019-05-10 11:28:42 EDT; 3 days ago
     Docs: man:pcscd(8)
 Main PID: 24706 (pcscd)
    Tasks: 6 (limit: 4694)
   Memory: 1.6M
   CGroup: /system.slice/pcscd.service
           └─24706 /usr/sbin/pcscd --foreground --auto-exit
May 10 11:28:42 testvm1 systemd[1]: Started PC/SC Smart Card Daemon.
```
This data illustrates the additional information systemd provides versus SystemV, which only reports whether or not the service is running. Note that specifying the **.service** unit type is optional. Now stop and disable the service, then re-check its status:
```
[root@testvm1 ~]# systemctl stop pcscd ; systemctl disable pcscd
Warning: Stopping pcscd.service, but it can still be activated by:
  pcscd.socket
Removed /etc/systemd/system/sockets.target.wants/pcscd.socket.
[root@testvm1 ~]# systemctl status pcscd
● pcscd.service - PC/SC Smart Card Daemon
   Loaded: loaded (/usr/lib/systemd/system/pcscd.service; indirect; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2019-05-13 15:23:15 EDT; 48s ago
     Docs: man:pcscd(8)
 Main PID: 24706 (code=exited, status=1/FAILURE)
May 10 11:28:42 testvm1 systemd[1]: Started PC/SC Smart Card Daemon.
May 13 15:23:15 testvm1 systemd[1]: Stopping PC/SC Smart Card Daemon...
May 13 15:23:15 testvm1 systemd[1]: pcscd.service: Main process exited, code=exited, status=1/FAIL>
May 13 15:23:15 testvm1 systemd[1]: pcscd.service: Failed with result 'exit-code'.
May 13 15:23:15 testvm1 systemd[1]: Stopped PC/SC Smart Card Daemon.
```
The short log excerpt displayed for most services saves you from searching through various log files to locate this type of information. Check the status of the system runlevel targets; specifying the "target" unit type is required here:
```
[root@testvm1 ~]# systemctl status multi-user.target
● multi-user.target - Multi-User System
   Loaded: loaded (/usr/lib/systemd/system/multi-user.target; static; vendor preset: disabled)
   Active: active since Thu 2019-05-09 13:27:22 EDT; 4 days ago
     Docs: man:systemd.special(7)
May 09 13:27:22 testvm1 systemd[1]: Reached target Multi-User System.
[root@testvm1 ~]# systemctl status graphical.target
● graphical.target - Graphical Interface
   Loaded: loaded (/usr/lib/systemd/system/graphical.target; indirect; vendor preset: disabled)
   Active: active since Thu 2019-05-09 13:27:22 EDT; 4 days ago
     Docs: man:systemd.special(7)
May 09 13:27:22 testvm1 systemd[1]: Reached target Graphical Interface.
[root@testvm1 ~]# systemctl status default.target
● graphical.target - Graphical Interface
   Loaded: loaded (/usr/lib/systemd/system/graphical.target; indirect; vendor preset: disabled)
   Active: active since Thu 2019-05-09 13:27:22 EDT; 4 days ago
     Docs: man:systemd.special(7)
May 09 13:27:22 testvm1 systemd[1]: Reached target Graphical Interface.
```
The default target is the graphical target. The status of any unit can be checked in this way.
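Under the hood, **default.target** is nothing more than a symlink that systemd resolves at boot. The sketch below simulates that in a scratch directory rather than touching **/etc/systemd/system**, so it is side-effect free:

```
# default.target is just a symlink; simulate it in a scratch directory
d=$(mktemp -d)
ln -s /usr/lib/systemd/system/graphical.target "$d/default.target"
readlink "$d/default.target"    # resolves to the graphical target
rm -rf "$d"
```

On a real host, **systemctl get-default** reads this symlink for you, and **systemctl set-default multi-user.target** rewrites it.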
### Mounts the old way
A mount unit defines all of the parameters required to mount a filesystem on a designated mount point. systemd can manage mount units with more flexibility than the traditional **/etc/fstab** filesystem configuration file allows. Despite this, systemd still uses the **/etc/fstab** file for filesystem configuration and mounting purposes. systemd uses the **systemd-fstab-generator** tool to create transient mount units from the data in the **fstab** file.
I will create a new filesystem and a systemd mount unit to mount it. If you have some available disk space on your test system, you can do it along with me.
_Note that the volume group and logical volume names may be different on your test system. Be sure to use the names that are pertinent to your system._
You will need to create a partition or logical volume, then make an EXT4 filesystem on it. Add a label to the filesystem, **TestFS**, and create a directory for a mount point **/TestFS**.
To try this on your own, first, verify that you have free space on the volume group. Here is what that looks like on my VM where I have some space available on the volume group to create a new logical volume:
```
[root@testvm1 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  120G  0 disk
├─sda1          8:1    0    4G  0 part /boot
└─sda2          8:2    0  116G  0 part
  ├─VG01-root 253:0    0    5G  0 lvm  /
  ├─VG01-swap 253:1    0    8G  0 lvm  [SWAP]
  ├─VG01-usr  253:2    0   30G  0 lvm  /usr
  ├─VG01-home 253:3    0   20G  0 lvm  /home
  ├─VG01-var  253:4    0   20G  0 lvm  /var
  └─VG01-tmp  253:5    0   10G  0 lvm  /tmp
sr0            11:0    1 1024M  0 rom  
[root@testvm1 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree  
  VG01   1   6   0 wz--n- <116.00g <23.00g
```
Then create a new volume on **VG01** named **TestFS**. It does not need to be large; 1GB is fine. Then create a filesystem, add the filesystem label, and create the mount point:
```
[root@testvm1 ~]# lvcreate -L 1G -n TestFS VG01
  Logical volume "TestFS" created.
[root@testvm1 ~]# mkfs -t ext4 /dev/mapper/VG01-TestFS
mke2fs 1.45.3 (14-Jul-2019)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 8718fba9-419f-4915-ab2d-8edf811b5d23
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376
Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
[root@testvm1 ~]# e2label /dev/mapper/VG01-TestFS TestFS
[root@testvm1 ~]# mkdir /TestFS
```
Now, mount the new filesystem:
```
[root@testvm1 ~]# mount /TestFS/
mount: /TestFS/: can't find in /etc/fstab.
```
This will not work because you do not have an entry in **/etc/fstab**. You can mount the new filesystem even without the entry in **/etc/fstab** using both the device name (as it appears in **/dev**) and the mount point. Mounting in this manner is simpler than it used to be—it used to require the filesystem type as an argument. The mount command is now smart enough to detect the filesystem type and mount it accordingly.
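How can **mount** skip the filesystem-type argument? It probes the device for known on-disk signatures (in practice this is delegated to libblkid). As a rough, self-contained sketch of that kind of probe: an ext2/3/4 superblock begins at byte 1024 and stores the little-endian magic number 0xEF53 at offset 56 within it, i.e., at absolute byte 1080. A scratch file stands in for a device here:

```
# Sketch of a filesystem-signature probe, using a scratch file as a "device"
img=$(mktemp)
# Plant the ext magic 0xEF53 (little-endian bytes 53 ef) at byte 1080
printf '\x53\xef' | dd of="$img" bs=1 seek=1080 conv=notrunc 2>/dev/null
# Read the two bytes back, as a signature probe would
magic=$(od -An -tx1 -j1080 -N2 "$img" | tr -d ' ')
[ "$magic" = "53ef" ] && echo "ext2/3/4 signature found"
rm -f "$img"
```

A real probe checks many such signatures (XFS, Btrfs, swap, and so on), but the principle is the same.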
Try it again:
```
[root@testvm1 ~]# mount /dev/mapper/VG01-TestFS /TestFS/
[root@testvm1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  120G  0 disk
├─sda1            8:1    0    4G  0 part /boot
└─sda2            8:2    0  116G  0 part
  ├─VG01-root   253:0    0    5G  0 lvm  /
  ├─VG01-swap   253:1    0    8G  0 lvm  [SWAP]
  ├─VG01-usr    253:2    0   30G  0 lvm  /usr
  ├─VG01-home   253:3    0   20G  0 lvm  /home
  ├─VG01-var    253:4    0   20G  0 lvm  /var
  ├─VG01-tmp    253:5    0   10G  0 lvm  /tmp
  └─VG01-TestFS 253:6    0    1G  0 lvm  /TestFS
sr0              11:0    1 1024M  0 rom  
[root@testvm1 ~]#
```
Now the new filesystem is mounted in the proper location. List the mount unit files:
```
[root@testvm1 ~]# systemctl list-unit-files -t mount
```
This command does not show a file for the **/TestFS** filesystem because no file exists for it. The command **systemctl status TestFS.mount** does not display any information about the new filesystem either. You can try it using wildcards with the **systemctl status** command:
```
[root@testvm1 ~]# systemctl status *mount
● usr.mount - /usr
   Loaded: loaded (/etc/fstab; generated)
   Active: active (mounted)
    Where: /usr
     What: /dev/mapper/VG01-usr
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
<SNIP>
● TestFS.mount - /TestFS
   Loaded: loaded (/proc/self/mountinfo)
   Active: active (mounted) since Fri 2020-04-17 16:02:26 EDT; 1min 18s ago
    Where: /TestFS
     What: /dev/mapper/VG01-TestFS
● run-user-0.mount - /run/user/0
   Loaded: loaded (/proc/self/mountinfo)
   Active: active (mounted) since Thu 2020-04-16 08:52:29 EDT; 1 day 5h ago
    Where: /run/user/0
     What: tmpfs
● var.mount - /var
   Loaded: loaded (/etc/fstab; generated)
   Active: active (mounted) since Thu 2020-04-16 12:51:34 EDT; 1 day 1h ago
    Where: /var
     What: /dev/mapper/VG01-var
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
    Tasks: 0 (limit: 19166)
   Memory: 212.0K
      CPU: 5ms
   CGroup: /system.slice/var.mount
```
This command provides some very interesting information about your system's mounts, and your new filesystem shows up. The **/var** and **/usr** filesystems are identified as being generated from **/etc/fstab**, while your new filesystem simply shows that it is loaded, with its information coming from **/proc/self/mountinfo**.
Next, automate this mount. First, do it the old-fashioned way by adding an entry in **/etc/fstab**. Later, I'll show you how to do it the new way, which will teach you about creating units and integrating them into the startup sequence.
Unmount **/TestFS** and add the following line to the **/etc/fstab** file:
```
/dev/mapper/VG01-TestFS  /TestFS       ext4    defaults        1 2
```
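Each of the six whitespace-separated fields in that entry has a specific meaning. A small sketch that labels them:

```
# Label the six fields of the fstab entry above
line='/dev/mapper/VG01-TestFS  /TestFS  ext4  defaults  1  2'
set -- $line                      # word-splitting yields the six fields
printf 'device:      %s\n' "$1"   # block device, or a UUID=/LABEL= spec
printf 'mount point: %s\n' "$2"
printf 'fs type:     %s\n' "$3"
printf 'options:     %s\n' "$4"   # comma-separated mount options
printf 'dump flag:   %s\n' "$5"   # used by dump(8); 0 disables backups
printf 'fsck order:  %s\n' "$6"   # 1 for the root fs, 2 for others, 0 skips
```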
Now, mount the filesystem with the simpler **mount** command and list the mount units again:
```
[root@testvm1 ~]# mount /TestFS
[root@testvm1 ~]# systemctl status *mount
<SNIP>
● TestFS.mount - /TestFS
   Loaded: loaded (/proc/self/mountinfo)
   Active: active (mounted) since Fri 2020-04-17 16:26:44 EDT; 1min 14s ago
    Where: /TestFS
     What: /dev/mapper/VG01-TestFS
<SNIP>
```
This did not change the information for this mount because the filesystem was manually mounted. Reboot and run the command again, and this time specify **TestFS.mount** rather than using the wildcard. The results for this mount are now consistent with it being mounted at startup:
```
[root@testvm1 ~]# systemctl status TestFS.mount
● TestFS.mount - /TestFS
   Loaded: loaded (/etc/fstab; generated)
   Active: active (mounted) since Fri 2020-04-17 16:30:21 EDT; 1min 38s ago
    Where: /TestFS
     What: /dev/mapper/VG01-TestFS
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
    Tasks: 0 (limit: 19166)
   Memory: 72.0K
      CPU: 6ms
   CGroup: /system.slice/TestFS.mount
Apr 17 16:30:21 testvm1 systemd[1]: Mounting /TestFS...
Apr 17 16:30:21 testvm1 systemd[1]: Mounted /TestFS.
```
### Creating a mount unit
Mount units may be configured either with the traditional **/etc/fstab** file or with systemd units. Fedora uses the **fstab** file, which is created during installation. However, systemd uses the **systemd-fstab-generator** program to translate each entry in the **fstab** file into a systemd unit. Now that you know you can use systemd **.mount** unit files for filesystem mounting, try it out by creating a mount unit for this filesystem.
First, unmount **/TestFS**. Edit the **/etc/fstab** file and delete or comment out the **TestFS** line. Now, create a new file with the name **TestFS.mount** in the **/etc/systemd/system** directory. Edit it to contain the configuration data below. The unit file name and the name of the mount point _must_ be identical, or the mount will fail:
```
# This mount unit is for the TestFS filesystem
# By David Both
# Licensed under GPL V2
# This file should be located in the /etc/systemd/system directory
[Unit]
Description=TestFS Mount
[Mount]
What=/dev/mapper/VG01-TestFS
Where=/TestFS
Type=ext4
Options=defaults
[Install]
WantedBy=multi-user.target
```
The **Description** line in the **[Unit]** section is for us humans, and it provides the name that's shown when you list mount units with **systemctl -t mount**. The data in the **[Mount]** section of this file contains essentially the same data that would be found in the **fstab** file.
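That naming requirement comes from systemd's path-escaping rule, which is what **systemd-escape -p --suffix=mount** computes for you. A simplified sketch of the rule (the real tool also encodes other special characters, which this ignores): drop the leading slash, turn remaining slashes into dashes, and append **.mount**.

```
# Simplified sketch of systemd's path-to-mount-unit naming rule
path_to_mount_unit() {
  p=${1#/}                                                # drop leading '/'
  printf '%s.mount\n' "$(printf '%s' "$p" | tr '/' '-')"  # '/' -> '-'
}
path_to_mount_unit /TestFS        # -> TestFS.mount
path_to_mount_unit /home/shared   # -> home-shared.mount
```

The first call yields **TestFS.mount**, matching the unit file created above.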
Now enable the mount unit:
```
[root@testvm1 etc]# systemctl enable TestFS.mount
Created symlink /etc/systemd/system/multi-user.target.wants/TestFS.mount → /etc/systemd/system/TestFS.mount.
```
This creates the symlink in the **/etc/systemd/system** directory, which will cause this mount unit to be mounted on all subsequent boots. The filesystem has not yet been mounted, so you must "start" it:
```
[root@testvm1 ~]# systemctl start TestFS.mount
```
Verify that the filesystem has been mounted:
```
[root@testvm1 ~]# systemctl status TestFS.mount
● TestFS.mount - TestFS Mount
   Loaded: loaded (/etc/systemd/system/TestFS.mount; enabled; vendor preset: disabled)
   Active: active (mounted) since Sat 2020-04-18 09:59:53 EDT; 14s ago
    Where: /TestFS
     What: /dev/mapper/VG01-TestFS
    Tasks: 0 (limit: 19166)
   Memory: 76.0K
      CPU: 3ms
   CGroup: /system.slice/TestFS.mount
Apr 18 09:59:53 testvm1 systemd[1]: Mounting TestFS Mount...
Apr 18 09:59:53 testvm1 systemd[1]: Mounted TestFS Mount.
```
This experiment has been specifically about creating a unit file for a mount, but it can be applied to other types of unit files as well. The details will be different, but the concepts are the same. Yes, I know it is still easier to add a line to the **/etc/fstab** file than it is to create a mount unit. But this is a good example of how to create a unit file because systemd does not have generators for every type of unit.
### In summary
This article looked at systemd units in more detail and how to use the systemctl command to explore and manage units. It also showed how to stop and disable units and create a new systemd mount unit to mount a new filesystem and enable it to initiate during startup.
In the next article in this series, I will take you through a recent problem I had during startup and show you how I circumvented it using systemd.
### Resources
There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup.
* The Fedora Project has a good, practical [guide to systemd][10]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd.
* The Fedora Project also has a good [cheat sheet][11] that cross-references the old SystemV commands to comparable systemd ones.
* For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org][12]'s [description of systemd][13].
* [Linux.com][14]'s "More systemd fun" offers more advanced systemd [information and tips][15].
There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of the other good material that has been written about systemd and its ecosystem is based on these papers.
* [Rethinking PID 1][16]
* [systemd for Administrators, Part I][17]
* [systemd for Administrators, Part II][18]
* [systemd for Administrators, Part III][19]
* [systemd for Administrators, Part IV][20]
* [systemd for Administrators, Part V][21]
* [systemd for Administrators, Part VI][22]
* [systemd for Administrators, Part VII][23]
* [systemd for Administrators, Part VIII][24]
* [systemd for Administrators, Part IX][25]
* [systemd for Administrators, Part X][26]
* [systemd for Administrators, Part XI][27]
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/systemd-units
作者:[David Both][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/dboth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/article/20/4/systemd
[3]: https://opensource.com/article/20/4/systemd-startup
[4]: https://en.wikipedia.org/wiki/Sar_%28Unix%29
[5]: https://en.wikipedia.org/wiki/Pwd
[6]: https://en.wikipedia.org/wiki/Standard_streams#Standard_output_(stdout)
[7]: http://www.both.org/?page_id=1183
[8]: mailto:chrony-dnssrv@.timer
[9]: mailto:mdadm-last-resort@.timer
[10]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html
[11]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet
[12]: http://Freedesktop.org
[13]: http://www.freedesktop.org/wiki/Software/systemd
[14]: http://Linux.com
[15]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/
[16]: http://0pointer.de/blog/projects/systemd.html
[17]: http://0pointer.de/blog/projects/systemd-for-admins-1.html
[18]: http://0pointer.de/blog/projects/systemd-for-admins-2.html
[19]: http://0pointer.de/blog/projects/systemd-for-admins-3.html
[20]: http://0pointer.de/blog/projects/systemd-for-admins-4.html
[21]: http://0pointer.de/blog/projects/three-levels-of-off.html
[22]: http://0pointer.de/blog/projects/changing-roots
[23]: http://0pointer.de/blog/projects/blame-game.html
[24]: http://0pointer.de/blog/projects/the-new-configuration-files.html
[25]: http://0pointer.de/blog/projects/on-etc-sysinit.html
[26]: http://0pointer.de/blog/projects/instances.html
[27]: http://0pointer.de/blog/projects/inetd.html

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A guide to setting up your Open Source Program Office (OSPO) for success)
[#]: via: (https://opensource.com/article/20/5/open-source-program-office)
[#]: author: (J. Manrique Lopez de la Fuente https://opensource.com/users/jsmanrique)
A guide to setting up your Open Source Program Office (OSPO) for success
======
Learn how to best grow and maintain your open source communities and allies.
![community team brainstorming ideas][1]
Companies create Open Source Program Offices (OSPO) to manage their relationship with the open source ecosystems they depend on. By understanding the company's open source ecosystem, an OSPO is able to maximize the company's return on investment and reduce the risks of consuming, contributing to, and releasing open source software. Additionally, since the company depends on its open source ecosystem, ensuring the ecosystem's health and sustainability helps ensure the company's own health, sustainable growth, and evolution.
### How has OSPO become vital to companies and their open source ecosystem?
Marc Andreessen has said that "software is eating the world," and more recently, it could be said that open source is eating the software world. But how is that process happening?
Companies get involved with open source projects in several ways. These projects comprise the company's open source ecosystem, and their relationships and interactions can be seen through Open Source Software's (OSS) inbound and outbound processes.
From the OSS inbound point of view, companies use it to build their own solutions and their own infrastructure. OSS gets introduced because it's part of the code their technology providers use, or because their own developers add open source components to the company's information technology (IT) infrastructure.
From the OSS outbound point of view, some companies contribute to OSS projects. Such contributions may stem from the company's own requirements, when its solutions need certain fixes in upstream projects. For example, Samsung contributes to certain graphics-related projects to ensure its hardware has software support once it reaches the market. In other cases, contributing to OSS is a mechanism for retaining talent by allowing people to contribute to projects different from their daily work.
Some companies release their own open source projects as an outbound OSS process. For companies like Red Hat or GitLab, it would be expected. But, there are increasingly more non-software companies releasing a lot of OSS, like Lyft.
![OSS inbound and outbound processes][2]
OSS inbound and outbound processes
Ultimately, all of these projects involved in the inbound and outbound OSS flow are the company's OSS ecosystem. And like any living being, the company's health and sustainability depend on the ecosystem that surrounds it.
### OSPO responsibilities
Continuing the ecosystem metaphor, the people working on the OSPO team can be seen as the rangers of the organization's OSS ecosystem. They take care of the ecosystem and its relationship with the company to keep everything healthy and sustainable.
When the company consumes open source software projects, they need to be aware of licenses and compliance, to check the project's health, to ensure there are no security flaws, and, in some cases, to identify talented community members for potential hiring processes.
When the company contributes to open source software projects, they need to be sure there are no Intellectual Property (IP) issues, to ensure the company contributions' footprint and its leadership in the projects, and sometimes, also to help talented people stay engaged with the company through their contributions.
And when the company releases and maintains open source projects, they are responsible for ensuring community engagement and growth, for checking there are no IP issues, that the company maintains its footprint and leadership, and perhaps, to attract new talent to the company.
Have you realized the whole set of skills required in an OSPO team? When I've asked people working in OSPOs about the size of their teams, the answer is usually around 1 to 5 people per 1,000 developers in the company. That's a small team to monitor a lot of people and their potential OSS-related activity.
### How to manage an OSPO
With all these activities in OSPO people's minds and all the resources they need to worry about, how are they able to manage all of this?
There are at least a couple of open source communities with valuable knowledge and resources available for them:
* The [TODO Group][3] is "an open group of companies who want to collaborate on practices, tools, and other ways to run successful and effective open source projects and programs." For example, they have a complete set of [guides][4] with best practices for and from companies running OSPOS.
* The [CHAOSS (Community Health Analytics for Open Source Software)][5] community develops metrics, methodologies, and software for managing open source project health and sustainability. (See more on CHAOSS' active communities and working groups below).
OSPO managers need to report a lot of information to the rest of the company to answer many questions related to their OSS inbound and outbound processes, such as: Which projects are we using in our organization? What's the health of those projects? Who are the key people in those projects? Which projects are we contributing to? Which projects are we releasing? How are we dealing with community contributions? Who are the key contributors?
### Data-driven OSPO
As William Edwards Deming said, "Without data, you are just a person with an opinion."
Having opinions is not a bad thing, but having opinions based on data certainly makes it easier to understand, discuss, and determine the processes best suited to your company and its goals. CHAOSS is the recommended community to look to for guidance about metrics strategies and tools.
Recently, the CHAOSS community has released [a new set of metric definitions][6]. These metrics are only subsets of all the ones being discussed in the focus areas of each working group (WG):
* [Common WG][7]: Defines the metrics that are used by both working groups or are important for community health, but that do not cleanly fit into one of the other existing working groups. Areas of interest include organizational affiliation, responsiveness, and geographic coverage.
* [Diversity and Inclusion WG][8]: Gathers experiences regarding diversity and inclusion in open source projects with the goal of understanding, from a qualitative and quantitative point of view, how diversity and inclusion can be measured.
* [Evolution WG][9]: Refines the metrics that inform evolution and works with software implementations.
* [Risk WG][10]: Refines the metrics that inform risk and works with software implementations.
* [Value WG][11]: Focuses on industry-standard metrics for economic value in open source. Their main goal is to publish trusted industry-standard value metrics—a kind of S&P for software development and an authoritative source for metrics significance and industry norms.
On the tooling side, projects like [Augur][12], [Cregit][13], and [GrimoireLab][14] are the reference tools that report these metrics, but also many others related to OSPO activities. They are also the seed for new tools and solutions provided by the OSS community like [Cauldron.io][15], a SaaS open source solution to ease OSS ecosystem analysis.
![CHAOSS Metrics for 15 years of Unity OSS activity. Source: cauldron.io][16]
CHAOSS Metrics for 15 years of Unity OSS activity. Source: cauldron.io
All these metrics and data are useless without a metrics strategy. Usually, the first approach is to try to measure as much as possible, producing overwhelming reports and dashboards full of charts and data. What is the value of that?
Experience has shown that the [Goal, Questions, Metrics (GQM)][17] strategy is a sound approach. But how do we put it into practice in an OSPO?
First of all, we need to understand the company's goals when using, consuming, contributing to, or releasing and maintaining OSS projects. The usual goals are related to market positioning, required upstream features development, and talent attraction or retention. Based on these goals, we should write down related questions that can be answered with numbers, like the following:
#### Who/how many are the core maintainers of my OSS ecosystem projects?
![Uber OSS code core, regular, and casual contributors evolution. Source: uber.biterg.io][18]
Uber OSS code core, regular, and casual contributors evolution. Source: uber.biterg.io
People contribute through different mechanisms or tools (code, issues, comments, tests, etc.). Measuring the core contributors (those that have done 80% of the contributions), the regular ones (those that have done 15% of the contributions), and the casual ones (those that have made 5% of the contributions) can answer questions related to participation over time, but also how people move between the different buckets. Adding affiliation information helps to identify external core contributors.
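The core/regular/casual split described above is straightforward to compute from an author-to-contribution-count table by cumulative share. A sketch using hypothetical author names and counts:

```
# Bucket authors by cumulative share of contributions: core covers the
# first 80%, regular the next 15%, casual the final 5%. Data is made up.
printf '%s\n' 'alice 60' 'bob 20' 'carol 10' 'dave 6' 'erin 4' |
sort -k2 -rn |
awk '{ name[NR] = $1; n[NR] = $2; total += $2 }
END {
  for (i = 1; i <= NR; i++) {
    cum += n[i]
    b = (cum <= total * 0.80) ? "core" : (cum <= total * 0.95) ? "regular" : "casual"
    print name[i], b
  }
}'
```

With this sample data, alice and bob land in the core bucket, carol is regular, and dave and erin are casual. Running the same computation month by month shows how people move between buckets over time.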
#### Where are the contributions happening?
![Uber OSS activity based on location. Source: uber.biterg.io][19]
Uber OSS activity based on location. Source: uber.biterg.io
The growth of OSS ecosystems is also related to OSS projects spread across the world. Understanding that spread helps OSPO, and the company, to manage actions that improve support for people from different countries and regions.
#### What is the company's OSS network?
![Uber OSS network. Source: uber.biterg.io][20]
Uber OSS network. Source: uber.biterg.io
The company's OSS ecosystem includes those projects that the company's people contribute to. Understanding which projects they contribute to offers insight into which technologies or OSS components are interesting to people, and which companies or organizations the company collaborates with.
#### How is the company dealing with contributions?
![Github Pull Requests backlog management index and time to close analysis. Source: uber.biterg.io][21]
Github Pull Requests backlog management index and time to close analysis. Source: uber.biterg.io
One of the goals when releasing OSS projects is to grow the community around them. Measuring how the company handles contributions to its projects from outside its boundaries helps to understand how "welcoming" it is and identifies mentors (or bottlenecks) and opportunities to lower the barrier to contribute.
#### Consumers vs. maintainers
Over the last few months, we have been hearing that corporations take OSS for free without contributing back. The typical arguments are that these corporations make millions of dollars thanks to free work, and that OSS project maintainers burn out under users' complaints and requests for free support.
The system is unbalanced; usually, the number of users exceeds the number of maintainers. Is that good or bad? Having users for our software is (or should be) good. But we need to manage expectations on both sides.
From the corporation's point of view, consuming OSS without care is very, very risky.
OSPO can play an important role in educating the company about the risks they are facing, and how to reduce them by contributing back to their OSS ecosystem. Remember, a company's overall sustainability could rely heavily on its ecosystem sustainability.
A good strategy is to start shifting your company from being a pure OSS consumer to becoming a contributor to its inbound OSS projects. Moving from just submitting issues and asking questions to helping solve issues, answering questions, and even sending patches helps grow and maintain the project while giving back to the community. It doesn't happen immediately, but over time, the company will be perceived as a good OSS ecosystem citizen. Eventually, some people from the company could end up helping to maintain those projects too.
And what about money? There are plenty of ways to support the OSS ecosystem financially. Some examples:
* Business initiatives like [Tidelift][22], or [OpenCollective][23]
* Foundations and their supporting mechanisms, like [Software Freedom Conservancy][24], or [CommunityBridge][25] from the Linux Foundation
* Self-funding programs (like [Indeed][26] and [Salesforce][27] have done)
* Emerging gig development approaches like [Github Sponsors][28] or [Patreon][29]
Last but not least, companies need to avoid the "not invented here" syndrome. For some OSS projects, there might be companies providing consulting, customization, maintenance, and/or support services. Instead of taking OSS and spending time and people to self-host, self-customize, or try to bring those kinds of services in-house, it might be smarter and more efficient to hire one of those companies to do the work.
As a final remark, I would like to emphasize the importance of an OSPO for a company to succeed and grow in the current market. As shepherds of the company's OSS ecosystem, they are the best people in the organization to understand how the ecosystem works and flows, and they should be empowered to manage, monitor, and make recommendations and decisions to ensure sustainability and growth.
Does your organization have an OSPO yet?
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/5/open-source-program-office
作者:[J. Manrique Lopez de la Fuente][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jsmanrique
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/meeting_discussion_brainstorm.png?itok=7_m4CC8S (community team brainstorming ideas)
[2]: https://opensource.com/sites/default/files/uploads/ospo_1.png (OSS inbound and outbound processes)
[3]: https://todogroup.org/
[4]: https://todogroup.org/guides/
[5]: https://chaoss.community/
[6]: https://chaoss.community/metrics/
[7]: https://github.com/chaoss/wg-common
[8]: https://github.com/chaoss/wg-diversity-inclusion
[9]: https://github.com/chaoss/wg-evolution
[10]: https://github.com/chaoss/wg-risk
[11]: https://github.com/chaoss/wg-value
[12]: https://github.com/chaoss/augur
[13]: https://github.com/cregit
[14]: https://chaoss.github.io/grimoirelab/
[15]: https://cauldron.io/
[16]: https://opensource.com/sites/default/files/uploads/ospo_2.png (CHAOSS Metrics for 15 years of Unity OSS activity. Source: cauldron.io)
[17]: https://en.wikipedia.org/wiki/GQM
[18]: https://opensource.com/sites/default/files/uploads/ospo_3.png (Uber OSS code core, regular, and casual contributors evolution. Source: uber.biterg.io)
[19]: https://opensource.com/sites/default/files/uploads/ospo_4.png (Uber OSS activity based on location. Source: uber.biterg.io)
[20]: https://opensource.com/sites/default/files/uploads/ospo_5_0.png (Uber OSS network. Source: uber.biterg.io)
[21]: https://opensource.com/sites/default/files/uploads/ospo_6.png (Github Pull Requests backlog management index and time to close analysis. Source: uber.biterg.io)
[22]: https://tidelift.com/
[23]: https://opencollective.com/
[24]: https://sfconservancy.org/
[25]: https://funding.communitybridge.org/
[26]: https://engineering.indeedblog.com/blog/2019/02/sponsoring-osi/
[27]: https://sustain.codefund.fm/23
[28]: https://help.github.com/en/github/supporting-the-open-source-community-with-github-sponsors
[29]: https://www.patreon.com/


@ -1,162 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (systemd-resolved: introduction to split DNS)
[#]: via: (https://fedoramagazine.org/systemd-resolved-introduction-to-split-dns/)
[#]: author: (zbyszek https://fedoramagazine.org/author/zbyszek/)
systemd-resolved: introduction to split DNS
======
![][1]
Photo by [Ruvim Noga][2] on [Unsplash][3]
Fedora 33 switches the default DNS resolver to [systemd-resolved][4]. In simple terms, this means that systemd-resolved will run as a daemon. All programs wanting to translate domain names to network addresses will talk to it. This replaces the current default lookup mechanism where each program individually talks to remote servers and there is no shared cache.
If necessary, systemd-resolved will contact remote DNS servers. systemd-resolved is a “stub resolver”—it doesn't resolve all names itself (by starting at the root of the DNS hierarchy and going down label by label), but forwards the queries to a remote server.
A single daemon handling name lookups provides significant benefits. The daemon caches answers, which speeds answers for frequently used names. The daemon remembers which servers are non-responsive, while previously each program would have to figure this out on its own after a timeout. Individual programs only talk to the daemon over a local transport and are more isolated from the network. The daemon supports fancy rules which specify which name servers should be used for which domain names—in fact, the rest of this article is about those rules.
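The shared-cache benefit can be made concrete with a toy Python sketch (this is illustrative only, not systemd's implementation; the names and the TTL below are invented):

```python
# Toy illustration of a shared resolver cache: every program asks one
# daemon, so a name hits the upstream server only once until the cached
# answer expires.
import time

class ResolverCache:
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.cache = {}          # name -> (address, expiry timestamp)
        self.upstream_calls = 0  # how often we had to go to the network

    def resolve(self, name, upstream):
        entry = self.cache.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]      # cache hit: answered locally
        self.upstream_calls += 1
        address = upstream(name)
        self.cache[name] = (address, time.monotonic() + self.ttl)
        return address

cache = ResolverCache()
fake_upstream = {"example.com": "93.184.216.34"}.get
print(cache.resolve("example.com", fake_upstream))  # 93.184.216.34
print(cache.resolve("example.com", fake_upstream))  # served from cache
print(cache.upstream_calls)                         # 1
```

With many programs sharing one such cache, frequently used names are answered locally instead of each process timing out against remote servers on its own.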
### Split DNS
Consider the scenario of a machine that is connected to two semi-trusted networks (wifi and ethernet), and also has a VPN connection to your employer. Each of those three connections has its own network interface in the kernel. And there are multiple name servers: one from a DHCP lease from the wifi hotspot, two specified by the VPN and controlled by your employer, plus some additional manually-configured name servers. _Routing_ is the process of deciding which servers to ask for a given domain name. Do not confuse this with the process of deciding where to send network packets, which is also called routing.
The network interface is king in systemd-resolved. systemd-resolved first picks one or more interfaces which are appropriate for a given name, and then queries one of the name servers attached to that interface. This is known as “split DNS”.
There are two flavors of domains attached to a network interface: _routing domains_ and _search domains_. They both specify that the given domain and any subdomains are appropriate for that interface. Search domains have the additional function that single-label names are suffixed with that search domain before being resolved. For example, a lookup for “server” is treated as a lookup for “server.example.com” if the search domain is “example.com.” In systemd-resolved config files, routing domains are prefixed with the tilde (~) character.
#### Specific example
Now consider a specific example: your VPN interface _tun0_ has a search domain _private.company.com_ and a routing domain _~company.com_. If you ask for _mail.private.company.com_, it is matched by both domains, so this name would be routed to _tun0_.
A request for _[www.company.com][5]_ is matched by the second domain and would also go to _tun0_. If you ask for _www_, (in other words, if you specify a single-label name without any dots), the difference between routing and search domains comes into play. systemd-resolved attempts to combine the single-label name with the search domain and tries to resolve _[www.private.company.com][6]_ on _tun0_.
If you have multiple interfaces with search domains, single-label names are suffixed with all search domains and resolved in parallel. For multi-label names, no suffixing is done; search and routing domains are used to route the name to the appropriate interface. The longest match wins. When there are multiple matches of the same length on different interfaces, they are resolved in parallel.
A special case is when an interface has a routing domain _~._ (a tilde for a routing domain and a dot for the root DNS label). Such an interface always matches any names, but with the shortest possible length. Any interface with a matching search or routing domain has higher priority, but the interface with _~._ is used for all other names. Finally, if no routing or search domains matched, the name is routed to all interfaces that have at least one name server attached.
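As a rough illustration of the routing rules above, here is a toy Python sketch (not the actual systemd-resolved code; the interfaces and domains are this article's examples) of the longest-suffix-match logic for multi-label names. Search-domain suffixing of single-label names is deliberately omitted:

```python
# Toy sketch of domain routing: the longest matching search/routing
# domain wins. Domains prefixed with "~" are routing domains; "~." is
# the catch-all, matching everything with length 0.

def route(name, interfaces):
    """Return the interface(s) a multi-label name is routed to."""
    best_len, matches = -1, []
    for iface, domains in interfaces.items():
        for dom in domains:
            suffix = dom.lstrip("~")
            if suffix == ".":
                length = 0  # "~." matches every name, shortest length
            elif name == suffix or name.endswith("." + suffix):
                length = len(suffix)
            else:
                continue
            if length > best_len:
                best_len, matches = length, [iface]
            elif length == best_len and iface not in matches:
                matches.append(iface)  # equal matches resolve in parallel
    return matches

# The article's example: a VPN with a search domain and a routing
# domain, and a wireless interface carrying the default route "~.".
ifaces = {
    "tun0": ["private.company.com", "~company.com"],
    "wlp4s0": ["~."],
}
print(route("mail.private.company.com", ifaces))  # ['tun0']
print(route("www.company.com", ifaces))           # ['tun0']
print(route("www.google.com", ifaces))            # ['wlp4s0']
```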
### Lookup routing in systemd-resolved
#### Domain routing
This seems fairly complex, partly because the historic names are confusing. In actual practice it's not as complicated as it seems.
To introspect a running system, use the _resolvectl domain_ command. For example:
```
$ resolvectl domain
Global:
Link 4 (wlp4s0): ~.
Link 18 (hub0):
Link 26 (tun0): redhat.com
```
You can see that _www_ would resolve as _[www.redhat.com][7]_ over _tun0_. Anything ending with _redhat.com_ resolves over _tun0_. Everything else would resolve over _wlp4s0_ (the wireless interface). In particular, a multi-label name like _[www.foobar][8]_ would resolve over _wlp4s0_, and most likely fail because there is no _foobar_ top-level domain (yet).
#### Server routing
Now that you know which _interface_ or interfaces should be queried, the _server_ or servers to query are easy to determine. Each interface has one or more name servers configured. systemd-resolved will send queries to the first of those. If the server is offline and the request times out or if the server sends a syntactically-invalid answer (which shouldn't happen with “normal” queries, but often becomes an issue when DNSSEC is enabled), systemd-resolved switches to the next server on the list. It will use that second server as long as it keeps responding. All servers are used in a round-robin rotation.
To introspect a running system, use the _resolvectl dns_ command:
```
$ resolvectl dns
Global:
Link 4 (wlp4s0): 192.168.1.1 8.8.4.4 8.8.8.8
Link 18 (hub0):
Link 26 (tun0): 10.45.248.15 10.38.5.26
```
When combined with the previous listing, you know that for _[www.redhat.com][7]_, systemd-resolved will query 10.45.248.15, and—if it doesn't respond—10.38.5.26. For _[www.google.com][9]_, systemd-resolved will query 192.168.1.1 or the two Google servers 8.8.4.4 and 8.8.8.8.
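The server-selection behavior can be modeled with a small sketch (purely illustrative; systemd-resolved's real logic is more involved): stick with the current server while it responds, and rotate to the next one when it fails.

```python
# Toy model of per-interface server selection: the current server is
# "sticky" as long as it keeps responding; on failure (timeout or
# invalid answer) we rotate to the next server in the list.

class ServerList:
    def __init__(self, servers):
        self.servers = servers
        self.current = 0  # index of the server currently in use

    def query(self, responding):
        """responding: set of servers that answer. Return server used."""
        for _ in range(len(self.servers)):
            server = self.servers[self.current]
            if server in responding:
                return server  # keep using this server next time too
            # Failure: switch to the next server and retry.
            self.current = (self.current + 1) % len(self.servers)
        return None  # every server failed

tun0 = ServerList(["10.45.248.15", "10.38.5.26"])
print(tun0.query({"10.45.248.15", "10.38.5.26"}))  # 10.45.248.15
print(tun0.query({"10.38.5.26"}))                  # 10.38.5.26 (failover)
print(tun0.query({"10.38.5.26"}))                  # 10.38.5.26 (sticky)
```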
### Differences from nss-dns
Before going into further detail, you may ask how this differs from the previous default implementation (nss-dns). With nss-dns there is just one global list of up to three name servers and a global list of search domains (specified as _nameserver_ and _search_ in _/etc/resolv.conf_).
Each name to query is sent to the first name server. If it doesn't respond, the same query is sent to the second name server, and so on. systemd-resolved implements split DNS and remembers which servers are currently considered active.
For single-label names, the query is performed with each of the search domains suffixed. This is the same with systemd-resolved. For multi-label names, a query for the unsuffixed name is performed first, and if that fails, a query for the name suffixed by each of the search domains in turn is performed. systemd-resolved doesn't do that last step; it only suffixes single-label names.
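The suffixing difference can be sketched in a few lines of Python (illustrative only; real resolvers also honor options such as _ndots_, which this sketch ignores):

```python
# Sketch of which names each resolver would try, in order, for a given
# input name and list of search domains.

def candidates(name, search_domains, resolver):
    """Return the list of queries a resolver would attempt, in order."""
    if "." not in name:
        # Both resolvers suffix single-label names with each search domain.
        return [name + "." + s for s in search_domains]
    if resolver == "nss-dns":
        # nss-dns retries multi-label names with each search domain appended.
        return [name] + [name + "." + s for s in search_domains]
    # systemd-resolved only queries the multi-label name as-is.
    return [name]

search = ["example.com"]
print(candidates("server", search, "nss-dns"))            # ['server.example.com']
print(candidates("www.foo", search, "nss-dns"))           # ['www.foo', 'www.foo.example.com']
print(candidates("www.foo", search, "systemd-resolved"))  # ['www.foo']
```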
A second difference is that with _nss-dns_, this module is loaded into each process. The process itself communicates with remote servers and implements the full DNS stack internally. With systemd-resolved, the _nss-resolve_ module is loaded into the process, but it only forwards the query to systemd-resolved over a local transport (D-Bus) and doesn't do any work itself. The systemd-resolved process is heavily sandboxed using systemd service features.
The third difference is that with systemd-resolved all state is dynamic and can be queried and updated using D-Bus calls. This allows very strong integration with other daemons or graphical interfaces.
### Configuring systemd-resolved
So far, this article talked about servers and the routing of domains without explaining how to configure them. systemd-resolved has a configuration file (_/etc/systemd/resolved.conf_) where you specify name servers with _DNS=_ and routing or search domains with _Domains=_ (routing domains with _~_, search domains without). This corresponds to the _Global:_ lists in the two listings above.
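For illustration, a hypothetical global configuration (the addresses and domains below are placeholders, not recommendations) might look like this:

```
[Resolve]
DNS=192.0.2.1 192.0.2.2
Domains=example.com ~internal.example
```

Here _example.com_ would act as a search domain and _~internal.example_ as a routing domain.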
In this article's examples, both lists are empty. Most of the time configuration is attached to specific interfaces, and “global” configuration is not very useful. Interfaces come and go and it isn't terribly smart to contact servers on an interface which is down. As soon as you create a VPN connection, you want to use the servers configured for that connection to resolve names, and as soon as the connection goes down, you want to stop.
How, then, does systemd-resolved acquire the configuration for each interface? This happens dynamically, with the network management service pushing this configuration over D-Bus into systemd-resolved. The default in Fedora is NetworkManager and it has very good integration with systemd-resolved. Alternatives like systemd's own systemd-networkd implement similar functionality. But the [interface is open][10] and other programs can do the appropriate D-Bus calls.
Alternatively, _resolvectl_ can be used for this (it is just a wrapper around the D-Bus API). Finally, _resolvconf_ provides similar functionality in a form compatible with a tool in Debian with the same name.
#### Scenario: Local connection more trusted than VPN
The important thing is that in the common scenario, systemd-resolved follows the configuration specified by other tools, in particular NetworkManager. So to understand how systemd-resolved routes names, you need to see what NetworkManager tells it to do. Normally NM will tell systemd-resolved to use the name servers and search domains received in a DHCP lease on some interface. For example, look at the source of configuration for the two listings shown above:
![][11]![][12]
There are two connections: “Parkinson” wifi and “Brno (BRQ)” VPN. In the first panel _DNS:Automatic_ is enabled, which means that the DNS server received as part of the DHCP lease (192.168.1.1) is passed to systemd-resolved. Additionally, 8.8.4.4 and 8.8.8.8 are listed as alternative name servers. This configuration is useful if you want to resolve the names of other machines in the local network, which 192.168.1.1 provides. Unfortunately the hotspot DNS server occasionally gets stuck, and the other two servers provide backup when that happens.
The second panel is similar, but doesn't provide any special configuration. NetworkManager combines routing domains for a given connection from DHCP, SLAAC RDNSS, and VPN, plus any manual configuration, and forwards this to systemd-resolved. This is the source of the search domain _redhat.com_ in the listing above.
There is an important difference between the two interfaces though: in the second panel, “Use this connection only for resources on its network” is **checked**. This tells NetworkManager to tell systemd-resolved to only use this interface for names under the search domain received as part of the lease (_Link 26 (tun0): redhat.com_ in the first listing above). In the first panel, this checkbox is **unchecked**, and NetworkManager tells systemd-resolved to use this interface for all other names (_Link 4 (wlp4s0): ~._). This effectively means that the wireless connection is more trusted.
#### Scenario: VPN more trusted than local network
In a different scenario, a VPN would be more trusted than the local network and the domain routing configuration reversed. If a VPN without “Use this connection only for resources on its network” is active, NetworkManager tells systemd-resolved to attach the default routing domain to this interface. After unchecking the checkbox and restarting the VPN connection:
```
$ resolvectl domain
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): ~. redhat.com
$ resolvectl dns
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): 10.45.248.15 10.38.5.26
```
Now all domain names are routed to the VPN. The network management daemon controls systemd-resolved and the user controls the network management daemon.
### Additional systemd-resolved functionality
As mentioned before, systemd-resolved provides a common name lookup mechanism for all programs running on the machine. Right now the effect is limited: a shared resolver and cache, and split DNS (the lookup routing logic described above). systemd-resolved provides additional resolution mechanisms beyond traditional unicast DNS. These are the local resolution protocols MulticastDNS and LLMNR, and an additional remote transport, DNS-over-TLS.
Fedora 33 does not enable MulticastDNS and DNS-over-TLS in systemd-resolved. MulticastDNS is implemented by _nss-mdns4_minimal_ and Avahi. Future Fedora releases may enable these as the upstream project improves support.
Implementing this all in a single daemon which has runtime state allows smart behaviour: DNS-over-TLS may be enabled in opportunistic mode, with automatic fallback to classic DNS if the remote server does not support it. Without the daemon which can contain complex logic and runtime state this would be much harder. When enabled, those additional features will apply to all programs on the system.
There is more to systemd-resolved: in particular LLMNR and DNSSEC, which only received brief mention here. A future article will explore those subjects.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/systemd-resolved-introduction-to-split-dns/
作者:[zbyszek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/zbyszek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/10/systemd-resolved2-816x345.jpg
[2]: https://unsplash.com/@ruvimnogaphoto?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/colors?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html
[5]: http://www.company.com
[6]: http://www.private.company.com
[7]: http://www.redhat.com
[8]: http://www.foobar
[9]: http://www.google.com
[10]: https://www.freedesktop.org/software/systemd/man/org.freedesktop.resolve1.html
[11]: https://fedoramagazine.org/wp-content/uploads/2020/10/nm-default-network-with-additional-servers.png
[12]: https://fedoramagazine.org/wp-content/uploads/2020/10/nm-vpn-brno.png


@ -1,216 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 Decentralized, P2P and Open Source Alternatives to Mainstream Social Media Platforms Like Twitter, Facebook, YouTube and Reddit)
[#]: via: (https://itsfoss.com/mainstream-social-media-alternaives/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
9 Decentralized, P2P and Open Source Alternatives to Mainstream Social Media Platforms Like Twitter, Facebook, YouTube and Reddit
======
You probably are aware that [Facebook is going to share the user data from its end-to-end encrypted chat service WhatsApp][1]. This is not optional. You have to accept that or stop using WhatsApp altogether.
Privacy-conscious people saw this coming a long time ago. After all, [Facebook paid $19 billion to buy a mobile app like WhatsApp][2] that hardly made any money at the time. Now it's time for Facebook to get a return on its $19 billion investment. They will share your data with advertisers so that you get more personalized (read: invasive) ads.
If you are fed up with the “**my way or the highway**” attitude of big tech companies like Facebook, Google, and Twitter, perhaps you may want to try some alternative social media platforms.
These alternative social platforms are open source, use a decentralized approach with P2P or Blockchain technologies, and you may be able to self-host some of them.
### Open source and decentralized social networks
![Image Credit: Datonel on DeviantArt][3]
I'll be honest with you here. These alternative platforms may not give you the same kind of experience you are accustomed to, but they will not infringe on your privacy and freedom of speech. That's the trade-off.
#### 1\. Minds
Alternative to: Facebook and YouTube
Features: Open Source code base, Blockchain
Self-host: No
On Minds, you can post videos, blogs, and images, and set statuses. You can also message and video chat securely with groups or directly with friends. Trending feeds and hashtags allow you to discover articles of your interest.
That's not all. You also have the option to earn tokens for your contributions. These tokens can be used to upgrade your channel. Creators can receive direct payments in USD, Bitcoin, and Ether from fans.
[Minds][4]
#### 2\. Aether
Alternative to: Reddit
Features: Open Source, P2P
Self-host: No
![][5]
Aether is an open source, P2P platform for self-governing communities with auditable moderation and mod elections.
The content on Aether is ephemeral in nature and is kept only for six months unless someone saves it. Since it is P2P, there are no centralized servers.
An interesting feature of Aether is its democratic communities. Communities elect mods and can impeach them by votes.
[Aether][6]
#### 3\. Mastodon
Alternative to: Twitter
Features: Open Source, Decentralized
Self-host: Yes
![][7]
[Mastodon][8] is already known among FOSS enthusiasts. We have covered [Mastodon as an open source Twitter alternative][9] in the past, and [we also have a profile on Mastodon][10].
Mastodon isn't a single website like Twitter; it's a network of thousands of communities operated by different organizations and individuals that provide a seamless social media experience. You can host your own Mastodon instance and choose to connect it with other Mastodon instances, or you can simply join one of the existing instances like [Mastodon Social][11].
[Mastodon][8]
#### 4\. LBRY
Alternative to: YouTube
Features: Open Source, Decentralized, Blockchain
Self-host: No
![][12]
At the core, [LBRY][13] is a blockchain based decentralization protocol. On top of that protocol, you get a digital marketplace powered by its own cryptocurrency.
Though LBRY allows creators to offer all kinds of digital content like movies, books, and games, it is essentially promoted as a YouTube alternative.
We have covered [LBRY on It's FOSS][14] in the past and you may read that for more details. If you are joining LBRY, don't forget to follow It's FOSS there.
[LBRY][15]
#### 5\. KARMA
Alternative to: Instagram
Features: Decentralized, Blockchain
Self-host: No
![][16]
Here's another blockchain-based social network governed by cryptocurrency.
KARMA is an Instagram clone built on top of the open source blockchain platform [EOSIO][17]. Every like and share your content gets earns you KARMA tokens. You can use these tokens to boost your content or convert them to real money through one of the partner crypto exchanges.
KARMA is a mobile-only app and is available on the Play Store and App Store.
[KARMA][18]
#### 6\. Peertube
Alternative to: YouTube
Features: Decentralized, P2P
Self-host: Yes
![][19]
Developed by the French non-profit Framasoft, PeerTube is a decentralized video streaming platform. PeerTube uses the [BitTorrent protocol][20] to share bandwidth between users.
PeerTube aims to resist corporate monopoly. It does not rely on ads and does not track you. Keep in mind that your IP address is not anonymous here.
There are various instances of PeerTube available where you can host your videos. Some instances may charge money while most are free.
[PeerTube][21]
#### 7\. Diaspora
Alternative to: Facebook
Features: Decentralized, Open Source
Self-host: Yes
Diaspora was one of the earliest decentralized social networks. Back in 2010, Diaspora was touted as a Facebook alternative. It got some well-deserved limelight in its initial years, but it remained confined to a handful of niche members.
Similar to Mastodon, Diaspora is composed of pods. You can register with a pod or host your own. Big Tech doesn't own your data; you do.
[Diaspora][22]
#### 8\. Dtube
Alternative to: YouTube
Features: Decentralized, Blockchain
Self-host: No
![][23]
Dtube is a blockchain-based decentralized YouTube clone. I use the word YouTube clone because the interface is way too similar to YouTube.
Like most other blockchain-based social networks, Dtube is governed by DTube Coins (DTC) that creators earn when someone watches or interacts with their content. The coins can be used to promote content or cashed out through partner crypto exchanges.
[DTube][24]
#### 9\. Signal
Alternative to: WhatsApp, Facebook Messenger
Features: Open Source
Self-host: No
![][25]
Unlike WhatsApp with its end-to-end encrypted chats, Signal doesn't track you, share your data, or invade your privacy.
[Signal rose to fame][26] when Edward Snowden endorsed it. It got even more famous when [Elon Musk][27] tweeted about it after WhatsApp's decision to share user data with Facebook.
Signal uses its own open source Signal protocol to give you end-to-end encrypted messages and calls.
[Signal][28]
#### What else?
There are some other platforms that are not open source or decentralized, but they respect your privacy and free speech.
* [MeWe][29]: Alternative to Facebook
* [Voice][30]: Alternative to Medium
There is also the Element messenger, based on the Matrix protocol, which you may try.
I know there are probably several other such alternative social media platforms. Care to share them? I might add them to this list.
If you had to choose one of the platforms from the list, which one would you choose?
--------------------------------------------------------------------------------
via: https://itsfoss.com/mainstream-social-media-alternaives/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://arstechnica.com/tech-policy/2021/01/whatsapp-users-must-share-their-data-with-facebook-or-stop-using-the-app/
[2]: https://money.cnn.com/2014/02/19/technology/social/facebook-whatsapp/index.html
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/1984-quote.png?resize=800%2C450&ssl=1
[4]: https://www.minds.com/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/01/aether-reddit-alternative.png?resize=800%2C600&ssl=1
[6]: https://getaether.net
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/mastodon.png?resize=800%2C623&ssl=1
[8]: https://joinmastodon.org/
[9]: https://itsfoss.com/mastodon-open-source-alternative-twitter/
[10]: https://mastodon.social/@itsfoss
[11]: https://mastodon.social
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/lbry-interface.jpg?resize=800%2C420&ssl=1
[13]: https://lbry.org
[14]: https://itsfoss.com/lbry/
[15]: https://lbry.tv/$/invite/@itsfoss:0
[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/01/karma-app.jpg?resize=800%2C431&ssl=1
[17]: https://eos.io
[18]: https://karmaapp.io
[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/peertube-federation-multiplicity.jpg?resize=600%2C341&ssl=1
[20]: https://www.slashroot.in/what-bittorrent-protocol-and-how-does-bittorrent-protocol-work
[21]: https://joinpeertube.org
[22]: https://diasporafoundation.org
[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/01/dtube.jpg?resize=800%2C516&ssl=1
[24]: https://d.tube
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/12/signal-shot.jpg?resize=800%2C565&ssl=1
[26]: https://itsfoss.com/signal-messaging-app/
[27]: https://www.britannica.com/biography/Elon-Musk
[28]: https://www.signal.org
[29]: https://mewe.com
[30]: https://www.voice.com


@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (hwlife)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@ -1,125 +0,0 @@
[#]: subject: "Crop and resize photos on Linux with Gwenview"
[#]: via: "https://opensource.com/article/22/2/crop-resize-photos-gwenview-kde"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Crop and resize photos on Linux with Gwenview
======
Gwenview is an excellent photo editor for casual photographers to use on
the Linux KDE desktop.
![Polaroids and palm trees][1]
A good photo can be a powerful thing. It expresses what you saw in a very literal sense, but it also speaks to what you experienced. Little things say a lot: the angle you choose when taking the photo, how large something looms in the frame, and by contrast the absence of those conscious choices.
Photos are often not meant as documentation of what really happened, and instead they become insights into how you, the photographer, perceived what happened.
This is one of the reasons photo editing is so commonplace. When you're posting pictures to your online image gallery or social network, you shouldn't have to post a photo that doesn't accurately represent the feelings the photo encapsulates. But by the same token, you also shouldn't have to become a professional photo compositer just to crop out the random photo bomber who poked their head into your family snapshot at the last moment. If you're using KDE, you have a casual photo editor available in the form of Gwenview.
### Install Gwenview on Linux
If you're running the KDE Plasma Desktop, you probably already have Gwenview installed. If you don't have it installed, or you're using a different desktop and you want to try Gwenview, then you can install it with your package manager.
I recommend installing both Gwenview and the Kipi plugin set, which connects Gwenview with several online photo services so you can easily upload photos. On Fedora, Mageia, and similar distributions:
```
$ sudo dnf install gwenview kipi-plugins
```
On Debian, Elementary, and similar:
```
$ sudo apt install gwenview kipi-plugins
```
### Using Gwenview
Gwenview is commonly launched in one of two ways. You can click on an image file in Dolphin and choose to open it in Gwenview, or you can launch Gwenview and hunt for a photo in your folders with Gwenview acting more or less as your file manager. The first method is a direct method, great for previewing an image file quickly and conveniently. The second method you're likely to use when you're browsing through lots of photos, unsure of which version of a photo is the "right" one.
Regardless of how you launch Gwenview, the interface and functionality are the same: there's a workspace on the right, and a panel on the left.
![Gwenview][2]
(Seth Kenlon [CC BY-SA 4.0][3], Photo courtesy [Andrea De Santis][4])
Below the panel on the left, there are three tabs.
* Folders: Displays a tree view of the folders on your computer so you can browse your files for more photos.
* Information: Provides metadata about the photo you're currently viewing.
* Operations: Allows you to make small modifications to the current photo, such as rotating between landscape and portrait, resizing, and cropping.
Gwenview is always aware of the file system, so you can press the **Right** or **Left** **Arrow** on your keyboard to see the previous or next photo in a folder.
To leave the single-photo view and see all of the images in a folder, click the **Browse** button in the top toolbar.
![Browsing photos in a folder][5]
(Seth Kenlon, [CC BY-SA 4.0][3])
You can also have both views at the same time. Click the **Thumbnail Bar** button at the bottom of Gwenview to see the other images in your current folder as a filmstrip, with the currently selected photo in the main panel.
![Thumbnail view][6]
(Seth Kenlon, [CC BY-SA 4.0][3])
### Editing photos with Gwenview
Digital photos are pretty common, and so it's equally common to need to make minor adjustments to a photo before posting it online or sharing it with friends. There are very good applications that can edit photos, and in fact one of the best of them is another KDE application called Krita (you can read about how I use it for photographs in my [Krita for photographers][7] article), but small adjustments shouldn't require an art degree. That's exactly what Gwenview ensures: easy and quick photo adjustments with a casual but powerful application that's integrated into the rest of your Plasma Desktop.
The most common adjustments most of us make to photos are:
* **Rotation**: When your camera doesn't provide the correct metadata for your computer to know whether a photo is meant to be viewed in landscape or portrait orientation, you can fix it manually.
* **Mirror**: Many laptop or face cameras mimic a mirror, which is useful because that's how we're used to seeing ourselves. However, it renders writing backward. The **Mirror** function flips (or flops?) an image from right to left.
  * **Flip**: Less common with digital cameras and laptops, the phenomenon of taking a photo with an upside-down device is not unusual with a mobile phone, whose screen rotates no matter how you're holding it. The **Flip** function rotates an image 180 degrees.
* **Resize**: Digital images are often in super HD sizes now, and sometimes that's a lot more than you need. If you're sending a photo by email or posting it on a web page you want to optimize for loading time, you can shrink the dimensions (and file size accordingly) to something smaller.
* **Crop**: You have a great picture of yourself, and accidentally a random person you thought was just out of frame. Cut out everything you don't want in frame with the **Crop** tool.
  * **Red Eye**: When your retinas reflect the flash of your camera back into the camera, you get a red eye effect. Gwenview can reduce this by desaturating and darkening the red channel in an adjustable area.
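As a toy illustration (Python here, not part of Gwenview), the geometric operations above can be modeled on a small grid of pixels:

```python
def mirror(img):
    """Mirror: reverse each row, flipping the image left to right."""
    return [row[::-1] for row in img]

def flip(img):
    """Flip: rotate 180 degrees by reversing the rows, then each row."""
    return [row[::-1] for row in img[::-1]]

def crop(img, top, left, height, width):
    """Crop: keep only the rectangle starting at (top, left)."""
    return [row[left:left + width] for row in img[top:top + height]]

# A 2x3 toy "image" of labeled pixels
img = [["a", "b", "c"],
       ["d", "e", "f"]]

print(mirror(img))  # each row now reads right to left
print(flip(img))    # the whole image is upside down
```

Gwenview of course performs the same transforms on real pixel data; the sketch only shows the geometry of each operation.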
All of these tools are available in the **Operations** side panel or in the **Edit** menu. The operations are destructive, so after you make a change, click **Save As** to save a _copy_ of the image.
![Cropping a photo in Gwenview][8]
(Seth Kenlon, [CC BY-SA 4.0][3], Photo courtesy [Elise Wilcox][9])
### Sharing photos
When you're ready to share a photo, click the **Share** button in the top toolbar, or go to the **Plugins** menu and select **Export**. Gwenview, along with the Kipi plug-in set, can share photos with [Nextcloud][10], [Piwigo][11], plain old email, and services like Google Drive, Flickr, Dropbox, and more.
### Photo editing essentials on Linux
Gwenview has all the essentials for a desktop photo manager. If you need more than the basics, you can open a photo in Krita or [Digikam][12] and make major modifications as needed. For everything else, from browsing, ranking, tagging, and small adjustments, you have Gwenview close at hand.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/2/crop-resize-photos-gwenview-kde
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/design_photo_art_polaroids.png?itok=SqPLgWxJ (Polaroids and palm trees)
[2]: https://opensource.com/sites/default/files/kde-gwenview-ui.jpg (Gwenview)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: http://unsplash.com/@santesson89
[5]: https://opensource.com/sites/default/files/kde-gwenview-browse.jpg (Browsing photos in a folder)
[6]: https://opensource.com/sites/default/files/kde-gwenview-thumbnail.jpg (Thumbnail view)
[7]: https://opensource.com/article/21/12/open-source-photo-editing-krita
[8]: https://opensource.com/sites/default/files/kde-gwenview-crop.jpg (Cropping a photo in Gwenview)
[9]: http://unsplash.com/@elise_outside
[10]: https://opensource.com/article/20/7/nextcloud
[11]: https://opensource.com/alternatives/google-photos
[12]: https://opensource.com/life/16/5/how-use-digikam-photo-management


@@ -1,268 +0,0 @@
[#]: subject: "Boost your home network with DNS caching on the edge"
[#]: via: "https://opensource.com/article/22/3/dns-caching-edge"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Boost your home network with DNS caching on the edge
======
Create your own edge by running a DNS caching service on your home or
business network.
![Mesh networking connected dots][1]
If you've been hearing a lot of talk about "the cloud" over the past several years, then you may also have heard rumblings about something called "the edge."
The term _edge computing_ reflects the recognition that the cloud has boundaries. To reach those boundaries, your data has to connect with one of the physical datacenters powering the cloud. Getting data from a user's computer to a cluster of servers might be quick in some settings, but it depends heavily on geographic location and network infrastructure. The cloud itself can be as fast and powerful as possible, but it can't do much to offset the time required for the roundtrip your data has to make.
**[ What's the latest in edge? See [Red Hat's news roundup][2] from Mobile World Congress 2022. ]**
The answer is to use the edge: the boundary between regional networks and the cloud. When initial services or computation happen on servers at the edge, it speeds up a user's interactions with the cloud.
By the same principle, you can create your own edge by running some services on your home server to minimize roundtrip lag times. Don't let the special terminology intimidate you. Edge computing can be as simple as an IoT device or running a server connected to [federated services][3].
One particularly useful and easy change you can make to your home or business network to give it a boost is running a DNS caching service.
### What is DNS?
The Domain Name System (DNS) is what enables us to translate the IP addresses of servers, whether they're in the cloud or just across town, to friendly website names like `opensource.com`.
Behind every domain name is a number—names are simply a convenience for humans, who are more likely to remember a few words than a string of numerals. When you type `example.com` into a web browser, your web browser silently sends a request over port 53 to a DNS server to cross-reference the name `example.com` with its registry, then sends back the last known IP address assigned to that name.
That's one roundtrip from your computer to the internet.
Armed with the correct number, your web browser makes a second request, this time with the number instead of the name, directly to your destination.
That's another roundtrip.
To make matters worse, your computer (depending on your configuration) may also be sending requests to DNS servers for named devices on your local network.
You can cut out all of this extra traffic by using a local cache. With a DNS caching service running on your network, once any one device on your network obtains a number assigned to a website, that number is stored locally, so no request from your network need ask for that number again.
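The caching idea is easy to sketch in a few lines of Python (illustrative only; dnsmasq itself is a C daemon, and `make_caching_resolver` is a name invented for this sketch):

```python
import socket

def make_caching_resolver(lookup=socket.gethostbyname):
    """Build a resolve(name) function that remembers answers, as a DNS cache does."""
    cache = {}

    def resolve(name):
        if name in cache:        # cache hit: no roundtrip to a DNS server
            return cache[name]
        ip = lookup(name)        # cache miss: ask upstream, then remember
        cache[name] = ip
        return ip

    return resolve

resolve = make_caching_resolver()
# The first call for a name pays the network roundtrip; repeats are answered locally.
```

Dnsmasq applies the same principle for every device on your network, not just one process.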
As a bonus, running your own DNS caching server also enables you to block ads and generally take control of how any device on your network interacts with some of the low-level technologies of the internet.
### Install Dnsmasq on Linux
Install Dnsmasq using your package manager.
On Fedora, CentOS, Mageia, and similar:
```
$ sudo dnf install dnsmasq dnsmasq-utils
```
On Debian and Debian-based systems, use `apt` instead of `dnf`.
### Configure Dnsmasq
There are many options in Dnsmasq's default configuration file.
It's located at `/etc/dnsmasq.conf` by default, and it's well commented, so you can read through it and choose what you prefer for your network.
Here are some of the options I like.
Keep your local domains local:
```
# Never forward plain names (without a dot or domain part)
domain-needed
# Never forward addresses in the non-routed address spaces
bogus-priv
```
Ignore content from common ad sites. This syntax answers queries for any domain between the forward slashes with the trailing IP address:
```
# replace ad site domain names with an IP with no ads
address=/double-click.net/127.0.0.1
```
Set the cache size. The default suggestion is 150, but I've never felt that 150 websites sounded like enough.
```
# Set the cachesize here
cache-size=1500
```
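Putting the options above together, a minimal caching-focused `/etc/dnsmasq.conf` could look like this (the values are simply the examples from this section):

```
# never forward plain names (without a dot or domain part)
domain-needed

# never forward addresses in the non-routed address spaces
bogus-priv

# replace ad site domain names with an IP with no ads
address=/double-click.net/127.0.0.1

# cache up to 1500 names
cache-size=1500
```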
### Finding resolv.conf
On most Linux systems, the systemd `resolved` service manages the `/etc/resolv.conf` file, which governs what DNS nameservers your computer contacts for name to IP address resolution.
You can disable `resolved` and run `dnsmasq` alone, or you can run them both, pointing `dnsmasq` to its own resolver file.
To disable `resolved`:
```
$ sudo systemctl disable --now systemd-resolved
```
Alternately, to run them both:
```
$ sudo tee -a /etc/resolvmasq.conf << EOF
# my network name
domain home.local
# local hosts
enterprise 10.0.170.1
yorktown 10.0.170.4
# nameservers
nameserver 208.67.222.222
nameserver 208.67.220.220
EOF
```
In this example, `home.local` is a domain name I give, either over Dynamic Host Configuration Protocol (DHCP) or locally, to all devices on my network. The computers `enterprise` and `yorktown` are my home servers, and by listing them here along with their local IP addresses, I can contact them by name through `dnsmasq`. Finally, the `nameserver` entries point to known good nameservers on the internet. You can use the nameservers listed here, or you can use nameservers provided to you by your ISP or any public nameserver you prefer.
In your `dnsmasq.conf` file, set the `resolv-file` value to `resolvmasq.conf`:
```
resolv-file=/etc/resolvmasq.conf
```
### Start dnsmasq
Some distributions may have already started `dnsmasq` automatically upon installation. Others let you start it yourself when you're ready. Either way, you can use systemd to start the service:
```
$ sudo systemctl enable --now dnsmasq
```
Test it with the `dig` command.
When you first contact a server, the query time might be anywhere from 50 to 500 milliseconds (hopefully not more than that):
```
$ dig example.com | grep Query\ time
;; Query time: 56 msec
```
The next time you try it, however, the query time is drastically reduced:
```
$ dig example.com | grep Query\ time
;; Query time: 0 msec
```
Much better!
### Enable dnsmasq for your whole network
Dnsmasq is a useful tool on one device, but it's even better when you let all the devices on your network benefit.
Here's how you open the `dnsmasq` service up to your whole local network:
#### 1\. Get the IP address of the server running the `dnsmasq` service
On the computer running `dnsmasq`, get the local IP address:
```
$ ip addr show | grep "inet "
    inet 127.0.0.1/8 scope host lo
    inet 10.0.170.170/24 brd 10.0.170.255 scope global eth0
```
In this example, the IP address of the Raspberry Pi I'm running `dnsmasq` on is 10.0.170.170. Because this Pi is now an important part of my network infrastructure, I have its address statically assigned by my DHCP router. Were I to allow it to get a dynamic IP address, it _probably_ would not change (DHCP is designed to be helpful that way) but if it did then my whole network would miss out on the benefit of `dnsmasq`.
#### 2\. Modify the server's firewall to allow traffic on port 53
Open a port in your server's firewall using [firewall-cmd][4] so it allows DNS requests and sends responses.
```
$ sudo firewall-cmd --add-service dns --permanent
$ sudo firewall-cmd --reload
```
#### 3\. Add the IP address of the server to the `nameserver` entry of your home router
Knowing that my local DNS server's address is 10.0.170.170 (remember that it's almost certainly different on your own network), I can add it as the primary nameserver in my home router.
There are many routers out there, and there's no singular interface.
However, the task is the same, and the workflow is usually relatively similar from model to model.
In my [Turris Omnia router][5], the advanced interface allows DNS forwarding, which sends DNS requests to a server of my choosing.
Entering `10.0.170.170` (the IP of my `dnsmasq` server) here forces all DNS traffic to be routed through Dnsmasq for caching and resolution.
![Screenshot of fields for DNS server settings for Turris Omnia router][6]
(Seth Kenlon, [CC BY-SA 4.0][7])
 
In my TP-Link router, on the other hand, DNS settings are configured in the DHCP panel.
 
![Advanced DNS settings for tp-link router][8]
(Seth Kenlon, [CC BY-SA 4.0][7])
It may take some exploration, so don't be afraid to look around in your router's interface for DNS server settings. When you find it, enter your Dnsmasq server address and then save the changes.
Some models require the router to reboot when changes are made.
All devices on your network inherit settings from the router, so now all DNS traffic passing from a device to the internet gets passed through your Dnsmasq server.
### Close to the edge
As more and more websites get added to your server's DNS cache, DNS traffic needs to go beyond your local Dnsmasq server less and less often.
The principle of computing locally and quickly whenever possible drives edge computing. You can imagine how important it is, just by going through this exercise, that technologies use strategic geographic locations to speed up internet interactions.
Whether you're working on edge computing at home, at work, or as a cloud architect, the edge is an important component of the cloud, and it's one you can use to your advantage.
--------------------------------------------------------------------------------
via: https://opensource.com/article/22/3/dns-caching-edge
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3 (Mesh networking connected dots)
[2]: https://www.redhat.com/en/blog/red-hat-telecommunications-news?intcmp=7013a000002qLH8AAM
[3]: https://opensource.com/article/17/4/guide-to-mastodon
[4]: https://opensource.com/article/20/2/firewall-cheat-sheet
[5]: https://opensource.com/article/22/1/turris-omnia-open-source-router
[6]: https://opensource.com/sites/default/files/uploads/turris-dns.jpeg (Turris Omnia)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/tplink-dns.jpeg (tp-link)


@@ -1,235 +0,0 @@
[#]: subject: "Top 10 Linux Distributions for Programmers in 2022 [Featured]"
[#]: via: "https://www.debugpoint.com/2022/03/top-linux-distributions-programmers-2022/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "aREversez"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Top 10 Linux Distributions for Programmers in 2022 [Featured]
======
WE REVIEW THE TOP 10 BEST LINUX DISTRIBUTIONS FOR PROGRAMMERS AND
DEVELOPERS (IN 2022) TO HELP WITH THEIR WORK AND PERSONAL PROJECTS.
Developers and programmers use various tools and applications for their jobs and projects, including code editors, programming language compilers, add-ons, databases, etc. A modern developer's typical workflow looks like this:
* accessing the code repo
* programming
* debugging
* testing
* deploying
And this typical workflow may need a wide range of tools. A standard list might look like this:
* Code editors
* Simple Text Editors
* Web browsers (all variants for a web developer)
* Database engine
* A local server
* The respective programming language compiler
* Debuggers
* Monitoring or profiling tools (executables or network)
Arguably, Linux is a better choice for programming than Windows. (I am not comparing macOS in this article, for several reasons.) The primary reason Linux is best is that packages and apps built with modern technology come pre-installed, or are far easier to install, on Linux distributions than on Windows.
Hence, in this post, we would like to list the best Linux Distributions for programmers in 2022.
### Top 10 Linux Distributions for Programmers in 2022
#### 1\. Fedora Workstation
![Fedora 35 Workstation][1]
Perhaps the perfect Linux distribution among this list is Fedora Linux. Its default workstation edition for desktop brings an authentic GNOME desktop experience with its choice of packages.
Fedora Linux default installation gives you all major development packages out of the box. They include PHP, OpenJDK, PostgreSQL, Django, Ruby on Rails, Ansible, etc.
Installing additional applications such as code editors and other packages is super simple with the dnf package manager. You can also take advantage of GNOME Software, an app store where you can search for and install applications with just the click of a button.
Fedora Linux supports Snap and Flatpak, which gives you more flexibility. You can also take advantage of the RPM Fusion repository in Fedora. The RPM Fusion repo gives you access to many free and non-free packages that Fedora Linux doesn't include in its main repo for licensing and other obvious reasons.
You can check out the latest Fedora Linux on their official website below.
[Download Fedora][2]
#### 2\. Ubuntu Linux
![Ubuntu Desktop is a perfect Linux Distribution for Programmers.][3]
The second Linux distribution in this list is Ubuntu Linux. Ubuntu Linux is the most used Linux distribution today, on both server and desktop. Ubuntu provides a long-term support release with five years of official support (plus another five years of maintenance support) and two short-term releases per year for power users.
Due to its popularity, all the latest packages and application vendors provide Ubuntu (.deb) variants. The popularity also brings massive support in forums and documentation, perfect for developers, especially when you are stuck with errors during the development phase. Learn more about Ubuntu in the below link.
[Download Ubuntu][4]
#### 3\. openSUSE
openSUSE is one of the most stable and professionally built Linux distributions used in critical systems worldwide. This Linux Distribution is a go-to solution for enterprise-level workloads that include desktops, servers and thin clients.
It has some advantages over Ubuntu and Fedora. First, it has two variants: Leap and Tumbleweed. openSUSE Leap is a long-term support (LTS) release that gives you up-to-date stability. openSUSE Tumbleweed is a rolling release that features bleeding-edge packages.
If you need the latest packages and hardware support for your development, then Tumbleweed is your choice. If you need stability and a longer running system with low maintenance, choose openSUSE Leap.
One of the advantages of using openSUSE for your development work is its setup and configuration tool, YaST, which lets you automate many activities with ease.
On top of that, the openSUSE software delivery method is outstanding. It has its software portal on the web, which you can visit, search for a package and click install.
If you are a little more experienced with Linux than a new user, choose openSUSE for your development work.
[Download openSUSE][5]
#### 4\. Manjaro Linux
Manjaro Linux is an Arch Linux-based distribution that makes Arch installation easy. It brings several features on top of its Arch base, such as a GUI installer (like Ubuntu or Linux Mint), the pamac installer, its own curated repositories, and more. Manjaro comes in three primary desktop flavours: GNOME, KDE Plasma and Xfce, to cater to almost all user bases.
If you want Arch Linux and its rolling release package base for your development needs but do not want to get into the hassles of installing vanilla Arch, Manjaro is your perfect choice.
[Download Manjaro][6]
#### 5\. Arch Linux
While Manjaro and other Arch-based easy installation Linux distributions are out there, you may still want to get your hands dirty with the [vanilla Arch installation][7] with your custom desktop.
This is more for power developers or programmers who want more control and a custom Linux operating system built for projects or needs. In those cases, you may want to install Arch Linux with your favourite desktop to set up your development operating system.
If you are experienced with Arch Linux and computers in general, this is the best choice of all, because it gives you complete control over each package in your custom-built Linux operating system.
[Download Arch Linux][9]
#### 6\. Pop OS
Pop OS (stylised as Pop!_OS) was developed by computer manufacturer System76 for its line of hardware. Pop OS is free and open source, based on Ubuntu. It follows the Ubuntu release cycle for its base while adding tweaks and packages customised for its users.
![Pop OS 21.10 Desktop][10]
Pop OS is perfect for programmers because, being based on Ubuntu, it natively supports many programming languages. It is popular among computer scientists and programmers for its curated software centre, which has a dedicated section featuring applications for development and programming.
On top of that, the COSMIC desktop (customised GNOME desktop) in Pop OS gives a unique experience to programmers with auto-tiling, a lovely colour palette, native dark mode and a wide range of settings.
If you need an Ubuntu base and want a stable programmer-friendly Linux distribution, then choose Pop OS.
[Download POP OS][11]
#### 7\. KDE Neon
If you are a developer who feels comfortable in the KDE Plasma desktop and wants a Qt-based development environment, then KDE Neon is perfect for you.
KDE Neon is a Linux distribution based on the Ubuntu LTS version with the latest KDE Plasma desktop and KDE Frameworks packages. So, in KDE Neon, you get Ubuntu LTS stability with bleeding-edge, Qt-based KDE packages.
This is a perfect Linux Distribution if you need a fast system with out of the box applications, a friendly user interface and huge community support.
[Download KDE Neon][12]
#### 8\. Debian
Debian GNU/Linux needs no introduction. Debian's stable branch is the base of Ubuntu and all its derivatives, making it one of the primary and most stable Linux distributions. It is perfect for your development environment because it gives you ultimate stability with multi-year support.
Debian's stable branch is, however, slightly conservative about adopting the latest packages. Debian maintainers carefully check and merge packages because the entire world (well, almost) depends on Debian's stability.
If you are looking for a stable, long-running dev environment with low maintenance effort, Debian is a perfect programming environment for advanced users and sysadmins.
[Download Debian Linux][13]
#### 9\. Kali Linux
Kali Linux is developed by Offensive Security and is primarily targeted at ethical hackers and penetration testers who look for vulnerabilities in networks. It comes with tons of hacking tools and applications pre-installed.
It can act as a perfect Linux distribution for programmers and developers if you are experienced enough. Go for Kali Linux if you are well versed in Linux, with some experience navigating errors and dependencies.
[Download Kali Linux][14]
#### 10\. Fedora Labs Options
And the final entry in this list is a set of curated Linux distributions from Fedora Linux.
Fedora Labs provides specially curated Linux Distributions for programmers, scientists and students with pre-loaded applications, respective packages and utilities. Many people are not aware of these, and when appropriately configured, they can act as perfect ready-made Linux distribution for you.
Here's a summary of them:
* **Fedora Scientific**: a combination of scientific and numerical open-source tools with the KDE Plasma desktop. The application list includes:
  * GNU Scientific Library for C/C++
  * MATLAB-compatible GNU Octave
  * LaTeX
  * Maxima computer algebra system
  * Gnuplot for drawing 2D and 3D graphs
  * the Pandas Python library for data science
  * IPython
  * packages for the Java and R programming languages
  Learn more about Fedora Scientific and [download it here][15].
* **Fedora Comp Neuro**: open-source neuroscience applications and packages with the GNOME desktop environment. Learn more and download here.
* **Fedora Robotics Suite**: a Linux distribution that combines the best open-source robotics applications and packages, targeted at beginner and experienced robotics scientists and programmers. Learn more and [download it here][16].
Other labs from Fedora Linux include [Fedora Security Labs][17], [Fedora Astronomy][18] and [Fedora Python Classroom][19], which you may want to check out.
These Fedora Labs options can be perfect Linux distributions for programming projects or working in specific science fields.
### Summary
So, how do you choose your favourite among this list of best Linux Distributions for programmers?
If you are unsure and want to have a development system up and running with minimal effort, go for Fedora Workstation or Ubuntu.
If you have spare time or want more control in your system, like experimenting and being comfortable with occasional errors, then go for Arch Linux based systems.
Pop OS is also a good choice for new developers new to the Linux ecosystem. For specific needs, go to the Fedora Labs options.
I hope this list of best Linux Distributions for programmers in 2022 gives you some guidance on choosing your favourite Linux distributions for programming and development.
Cheers.
* * *
We bring the latest tech, software news and stuff that matters. Stay in touch via [Telegram][20], [Twitter][21], [YouTube][22], and [Facebook][23] and never miss an update!
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/03/top-linux-distributions-programmers-2022/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[aREversez](https://github.com/aREversez)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/wp-content/uploads/2021/11/Fedora-35-Workstation-1024x528.jpg
[2]: https://getfedora.org/
[3]: https://www.debugpoint.com/wp-content/uploads/2022/03/Ubuntu-Desktop-is-a-perfect-Linux-Distribution-for-Programmers-1024x579.jpg
[4]: https://ubuntu.com/download
[5]: https://www.opensuse.org/
[6]: https://manjaro.org/download/
[7]: https://www.debugpoint.com/2022/01/archinstall-guide/
[8]: https://www.debugpoint.com/2022/03/top-nitrux-maui-applications/
[9]: https://archlinux.org/download/
[10]: https://www.debugpoint.com/wp-content/uploads/2021/12/Pop-OS-21.10-Desktop-1024x579.jpg
[11]: https://pop.system76.com/
[12]: https://neon.kde.org/download
[13]: https://www.debian.org/distrib/
[14]: https://www.kali.org/
[15]: https://labs.fedoraproject.org/en/scientific/
[16]: https://labs.fedoraproject.org/en/robotics/
[17]: https://labs.fedoraproject.org/en/security
[18]: https://labs.fedoraproject.org/en/astronomy
[19]: https://labs.fedoraproject.org/en/python-classroom
[20]: https://t.me/debugpoint
[21]: https://twitter.com/DebugPoint
[22]: https://www.youtube.com/c/debugpoint?sub_confirmation=1
[23]: https://facebook.com/DebugPoint


@@ -1,128 +0,0 @@
[#]: subject: "5 Less Popular Features that Make Ubuntu 22.04 LTS an Epic Release"
[#]: via: "https://www.debugpoint.com/2022/04/ubuntu-22-04-release-unique-feature/"
[#]: author: "Arindam https://www.debugpoint.com/author/admin1/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
5 Less Popular Features that Make Ubuntu 22.04 LTS an Epic Release
======
A LIST OF MINOR HIGHLIGHTED FEATURES OF UBUNTU 22.04 LTS THAT MAKES IT
ONE OF THE BEST LTS RELEASE SO FAR.
Canonical's latest LTS instalment of [Ubuntu, code-named "Jammy Jellyfish"][1], has been well received by users worldwide. There are hundreds of tiny new features, and some less popular ones that didn't catch much attention. So, here are five unique Ubuntu 22.04 release features which we think make it an epic release.
![Ubuntu 22.04 LTS Desktop \(GNOME\)][2]
### Ubuntu 22.04 Release Five Unique Features
#### Optimised for Data-Driven Solutions
Data analysis and processing are the core of every business today. And to do that, you need enormous computing power. Ubuntu 22.04 LTS brings out of the box [NVIDIA virtual GPU (vGPU)][3] driver support. That means you can take advantage of NVIDIA virtual GPU software, enabling you to use GPU computing power in virtual machines shared from physical GPU servers.
Not only that: if your business relies on SQL Server, Ubuntu LTS for Azure brings SQL Server for Ubuntu, backed by Microsoft, providing optimised performance and scalability.
#### Improved Active Directory Integration
Furthermore, many businesses deploy Ubuntu on multiple workstations for their enterprise users, and it is important to deploy workstation policies to monitor and control user access and various business-critical settings.
Active Directory, which enables policy-based workstation administration (introduced in Ubuntu 20.04), is further improved in this release. In addition, this release brings the [ADsys][4] client, which helps you remotely manage group policy, privilege escalation and remote script execution via the command line. Beginning with this release, Active Directory also supports installer integration with advanced Group Policy objects.
#### Realtime Kernel Support
Moreover, one of the interesting announcements from Canonical during the Ubuntu 22.04 LTS release is a "real-time" kernel option, currently in beta. Telecom and other industries require a low-latency operating system for time-sensitive work. So, with that in mind and with a vision to penetrate those areas, Ubuntu 22.04 LTS brings a real-time kernel build with the PREEMPT_RT patches applied. It is available for the x86_64 and AArch64 architectures.
However, the [patch set][5] is not in the mainline kernel yet; hopefully, it will be mainlined soon.
#### Latest Apps, Packages and Drivers
In addition to the above changes, this release also brings a vast list of package and toolchain upgrades. For example, this release offers multiple Linux kernel types based on usage: the Ubuntu desktop can opt into [Kernel 5.17][6], whereas the hardware enablement kernel remains at 5.15.
Not only that, Ubuntu Server features long-term-support [Kernel 5.15][8], while the Ubuntu Cloud images have the option to use a more optimised Kernel in collaboration with cloud providers.
Moreover, if you are an NVIDIA user, it is worth knowing that Linux-restricted modules of NVIDIA drivers on ARM64 are now available (already available in x86_64). You can use the [ubuntu-drivers][9] program to install and configure NVIDIA drivers.
A complete operating system works flawlessly because of its core modules and subsystems. With that in mind, Ubuntu 22.04 LTS carefully upgraded all of them for this great release. Here's a brief list:
* GCC 11.2.0
* binutils 2.38
* glibc 2.35
* Python 3.10.4
* Perl 5.34.0
* LLVM 14
* golang 1.18
* rustc 1.58
* OpenJDK 11 (option to use OpenJDK 18)
* Ruby 3.0
* PHP 8.1.2
* Apache 2.4.52
* PostgreSQL 14.2
* Django 3.2.12
* MySQL 8.0
* Updated NFS and Samba Server
* Systemd 249.11
* OpenSSL 3.0
* qemu 6.2.0
* libvirt 8.0.0
* virt-manager 4.0.0
#### Performance Boost
But that's not all. Thanks to some long-pending updates that finally land in this release, you should experience a much faster Ubuntu 22.04 "Jammy Jellyfish".
Firstly, the long-pending [triple buffering code][10] for the GNOME desktop lands. Triple buffering is enabled automatically when the prior frame buffering lags behind, and it yields much faster desktop performance with Intel and Raspberry Pi drivers. The code also monitors the last frame so that the system doesn't run into excess buffering situations.
Secondly, improved power management that works at runtime for AMD and NVIDIA GPUs will help laptop users.
In addition, Wayland is now the default display server for most systems except for NVIDIA GPU hardware which defaults to X11. Wayland gives you a much faster desktop experience across applications, including web browsers.
Finally, the customised GNOME 42 and its [unique features][11], such as the balanced and power-saver power profiles, give heavy laptop users a further advantage. Also, the new accent colours with a light/dark look, and the GTK4/libadwaita ports of selected GNOME modules, are nice additions to this epic Ubuntu 22.04 LTS release.
### Conclusion
To conclude, I believe this is one of the best LTS releases that Canonical shipped in terms of all the above under the hood changes and many others.
We hope it is well received and remains stable in the coming days.
* * *
We bring the latest tech, software news and stuff that matters. Stay in touch via [Telegram][12], [Twitter][13], [YouTube][14], and [Facebook][15] and never miss an update!
--------------------------------------------------------------------------------
via: https://www.debugpoint.com/2022/04/ubuntu-22-04-release-unique-feature/
作者:[Arindam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.debugpoint.com/author/admin1/
[b]: https://github.com/lujun9972
[1]: https://www.debugpoint.com/2022/01/ubuntu-22-04-lts/
[2]: https://www.debugpoint.com/wp-content/uploads/2022/04/Ubuntu-22.04-LTS-Desktop-GNOME-1024x580.jpg
[3]: https://docs.nvidia.com/grid/latest/grid-vgpu-release-notes-ubuntu/index.html
[4]: https://github.com/ubuntu/adsys
[5]: https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git/
[6]: https://www.debugpoint.com/2022/03/linux-kernel-5-17/
[7]: https://www.debugpoint.com/2022/04/10-things-to-do-ubuntu-22-04-after-install/
[8]: https://www.debugpoint.com/2021/11/linux-kernel-5-15/
[9]: https://launchpad.net/ubuntu/+source/ubuntu-drivers-common
[10]: https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1441
[11]: https://www.debugpoint.com/2022/03/gnome-42-release/
[12]: https://t.me/debugpoint
[13]: https://twitter.com/DebugPoint
[14]: https://www.youtube.com/c/debugpoint?sub_confirmation=1
[15]: https://facebook.com/DebugPoint


@ -0,0 +1,42 @@
[#]: subject: "Bloomberg Open Sources Memray, A Python Memory Profiler"
[#]: via: "https://www.opensourceforu.com/2022/04/bloomberg-open-sources-memray-a-python-memory-profiler/"
[#]: author: "Laveesh Kocher https://www.opensourceforu.com/author/laveesh-kocher/"
[#]: collector: "lkxed"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Bloomberg Open Sources Memray, A Python Memory Profiler
======
![soft][1]
Memray is a memory profiler that was developed at Bloomberg and is now open source. It can track memory allocations in Python code, including native extensions and the Python interpreter itself. Memory profiling is a powerful tool for understanding how a program utilises memory, and thus for detecting memory leaks or determining which parts of the program consume the most memory.
In contrast to sampling memory profilers like py-spy, Memray can track every function call, including calls into C/C++ libraries, and display the call stack in detail. Bloomberg claims that this does not come at the expense of performance, with profiling slowing down interpreted code only slightly. However, native code profiling is slower and must be enabled explicitly.
Memray can generate a variety of reports based on the acquired memory consumption data, including flame graphs, which are valuable for rapidly and precisely identifying the most common code paths.
According to Yury Selivanov, co-founder and CEO of EdgeDB, the tool gives previously unavailable insights into Python applications. Memray can be used to execute and profile a Python application from the command line:
```
$ python3 -m memray run -o output.bin my_script.py
$ python3 -m memray flamegraph output.bin
```
Alternatively, you can use pytest-memray to integrate Memray into your test suite. You can also profile all C/C++ calls with the `--native` command line option, or analyse memory allocations in real time while a program is executing with the `--live` command line option. Memray can be installed with `python3 -m pip install memray` on a Linux x86/64 system.
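To make the commands above concrete, here is a hypothetical `my_script.py` of the kind Memray is typically pointed at (the script and its allocation pattern are illustrative assumptions, not from the announcement): a function that holds on to many fresh allocations, exactly the shape a flame graph would surface.

```python
# my_script.py -- a hypothetical profiling target for "memray run"
# (illustrative only; not part of the Memray project itself).

def build_big_list(n):
    # Each element is a fresh 100-int list, so this call dominates
    # the allocation profile a flame graph would show.
    return [[i] * 100 for i in range(n)]

def main():
    data = build_big_list(10_000)
    # Holding the reference keeps all allocations alive until exit.
    return len(data)

if __name__ == "__main__":
    print(main())
```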
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/04/bloomberg-open-sources-memray-a-python-memory-profiler/
作者:[Laveesh Kocher][a]
选题:[lkxed][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.opensourceforu.com/author/laveesh-kocher/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/04/soft-1-696x363.jpg


@ -2,7 +2,7 @@
[#]: via: "https://itsfoss.com/gnome-text-editor/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "aREversez"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
@ -141,7 +141,7 @@ via: https://itsfoss.com/gnome-text-editor/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[aREversez](https://github.com/aREversez)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,302 @@
[#]: subject: "Updating Edge Devices with OSTree and Pulp"
[#]: via: "https://fedoramagazine.org/updating-edge-devices-with-ostree-and-pulp/"
[#]: author: "lubosmj https://fedoramagazine.org/author/lubosmj/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Updating Edge Devices with OSTree and Pulp
======
![][1]
Photo by [Shubham Dhage][2] on [Unsplash][3]
Connecting industrial machinery to the internet has given birth to infinite opportunities that range from performance improvements and predictive maintenance to data modelling that can lead to novel solutions and use cases. The possibilities are endless. Connecting machinery on such a scale can test the limits of cloud connectivity, depending on your location and network limitations.
An edge device is any piece of hardware that sits at the boundary between two networks. When initial computation happens on servers at the edge, it speeds up users' interactions with the cloud. Therefore, adding edge devices provides opportunities to optimize performance, shorten the journey, and lighten the load on your cloud connection.
As amazing as it sounds, managing all of this functionality demands continuous attention from administrators. Having a reliable solution to distribute, deploy, and update systems for edge devices from the outset will help you spend time on things that matter.
In this article, we look at how OSTree is well-positioned for upgrading and updating edge devices with versioned updates of Linux-based operating systems. Furthermore, well explore how Pulp facilitates managing and preparing updates of the OSTree content, as well as making it available to edge devices. Together, they provide a powerful free and open-source solution for administering edge devices.
### How does OSTree help manage Edge devices?
If you need to deploy hundreds of operating systems to edge devices, safe in the knowledge that you can easily manage future updates and maintenance, OSTree's immutable, image-based operating systems are ready for the task.
[OSTree][4] functions like git, but for operating system binaries. It has git-like content-addressed repositories. The ability to commit and branch entire root filesystem trees resembles the way you submit changes in git. With OSTree, you build an operating system with pre-installed packages, known as an operating system image. After you build the operating system image, it is possible to track it, sign it, test it, and deploy it. These images function as immutable file system trees. When the time comes to change or update, you simply build a new image and deploy it. By atomically switching between different versions of images, you are completely replacing filesystem trees.
OSTree also has a simple CLI that you can use for managing simple workflows, for example, for switching between different versions of images/filesystem trees.
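The git-like, content-addressed model described above can be pictured with a toy Python sketch (a conceptual analogy only, not OSTree's actual on-disk format): every object is stored under the hash of its content, and a ref is just a movable name pointing at a commit checksum, which is what makes switching and rolling back atomic.

```python
import hashlib

# Toy content-addressed store: an object's id is the SHA-256 of its
# content, mirroring how OSTree (and git) address objects.
store = {}
refs = {}

def write_object(content: bytes) -> str:
    checksum = hashlib.sha256(content).hexdigest()
    store[checksum] = content          # identical content dedupes for free
    return checksum

def commit(ref: str, tree: bytes) -> str:
    checksum = write_object(tree)
    refs[ref] = checksum               # a ref is just a movable pointer
    return checksum

c1 = commit("fedora/stable/x86_64/iot", b"rootfs v1")
c2 = commit("fedora/stable/x86_64/iot", b"rootfs v2")
# "Rolling back" is atomic: re-point the ref at the old checksum.
refs["fedora/stable/x86_64/iot"] = c1
```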
### Where do Fedora-IoT Images feature?
As a standalone tool, the base OSTree CLI is not the most feature-rich utility for managing repository content. To make life easier, in the following demo, we will use _[rpm-ostree][5]_. _rpm-ostree_ is a hybrid image/package system that combines the standard OSTree technology as a base image format and accepts RPM on both the client and server-side.
_rpm-ostree_ integrates with Fedora IoT. In comparison to other ecosystems, instead of installing packages via DNF, you install packages with _rpm-ostree_. After rebooting, all changes are applied in a new version of the image.
You can also upgrade or install a new Fedora IoT image with the _rpm-ostree_ utility.
### Where and how does Pulp come into this?
[Pulp][6] is a platform that handles content management workflows. Using Pulp, you can sync packages from remote repositories such as an RPM server, PyPI, Docker Hub, Ansible Galaxy, and many more. You can host and modify synced packages in repositories inside the Pulp server. You can publish repositories that contain packages available for deployment to production environments.
In our scenario, Pulp provides a platform for storing particular versions of OSTree content, promoting approved content through the content management lifecycle, for example from _dev_ to _test_, and from _test_ to _prod_. Pulp also provides a method for publishing content that is consumed by edge devices. Using Pulp, you can pull the latest packages, test, and publish only when safe to do so. Pulp ensures the safety, security, and repeatability of your content supply chain.
The following diagram provides a simplified overview of Pulp. On the left are shown different content types that are mirrored into Pulp from remote sources. These repositories are then served, for instance, to different CI/CD or production environments.
![A simplified overview of Pulp. The content is mirrored from remote repositories and made available to different types of environments.][7]
Pulp creates a new repository version automatically when updating or removing packages in a repository. You can distribute each repository version independently.
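That versioning behavior can be sketched with a small Python model (a conceptual illustration only, not Pulp's real API or data model): every add or remove produces a new immutable version, and all older versions remain addressable for independent distribution.

```python
# Conceptual model of Pulp's immutable repository versions
# (for illustration only; not the actual Pulp API).

class Repository:
    def __init__(self, name):
        self.name = name
        self.versions = [frozenset()]   # version 0 starts empty

    def add(self, unit):
        # Each change appends a new immutable version; older
        # versions stay intact and distributable.
        self.versions.append(self.versions[-1] | {unit})

    def remove(self, unit):
        self.versions.append(self.versions[-1] - {unit})

repo = Repository("fedora-iot")
repo.add("nano-commit")
repo.add("vim-commit")
repo.remove("vim-commit")
# Three changes -> versions 1..3, each independently addressable.
```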
Pulp has a plugin-based architecture, which means that you must add a plugin for each content type you want to use. For managing OSTree content, you need [the OSTree plugin][8]. You can then mirror content from a remote repository, import content from a local tarball, and modify content within a Pulp repository while preserving the integrity of the original content. You can move commits and refs from one repository to another or delete them. Pulp ensures that you are safe to experiment while your production environment remains pinned to a particular version.
### Putting it all together
In this section, let's look at how to build an image with an OSTree commit.
#### Building a Customized Fedora-IoT Image
We start by booting a new virtual machine (VM) that will have an installed Fedora-IoT OS. For the purposes of this example, it is best to have the same version of the OS installed as the running edge devices have.
All commands in this section are executed on the main admin VM (Fedora IoT 35 OS). On this admin VM, we will build the images that we will then distribute to the edge devices.
##### Before you begin:
* First, ensure that the VM is accessible via SSH. To test, enter the following command from within the target OS:
```
$ systemctl is-active sshd
```
* Next, ensure that the following tools for composing operating system images are installed: 
```
$ sudo rpm-ostree install osbuild-composer composer-cli
$ sudo systemctl enable --now osbuild-composer.socket
```
* Now, apply the installed packages by rebooting the system.
* * *
In this example, the nano editor package will be installed on all edge devices. We need to build an image containing a commit with the package.
Create a blueprint file that describes what changes you want to make to the image as shown here:
```
$ cat install-nano.toml
name = "nano-commit"
description = "Installing nano"
version = "0.0.1"
[[packages]]
name = "nano"
version = "*"
```
Push this blueprint to the _osbuild-composer_ utility, a tool for composing operating system images. _composer-cli_ communicates with _osbuild-composer_ through the CLI:
```
$ composer-cli blueprints push install-nano.toml
```
Build a new image:
```
$ composer-cli compose start-ostree nano-commit fedora-iot-commit --ref fedora/stable/x86_64/iot
```
The composer will use resources available in your current OS (such as a default operating system version).
Regularly check the status of the build:
```
$ composer-cli compose status
```
When the build finishes, download the image:
```
$ composer-cli compose image ${IMAGE_UUID}
```
The downloaded image is basically an OSTree repository packed into a tarball. When you extract the archived content, you will notice that one ref is referencing the checksum of a commit. You can find it inside the _refs/heads/_ directory.
#### Publishing the Customized Image with Pulp
All commands shown in this section are executed on the main admin VM (Fedora IoT 35 OS).
##### Before you begin:
* Ensure that you have installed Pulp and the Pulp CLI for managing OSTree repositories:
```
$ python3 -m venv venv && source venv/bin/activate
$ pip install pulp-cli-ostree
```
* Then [configure][9] the reference to the Pulp server:
```
$ pulp config create && pulp status
```
* Now configure a proxy server or SSH port forwarding to enable network communication between the VM and Pulp. Ensure that you can ping the Pulp server from the VM.
* * *
First, create a new OSTree repository:
```
$ pulp ostree repository create --name fedora-iot
```
The following command will import the tarball created in the previous section into Pulp:
```
$ pulp ostree repository import-commits --name fedora-iot --file ${IMAGE_TARBALL_C1} --repository_name repo
```
Publish the parsed commit as a remote OSTree repository hosted by Pulp:
```
$ pulp ostree distribution create --name fedora-iot --base-path fedora-iot --repository fedora-iot
```
Try to fetch the commit checksum from the ref:
```
$ curl http://${PULP_BASE_ADDR}/pulp/content/pulp-fedora-iot/refs/heads/fedora/stable/x86_64/iot
```
#### Distributing the Customized Image to an Edge Device
The Edge device can be another VM or a real device running Fedora IoT.
All commands shown in this section are executed on an Edge device (Fedora IoT 35 OS).
##### Before you begin:
* Configure a proxy server or SSH port forwarding to enable network communication between an Edge device and Pulp. Ensure that you can ping the Pulp server from the Edge device. 
* Ensure that the Edge device is accessible with SSH:
```
$ systemctl is-active sshd
```
* * *
The nano package should NOT come pre-installed with the official bare Fedora IoT 35 image. Verify that by attempting to run _nano_ inside your terminal.
In Fedora IoT, updates are retrieved from the URL defined in **/etc/ostree/remotes.d/fedora-iot.conf**. This file can be modified manually or by adding a new remote repository. Learn more at [Adding and Removing Remote Repositories][10].
You can automate the upgrade procedure with an upgrade policy that is configured at the beginning of deployment. This is done by writing a kickstart file that boots an edge device into a headless state. However, for demonstrative purposes, let's act like a villain and update the aforementioned configuration file manually to have the following content:
```
[remote "fedora-iot"]
url=http://${PULP_BASE_ADDR}/pulp/content/pulp-fedora-iot/refs/heads/fedora/stable/x86_64/iot
gpg-verify=false
ref=fedora/stable/x86_64/iot
```
Do not forget to replace the variable _${PULP_BASE_ADDR}_ with a valid base path to the Pulp server.
The following command stages the upgrade and shows which packages are going to be installed:
```
$ rpm-ostree upgrade
```
Reboot the edge device:
```
$ systemctl reboot
```
_…rebooting…_
Log in to the edge VM via ssh, and check the presence of the nano package that comes from Pulp:
```
$ nano
```
**Done! You have successfully distributed a customized Fedora IoT image via Pulp!**
In case of any questions, do not hesitate to reach out to us at [https://pulpproject.org/help][11].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/updating-edge-devices-with-ostree-and-pulp/
作者:[lubosmj][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/lubosmj/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2022/04/updating_edge_devices-816x345.jpg
[2]: https://unsplash.com/@theshubhamdhage?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[3]: https://unsplash.com/s/photos/upload-network?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[4]: https://ostree.readthedocs.io/en/latest/
[5]: https://coreos.github.io/rpm-ostree/
[6]: https://pulpproject.org/
[7]: https://fedoramagazine.org/wp-content/uploads/2022/04/pulp101-simplified-overview.png
[8]: https://github.com/pulp/pulp_ostree
[9]: https://docs.pulpproject.org/pulp_cli/configuration/
[10]: https://docs.fedoraproject.org/en-US/iot/rebasing/#_adding_and_removing_remote_repositories
[11]: https://pulpproject.org/help/#pulp-community-discourse
