Merge pull request #25 from LCTT/master

update from LCTT
perfiffer 2021-11-24 17:50:47 +08:00 committed by GitHub
commit 13360fbcaa
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
84 changed files with 9350 additions and 2943 deletions


@@ -0,0 +1,176 @@
[#]: collector: (lujun9972)
[#]: translator: (FigaroCao)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14011-1.html)
[#]: subject: (Using Powershell to automate Linux, macOS, and Windows processes)
[#]: via: (https://opensource.com/article/20/2/devops-automation)
[#]: author: (Willy-Peter Schaub https://opensource.com/users/wpschaub)
Using PowerShell to automate Linux, macOS, and Windows processes
======
> Automation is key to DevOps, but can everything be automated?
![](https://img.linux.net.cn/data/attachment/album/202111/23/123000eexe7iez7wsew72e.jpg)
Automation takes over processes that are manual, laborious, and error-prone, replacing the engineers who perform tasks by hand with computers that run automation scripts. Everyone agrees that manual processes are the enemy of a healthy DevOps culture. Some argue that automation is a bad thing because it replaces hard-working engineers, while others realize that it improves consistency, reliability, and efficiency, saves time, and (most importantly) frees engineers to work smarter.
> "DevOps is not just automation or infrastructure as code." — [Donovan Brown][2]
Having worked with automated processes and toolchains since the early 1980s, I always flinch when I hear or read the recommendation to "automate everything." While it is technically possible to automate everything, automation is complex and comes at a price in development, debugging, and maintenance. If you have ever revived a long-dormant Azure Resource Manager (ARM) template or a treasured maintenance script written long ago, expecting it to still run flawlessly months or years later, you will understand that automation, like any other code, is brittle and needs continuous maintenance and care.
So, what should you automate, and when?
* When you have performed a process manually more than once or twice
* When you need to perform a process regularly and continuously
* Automate everything automatable
More importantly, what should you not automate?
* Don't automate one-off processes; the investment is not worthwhile, unless you will reuse them as reference documentation and validate them regularly
* Don't automate highly volatile processes; it is too complex and expensive
* Don't automate broken processes; fix them before you automate them
As an example, my team uses our common collaboration and engineering system and continuously monitors the activity of hundreds of users. If a user has been inactive for three months or longer and has been assigned an expensive license, we reassign that user a less feature-rich but free license.
As shown in Figure 1, this is not a technically challenging process. It is a mind-numbing and error-prone process, especially when you are context-switching between it and other development and operations tasks.
![Manual process to switch user license][3]
*Figure 1: The manual process to switch a user license*
Incidentally, here is an example of a value stream map, created in three simple steps:
1. Visualize all activities: list users, filter users, reset licenses.
2. Identify the stakeholders, namely the operations and licensing teams.
3. Measure:
* Total lead time (TLT) = 13 hours
* Total cycle time (TCT) = 1.5 hours
* Total efficiency percentage = TCT/TLT × 100 = 11.5%
If you hang copies of these visualizations in high-traffic, highly visible areas, such as your team's discussion area, the cafeteria, or on the way to the restroom, you will trigger lots of discussion and spontaneous feedback. For example, looking at it visually, it is obvious that the manual tasks are a waste, caused mostly by the long process wait times.
Let's explore a simple PowerShell script that automates the process, as shown in Figure 2, reducing the total lead time from 13 hours to 4 hours plus 60 seconds, and raising the overall efficiency from 11.5% to 12.75%.
![Semi-automated PowerShell-based process to switch user license][4]
*Figure 2: The semi-automated PowerShell-based process to switch a user license*
[PowerShell][5] is an open source, task-based scripting language. It can be found on [GitHub][6]. It is built on .NET and lets you automate Linux, macOS, and Windows processes. Users with a development background, especially C#, will enjoy the full benefits of PowerShell.
The PowerShell script example below communicates with [Azure DevOps][7] via its service [REST API][8]. The script combines the manual list-users and filter-users tasks from Figure 1, identifies all the users in the Demo organization who have not been active for two months and hold a Basic or the more expensive Basic + Test Plans license, and outputs the users' details to the console. Simple!
First, set up the authentication header and other variables that will be used later with this initialization script:
```
param(
[string] $orgName = "DEMO",
[int] $months = "-2",
[string] $patToken = "<PAT>"
)
# Basic authentication header using the personal access token (PAT)
$basicAuth = ("{0}:{1}" -f "",$patToken)
$basicAuth = [System.Text.Encoding]::UTF8.GetBytes($basicAuth)
$basicAuth = [System.Convert]::ToBase64String($basicAuth)
$headers = @{Authorization=("Basic {0}" -f $basicAuth)}
# REST API Request to get all entitlements
$request_GetEntitlements = "https://vsaex.dev.azure.com/" + $orgName + "/_apis/userentitlements?top=10000&api-version=5.1-preview.2";
# Initialize data variables
$members = New-Object System.Collections.ArrayList
[int] $count = 0;
[string] $basic = "Basic";
[string] $basicTest = "Basic + Test Plans";
```
Next, query all the entitlements with this script to identify the inactive users:
```
# Send the REST API request and initialize the members array list.
$response = Invoke-RestMethod -Uri $request_GetEntitlements -headers $headers -Method Get
$response.items | ForEach-Object { $members.add($_.id) | out-null }
# Iterate through all user entitlements
$response.items | ForEach-Object {
$name = [string]$_.user.displayName;
$date = [DateTime]$_.lastAccessedDate;
$expired = Get-Date;
$expired = $expired.AddMonths($months);
$license = [string]$_.accessLevel.AccountLicenseType;
$licenseName = [string]$_.accessLevel.LicenseDisplayName;
$count++;
if ( $expired -gt $date ) {
# Ignore users who have NEVER or NOT YET ACTIVATED their license
if ( $date.Year -eq 1 ) {
Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
}
# Look for BASIC license
elseif ( $licenseName -eq $basic ) {
Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
}
# Look for BASIC + TEST license
elseif ( $licenseName -eq $basicTest ) {
Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
}
}
}
```
When you run the script, you get the following output, which you can forward to the licensing team to reset the user licenses:
```
**INACTIVE** Name: Demo1 Last Access: 2019/09/06 11:01:26 AM License: Basic
**INACTIVE** Name: Demo2 Last Access: 2019/06/04 08:53:15 AM License: Basic
**INACTIVE** Name: Demo3 Last Access: 2019/09/26 12:54:57 PM License: Basic
**INACTIVE** Name: Demo4 Last Access: 2019/06/07 12:03:18 PM License: Basic
**INACTIVE** Name: Demo5 Last Access: 2019/07/18 10:35:11 AM License: Basic
**INACTIVE** Name: Demo6 Last Access: 2019/10/03 09:21:20 AM License: Basic
**INACTIVE** Name: Demo7 Last Access: 2019/10/02 11:45:55 AM License: Basic
**INACTIVE** Name: Demo8 Last Access: 2019/09/20 01:36:29 PM License: Basic + Test Plans
**INACTIVE** Name: Demo9 Last Access: 2019/08/28 10:58:22 AM License: Basic
```
If you automate the last step as well, automatically setting the user licenses to a free Stakeholder license, as shown in Figure 3, you can further reduce the total lead time to 65 seconds and raise the overall efficiency to 77%.
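As a sketch of what that last step might look like — assuming the `$orgName` and `$headers` variables from the script above, and a hypothetical `$userId` taken from the entitlement data — the user entitlements REST API accepts a JSON Patch document (verify the exact `api-version` against your environment):

```
# Hypothetical sketch: downgrade a single user to the free Stakeholder license.
# Reuses $orgName and $headers from the earlier script; $userId is illustrative.
$request_PatchEntitlement = "https://vsaex.dev.azure.com/" + $orgName + "/_apis/userentitlements/" + $userId + "?api-version=5.1-preview.2"

$body = '[{"from": "", "op": "replace", "path": "/accessLevel", "value": {"accountLicenseType": "stakeholder"}}]'

Invoke-RestMethod -Uri $request_PatchEntitlement -Headers $headers -Method Patch -Body $body -ContentType "application/json-patch+json"
```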
![Fully automated PowerShell-based process to switch user license][9]
*Figure 3: The fully automated PowerShell-based process to switch a user license.*
The core value of this PowerShell script is not just the ability to _automate_ the process, but to perform it _regularly_, _consistently_, and _quickly_. A further improvement would be to trigger the script weekly or daily with a scheduler such as Azure Pipelines, but I will leave programmatic license resets and script scheduling for a future article.
Here is a graph to visualize the progress:
![Graph to visualize progress][10]
*Figure 4: Measure, measure, measure*
I hope you enjoyed this brief introduction to automation, PowerShell, REST APIs, and value stream mapping. Please share your thoughts and feedback in the comments.
------
via: https://opensource.com/article/20/2/devops-automation
Author: [Willy-Peter Schaub][a]
Topic selection: [lujun9972][b]
Translator: [FigaroCao](https://github.com/FigaroCao)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/wpschaub
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: http://www.donovanbrown.com/post/what-is-devops
[3]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_1.png (Manual process to switch user license)
[4]: https://opensource.com/sites/default/files/uploads/the_devops_quest_to_automate_everything_automatable_using_powershell_picture_2.png (Semi-automated PowerShell-based process to switch user license)
[5]: https://opensource.com/article/19/8/variables-powershell
[6]: https://github.com/powershell/powershell
[7]: https://docs.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops
[8]: https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-5.1
[9]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_3.png (Fully automated PowerShell-based process to switch user license)
[10]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_4.png (Graph to visualize progress)


@@ -0,0 +1,334 @@
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13983-1.html"
[#]: subject: "Scaling a GraphQL Website"
[#]: via "https://theartofmachinery.com/2020/06/29/scaling_a_graphql_site.html"
[#]: author: "Simon Arneaud https://theartofmachinery.com"
Scaling a GraphQL Website
======
![](https://img.linux.net.cn/data/attachment/album/202111/14/113411shrp6jpp3a8x1cjq.jpg)
I normally write abstractly about the work I do for others (for obvious reasons), but I have been given permission to talk publicly about one site: [Vocal][1], which I did some SRE work on last year. I actually gave a talk about it at the [GraphQL Sydney meetup back in February][2], but this blog post got delayed a bit.
Vocal is a GraphQL-based website that was getting traction and ran into scalability problems, and I was brought in to solve them. This post describes that work. Obviously, you will find it useful if you are scaling a GraphQL site, but most of it is about the technical problems that simply have to be solved once a site gets enough traffic. If you are interested in site scalability, you might want to first read [my recent series of posts about scalability][3].
### Vocal
![][4]
Vocal is a blogging platform with content including journals, film reviews, commentary on articles, recipes, professional and amateur photography, beauty and lifestyle tips, and poetry — and, of course, plenty of cute cat and dog photos.
![][5]
What makes Vocal different is that it lets people get paid for producing work their audience is interested in. Authors earn a small amount each time a page of theirs is viewed, and they can also receive donations from other users. Plenty of professionals showcase their work on the platform, but for most regular users, Vocal is simply a hobby that happens to earn some pocket money as a bonus.
Vocal is the product of New Jersey startup ~~[Jerrick Media][6]~~ (update: Jerrick Media has renamed itself Creatd and is listed on Nasdaq). Development started in 2015 in collaboration with [Thinkmill][7], a medium-sized Sydney software development consultancy that specializes in JavaScript, React, and GraphQL development.
### Spoilers
Unfortunately, I have been told I cannot give specific traffic numbers for legal reasons, but public information gives an idea. Alexa ranks all websites by traffic. Here is the Alexa ranking chart I showed in my talk: from November 2019 to February of this year, Vocal's traffic grew to rank #5,567 globally.
![Vocal's global ranking climbing from #9,574 to #5,567 between last November and this February][8]
It is normal for the curve to slow down, because it takes more and more traffic to win each position. Vocal now ranks around #4,900. Obviously there is a long way to go, but that is not at all shabby for a startup; most startups would love to swap rankings with Vocal.
Shortly after the site upgrade, Creatd ran a marketing campaign that doubled the traffic. On the technical side, all we had to do was watch the numbers go up on the dashboards. In the nine months since launch, only two platform issues have needed staff intervention: [the once-every-five-years AWS RDS certificate rotation in March][9], and a Terraform bug hit during one app rollout. As an SRE, I am happy that Vocal does not need much platform work to keep running. Update: the system also survived the 2020 US election without incident.
Here is an overview of the technical content in this post:
* Technical and historical background
* Migrating from MongoDB to Postgres
* Revamping the deployment infrastructure
* Making the app compatible with the scaling measures
* Making HTTP caching pull its weight
* A few other performance tweaks
### Some background
Thinkmill built a website with [Next.js][10] (a React-based web framework) that talks to a GraphQL API provided by [Keystone][11] in front of MongoDB. Keystone is a GraphQL-based headless CMS library: you define a schema in JavaScript, hook it up to some data store, and get an automatically generated GraphQL API for data access. It is a free and open source software project, commercially backed by Thinkmill.
#### Vocal V2
The first version of Vocal got traction, found a user base that loved it, and kept growing, so eventually Creatd engaged Thinkmill to help develop V2, which launched successfully last September. Creatd staff avoided the [second-system effect][12] by generally making changes based on user feedback, so they were [mostly UI and feature changes that I won't go into][13]. Instead, I will talk about my part: making the new site more robust and scalable.
Disclaimer: I am grateful to have worked with Creatd and Thinkmill on Vocal, and grateful that they let me publish this story, but [I am still an independent consultant][14]; I was not paid to write this post, nor even asked to write it, and this is still my own personal blog.
### Migrating the database
Thinkmill had hit several scalability problems with MongoDB, so it decided to upgrade to Keystone 5 to take advantage of its new Postgres support.
If you have been in tech long enough to remember the "NoSQL" marketing of the late 00s, this might sound funny. A big part of the NoSQL message was that relational (SQL) databases like Postgres are not as scalable as "web-scale" NoSQL databases like MongoDB. Technically that is true, but the scalability of NoSQL databases comes from trade-offs in the variety of queries they can handle efficiently. Simple non-relational databases (such as document and key-value databases) have their place, but when one is used as the general-purpose backend for an app, the app usually outgrows the database's query limits long before it outgrows the theoretical scaling limits of a relational database. Most of Vocal's original database queries worked fine on MongoDB, but over time more and more queries needed special tricks to work at all.
In terms of technical requirements, Vocal is very similar to Wikipedia, one of the biggest websites in the world — and Wikipedia runs on MySQL (well, its fork MariaDB). Sure, that requires some serious engineering, but for the foreseeable future I do not see relational databases posing any serious threat to Vocal's scaling.
I did a comparison: the managed AWS RDS Postgres instance costs less than a fifth of the old MongoDB instance, yet the Postgres instance's CPU usage stays below 10%, even though it serves more traffic than the old site did. That is mostly because some important queries were simply never efficient under the document database architecture.
The migration deserves a blog post of its own, but essentially the Thinkmill developers built an [ETL pipeline][15], with [MoSQL][16] doing the heavy lifting. Because Keystone's Postgres support was still fairly basic, but it is a FOSS project, I was able to fix the problems I hit with SQL generation and performance. For this kind of thing, I always recommend Markus Winand's SQL writing: [Use the Index, Luke][17] and [Modern SQL][18]. His posts are friendly and accessible even to people who do not deal with SQL much for the time being, yet he has most of the theory you need. If you still have problems, a good book focused on SQL performance will help.
### Platform
#### Architecture
V1 was a couple of Node.js apps running on a single virtual private server (VPS) behind Cloudflare (acting as a CDN). I like to treat avoiding over-engineering as a high priority, so I give this architecture a thumbs up. However, by the time V2 development started, it was obvious Vocal had outgrown this simple architecture. It did not give the Thinkmill developers many options when handling big traffic spikes, and it made safe, zero-downtime deployments hard.
Here is the new architecture for V2:
![Vocal V2's technical architecture: requests come in through the CDN and then an AWS load balancer, which distributes traffic between two apps, "Platform" and "Website". "Platform" is a Keystone app that stores data in Redis and Postgres.][19]
Basically it is just the two Node.js apps, replicated behind a load balancer — very simple. Some people assume a scalable architecture must be far more complicated than that, but I have worked on sites several orders of magnitude bigger than Vocal that were still just services replicated behind load balancers, with DB backends. If you think about it, if the platform architecture needs to get more and more complicated as the site grows, it is not really "scalable." Website scalability is mostly about fixing the implementation details that break scaling.
Vocal's architecture might need some additions if traffic grows enough, but the main reason it would get more complicated is new features. For example, if for some reason Vocal one day needs to handle real-time geospatial data, that is a technically very different beast from blog posts, so I would expect architectural changes for it. Most complexity in big-site architecture comes from complicated features.
Not knowing what the future architecture should look like is normal, so I always recommend starting as simple as you can. Fixing a simple architecture is easier and cheaper than fixing a complex one. Also, an unnecessarily complex architecture is more likely to have bugs, and those bugs will be harder to debug.
By the way, Vocal happens to be split into two apps, but that is not a big deal. A common scaling mistake is to prematurely split an app into smaller services in the name of scalability, and to split the app in the wrong places, causing more scalability problems down the road. Vocal would scale okay as a monolith, but its split happens to work well, too.
#### Infrastructure
Thinkmill has some people with AWS experience, but it is primarily a development shop and needed something lower-touch than the previous Vocal deployment. I ended up deploying the new Vocal on AWS Fargate, a relatively new backend for Elastic Container Service (ECS). In the past, many people who wanted ECS to be a simple "run my Docker container as a managed service" product were disappointed that they still had to build and manage their own cluster of servers. With ECS Fargate, AWS manages the cluster. It supports running Docker containers with the basic features: replication, health checks, rolling updates, autoscaling, and simple alerts.
A good alternative would be a platform-as-a-service (PaaS) such as App Engine or Heroku. Thinkmill already uses them for simple projects, but usually needs more flexibility on other projects. Plenty of much bigger sites run on PaaS, but Vocal's scale meant a custom cloud deployment made economic sense.
Another obvious alternative is Kubernetes. Kubernetes has more features than ECS Fargate, but it costs a lot more — both in resource overhead and in the staffing needed for maintenance (such as regular node upgrades). Generally, I do not recommend Kubernetes to any place without dedicated DevOps staff. Fargate has the features Vocal needs and lets Thinkmill and Creatd concentrate on site improvements instead of toiling over infrastructure.
Another option is a "serverless" functions product, such as AWS Lambda or Google Cloud Functions. They are great for handling services with very low or highly irregular traffic, but ECS Fargate's autoscaling is enough for Vocal's backend. Another plus of these products is that they let developers deploy things in a cloud environment without needing to know much about that environment. The trade-off is that the serverless product becomes tightly coupled to the development, testing, and debugging processes. Thinkmill already had enough AWS expertise in-house to manage a Fargate deployment, and any developer who knows how to build a simple Hello World Node.js app can work on Vocal without knowing anything about serverless functions or Fargate.
An obvious downside of ECS Fargate is vendor lock-in. However, avoiding vendor lock-in is a trade-off, just like avoiding downtime. If you are worried about migration, there is no point spending more on platform independence than a migration would cost. In Vocal, the total amount of Fargate-dependent code is less than 500 lines of [Terraform][23]. Most importantly, the Vocal app code itself is platform-agnostic: it can run on a normal developer's machine, then get packaged into a Docker container that runs almost anywhere Docker containers run, including ECS Fargate.
Another downside of Fargate is setup complexity. Like most things in AWS, it lives in a world of VPCs, subnets, and IAM policies. Fortunately, that kind of thing is relatively static (unlike a server cluster, which needs maintenance).
### Making a scalable app
There is a big pile of things you have to get right if you want to run an app at scale without pain. Things get easier if you follow [the Twelve-Factor App design][22], so I will not rehash it here.
There is no point building a "scalable" system if the staff cannot scale to operate it — like bolting a jet engine onto a unicycle. A big part of making Vocal scalable was setting up things like CI/CD and [infrastructure as code][23]. Likewise, some deployment ideas were not worth considering because they would push production too far from development (see [factor ten of the Twelve-Factor App][24]). Any difference between production and development slows down app development and (in practice) will eventually lead to bugs.
### Caching
Caching is a big topic — I once gave [a talk on HTTP caching][25] that was relatively basic. Here I will talk about the importance of caching for GraphQL.
First, an important warning: whenever you hit a performance problem, you might think, "Can I put this value into a cache so it is faster the next time?" **Microbenchmarks will _always_ tell you: yes.** However, thanks to problems like cache coherency, sprinkling caches everywhere tends to make the whole system **slower**. Here is my checklist for caching:
1. Do you need caching to solve your performance problem?
2. Really think about it again (non-cached performance tends to be more robust)
3. Can you solve the problem by improving an existing cache?
4. If all else fails, then consider adding a new cache
In a web system, one cache you are practically always using is the HTTP caching system, so it is a good idea to try exploiting HTTP caching before adding extra caches. I will focus on that in this post.
Another common trap is using a hash map or something else inside the app for caching. [It works great in local development but performs badly when you scale out.][26] The best thing is to use an explicit caching library that supports pluggable backends like Redis or Memcached.
#### The basics
There are two types of caches in the HTTP spec: private and public. Private caches do not share data between multiple users — in practice, that means the user's browser cache. The rest are public caches. They include caches under your control (such as CDNs, or servers like Varnish or Nginx) and ones that are not (proxies). Proxy caches are rare in today's HTTPS world, but some corporate networks have them.
![][27]
Cache lookup keys are normally based on URLs, so caches are more robust if you follow the rule "same content, same URL; different content, different URL" — that is, give every page a canonical URL and avoid "clever" tricks like returning different content from the same URL. Obviously, this has implications for GraphQL API endpoints (which I will discuss later).
Your servers can take custom configuration, but the main way to configure HTTP caching is by setting HTTP headers on web responses. The most important header is `cache-control`. The following line says that all caches may cache the page for up to 3600 seconds (one hour):
```
cache-control: max-age=3600, public
```
For user-specific pages (such as a user settings page), it is important to use `private` instead of `public`, to tell public caches not to store the response and serve it to other users.
Another common header is `vary`, which tells caches that the response varies based on something other than the URL. (In effect, it adds the listed HTTP headers to the cache key alongside the URL.) It is a very blunt tool, which is why good URL structure is preferable wherever possible, but one important use case is telling browsers that the response depends on the login cookie, so that the page gets updated on login or logout:
```
vary: cookie
```
If a page varies with login status, you need `cache-control: private` (and `vary: cookie`), even on the public, logged-out version of the page, to make sure responses do not get mixed up.
Other useful headers include `etag` and `last-modified`, but I will not cover them here. You may still see some ancient HTTP headers such as `expires` and `pragma: cache`. They were made obsolete by HTTP/1.1 back in 1997, so I only use them when I want to disable caching and I am feeling paranoid.
#### Client headers
Less well known is that the HTTP spec allows `cache-control` headers in client requests, to reduce the cache time and get a fresher response. Unfortunately, most browsers do not seem to support `max-age` greater than 0, but `no-cache` can be useful if you sometimes need a fresh response after an update.
#### HTTP caching and GraphQL
As above, the normal cache key is the URL. But GraphQL APIs typically use a single endpoint (say, `/api/`). If you want GraphQL queries to be cacheable, you need the query parameters to appear in the URL path, such as `/api/?query={user{id}}&variables={"x":99}` (ignoring URL escaping). The trick is to configure your GraphQL client to use HTTP GET requests for queries (for example, [set `apollo-link-http` to `useGETForQueries`][28]).
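As a rough sketch (the option name comes from the `apollo-link-http` documentation linked above; the endpoint path is a placeholder):

```
import { createHttpLink } from "apollo-link-http";

// Send non-mutation queries as HTTP GETs so caches can key on the URL
const link = createHttpLink({
  uri: "/api/",
  useGETForQueries: true,
});
```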
Mutations cannot be cached, so they still need to use HTTP POST requests. With POST requests, caches only see `/api/` as the URL path, but caches will simply refuse to cache POST requests outright. Remember: GET for non-mutation queries (i.e., idempotent ones), POST for mutations (i.e., non-idempotent ones). There is one case where you might want to avoid GET queries: if the query variables contain sensitive information. URLs frequently end up in log files, browser history, and chats, so sensitive information in URLs is usually a bad idea. Things like authentication should be done as non-cacheable mutations anyway, so this is a special case, but one worth remembering.
Unfortunately, there is a catch: GraphQL queries tend to be much bigger than REST API URLs. If you naively switch to GET-based queries, you get some really long URLs that easily exceed the 2,000-byte limit that some popular browsers and servers still will not accept. One solution is to send some kind of query ID instead of the whole query — something like `/api/?queryId=42&variables={"x":99}`. The Apollo GraphQL server supports two ways to do this:
One way is to [extract all the GraphQL queries from the code and build a lookup table shared between the server side and the client side][29]. One downside is that this makes the build process more complicated; another is that it couples the client project to the server project, which works against one of GraphQL's main selling points. Yet another downside is that versions X and Y of the code might recognize different sets of queries, which becomes a problem because 1) the replicated apps will serve multiple versions during an update rollout or rollback, and 2) clients might use cached JavaScript even after you upgrade or downgrade the server.
The other way is what Apollo GraphQL markets as [Automatic Persisted Queries (APQs)][30]. With APQs, the query ID is a hash of the query. The client makes a request to the server, referring to the query by its hash. If the server does not recognize the query, the client sends the full query in a POST request, and the server saves the hash of the query so it can be recognized next time.
![][31]
#### HTTP caching and Keystone 5
As above, Vocal uses Keystone 5 for generating its GraphQL API, and Keystone 5 works together with the Apollo GraphQL server. So how did we set the caching headers?
Apollo supports cache hints on GraphQL schemas. The neat thing is that Apollo gathers all the cache hints for everything a query touches, then automatically calculates the appropriate overall cache header values. For example, take this query:
```
query userAvatarUrl {
authenticatedUser {
name
avatar_url
}
}
```
If the `name` has a maximum age of one day and the `avatar_url` has a maximum age of one hour, the overall cache maximum age will be the minimum, i.e., one hour. `authenticatedUser` depends on the login cookie, so it needs a `private` hint, which overrides the `public` on the other fields, so the resulting HTTP header will be `cache-control: max-age=3600,private`.
I [added cache hint support to Keystone lists and fields][32]. Here is a simple example of adding a cache hint to a field, from the to-do list demo in the documentation:
```
const keystone = new Keystone({
name: 'Keystone To-Do List',
adapter: new MongooseAdapter(),
});
keystone.createList('Todo', {
schemaDoc: 'A list of things which need to be done',
fields: {
name: {
type: Text,
schemaDoc: 'This is the thing you need to do',
isRequired: true,
cacheHint: {
scope: 'PUBLIC',
maxAge: 3600,
},
},
},
});
```
#### Another problem: CORS
Frustratingly, Cross-Origin Resource Sharing (CORS) rules conflict with caching in API-based websites.
Before getting into the weeds of the problem, let's skip to the simplest solution: putting the main site and the API on one domain. If your site and API are on the same domain, you do not have to worry about CORS rules (but you might want to consider [restricting cookies][33]). If your API is specifically for your website, this is the simplest solution, and you can happily skip this section.
In Vocal V1, the website (Next.js) and the platform (Keystone GraphQL) apps were on different domains (`vocal.media` and `api.vocal.media`). To protect users from malicious websites, modern browsers do not just let one website interact with another. So, before allowing `vocal.media` to make requests to `api.vocal.media`, the browser would do a "preflight" to `api.vocal.media`. This is an HTTP request using the `OPTIONS` method that essentially asks if the cross-origin resource sharing is allowed. After the preflight passes, the browser makes the original, normal request.
Frustratingly, preflights are per-URL. The browser makes a new `OPTIONS` request for each URL, and the server responds every time. [There is no way for the server to say that `vocal.media` is a trusted origin for all `api.vocal.media` requests][34]. The problem was not so bad when everything was a POST request to one API endpoint, but after giving every query its own GET-able URL, every query got delayed by a preflight. Even more frustratingly, the HTTP spec says `OPTIONS` requests cannot be cached, so you can find all your GraphQL data cached beautifully in a CDN right next to your users, yet browsers still have to make all their preflight requests to the origin server.
There are a few solutions if you cannot just use a single shared domain.
If your API is simple enough, you might be able to exploit the [exceptions to the CORS rules][35].
Some caching servers can be configured to ignore the HTTP spec and cache `OPTIONS` requests anyway — for example, Varnish-based caches and AWS CloudFront. That is not as effective as avoiding the preflights entirely, but it is better than the default.
Another really hacky option is [JSONP][36]. Beware: done wrong, it can create security holes.
#### Making Vocal cache-friendlier
With HTTP caching working at the low level, I needed to make the app exploit it better.
![A typical Vocal page. Most of the page is highly cacheable content, but in the old site practically none of it was, thanks to a little menu in the top right corner.][37]
These pages were generated by server-side rendering (SSR) of React components. The fix was to take all the React components that depend on the login cookie and force them to [render lazily, on the client side only][38]. Now the server returns fully generic pages, with placeholders for the personalized components (such as the login menu bar). When the page loads in the browser, those placeholders are filled in on the client side by calls to the GraphQL API. The generic pages can be safely cached in the CDN.
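With Next.js dynamic imports (the `ssr: false` option is the one documented at the link above; the component path here is hypothetical), the idea looks roughly like this:

```
import dynamic from "next/dynamic";

// Render the login-dependent menu bar on the client only, so the
// server-rendered page stays generic and CDN-cacheable.
const LoginMenu = dynamic(() => import("../components/LoginMenu"), {
  ssr: false,
});
```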
This trick not only improves the cache hit ratio, it also helps improve perceived page load time. Blank screens and spinner animations make us impatient, but once the first content appears, it buys a few hundred milliseconds of distraction. If people click a link to a Vocal post from social media, the main content appears instantly from the CDN, and few will notice that some components do not fully appear until a few hundred milliseconds later.
By the way, another trick for getting the first content in front of users faster is [streaming the render][39], instead of waiting until the whole page is rendered before sending anything. Unfortunately, [Node.js does not support that yet][40].
Splitting up responses to improve cacheability also applies to GraphQL. The ability to query several pieces of data in one request is normally a strength of GraphQL, but if the different parts of a response have very different cacheability, it can be better to split them up. As a simple example, Vocal's pagination component needed to know the current number of pages and the contents of the current page. Originally, the component fetched both in one query, but the total number of pages is a constant across all pages, so I made it a separate query so that it could be cached.
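A sketch of the idea, with hypothetical field names (Vocal's real schema will differ): the page count gets a query of its own, so a single cached response can serve every page.

```
# Highly cacheable: the same answer no matter which page is being viewed
query storyPageCount {
  storiesMeta {
    pageCount
  }
}

# Varies per page; cached separately under its own GET URL
query storyPage($page: Int!) {
  stories(page: $page) {
    id
    title
  }
}
```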
#### The benefits of caching
The obvious benefit of caching is that it reduces load on the backend servers. That is nice. But relying on caching for capacity is dangerous; you still need a backup plan for the day you inevitably drop the cache.
Improved page responsiveness is a better reason to use caching.
Some of the other benefits are less obvious. Spike traffic tends to be highly localized. If someone with lots of social media followers shares a link to a page, Vocal gets a big surge of traffic, but it is mostly to that one page and its assets. That is why caches are good at absorbing the worst traffic spikes: they make the backend traffic patterns relatively smoother and easier for autoscaling to handle.
Another benefit is graceful degradation. Even if the backend is in serious trouble for some reason, the most popular parts of the site can still be served from the CDN cache.
### Other performance tweaks
As I keep saying, the secret to scalability is not making things more complicated. It is making things no more complicated than they need to be, and then thoroughly fixing all the things that break scaling. Scaling Vocal involved lots of little things that will not fit in this post.
One lesson: for the hard-to-debug problems in distributed systems, the hardest part is usually getting the right data so you can understand why things happen. I can think of many times I got stuck and could only "wing it" with guesswork, instead of figuring out how to find the right data. Sometimes that works, but not for the complicated problems.
A related trick is that you can learn a lot by taking live data feeds from each component in a system (even just `tail -F` on the logs), displaying them in different windows, and then clicking around the site in another window. Things like: "Why does toggling this checkbox generate so many DB queries in the backend?"
Here is an example of a fix. Some pages were taking more than a few seconds to render, but only intermittently, and only in the deployed environment with SSR. The monitoring dashboards did not show any CPU usage spikes, and the app was not using disk, so it suggested the app might be waiting on network requests, perhaps requests to the backend. In a development environment, I could watch how the app worked using the [sysstat tools][42] to record CPU/RAM/disk usage, along with the Postgres statement logs and the normal app logs. [Node.js supports probes for tracing HTTP requests][43], usable for example with [bpftrace][44], but for boring reasons they did not work in the development environment, so I found the probes in the source and made a Node.js build with request logging instead. I used [tcpdump][45] to record the network data, and that led me to the problem: a new network connection to "Platform" was being created for every API request the website made. (If that had not worked, I think I would have added request tracing to the app.)
Network connections are fast on a local machine, but non-negligible on real networks. Setting up an encrypted connection (as in production) takes even longer. If you are making lots of requests to the same server (such as an API), it is important to keep the connection open and reuse it. Browsers do that automatically, but Node.js does not by default, because it cannot know whether you are going to make lots of requests — which is why the problem only appeared with SSR. As with many long debugging sessions, the fix was very simple: just configure SSR to [keep connections alive][46], which made a big drop in page render times.
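A minimal sketch using the `agentkeepalive` package linked above (the API host is a placeholder):

```
const https = require("https");
const { HttpsAgent } = require("agentkeepalive");

// Reuse TCP/TLS connections across SSR requests to the API
const keepaliveAgent = new HttpsAgent();

https.get("https://api.example.com/api/", { agent: keepaliveAgent }, (res) => {
  console.log("status:", res.statusCode);
});
```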
If you want to know more about this stuff, I highly recommend reading the book [High Performance Browser Networking][47] (readable online for free) and following [Brendan Gregg's guides][48].
### What about your site?
There is actually a lot more we could have done to speed up Vocal, but we did not do it all. That is because there is a big difference between doing SRE work as a consultant at a startup and doing it as a permanent employee at a big company. We had goals, budgets, and release dates that were tight, but in the end we got a much-improved site that gives users what they want.
Likewise, your site has its own goals and is probably quite different from Vocal. However, I hope this post and its links give you at least some useful ideas for making something better for your users.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/06/29/scaling_a_graphql_site.html
Author: [Simon Arneaud][a]
Topic selection: [lujun9972][b]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://vocal.media
[2]: https://www.meetup.com/en-AU/GraphQL-Sydney/events/267681845/
[3]: https://theartofmachinery.com/2020/04/21/what_is_high_traffic.html
[4]: https://theartofmachinery.com/images/scaling_a_graphql_site/vocal1.png
[5]: https://theartofmachinery.com/images/scaling_a_graphql_site/vocal2.png
[6]: https://jerrick.media
[7]: https://www.thinkmill.com.au/
[8]: https://theartofmachinery.com/images/scaling_a_graphql_site/alexa.png
[9]: https://aws.amazon.com/blogs/database/amazon-rds-customers-update-your-ssl-tls-certificates-by-february-5-2020/
[10]: https://github.com/vercel/next.js
[11]: https://www.keystonejs.com/
[12]: https://wiki.c2.com/?SecondSystemEffect
[13]: https://vocal.media/resources/vocal-2-0
[14]: https://theartofmachinery.com/about.html
[15]: https://en.wikipedia.org/wiki/Extract,_transform,_load
[16]: https://github.com/stripe/mosql
[17]: https://use-the-index-luke.com/
[18]: https://modern-sql.com/
[19]: https://theartofmachinery.com/images/scaling_a_graphql_site/architecture.svg
[20]: https://aws.amazon.com/fargate/
[21]: https://www.terraform.io/docs/providers/aws/r/ecs_task_definition.html
[22]: https://12factor.net/
[23]: https://theartofmachinery.com/2019/02/16/talks.html
[24]: https://12factor.net/dev-prod-parity
[25]: https://www.meetup.com/en-AU/Port80-Sydney/events/lwcdjlyvjblb/
[26]: https://theartofmachinery.com/2016/07/30/server_caching_architectures.html
[27]: https://theartofmachinery.com/images/scaling_a_graphql_site/http_caches.svg
[28]: https://www.apollographql.com/docs/link/links/http/#options
[29]: https://www.apollographql.com/blog/persisted-graphql-queries-with-apollo-client-119fd7e6bba5
[30]: https://www.apollographql.com/blog/improve-graphql-performance-with-automatic-persisted-queries-c31d27b8e6ea
[31]: https://theartofmachinery.com/images/scaling_a_graphql_site/apq.png
[32]: https://www.keystonejs.com/api/create-list/#cachehint
[33]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#Define_where_cookies_are_sent
[34]: https://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0236.html
[35]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests
[36]: https://en.wikipedia.org/wiki/JSONP
[37]: https://theartofmachinery.com/images/scaling_a_graphql_site/cachablepage.png
[38]: https://nextjs.org/docs/advanced-features/dynamic-import#with-no-ssr
[39]: https://medium.com/the-thinkmill/progressive-rendering-the-key-to-faster-web-ebfbbece41a4
[40]: https://github.com/vercel/next.js/issues/1209
[41]: https://linux.die.net/man/1/tail
[42]: https://github.com/sysstat/sysstat/
[43]: http://www.brendangregg.com/blog/2016-10-12/linux-bcc-nodejs-usdt.html
[44]: https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html
[45]: https://danielmiessler.com/study/tcpdump/
[46]: https://www.npmjs.com/package/agentkeepalive
[47]: https://hpbn.co/
[48]: http://www.brendangregg.com/


@@ -0,0 +1,99 @@
[#]: subject: (Set and use environment variables in FreeDOS)
[#]: via: (https://opensource.com/article/21/6/freedos-environment-variables)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-13995-1.html)
Set and use environment variables in FreeDOS
======
> Environment variables are helpful in almost every command-line environment, and that naturally includes FreeDOS.
![](https://img.linux.net.cn/data/attachment/album/202111/18/152155twzasgwwrzsmmvs2.jpg)
A useful feature in almost every command-line environment is _environment variables_. Some of these variables let you control the behavior or features of the command line; other variables simply let you store data you might need later. Environment variables are also used in FreeDOS.
### Variables on Linux
On Linux, you may already be familiar with some of the important environment variables. In the [Bash][2] shell on Linux, the `PATH` variable identifies where the shell can find programs and commands. For example, on my Linux system, my `PATH` value looks like this:
```
bash$ echo $PATH
/home/jhall/bin:/usr/lib64/ccache:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
```
That means when I type the name of a command like `cat`, Bash checks each of the directories I listed in my `PATH` variable, in order:
1. `/home/jhall/bin`
2. `/usr/lib64/ccache`
3. `/usr/local/bin`
4. `/usr/local/sbin`
5. `/usr/bin`
6. `/usr/sbin`
In my case, the `cat` command is in the `/usr/bin` directory, so the full path is `/usr/bin/cat`.
To set an environment variable on Linux, you type the name of a variable, then an equals sign (`=`), then the value to store in the variable. To reference that value later with Bash, you type a dollar sign (`$`) before the variable name:
```
bash$ var=Hello
bash$ echo $var
Hello
```
### Variables on FreeDOS
On FreeDOS, environment variables provide a similar function. Some variables control the behavior of the DOS system, and others store temporary values.
To set an environment variable on FreeDOS, you need to use the `SET` keyword. FreeDOS is _case-insensitive_, so you can type it in uppercase or lowercase letters. Then set the variable as you would on Linux, with the variable name, an equals sign (`=`), and the value you want to store.
However, referencing, or _expanding_, the value of an environment variable in FreeDOS is quite different from the method you use on Linux. In FreeDOS, you do not use the dollar sign (`$`) to reference a variable. Instead, you wrap the variable name in percent signs (`%`).
![Use % (not $) to reference a variable's value][3]
Using the percent signs before and after the name is important, because that is how FreeDOS knows where the variable name begins and ends. That is very useful, because it lets you reference a variable's value while immediately appending (or prepending) other text to the value. Let me demonstrate this by setting a new variable called `reply` with the value `yes`, then referencing that value with "11" before it and "22" after it:
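In a FreeDOS session, that demonstration would go something like this (a reconstructed transcript; the screenshot below shows the real thing):

```
C:\>SET REPLY=yes

C:\>ECHO 11%REPLY%22
11yes22
```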
![Set and reference an environment variable][5]
Because FreeDOS is case-insensitive, you can use uppercase or lowercase variable names, as well as the `SET` keyword. However, the variable's value will use the letter case you typed on the command line.
Finally, you can see all of the environment variables currently defined in FreeDOS. The `SET` keyword without any arguments displays all the variables, so you can see everything at a glance:
![Show all variables at once with SET][6]
Environment variables are a useful staple of command-line environments, and the same applies to FreeDOS. You can set your own variables to serve your own needs, but be careful about changing some of the variables that FreeDOS uses. These can change the behavior of your running FreeDOS system:
* `DOSDIR`: the location of the FreeDOS installation directory, usually `C:\FDOS`
* `COMSPEC`: the current instance of the FreeDOS shell, usually `C:\COMMAND.COM` or `%DOSDIR%\BIN\COMMAND.COM`
* `LANG`: the user's preferred language
* `NLSPATH`: the location of the system's language files, usually `%DOSDIR%\NLS`
* `TZ`: the system's time zone
* `PATH`: a list of directories where FreeDOS can find programs to run, such as `%DOSDIR%\BIN`
* `HELPPATH`: the location of the system's documentation files, usually `%DOSDIR%\HELP`
* `TEMP`: a temporary directory where FreeDOS stores the output of each command, as it "pipes" data between programs on the command line
* `DIRCMD`: a variable that controls how the `DIR` command displays files and directories, usually set to `/OGNE` to order (`O`) the contents by grouping (`G`) directories first, then sorting entries by name (`N`) and extension (`E`)
If you happen to change any of the variables that FreeDOS uses "internally," you could prevent some parts of FreeDOS from working properly. If that happens, simply reboot your computer, and FreeDOS will reset the variables to the system defaults.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/freedos-environment-variables
Author: [Jim Hall][a]
Topic selection: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY
[2]: https://opensource.com/article/19/8/using-variables-bash
[3]: https://opensource.com/sites/default/files/uploads/env-path.png
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/env-vars.png
[6]: https://opensource.com/sites/default/files/uploads/env-set.png


@@ -0,0 +1,204 @@
[#]: subject: "Automate tasks with BAT files on FreeDOS"
[#]: via: "https://opensource.com/article/21/6/automate-tasks-bat-files-freedos"
[#]: author: "Jim Hall https://opensource.com/users/jim-hall"
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13970-1.html"
Automate tasks with BAT files on FreeDOS
======
> A practical guide to batch files under FreeDOS.
![](https://img.linux.net.cn/data/attachment/album/202111/10/104345whfjagaahm9nb2j3.jpg)
Even if you have not used DOS before, you probably know about its command-line shell, named `COMMAND.COM`. It has become synonymous with DOS, so FreeDOS implements a similar shell called "FreeCOM" — but also named `COMMAND.COM`, just as on other DOS systems.
But the FreeCOM shell can do more than give you a command-line prompt where you run commands. If you need to automate tasks on FreeDOS, you can do that with _batch files_, also called "BAT files" because these scripts use the `.BAT` extension.
Batch files can be much simpler than the scripts you might write on Linux. That is because, when this feature was originally added to DOS long ago, it was meant to let DOS users "batch up" certain commands. There is not much flexibility for conditional branching, and there is no support for more advanced features such as arithmetic expansion, redirection of standard output and error messages, background processes, tests, loops (well, this one is supported), and other structures common in Linux scripts.
This article is a practical guide to batch files under FreeDOS. Remember to reference environment variables by wrapping the variable name in percent signs (`%`), such as `%PATH%`. Note, however, that for historical reasons, `FOR` loops are constructed slightly differently.
### Printing output
Your batch file might need to print messages so the user knows what is happening. Use the `ECHO` statement to print a message. For example, a batch file might indicate that it has finished the job with this statement:
```
ECHO Done
```
The `ECHO` statement does not need quotes. The FreeCOM `ECHO` statement does not treat quotes in any special way; it prints them like regular text.
Normally, FreeDOS prints each line of a batch file as it executes it. That is usually not a problem in a very short batch file that only defines a few environment variables for the user. But for longer batch files that do more work, this constant display of the batch lines can get annoying. To suppress this output, use the `OFF` keyword with the `ECHO` statement, like this:
```
ECHO OFF
```
Use the `ON` keyword to restore displaying batch lines as FreeDOS runs them:
```
ECHO ON
```
Most batch files include an `ECHO OFF` statement on the first line to suppress the messages, but the shell will still print `ECHO OFF` on the screen as it executes that statement. To hide that statement as well, batch files typically prefix it with the `@` sign. Any line that begins with this special character is not printed, even when `ECHO` is turned on:
```
@ECHO OFF
```
### Comments
When writing a longer batch file, most programmers like to use _comments_ to remind themselves what the batch file is meant to do. To write a comment in a batch file, use the `REM` ("remark") keyword. The FreeCOM shell ignores everything after `REM`:
```
@ECHO OFF
REM This is a comment
```
### Executing a "helper" batch file
Normally, FreeCOM runs only one batch file at a time. However, you might need another batch file to do other things, such as setting common environment variables shared by several batch files.
If you call a second batch file directly from a "running" batch file, FreeCOM switches over to the second batch file completely and stops processing the first one. To instead run the second batch file "inside" the first one, you need to tell the FreeDOS shell to _call_ the second batch file with the `CALL` keyword:
```
@ECHO OFF
CALL SETENV.BAT
```
### Conditional branching
Batch files do support simple conditional branching with the `IF` statement. It comes in three basic forms, for testing:
1. The return status of the previous command
2. Whether a variable is equal to a value
3. Whether a file exists
A common use of the `IF` statement is to test whether a program returned successfully. Most programs return a zero value if they ran normally, or some other value if there was an error. In DOS, this is called the _error level_, and it is a special case of the `IF` test.
To test whether a program named `MYPROG` exited successfully — really, checking whether the program returned "zero" — use the `ERRORLEVEL` keyword to test for the specific value. For example:
```
@ECHO OFF
MYPROG
IF ERRORLEVEL 0 ECHO Success
```
Testing error levels with `ERRORLEVEL` is a clumsy way to examine a program's exit status. A more useful method for checking different return values from a DOS program is a special variable that FreeDOS defines for you, also called `ERRORLEVEL`. It stores the error level of the most recently executed program, and then you can test different values with `==`.
You can test whether a variable is equal to a value using an `IF` statement with `==`. As in some programming languages, you can compare two values directly with `==`. Usually, you reference an environment variable on one side and a value on the other, but you can also compare the values of two variables to see if they are the same. For example, you could rewrite the `ERRORLEVEL` code above with this batch file:
```
@ECHO OFF
MYPROG
IF %ERRORLEVEL%==0 ECHO Success
```
Another common use of the `IF` statement is to test whether a file exists and take action if it does. You can test for the file with the `EXIST` keyword. For example, to delete a temporary file named `TEMP.DAT`, you might use this line in a batch file:
```
@ECHO OFF
IF EXIST TEMP.DAT DEL TEMP.DAT
```
With any `IF` statement, you can _negate_ the test with the `NOT` keyword. To print a message when a file does _not_ exist, you could write:
```
@ECHO OFF
IF NOT EXIST TEMP.DAT ECHO No file
```
### Branching execution
One way to leverage the `IF` test is to jump to an entirely different part of the batch file, depending on the outcome of the `IF` test. In the simplest case, you might want to skip to the end of the batch file if a key command fails. Or you might want to execute some extra statements if certain environment variables are not set correctly.
You can jump to another part of a batch file with the `GOTO` instruction. It jumps to a specific line, called a _label_, in the batch file. Note that this is a strict "go-to" jump: batch file execution picks up at the new label.
Say a program needs an existing, empty file to store temporary data. If the file does not exist, you need to create one before running the program. You could add these actions to a batch file, so your program always has a temporary file to work with:
```
@ECHO OFF
IF EXIST temp.dat GOTO prog
ECHO Creating temp file...
TOUCH temp.dat
:prog
ECHO Running the program...
MYPROG
```
Of course, this is a very simple example. For this case, you could rewrite the batch file so that creating the temporary file is part of an `IF` statement:
```
@ECHO OFF
IF NOT EXIST temp.dat TOUCH temp.dat
ECHO Running the program...
MYPROG
```
### Iteration
What if you need to perform the same task on a set of files? You can _iterate_ over a set of files with a `FOR` loop. This is a one-line loop that runs a single command, with a different file each time.
The `FOR` loop uses a special syntax for the iteration variable, which works differently from other DOS environment variables. To loop over a set of text files and edit each one, use this statement: (Translator's note: the original article had a typo here, missing one `%`.)
```
@ECHO OFF
FOR %%F IN (*.TXT) DO EDIT %%F
```
Note that the iteration variable takes only a single percent sign (`%`) if you run this loop on the command line instead of in a batch file:
```
C:\> FOR %F IN (*.TXT) DO EDIT %F
```
### Command-line processing
FreeDOS provides a simple way to examine any command-line options the user might have provided when running a batch file. FreeDOS parses the command line and stores the first nine options in the special variables `%1`, `%2`, ... up to `%9`. Note that there is no way to directly access the tenth (or later) options this way. The special variable `%0` stores the name of the batch file.
If your batch file needs to process more than nine options, you can use the `SHIFT` statement to drop the first option and _shift_ every remaining option down by one value. So the second option becomes `%1`, and the tenth option becomes `%9`.
Most batch files only need to shift by one value. However, if you need to shift by some other increment, you can give an argument to the `SHIFT` statement. For example:
```
SHIFT 2
```
Here is a simple batch file that demonstrates the shift operation:
```
@ECHO OFF
ECHO %1 %2 %3 %4 %5 %6 %7 %8 %9
ECHO Shift by one ..
SHIFT 1
ECHO %1 %2 %3 %4 %5 %6 %7 %8 %9
```
Executing this batch file with ten options shows how the `SHIFT` statement rearranges the command-line options, so the batch file can now access the tenth argument via `%9`:
```bash
C:\SRC>args 1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9
Shift by one ..
2 3 4 5 6 7 8 9 10
C:\SRC>
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/automate-tasks-bat-files-freedos
Author: [Jim Hall][a]
Topic selection: [lujun9972][b]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk "Tips and gears turning"


@@ -0,0 +1,143 @@
[#]: subject: (How to use FreeDOS as an embedded system)
[#]: via: (https://opensource.com/article/21/6/freedos-embedded-system)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-14014-1.html)
How to use FreeDOS as an embedded system
======
> Many embedded systems today run on Linux. But once upon a time, embedded systems either ran on a custom, proprietary platform or ran on DOS.
![](https://img.linux.net.cn/data/attachment/album/202111/24/134734s6zuftzjgtt8herp.jpg)
The [FreeDOS website][2] declares that most people use FreeDOS for three main tasks:
1. Playing classic DOS games
2. Running legacy DOS software
3. Running an embedded system
But what does it mean to run an "embedded" system?
An embedded system is basically a very minimal system that is dedicated to running one specific task. You might think of today's embedded systems as part of the _Internet of Things_ (IoT), including sensors, thermostats, and doorbell cameras. Many embedded systems today run on Linux.
But once upon a time, embedded systems either ran on a custom, proprietary platform or ran on DOS. Some of these DOS-based embedded systems still run today, such as cash registers or private branch exchange (PBX) phone systems. For example, in 2017, train enthusiasts discovered a Russian electric train control system (Russian: _САВПЭ_) running FreeDOS, using special software to control and monitor suburban train routes and make passenger announcements.
Setting up an embedded system on DOS means defining a minimal DOS environment that runs a single application. Fortunately, setting up a minimal FreeDOS environment is really easy. Technically, all you need to boot FreeDOS and run a DOS application is the kernel and an `FDCONFIG.SYS` configuration file.
### Installing a minimal system
We can use the QEMU emulator to simulate a dedicated, minimal FreeDOS system by assigning it very few resources. To more accurately reflect an embedded system, I will define a virtual machine with only 8 MB of memory and a hard drive of just 2 MB.
To create the tiny virtual hard drive, I will use the `qemu-img` command to define a 2 MB file:
```
$ qemu-img create tiny.img 2M
Formatting 'tiny.img', fmt=raw size=2097152
```
This command line defines a 32-bit "i386" CPU with 8 MB of memory, using the 2 MB `tiny.img` file as the hard drive image and the FreeDOS 1.3 RC4 LiveCD as the CD-ROM media. It also sets the machine to boot from the CD-ROM drive (`-boot order=d`), although we only need that to install the system. Once we have done all the setup, we will boot the finished embedded system from the hard drive:
```
qemu-system-i386 -m 8 -hda tiny.img -cdrom FD13LIVE.iso -boot order=d
```
Boot the system in "Live Environment mode," which gives us a running FreeDOS system that we can use to transfer a minimal FreeDOS onto the hard drive.
![embedded setup][3]
*Booting into the LiveCD environment (Jim Hall, [CC-BY SA 4.0][4])*
We need to create a partition for our programs on the virtual hard drive. To do that, run the `FDISK` program from the command line. `FDISK` is the standard _disk partitioning_ utility on FreeDOS. Use `FDISK` to create a single hard drive partition that takes up the entire 2 MB drive.
![embedded setup][5]
*FDISK, after creating the 2 MB partition (Jim Hall, [CC-BY SA 4.0][4])*
However, FreeDOS will not see the new hard drive partition until you reboot — FreeDOS only reads the hard drive details at boot time. Exit `FDISK` and reboot FreeDOS.
After rebooting, you need to create a DOS filesystem on the new hard drive. Since there is only the one virtual hard drive, FreeDOS will recognize it as the `C:` drive. You can create a DOS filesystem on the `C:` drive with the `FORMAT` command. The `/S` option transfers the operating system files (the kernel, plus a copy of the `COMMAND.COM` shell) to the new drive.
![embedded setup][6]
*Formatting the new drive to create a DOS filesystem (Jim Hall, [CC-BY SA 4.0][4])*

Now that you have created the hard drive and formatted it, you can install the application that the newly installed embedded system will run.
### Installing the dedicated application
An embedded system is really just a single-purpose application running on a dedicated system. These applications are usually custom-built for the system they will control — for example, a cash register, a display terminal, or an environmental control. For this demonstration, let's use a program from the FreeDOS 1.3 RC4 installation CD. It needs to be small enough to fit in the tiny 2 MB hard drive we created for it. It can be anything, so just for fun, let's make it a game.
FreeDOS 1.3 RC4 includes several fun games. One game I like is a board game called "Simple Senet," based on Senet, an ancient Egyptian board game. The details of the game are not important for this demonstration; we will just install it and set it up as the embedded system's dedicated application.
To install the application, go to the `\PACKAGES\GAMES` directory on the FreeDOS 1.3 RC4 LiveCD. You will see a long list of packages there; the one we want is `SENET.ZIP`.
![embedded setup][7]
*A list of game packages from FreeDOS 1.3 RC4 (Jim Hall, [CC-BY SA 4.0][4])*
To extract the "Simple Senet" package onto the virtual hard drive, use the `UNZIP` command. All FreeDOS packages are Zip files, so you can use any Zip-compatible archive utility to manage them. FreeDOS 1.3 RC4 includes `ZIP` to create Zip archives and `UNZIP` to extract them. Both come from the [Info-Zip project][8].
```
UNZIP SENET.ZIP -d C:\FDOS
```
Normally, `UNZIP` extracts a Zip file into the current directory. The `-d C:\FDOS` option at the end of the command line tells `UNZIP` to extract the Zip file into the `C:\FDOS` directory instead. (The `-d` stands for "destination.")
![embedded setup][9]
*Unzipping the Simple Senet game (Jim Hall, [CC-BY SA 4.0][4])*
To run the "Simple Senet" game when the embedded system boots, we need to tell FreeDOS to use Senet as the system's "shell." The default FreeDOS shell is the `COMMAND.COM` program, but you can define a different shell program with the `SHELL=` directive in the `FDCONFIG.SYS` kernel configuration file. We can use the FreeDOS Edit to create a new `C:\FDCONFIG.SYS` file.
![Embedded edit senet][10]
*Jim Hall, [CC-BY SA 4.0][4]*
If you need to define other parameters to support the embedded system, you can add them to the `FDCONFIG.SYS` file. For example, you might need to set environment variables with the `SET` action, or tune the FreeDOS kernel with `FILES=` or `BUFFERS=` statements.
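A hypothetical `FDCONFIG.SYS` along those lines might look like this (the path of the Senet executable depends on how the package unpacked; `C:\FDOS\SENET.EXE` here is only a guess):

```
FILES=20
BUFFERS=32
SET TEMP=C:\TEMP
SHELL=C:\FDOS\SENET.EXE
```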
### Running the embedded system
With the embedded system fully defined, we can now reboot the machine to run the embedded application. Running an embedded system usually requires only limited resources, so for this demonstration we can trim the QEMU command line to boot only from the hard drive (`-boot order=c`) and no longer define a CD-ROM drive:
```
qemu-system-i386 -m 8 -hda tiny.img -boot order=c
```
When the FreeDOS kernel boots, it reads the `FDCONFIG.SYS` file for its boot parameters. It then uses the definition on the `SHELL=` line to run the shell. That automatically runs the "Simple Senet" game.
![embedded setup][11]
*Running Simple Senet as an embedded system (Jim Hall, [CC-BY SA 4.0][4])*
We used "Simple Senet" to demonstrate how to set up an embedded system on FreeDOS. Depending on your needs, you can use any standalone application you like. Define it as the DOS shell with the `SHELL=` line in `FDCONFIG.SYS`, and FreeDOS will launch the application automatically at boot time.
There is one limitation here, however. Embedded systems usually do not need to drop back to a command-line prompt, so these dedicated applications usually do not let the user exit to DOS. If you do manage to quit the embedded application, you will likely see a "Bad or missing Command Interpreter" prompt, where you will need to enter the full path of a new shell. For a user-focused desktop system, that would be a problem. But on an embedded system that is dedicated to doing only one job, you should never need to exit the embedded application anyway.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/freedos-embedded-system
Author: [Jim Hall][a]
Topic selection: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://www.freedos.org/
[3]: https://opensource.com/sites/default/files/uploads/embedded-setup02.png (Boot into the LiveCD environment)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/embedded-setup09.png (FDISK, after creating the 2 megabyte partition)
[6]: https://opensource.com/sites/default/files/uploads/embedded-setup19.png (Format the new drive to create a DOS filesystem)
[7]: https://opensource.com/sites/default/files/uploads/games-dir.png (A list of game packages from FreeDOS 1.3 RC4)
[8]: http://infozip.sourceforge.net/
[9]: https://opensource.com/sites/default/files/uploads/senet-unzip.png (Unzipping the Simple Senet game)
[10]: https://opensource.com/sites/default/files/pictures/embedded-edit-senet.png (Embedded edit senet)
[11]: https://opensource.com/sites/default/files/uploads/senet.png (Running Simple Senet as an embedded system)


@@ -0,0 +1,198 @@
[#]: subject: "Parse command-line arguments with argparse in Python"
[#]: via: "https://opensource.com/article/21/8/python-argparse"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: "MjSeven"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13986-1.html"
Parse command-line arguments with argparse in Python
======
> Set up command-line options for your application with the argparse module.
![](https://img.linux.net.cn/data/attachment/album/202111/15/110139bakkfdt4zoadqiv0.jpg)
There are some third-party libraries for command-line parsing, but the standard library module `argparse` is no slouch, either.
Without adding a lot of dependencies, you can write a nifty command-line tool with useful argument parsing.
### Argument parsing in Python
When parsing command-line arguments with `argparse`, the first step is to configure an `ArgumentParser` object. This is often done at the global module level, since merely _configuring_ the parser has no side effects.
```
import argparse
PARSER = argparse.ArgumentParser()
```
The most important method on `ArgumentParser` is `.add_argument()`, which has several variants. By default, it adds an argument and expects a value.
```
PARSER.add_argument("--value")
```
To see it in action, call `.parse_args()`:
```
PARSER.parse_args(["--value", "some-value"])
```
```
Namespace(value='some-value')
```
It is also possible to use the `=` syntax:
```
PARSER.parse_args(["--value=some-value"])
```
```
Namespace(value='some-value')
```
To shorten the commands typed on the command line, you can also give an option a short "alias":
```
PARSER.add_argument("--thing", "-t")
```
The short option can be passed:
```
PARSER.parse_args("-t some-thing".split())
```
```
Namespace(value=None, thing='some-thing')
```
Or the long one:
```
PARSER.parse_args("--thing some-thing".split())
```
```
Namespace(value=None, thing='some-thing')
```
### Types
There are many types of arguments at your disposal. Besides the default, the two most popular ones are booleans and counters. Booleans come in a variant that defaults to `True` and one that defaults to `False`:
```
PARSER.add_argument("--active", action="store_true")
PARSER.add_argument("--no-dry-run", action="store_false", dest="dry_run")
PARSER.add_argument("--verbose", "-v", action="count")
```
Unless `--active` is passed explicitly, `active` is `False`. `dry-run` defaults to `True`, unless `--no-dry-run` is passed. Short options without a value can be stacked.
Passing all the arguments results in a non-default state:
```
PARSER.parse_args("--active --no-dry-run -vvvv".split())
```
```
Namespace(value=None, thing=None, active=True, dry_run=False, verbose=4)
```
The defaults are a bit more plain:
```
PARSER.parse_args("".split())
```
```
Namespace(value=None, thing=None, active=False, dry_run=True, verbose=None)
```
### Subcommands
Classic Unix commands held to the ethos of "do one thing, and do it well," but the modern trend puts "several closely related operations" together.
`git`, `podman`, and `kubectl` show how popular this paradigm is. The `argparse` library can do it, too:
```
MULTI_PARSER = argparse.ArgumentParser()
subparsers = MULTI_PARSER.add_subparsers()
get = subparsers.add_parser("get")
get.add_argument("--name")
get.set_defaults(command="get")
search = subparsers.add_parser("search")
search.add_argument("--query")
search.set_defaults(command="search")
```
```
MULTI_PARSER.parse_args("get --name awesome-name".split())
```
```
Namespace(name='awesome-name', command='get')
```
```
MULTI_PARSER.parse_args("search --query name~awesome".split())
```
```
Namespace(query='name~awesome', command='search')`
```
### Program architecture
One way to use `argparse` is with a structure like the following:
```
## my_package/__main__.py
import argparse
import sys
from my_package import toplevel
parsed_arguments = toplevel.PARSER.parse_args(sys.argv[1:])
toplevel.main(parsed_arguments)
```
```
## my_package/toplevel.py
PARSER = argparse.ArgumentParser()
## .add_argument, etc.
def main(parsed_args):
...
# do stuff with parsed_args
```
In this case, you run it with `python -m my_package`. Alternatively, you can use the [console_scripts][2] entry point when the package is installed.
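For example, a hypothetical `setup.py` for the package above might register the entry point like this (here `cli` would be a small wrapper that parses `sys.argv` itself, since entry points are called with no arguments):

```
from setuptools import setup, find_packages

setup(
    name="my_package",
    version="0.1",
    packages=find_packages(),
    entry_points={
        # installs a `my-command` executable that calls my_package.toplevel:cli
        "console_scripts": ["my-command = my_package.toplevel:cli"],
    },
)
```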
### Wrapping up
The `argparse` module is a powerful command-line argument parser, with many more features that could not be covered here. It does everything you can imagine.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/python-argparse
Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/bitmap_0.png?itok=PBXU-cn0 "Python options"
[2]: https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point


@@ -0,0 +1,105 @@
[#]: subject: "How to change Ubuntu Terminal Font and Size [Beginners Tip]"
[#]: via: "https://itsfoss.com/change-terminal-font-ubuntu/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "robsean"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13992-1.html"
Beginner's tip: How to change the Ubuntu terminal font and size
======
![](https://img.linux.net.cn/data/attachment/album/202111/17/132645otre44u7ge68tzzb.jpg)
If you spend a lot of time in the terminal on Ubuntu, you will probably want to adjust the terminal's font and size to get a comfortable experience.
Changing the font is one of the simplest yet most visible ways of [Linux terminal customization][1]. Let me show you the detailed steps for changing the terminal font in Ubuntu, along with some tips and suggestions on font selection.
Note: these steps should also work for most other [Linux terminal emulators][2], but the way you access the options may differ.
### Changing the Ubuntu terminal font and size using the GUI
**Step 1.** [Open a terminal window in Ubuntu][3] by pressing the `Ctrl+Alt+T` keys.
**Step 2.** Open the terminal "Preferences". You can find it by clicking the menu button.
![][4]
You can also access the preferences by right-clicking anywhere on the terminal screen, as shown below.
![][5]
**Step 3.** You should now be able to access the settings for the terminal. By default, there will be an unnamed profile here. That is the default profile. **I recommend creating a new profile** so that your changes do not affect the default settings.
![][6]
**Step 4.** To change the font, you need to enable the "Custom font" option, then click on "**Monospace Regular**".
![][7]
It will display a list of the fonts available for this option.
![][8]
Here, you can quickly preview a font at the bottom of the font list and adjust the font size for your Ubuntu terminal.
By default, the font size is **12** and the font style is **Ubuntu Mono**.
**Step 5.** Finally, you can also search for a font style you like and, after checking the preview and adjusting the font size, finish changing the font by clicking "Select".
![][9]
That is all there is to it. You have successfully changed the font. See the change in the image below. Move the slider to compare the difference.
![Ubuntu terminal font change][10]
### Tips on getting new fonts for the Ubuntu terminal
You can download font files in TTF format from the internet, and [these new fonts can easily be installed in Ubuntu][11] by right-clicking on the TTF files.
![][12]
You can open a new terminal window to load the newly installed fonts.
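If you prefer the command line, one common approach is to copy the TTF files into your user font directory and refresh the font cache (a sketch; the file names are placeholders):

```
mkdir -p ~/.local/share/fonts
cp MyMonoFont-*.ttf ~/.local/share/fonts/
fc-cache -f ~/.local/share/fonts
```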
However, keep in mind that **Ubuntu does not show all newly installed fonts in the terminal**. Why? Because the terminal is limited to monospaced fonts. If a font's letters sit too close together, it just looks weird. Some fonts do not clearly distinguish the letter "o" from the digit "0". Likewise, you might have trouble telling the lowercase "l" apart from the lowercase "i".
That is why the names of the fonts you see offered in the terminal usually contain "mono".
In general, fonts can bring plenty of readability problems, which can make things even more confusing. So it is best to choose a font that is not hard to read on the terminal.
You should also check whether the font looks good (or weird) as you increase or decrease the font size, to make sure there are no problems when you customize your terminal.
### Font suggestions for terminal customization
Free Mono and Noto Mono are some of the good fonts available in the default list of font choices that you can apply to your terminal.
You can also try [installing new fonts on Linux][11], such as **JetBrains Mono**, **Roboto Mono**, Larabiefont, Share Tech Mono, and more from Google and other sources.
What font and size do you like to use on your Ubuntu terminal? Let us know in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/change-terminal-font-ubuntu/
Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/customize-linux-terminal/
[2]: https://itsfoss.com/linux-terminal-emulators/
[3]: https://itsfoss.com/open-terminal-ubuntu/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/terminal-preference.png?resize=800%2C428&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/terminal-right-click-menu.png?resize=800%2C341&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-terminal-preference-option.png?resize=800%2C303&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/enable-font-change-ubuntu-terminal.png?resize=798%2C310&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/monospace-font-default.png?resize=800%2C651&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-custom-font-selection.png?resize=800%2C441&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-terminal-font-2.png?resize=723%2C353&ssl=1
[11]: https://itsfoss.com/install-fonts-ubuntu/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/12/install-new-fonts-ubuntu.png?resize=800%2C463&ssl=1


@@ -3,38 +3,38 @@
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13954-1.html"
How to properly set the JAVA_HOME variable in Ubuntu Linux
======
If you [run Java programs on Ubuntu][1] using Eclipse, [Maven][2] or Netbeans etc., you will need to set JAVA_HOME to your path. Otherwise, your system will complain that the "java_home environment variable is not set".
![](https://img.linux.net.cn/data/attachment/album/202111/05/122020qr5pys4p851sf1zs.jpg)
In this beginner tutorial, I will show you the steps to properly set the Java_Home variable on Ubuntu. These steps should also apply to most other Linux distributions.
If you [run Java programs on Ubuntu][1] using Eclipse, [Maven][2], Netbeans, and so on, you will need to set the `JAVA_HOME` environment variable to the correct path. Otherwise, your system will complain that the "java_home environment variable is not set".
In this beginner tutorial, I will show you the steps to properly set the `JAVA_HOME` variable on Ubuntu. These steps should also apply to most other Linux distributions.
The setup process involves these steps:
* Make sure the Java Development Kit (JDK) is installed.
* Make sure the Java Development Kit (JDK) is installed.
* Find the correct location of the JDK executable.
* Set the JAVA_HOME variable and make the change permanent.
* Set the `JAVA_HOME` environment variable, and change it permanently.
### Step 1: Check whether the JDK is installed
The simplest way to check whether the Java Development Kit (JDK) is installed on your Linux system is to run this command:
The simplest way to check whether the Java Development Kit (JDK) is installed on your Linux system is to run this command:
```
javac --version
```
The above command checks the version of the Java compiler. If it is installed, it will show the Java version.
The above command checks the version of the Java compiler. If the Java compiler is installed, it will show the Java version:
![Java Compiler is installed][3]
If the above command shows an error message like "javac command not found", you will have to install the JDK.
If the above command shows an error like "javac command not found", as below, you will first have to install the JDK:
![Java Compiler is not installed][4]
@@ -44,13 +44,13 @@ javac --version
sudo apt install default-jdk
```
This installs the default Java version for your current Ubuntu release. If you need some other specific Java version, you will have to specify it [when installing Java on Ubuntu][5].
This installs the default Java version for your current Ubuntu release. If you need some other version of Java, you will have to specify it [when installing Java on Ubuntu][5].
Once you have made sure the Java compiler exists on your system, it is time to find its location.
Once you have made sure the Java compiler exists on your system, it is time to find its location.
### Step 2: Get the location of the JDK executable (Java compiler)
### Step 2: Get the location of the JDK executable (the Java compiler)
The executable is usually located in the /usr/lib/jvm directory. I will not make you play a guessing game; instead, let's find out the path of the Java executable.
The executable is usually located in the `/usr/lib/jvm` directory. I will not make you play a guessing game; let's find out the path of the Java executable.
[Use the which command][6] to get the location of the Java compiler executable:
@@ -62,25 +62,25 @@ which javac
![][8]
The simplest way is to follow the symbolic link and get the actual executable directly with this command:
The simplest way is to follow the symbolic links directly with the following command to get the actual executable:
```
readlink -f `which java`
```
The readlink command follows a symbolic link. I used ` around _which java_. readlink will substitute the output of which java for the symbolic link; this is called command substitution. So in this case, the command above is roughly equivalent to _readlink -f /usr/bin/java_.
The `readlink` command follows a symbolic link. I used `` ` `` around `which java`, so the symbolic link that readlink examines is substituted from the output of `which java`; this is called command substitution. So in this case, the command above is roughly equivalent to `readlink -f /usr/bin/java`.
In my example, the location of the executable is **/usr/lib/jvm/java-11-openjdk-amd64/bin/java**. It could be different for you. Copy the correct path the above command gives on your system. You know, you can [copy and paste in the Ubuntu terminal][9].
In my example, the location of the executable is `/usr/lib/jvm/java-11-openjdk-amd64/bin/java`. It could be different for you. Copy the correct path the above command gives on your system. You know, you can [copy and paste in the Ubuntu terminal][9].
### Step 3: Set the JAVA_HOME variable
Now that you have got the location, use it to set the JAVA_HOME environment variable:
Now that you have got the location, use it to set the `JAVA_HOME` environment variable:
```
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/bin/java
```
Check the value of the JAVA_HOME directory:
Check the value of the `JAVA_HOME` directory:
```
echo $JAVA_HOME
@@ -88,21 +88,21 @@ echo $JAVA_HOME
![][10]
Try to run your program or project in the same terminal and see if it works.
Try to run your Java program or project in the same terminal and see if it works.
This is not over yet. The JAVA_HOME variable you just declared is temporary. If you close the terminal or start a new session, it will be empty again.
This is not over yet. The `JAVA_HOME` environment variable you just declared is temporary. If you close the terminal or start a new session, it will be empty again.
To set the JAVA_HOME variable "permanently," you should add it to the bashrc file in your home directory.
To set the `JAVA_HOME` variable "permanently," you should add it to the `.bashrc` file in your home directory.
You can [use the Nano editor to edit files in the Linux terminal][11]. If you do not want that and want an easy copy-and-paste approach, use the following commands:
You can [use the Nano editor to edit files in the Linux terminal][11]. If you do not want to use it and want an easy copy-and-paste approach, use the following commands:
Back up your bashrc file (in case you break it, you can restore it later):
First, back up your `.bashrc` file (just in case you break it, you can restore it later):
```
cp ~/.bashrc ~/.bashrc.bak
```
Next, [use the echo command to append][12] the export command you used at the beginning of this section. _**You should change the command below appropriately so that it uses the correct path your system displayed.**_
Next, [use the echo command to append][12] the `export` command used at the beginning of this section. **You should change the command below appropriately so that it uses the correct path your system displayed.**
```
echo "export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/bin/java" >> ~/.bashrc
@@ -116,15 +116,15 @@ tail -3 ~/.bashrc
The [tail command][13] above displays the last three lines of the specified file.
Here is the entire output of the above three commands.
Here is the entire output of the above three commands:
![][14]
Now, even if you exit the session or reboot the system, the JAVA_HOME variable will still be set to the value you specified. That is what you want, right?
Now, even if you exit the session or reboot the system, the `JAVA_HOME` environment variable will still be set to the value you specified. That is what you want, right?
Note that if you change the default Java version in the future, you will need to change the value of JAVA_HOME and point it to the correct executable path.
Note that if you change the default Java version in the future, you will need to change the value of the `JAVA_HOME` environment variable and point it to the correct executable path.
I hope this tutorial not only helps you set Java_Home, but also teaches you how it is done.
I hope this tutorial not only helps you set the `JAVA_HOME` environment variable, but also teaches you how it is done.
If you still face problems or have any questions or suggestions, please let me know in the comments.
@@ -135,7 +135,7 @@ via: https://itsfoss.com/set-java-home-ubuntu/
Author: [Abhishek Prakash][a]
Topic selection: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)


@@ -0,0 +1,112 @@
[#]: subject: "What is Build Essential Package in Ubuntu? How to Install it?"
[#]: via: "https://itsfoss.com/build-essential-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13953-1.html"
The basics of the build-essential package
======
![][5]
> This is a quick tip to explain to new Ubuntu users what the build-essential package is, what it is used for, and the installation steps.
Installing the build-essential package (`build-essential`) in Ubuntu only takes typing this command in the terminal:
```
sudo apt update && sudo apt install build-essential
```
But there are several questions around it you might want answered:
* What is the build-essential package?
* What does it contain?
* Why would you install it (and should you)?
* How do you install it?
* How do you remove it?
### What is the build-essential package in Ubuntu?
The build-essential package (`build-essential`) actually belongs to Debian. It is not really a piece of software in itself. It contains the list of packages that are required for creating a Debian (`.deb`) package. These packages include `libc`, `gcc`, `g++`, `make`, `dpkg-dev`, and so on. The build-essential package lists these required packages as its dependencies, so when you install it, you get all of them with a single command.
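You can inspect that dependency list yourself (output abridged; the exact entries vary between Ubuntu releases):

```
$ apt-cache depends build-essential
build-essential
  Depends: libc6-dev
  Depends: gcc
  Depends: g++
  Depends: make
  Depends: dpkg-dev
```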
Please do not think of build-essential as a super package that will magically install every development tool from Ruby to Go with one command. It contains some development tools, not all of them.
#### Why would you install the build-essential package?
It is used for creating DEB packages from the source code of an application. An average user does not go around creating DEB packages every day, right?
However, some users might use their Ubuntu Linux system for software development. If you want to [run a C program in Ubuntu][1], you need the gcc compiler. If you want to [run a C++ program in Ubuntu][2], you need the g++ compiler. If you have to use some unusual software that is only available as source code, your system will throw the "[make command not found error][3]", because you need to install the `make` tool first.
Of course, all of these can be installed individually. However, it is a lot easier to take advantage of the build-essential package and install all these development tools at once. That is the benefit you get.
It is like the [ubuntu-restricted-extras package that lets you install several media codecs at once][4].
Now that you know the benefits of this package, let's see how to install it.
### Installing the build-essential package in Ubuntu Linux
Press the `Ctrl+Alt+T` keyboard shortcut to open the terminal in Ubuntu and enter the following command:
```
sudo apt update
```
With the `sudo` command, you will be asked to enter your account password. When you type it, nothing is displayed on the screen. That is fine; that is how it works in most Linux systems. Type your password blindly and press Enter.
![][6]
The `apt update` command refreshes the local package cache. This is essential on a freshly installed Ubuntu.
After that, run the following command to install the build-essential package:
```
sudo apt install build-essential
```
It should show all the packages that are going to be installed. Press `Y` when asked for confirmation:
![][7]
Wait for the installation to finish. That is it.
### Removing the build-essential package from Ubuntu
Keeping these development tools will not harm your system. But if you are short on disk space, you may consider removing it.
Removing software in Ubuntu is easy, thanks to the `apt remove` command:
```
sudo apt remove build-essential
```
It is a good idea to also run the `autoremove` command to remove the leftover dependency packages:
```
sudo apt autoremove
```
You now know the basics of the build-essential package (pun intended). I hope you find it useful!
--------------------------------------------------------------------------------
via: https://itsfoss.com/build-essential-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/run-c-program-linux/
[2]: https://itsfoss.com/c-plus-plus-ubuntu/
[3]: https://itsfoss.com/make-command-not-found-ubuntu/
[4]: https://itsfoss.com/install-media-codecs-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/10/Build-Essential-Ubuntu.png?resize=800%2C450&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/10/apt-update.png?resize=800%2C467&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/10/install-build-essential-ubuntu.png?resize=800%2C434&ssl=1


@@ -0,0 +1,182 @@
[#]: subject: "7 handy tricks for using the Linux wget command"
[#]: via: "https://opensource.com/article/21/10/linux-wget-command"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "zengyi1001"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14007-1.html"
Seven handy tricks for using the wget command
======
> Download files from the internet in your Linux terminal.
![](https://img.linux.net.cn/data/attachment/album/202111/22/102927pjuji5juxzuikkg6.jpg)
`wget` is a free utility for downloading files from the web. It saves data from the internet to a file, or displays it in your terminal. That is literally also how web browsers such as Firefox or Chromium work, with one difference: web browsers render web pages in a graphical window by default and usually need a user actively driving them. The `wget` tool, on the other hand, is non-interactive, meaning you can use `wget` in scripts or on a schedule to download files, whether or not you are at your computer.
### Download a file with wget
You can download a file with `wget` by providing a link to a specific URL. If you provide a URL that defaults to `index.html`, then the index page gets downloaded. By default, the file is downloaded into your current working directory and keeps its original name:
```
$ wget http://example.com
--2021-09-20 17:23:47-- http://example.com/
Resolving example.com... 93.184.216.34, 2606:2800:220:1:248:1893:25c8:1946
Connecting to example.com|93.184.216.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1256 (1.2K) [text/html]
Saving to: 'index.html'
```
Using `--output-document` with the `-` character, you can tell `wget` to send the data to standard output (stdout):
```
$ wget http://example.com --output-document - | head -n4
<!doctype html>
<html>
<head>
<title>Example Domain</title>
```
You can use the `--output-document` option (shortened to `-O`) to name your downloaded file anything you want:
```
$ wget http://example.com --output-document foo.html
```
### Resuming a partial download
If you are downloading a very large file, you might find that you have to interrupt the download. With `--continue` (`-c` for short), `wget` can determine where in the file to resume the download. That means the next time you download a 4 GB Linux distribution ISO and something goes wrong and interrupts it, you never have to start over from the beginning:
```
$ wget --continue https://example.com/linux-distro.iso
```
### Downloading a sequence of files
If it is not one big file you need but a whole sequence of files, `wget` can help you out there as well. Provided you know the path of the files you want to download and the common pattern of their file names, you can use Bash syntax to indicate the beginning and end points of a numeric range to represent the whole sequence of file names:
```
$ wget http://example.com/file_{1..4}.webp
```
### Mirroring a whole site
You can download an entire site, including its directory structure, using the `--mirror` option. This option is the same as running `--recursive --level inf --timestamping --no-remove-listing`, which means it is infinitely recursive, so you are getting everything on the domain you specify. Depending on how old the site is, that could mean you are getting a lot more content than you expect.
If you are using `wget` to archive a site, the options `--no-cookies --page-requisites --convert-links` are also useful to ensure that the archived site is fresh and complete, and that the site copy is more or less self-contained.
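Put together, an archiving run might look like this (all of these are documented `wget` options; the URL is a placeholder):

```
$ wget --mirror --no-cookies --page-requisites --convert-links https://example.com
```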
### Modifying HTTP headers
Communication between computers carries plenty of metadata that enables the data exchange. HTTP headers are components of the initial portion of that data. Your browser sends HTTP request headers when you browse a site. Use the `--debug` option to see what header information `wget` sends with each request:
```
$ wget --debug example.com
---request begin---
GET / HTTP/1.1
User-Agent: Wget/1.19.5 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: example.com
Connection: Keep-Alive
---request end---
```
You can modify your request headers with the `--header` option. In practice, this is often used to mimic a specific browser, to test against or accommodate badly coded sites that only talk to specific user agents.
To identify the request as coming from Microsoft Edge on Windows:
```
$ wget --debug --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.59" http://example.com
```
You can also pretend to be a specific mobile device:
```
$ wget --debug --header="User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 13_5_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Mobile/15E148 Safari/604.1" http://example.com
```
### Viewing response headers
In the same way request headers are sent by the browser, responses contain header information, too. You can use the `--debug` option to view the header information in responses:
```
$ wget --debug example.com
[...]
---response begin---
HTTP/1.1 200 OK
Accept-Ranges: bytes
Age: 188102
Cache-Control: max-age=604800
Content-Type: text/html; charset=UTF-8
Etag: "3147526947"
Server: ECS (sab/574F)
Vary: Accept-Encoding
X-Cache: HIT
Content-Length: 1256
---response end---
200 OK
Registered socket 3 for persistent reuse.
URI content encoding = 'UTF-8'
Length: 1256 (1.2K) [text/html]
Saving to: 'index.html'
```
### Handling 301 responses
A 200 response code means everything worked as expected. A 301 response, on the other hand, means the URL has been moved permanently to a different location. It is a common way for site administrators to relocate content while leaving a "trail" so people visiting the old location can still find the new one. By default, `wget` follows redirects, and that is what most users want anyway.
However, you can control how many 301 responses `wget` follows with the `--max-redirect` option. Setting it to `0` means no redirects are followed automatically:
```
$ wget --max-redirect 0 http://iana.org
--2021-09-21 11:01:35-- http://iana.org/
Resolving iana.org... 192.0.43.8, 2001:500:88:200::8
Connecting to iana.org|192.0.43.8|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.iana.org/ [following]
0 redirections exceeded.
```
同时,你也可以设置为其他的数值来控制 `wget` 能重定向多少次。
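例如,下面的示意命令最多允许 5 次重定向(次数请按需调整):

```
$ wget --max-redirect 5 http://iana.org
```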
### 展开 URL 缩写
使用 `--max-redirect` 选项,在实际访问之前查看<ruby>缩写 URL<rt>shortened URL</rt></ruby>的真实目的地非常有用。缩写 URL 常见于用户无法完整拷贝和粘贴长 URL 的<ruby>印刷媒体<rt>print media</rt></ruby>,或是有字数限制的社交网络(在类似 [Mastodon][2] 这种现代开源的社交网络上,这并不是一个问题)。不过缩写 URL 也有一定的风险,因为其目的地本质上是隐藏的。把 `--max-redirect` 设置为 `0`,`wget` 只会显示响应标头中的 `Location` 行而不跟随跳转,这样你不必加载整个资源就能看到缩写 URL 的最终去向:
```
$ wget --max-redirect 0 "https://bit.ly/2yDyS4T"
--2021-09-21 11:32:04-- https://bit.ly/2yDyS4T
Resolving bit.ly... 67.199.248.10, 67.199.248.11
Connecting to bit.ly|67.199.248.10|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://example.com/ [following]
0 redirections exceeded.
```
以 `Location` 开头的倒数第二行输出,展示了实际的目的地。
### 使用 wget
一旦你开始把与网站的交互看作一条命令,`wget` 就能快速高效地帮你获取互联网上的信息,而不用在图形界面上耗费精力。为了帮你将它纳入日常的工作流,我们创建了一个 `wget` 常用使用方式和语法的速查表,其中包括使用它来查询 API 的概述。[在这里下载 Linux wget 速查表][3]。
--------------------------------------------------------------------------------
来源: https://opensource.com/article/21/10/linux-wget-command
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[zengyi1001](https://github.com/zengyi1001)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
[2]: https://opensource.com/article/17/4/guide-to-mastodon
[3]: https://opensource.com/downloads/linux-wget-cheat-sheet

View File

@ -0,0 +1,206 @@
[#]: subject: "What you need to know about Kubernetes NetworkPolicy"
[#]: via: "https://opensource.com/article/21/10/kubernetes-networkpolicy"
[#]: author: "Mike Calizo https://opensource.com/users/mcalizo"
[#]: collector: "lujun9972"
[#]: translator: "perfiffer"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14005-1.html"
Kubernetes 网络策略基础
======
> 在你通过 Kubernetes 部署一个应用之前,了解 Kubernetes 的网络策略是一个基本的要求。
![](https://img.linux.net.cn/data/attachment/album/202111/21/130217ocykri3zbv37i6ou.jpg)
随着越来越多的云原生应用程序通过 Kubernetes 部署到生产环境,安全性是你必须在早期就需要考虑的一个重要检查项。在设计云原生应用程序时,预先嵌入安全策略非常重要。不这样做会导致后续的安全问题,从而导致项目延迟,并最终给你带来不必要的压力和金钱投入。
这么多年来,人们总是把安全留到最后,直到他们的部署即将发布到生产环境时才考虑安全。这种做法会导致项目交付的延迟:每个组织都有必须遵守的安全标准,一旦这些规定被绕过或忽视,就只能承受大量风险才能交付成果。
对于刚开始学习 Kubernetes 实施的人来说,理解 Kubernetes <ruby>网络策略<rt>NetworkPolicy</rt></ruby> 可能会令人生畏。但这是在将应用程序部署到 Kubernetes 集群之前必须了解的基本要求之一。在学习 Kubernetes 和云原生应用程序时,请把“不要把安全抛在脑后!”定为你的口号。
### 网络策略概念
<ruby>[网络策略][2]<rt>NetworkPolicy</rt></ruby> 取代了你所知道的数据中心环境中的防火墙设备 —— 如<ruby>吊舱<rt>Pod</rt></ruby>之于计算实例、网络插件之于路由器和交换机以及卷之于存储区域网络SAN
默认情况下Kubernetes 网络策略允许 <ruby>[吊舱][3]<rt>Pod</rt></ruby> 从任何地方接收流量。如果你不担心吊舱的安全性,那么这可能没问题。但是,如果你正在运行关键工作负载,则需要保护吊舱。控制集群内的流量(包括入口和出口流量),可以通过网络策略来实现。
要启用网络策略,你需要一个支持网络策略的网络插件。否则,你应用的任何规则都将变得毫无用处。
[Kubernetes.io][4] 上列出了不同的网络插件:
* CNI 插件:遵循 <ruby>[容器网络接口][5]<rt>Container Network Interface</rt></ruby>CNI规范旨在实现互操作性。
* Kubernetes 遵循 CNI 规范的 [v0.4.0][6] 版本。
* Kubernetes 插件:使用桥接器和主机本地 CNI 插件实现基本的 `cbr0`
### 应用网络策略
要应用网络策略,你需要一个工作中的 Kubernetes 集群,并有支持网络策略的网络插件。
但首先,你需要了解如何在 Kubernetes 环境中使用网络策略。默认情况下Kubernetes 允许 [吊舱][3] 从任何地方接收流量,这并不理想。为了吊舱的安全,你必须了解:吊舱是 Kubernetes 架构内可以进行通信的端点。
1、使用 `podSelector` 进行吊舱间的通信:
```
- podSelector:
    matchLabels:
      role: frontend
```
2、使用 `namespaceSelector`,或 `podSelector` 与 `namespaceSelector` 的组合,进行命名空间之间的通信和命名空间到吊舱的通信:
```
- namespaceSelector:
    matchLabels:
      project: myproject
- podSelector:
    matchLabels:
      role: frontend
```
3、对于基于 IP 块的通信,使用 `ipBlock` 定义决定源和目的的 IP CIDR 块:
```
- ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
```
注意吊舱、命名空间和基于 IP 的策略之间的区别。对于基于吊舱和命名空间的网络策略,使用选择器来控制流量,而对基于 IP 的网络策略,使用 IP 块CIDR 范围)来定义控制。
把它们放在一起,一个网络策略应如下所示:
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 192.168.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```
参考上面的网络策略,请注意 `spec` 部分。在此部分下,带有标签 `app=backend``podSelector` 是我们的网络策略的目标。简而言之,网络策略保护给定命名空间内称为 `backend` 的应用程序。
此部分也有 `policyTypes` 定义。此字段指示给定策略是否适用于选定吊舱的入口流量、选定吊舱的出口流量,或两者皆有。
```
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
```
现在,请看 `Ingress`(入口)和 `Egress`(出口)部分。该定义规定了网络策略的控制。
首先,检查 `ingress from` 部分。
此实例中,网络策略允许从以下位置进行吊舱连接:
* `ipBlock`
* 允许 172.17.0.0/16
* 拒绝 192.168.1.0/24
* `namespaceSelector`
* `myproject`:允许来自带有标签 `project=myproject` 的命名空间的所有吊舱。
* `podSelector`
* `frontend`: 允许与标签 `role=frontend` 匹配的吊舱。
```
ingress:
- from:
  - ipBlock:
      cidr: 172.17.0.0/16
      except:
      - 192.168.1.0/24
  - namespaceSelector:
      matchLabels:
        project: myproject
  - podSelector:
      matchLabels:
        role: frontend
```
现在,检查 `egress to` 部分。这决定了从吊舱出去的连接:
* `ipBlock`
* 10.0.0.0/24: 允许连接到此 CIDR
* Ports: 允许使用 TCP 和端口 5978 进行连接
```
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24
  ports:
  - protocol: TCP
    port: 5978
```
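定义好策略后,可以用 `kubectl` 将其应用到集群并查看效果(这里假设策略保存在名为 `test-network-policy.yaml` 的文件中,文件名仅为示意):

```
$ kubectl apply -f test-network-policy.yaml
$ kubectl describe networkpolicy test-network-policy -n default
```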
### 网络策略的限制
仅靠网络策略无法完全保护你的 Kubernetes 集群。你可以使用操作系统组件或 7 层网络技术来克服已知限制。你需要记住,网络策略只能解决 IP 地址和端口级别的安全问题 —— 即开放系统互联OSI中的第 3 层或第 4 层。
为了解决网络策略无法处理的安全要求,你需要使用其它安全解决方案。以下是你需要知道的一些 [用例][7],在这些用例中,网络策略需要其他技术的增强。
### 总结
了解 Kubernetes 的网络策略很重要,因为它在 Kubernetes 中实现了(但并不能替代)你通常在数据中心里由防火墙承担的角色。把它看作容器安全的第一层:仅仅依靠网络策略并不是一个完整的安全解决方案。
网络策略使用选择器和标签在吊舱和命名空间上实现安全性。此外,网络策略还可以通过 IP 范围实施安全性。
充分理解网络策略是在 Kubernetes 环境中安全采用容器化的一项重要技能。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/10/kubernetes-networkpolicy
作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[perfiffer](https://github.com/perfiffer)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
[2]: https://kubernetes.io/docs/concepts/services-networking/network-policies/
[3]: https://kubernetes.io/docs/concepts/workloads/pods/
[4]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
[5]: https://github.com/containernetworking/cni
[6]: https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md
[7]: https://kubernetes.io/docs/concepts/services-networking/network-policies/#what-you-can-t-do-with-network-policies-at-least-not-yet

View File

@ -3,13 +3,15 @@
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13961-1.html"
只有 4MB如何修复 Etcher 和 Rufus 创建 Linux USB 后“破坏”的 USB
======
![](https://img.linux.net.cn/data/attachment/album/202111/07/165254zlhgz6an6vgpv2qd.jpg)
情况是这样的。你用 Etcher 或者 Rufus 工具在 Windows 或者 Linux 中创建了一个可启动的、Live Linux USB。
你用它来安装 LinuxUSB 的目的已经达到了。现在你想格式化这个 USB用它来进行常规的数据传输或存储。
@ -44,8 +46,7 @@
启动这个工具,它将显示你电脑上存在的所有磁盘。当然,这包括插入的 USB。
_**选择正确的磁盘是非常重要的**_。从 U 盘的大小或“可移动”的标签中辨别出它是哪一个。
**选择正确的磁盘是非常重要的**。从 U 盘的大小或“可移动”的标签中辨别出它是哪一个。
![][4]
@ -53,19 +54,19 @@ _**选择正确的磁盘是非常重要的**_。从 U 盘的大小或“可移
我们的想法是删除 U 盘上的任何现有分区。未分配的空间不能被删除,但这也没关系。
在该分区上点击右键,然后点击**删除卷**
在该分区上点击右键,然后点击<ruby>删除卷<rt>Delete Volume</rt></ruby>
![Delete partitions on the USB disk][5]
当要求你确认时,按是。
当要求你确认时,按<ruby><rt>Yes</rt></ruby>
![Confirm deletion of partition][6]
你的目标是只有一个未分配的空间块。当你看到它时,右击它并点击“新的简单卷”来创建一个分区。
你的目标是只有一个未分配的空间块。当你看到它时,右击它并点击“<ruby>新建简单卷……<rt>New Simple Volume...</rt></ruby>”来创建一个分区。
![Create New Simple Volume \(partition\)][7]
接下来的步骤很简单。点击“下一步”选择整个可用空间给它分配一个字母选择文件系统FAT32 或 NTFS并将其格式化。
接下来的步骤很简单。点击“<ruby>下一步<rt>Next &gt;</rt></ruby>选择整个可用空间给它分配一个字母选择文件系统FAT32 或 NTFS并将其格式化。
![Click Next][8]
@ -91,11 +92,11 @@ _**选择正确的磁盘是非常重要的**_。从 U 盘的大小或“可移
除此之外,你可以像在 Windows 中那样做:删除现有的分区,用整个可用空间创建一个新的分区。
这里使用 GNOME Disks 工具。它已经安装在 Ubuntu 和许多其他 Linux 发行版上。
这里使用 GNOME “磁盘” 工具。它已经安装在 Ubuntu 和许多其他 Linux 发行版上。
![Start disk app][14]
_**同样,确保你在这里选择了外部 USB 盘。**_
**同样,确保你在这里选择了外部 USB 盘。**
你会看到 U 盘上的各种分区。试着从上面的菜单中格式化该磁盘。
@ -111,7 +112,7 @@ _**同样,确保你在这里选择了外部 USB 盘。**_
### 总结
像 Rufus 和 Etcher 这样的工具并没有真正破坏你的 USB。这就是它们的功能通过在磁盘上创建一个不同的文件系统。但这样一来操作系统就不能正确理解它。
像 Rufus 和 Etcher 这样的工具并没有真正破坏你的 USB。这就是它们的功能通过在磁盘上创建一个不同的文件系统。但这样一来Windows 操作系统就不能正确理解它。
好在只需付出一点努力就可以修复。我希望你也能够修复它。如果没有,请与我分享你的问题,我将尽力帮助你。
@ -122,7 +123,7 @@ via: https://itsfoss.com/format-live-linux-usb/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -3,57 +3,55 @@
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13957-1.html"
如何动态生成 Jekyll 配置文件
如何动态生成 Jekyll 配置文件
======
使用 Python 或 Bash 将动态数据插入 Jekyll 静态网站中,并且避免创建一个 API 后端。
![Digital creative of a browser on the internet][1]
[Jekyll][2],静态网站生成器,使用 `_config.yml` 进行配置。这些配置都是 Jekyll 特有的。但你也可以在这些文件中[用我们自己的内容定义变量][3],并在整个网站中使用它们。在本文中,我将重点介绍动态创建 Jekyll 配置文件的一些优势。
> 使用 Python 或 Bash 将动态数据插入 Jekyll 静态网站中,并且避免创建一个 API 后端。
![](https://img.linux.net.cn/data/attachment/album/202111/06/172709dqcv65spvl363fav.jpg)
静态网站生成器 [Jekyll][2] 使用 `_config.yml` 进行配置。这些配置都是 Jekyll 特有的。但你也可以在这些文件中 [用我们自己的内容定义变量][3],并在整个网站中使用它们。在本文中,我将重点介绍动态创建 Jekyll 配置文件的一些优势。
在我的本地笔记本电脑上,我使用以下命令来服务我的 Jekyll 网站进行测试:
```
`bundle exec jekyll serve --incremental --config _config.yml`
bundle exec jekyll serve --incremental --config _config.yml
```
### 结合多个配置文件
在本地测试中,有时需要覆盖配置选项。我的网站的[当前 _config.yml][4] 有以下设置:
在本地测试中,有时需要覆盖配置选项。我的网站的 [当前 _config.yml][4] 有以下设置:
```
# Jekyll Configuration
# Site Settings
url: "<https://notes.ayushsharma.in>"
website_url: "<https://notes.ayushsharma.in/>"
url: "https://notes.ayushsharma.in"
website_url: "https://notes.ayushsharma.in/"
title: ayush sharma's notes ☕ + 🎧 + 🕹️
email: [ayush@ayushsharma.in][5]
email: ayush@ayushsharma.in
images-path: /static/images/
videos-path: /static/videos/
js-path: /static/js/
baseurl: "" # the subpath of your site, e.g. /blog
```
由于本地的 `jekyll serve` URL 是 http://localhost:4000上面定义的 URL 就不能用了。我可以创建一个 `_config.yml` 的副本 `_config-local.yml` 并替换所有的值。但还有一个更简单的选择。
由于本地的 `jekyll serve` URL 是 `http://localhost:4000`,上面定义的 URL 就不能用了。我可以创建一个 `_config.yml` 的副本 `_config-local.yml` 并替换所有的值。但还有一个更简单的选择。
Jekyll 允许[指定多个配置文件][6],后面的声明覆盖前面的声明。这意味着我可以用以下代码定义一个新的 `_config-local.yml`
```
`url:""`
url:""
```
然后我可以把上述文件和我的主 `_config.yml` 结合起来,像这样:
```
`bundle exec jekyll serve --incremental --config _config.yml,_config-local.yml`
bundle exec jekyll serve --incremental --config _config.yml,_config-local.yml
```
通过合并这两个文件,这个 `jekyll serve``url` 的最终值将是空白。这就把我网站中定义的所有 URL 变成了相对的 URL并使它们在我的本地笔记本电脑上工作。
@ -62,31 +60,27 @@ Jekyll 允许[指定多个配置文件][6],后面的声明覆盖前面的声
一个简单的例子,假设你想在你的网站上显示当前日期。它的 bash 命令是:
```
> date '+%A, %d %B %Y'
Saturday, 16 October 2021
```
我知道我也可以[使用 Jekyll 的 _config.yml 的自定义内容][3]。我将上述日期输出到一个新的 Jekyll 配置文件中。
我知道我也可以 [使用 Jekyll 的 _config.yml 的自定义内容][3]。我将上述日期输出到一个新的 Jekyll 配置文件中。
```
`my_date=`date '+%A, %d %B %Y'`; echo 'my_date: "'$my_date'"' > _config-data.yml`
my_date=`date '+%A, %d %B %Y'`; echo 'my_date: "'$my_date'"' > _config-data.yml
```
现在 `_config-data.yml` 包含:
```
`my_date: "Saturday, 16 October 2021"`
my_date: "Saturday, 16 October 2021"
```
我可以把我的新配置文件和其他文件结合起来,在我的网站上使用 `my_date` 变量。
```
`bundle exec jekyll serve --incremental --config _config.yml,_config-local.yml,_config-data.yml`
bundle exec jekyll serve --incremental --config _config.yml,_config-local.yml,_config-data.yml
```
在运行上述命令时,`{{ site.my_date }}` 输出其配置的值。
@ -101,9 +95,7 @@ Saturday, 16 October 2021
我希望这能在你的下一个静态网站项目中给你一些帮助。继续阅读,并祝你编码愉快。
* * *
_这篇文章最初发布在[作者的网站][11]上并经授权转载。_
这篇文章最初发布在 [作者的网站][11] 上,并经授权转载。
--------------------------------------------------------------------------------
@ -112,7 +104,7 @@ via: https://opensource.com/article/21/11/jekyll-config-files
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,77 @@
[#]: subject: "Use the Linux cowsay command for a colorful holiday greeting"
[#]: via: "https://opensource.com/article/21/11/linux-cowsay"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lujun9972"
[#]: translator: "unigeorge"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13977-1.html"
使用 Linux cowsay 命令制作丰富多彩的节日问候
======
> 用这个有趣的 Linux 命令行工具来庆祝节日吧。
![](https://img.linux.net.cn/data/attachment/album/202111/12/101540nq1nut3gzkzz1qus.jpg)
你可能听说过这样一个小程序:它能接受输入信息(比如你通过键盘输入的消息),并输出一张引用了输入消息的牛的图像。这个小程序被称为 `cowsay`,之前我们已经 [介绍][2] 过了。
所以,为了搞点有趣的事,我想用它来庆祝 <ruby>亡灵节<rt>Día de los Muertos</rt></ruby>LCTT 译注:墨西哥传统的鬼节,著名动画电影《<ruby>寻梦环游记<rt>Coco</rt></ruby>》即以此为背景)。
除了牛之外,其实还有一些其他的可用图像。当安装 `cowsay` 时,程序会自动安装其他几个图像,并存储在 `/usr/share/cowsay` 目录中。你可以用 `-l` 参数来获取图像列表。
```
$ sudo dnf install cowsay
$ cowsay -l
```
实际上,围绕 `cowsay` 及类似程序还有很多开发活动。你可以创建自己的图像文件也可以下载其他人制作的图像。例如GitHub 上就有 [Charc0al 的 cowsay 文件转换器][3],你可以用这一工具将自己的图片转换为 `cowsay` 所需的特殊 ASCII 格式文件。取决于你的 Linux 或 FreeBSD 终端设置,颜色支持可能已经启用,这样 `cowsay` 也能显示彩色图像。Charc0al 的转换器也提供了许多现成的彩色文件。
我选择了“<ruby>甲壳虫汁<rt>Beetlejuice</rt></ruby>LCTT 译注:同名美国奇幻喜剧电影中的主角大法师)文件来开展我的“庆祝活动”。首先,我将 [beetlejuice.cow][4] 文件保存到了 `/usr/share/cowsay` 目录。这个目录权限属于 root 用户,你可以先将该文件保存到家目录,然后再复制过去。此外我们还需要将该文件的读取权限赋予所有用户。
```
$ sudo cp beetlejuice.cow /usr/share/cowsay
$ sudo chmod o+r /usr/share/cowsay/beetlejuice.cow
```
看一下图像文件是如何生成的(过程很有趣):首先把各种 ASCII 颜色控制代码设置为变量,然后用这些变量,以传统的 ASCII 艺术风格绘制图像。生成的图像几乎是全身像,不滚动屏幕就超出了我的终端高度,所以我编辑了一下该文件,删除了最后 15 行以降低高度。
这个图像也可以被 `cowsay` 程序检测到,并展示在列表中。
```
$ cowsay -l
Cow files in /usr/share/cowsay:
beavis.zen beetlejuice blowfish bud-frogs bunny cheese cower default dragon
...
```
现在,只要运行程序,并使用 `-f` 选项指定该图像就可以了。别忘了提供要输出的信息。
```
$ cowsay -f beetlejuice "Happy Day of the Dead!"
```
![ASCII display of Beetlejuice via cowsay][5]
*“甲壳虫汁”祝你亡灵节快乐 (CC BY-SA 4.0)*
`cowsay` 是 Linux 中一个有趣的搞怪小玩意。发挥你的创意,探索一下 `cowsay` 以及 ASCII 的艺术吧。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/linux-cowsay
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[unigeorge](https://github.com/unigeorge)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/drew-hays-unsplash.jpg?itok=uBrvJkTW (Pumpkins painted for Day of the Dead)
[2]: https://opensource.com/article/18/12/linux-toy-cowsay
[3]: https://charc0al.github.io/cowsay-files/converter/
[4]: https://raw.githubusercontent.com/charc0al/cowsay-files/master/cows/beetlejuice.cow
[5]: https://opensource.com/sites/default/files/cowsay_beetlejuice.png

View File

@ -0,0 +1,86 @@
[#]: subject: "4 ways to edit photos on the Linux command line"
[#]: via: "https://opensource.com/article/21/11/edit-photos-linux-command-line"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13964-1.html"
在 Linux 命令行上编辑照片的 4 种方法
======
![](https://img.linux.net.cn/data/attachment/album/202111/08/114427mq12hqvqiixv1j1b.jpg)
> 这里有一些我最喜欢的 ImageMagick 技巧,以及如何在没有 GUI 的情况下使用它们。
Linux 对摄影师和图形艺术家很有用。它提供了许多工具来编辑包括照片在内的不同类型的图像文件和格式。这表明你甚至不需要一个图形界面来处理你的照片。这里有四种你可以在命令行中编辑图像的方法。
### 给你的图片应用效果
几年前Seth Kenlon 写过一篇文章,[4 个有趣的半无用的 Linux 玩具][2],其中包括对 ImageMagick 编辑工具套件的介绍。在 2021 年的今天ImageMagick 甚至更有意义。
这篇文章让我们了解了 Fred 的 ImageMagick 脚本这些脚本真的很有用。Fred Weinhaus 维护着 200 多个脚本用于对你的图像文件应用各种效果。Seth 向我们展示了 Fred 的 `vintage3` 脚本的一个例子,该脚本使图像变得怀旧。
### 创建照片拼贴画
今年Jim Hall 用他的文章 [从 Linux 命令行创建照片拼贴][3] 向我们展示了如何从照片中创建拼贴画。
拼贴画在小册子和手册中使用得很多。它们是一种在一张照片中展示几张图片的有趣方式。可以应用效果来将它们进一步融合在一起。事实上,我以他的文章为指导,创造了上面的图片拼贴。这是我小时候的样子。以下是我使用的命令:
```
$ montage Screenshot-20211021114012.png \
Screenshot-20211021114220.png \
Screenshot-20211021114257.png \
Screenshot-20211021114530.png \
Screenshot-20211021114639.png \
Screenshot-20211021120156.png \
-tile 3x2 -background black \
screenshot-montage.png
```
### 调整图像大小
Jim 发表了另一篇文章,[从 Linux 终端调整图像的大小][4]。这个教程演示了如何使用 ImageMagick 改变一个图像文件的尺寸,并将其保存为一个新的文件。例如,上面的 `montage` 命令所产生的拼贴画并不符合要求的尺寸。学会调整尺寸后,我就能调整它的宽度和高度,使其可以被收录进文章。这是我用来调整这张图片大小的命令。
![Montage of Alan as a Kid][1]
```
$ convert screenshot-montage.png -resize 520x292\! alanfd-kid-montage.png
```
### 自动化图像处理
最近,我决定自己看一下 ImageMagick 套件。这一次,我把它的工具组合成一个 Bash 脚本。文章的题目是 [用这个 bash 脚本自动处理图像][5]。这个例子是一个简单的脚本,可以自动为我的文章制作图片。它是根据 Opensource.com 上的要求定制的。如果你想使用这个脚本,我在文章中提供了一个 Git 仓库连接。它很容易修改和扩展,可以满足任何人的需要。
### 总结
我希望你喜欢这些文章并在你的艺术创作中使用 Linux。如果你想看看更多的 Linux 图像软件,可以看看 Fedora [Design Suite][6] Spin。它是一个完整的操作系统包括许多不同的开源多媒体制作和发布工具例如
* GIMP
* Inkscape
* Blender
* Darktable
* Krita
* Scribus
* 等等
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/edit-photos-linux-command-line
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/alanfd-kid-montage.png?itok=r1kgXpLc (Montage of Alan as a Kid)
[2]: https://opensource.com/life/16/6/fun-and-semi-useless-toys-linux
[3]: https://opensource.com/article/21/9/photo-montage-imagemagick
[4]: https://opensource.com/article/21/9/resize-image-linux
[5]: https://opensource.com/article/21/10/image-processing-bash-script
[6]: https://labs.fedoraproject.org/en/design-suite/

View File

@ -0,0 +1,103 @@
[#]: subject: "Motrix: A Beautiful Cross-Platform Open-Source Download Manager"
[#]: via: "https://itsfoss.com/motrix/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13984-1.html"
Motrix一个漂亮的跨平台开源下载管理器
======
> 一个开源的下载管理器,提供了一个简洁的用户界面,同时提供了跨平台操作的所有基本功能。在这里了解关于它的更多信息。
![](https://img.linux.net.cn/data/attachment/album/202111/14/114909gwv5jbe0055tc6b6.jpg)
Linux 下有大量的下载管理器。如果你想下载一些东西并可以管理它们,你可以选择任何一个可用的下载管理器。
然而,如果你想要一个好看的下载管理器,提供现代的用户体验,同时又不影响功能设置,我有个软件你可能会喜欢。
### 看看 Motrix一个功能丰富的开源下载管理器
![][1]
Motrix 是一个不折不扣的下载管理器,开箱即用,外观简洁。它是自由开源软件。
你可以选择在 Linux、Windows 和 macOS 中使用。
它也可以成为一些 [Linux 中的 torrent 客户端][2] 的潜在替代品。
让我强调一些关键的功能以及安装说明。
### Motrix 的特点
![][3]
你应该能找到所有你通常在下载管理器中期待的功能。下面是它们的列表:
* 跨平台支持
* 易于使用的界面
* BitTorrent 的选择性下载
* 自动更新 tracker 列表
* UPnP 及 NAT-PMP 端口映射
* 多个下载任务(最多 10 个)
* 在一个任务中最多支持 64 个线程
* 能够设置速度限制
* 可选择改变用户代理
* 支持系统托盘
* 黑暗模式
* 支持多国语言
![][4]
总的来说,它在处理 torrent 文件时工作得很好,也能从剪贴板上检测到下载链接。在下载文件之前可以直接访问高级选项,所以这应该是很方便的。
![][5]
在我短暂的测试中,我在 Ubuntu 上以 Snap 包使用它时没有发现任何问题。
### 在 Linux 中安装 Motrix
你有多种安装 Motrix 的选项。因此,你应该能够在你选择的任何 Linux 发行版上安装它。
它主要提供了一个 AppImage 供下载。但是,它还有 [Flatpak 包][6],并可在 [Snap 商店][7]找到它。
如果你使用的是 Ubuntu你可以通过软件中心找到它。
除了这些,它也在 [AUR][8] 中提供给 Arch Linux 用户。在任何一种情况下,你都可以从他们的 [GitHub 发布栏][9] 获得 DEB/RPM 包。
你可以在他们的[官方网站][10]和 [GitHub 页面][11]上找到下载链接和更多安装的信息。
- [Motrix][10]
### 总结
Motrix 提供了所有你想要的下载管理器中的好东西,额外还有一个现代的用户体验。
我建议你试试把它作为你的下载管理器,看看它是否能取代你目前的工具。我很想知道你的 Linux 系统上常用的下载管理器。请在下面的评论中告诉我更多关于它的信息。
--------------------------------------------------------------------------------
via: https://itsfoss.com/motrix/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-download-manager.png?resize=800%2C604&ssl=1
[2]: https://itsfoss.com/best-torrent-ubuntu/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-dm-setting.png?resize=800%2C607&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-dm-white.png?resize=800%2C613&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-dm-options.png?resize=800%2C596&ssl=1
[6]: https://itsfoss.com/what-is-flatpak/
[7]: https://itsfoss.com/enable-snap-support-linux-mint/
[8]: https://itsfoss.com/aur-arch-linux/
[9]: https://github.com/agalwood/Motrix/releases
[10]: https://motrix.app/
[11]: https://github.com/agalwood/Motrix

View File

@ -0,0 +1,59 @@
[#]: subject: "4 tips to becoming a technical writer with open source contributions"
[#]: via: "https://opensource.com/article/21/11/technical-writing-open-source"
[#]: author: "Ashley Hardin https://opensource.com/users/ashleyhardin"
[#]: collector: "lujun9972"
[#]: translator: "yingmanwumen"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13974-1.html"
成为为开源做贡献的技术写手的四个建议
======
> 你的开源贡献将会向潜在雇主表明,你会主动寻求学习、成长和挑战自我的机会。
![](https://img.linux.net.cn/data/attachment/album/202111/11/100737uebjijhwz0l4zhoo.jpg)
不管你是一个对技术写作有所涉足的技术爱好者,还是一个想要转职为职业技术写手的成熟技术专家,你都可以构建你的技术写作作品集,并将你的开源贡献作为作品集的一部分。为开源项目写作是一件有趣、灵活且低风险的事情。按照自己的时间安排来为你感兴趣的项目做贡献,你将会为社区是多么热情或你对社区产生影响的速度而感到惊喜。
你的开源贡献将会向潜在雇主表明,你会主动寻求学习、成长和挑战自我的机会。和任何事情一样,你需要从某个地方开始。为开源项目做贡献可以让你在展示你的才华的同时学到新技巧、新技术。另外,为开源项目写作能让你接触到新的社区、跨越时区与新鲜面孔合作,并建立你的社交网络。当你挖掘到新的开源机会后,你的简历将更抢眼,让你在其他候选人中脱颖而出。以下是为开源做出贡献的四个建议,这可以让你走向技术写作的职业生涯。
### 学习行业工具
作为开始,我建议先熟悉 [Git][2],并建立 [GitLab][3] 和 [GitHub][4] 帐号,然后寻找一个趁手的文本编辑器。我个人喜欢使用开源工具 [Atom][5]。关于 Git它能从网络上获取到丰富的免费学习资源包括一些优秀的互动教程。你不需要成为一个 Git 高手才能深入开源世界。我建议先学习一些基本操作,然后让你的技能随着你的贡献逐渐成长。
### 找到一个项目
为开源做贡献最难的部分大概是找到一个项目来做贡献。你可以查看 [Up For Grabs][6] 并找一些感兴趣的项目。[First Timers Only][7] 有更多的起步资源。别犹豫,联系项目维护者来了解更多有关于项目的东西,并了解他们在何处需要帮助。请坚持下去。找到一个适合你的项目可能会花费一些时间。
### 告别“冒充者综合症”
一个常见的误区是:你必须是程序员才能为开源项目做贡献。作为一个没有工程或计算机科学相关学位、自学成才的贡献者,我可以保证事实并非如此。文档往往是开发项目中最有价值却最被忽视的部分。这些项目经常缺少人手和资源来建立完善的、高质量的文档。这给了你一个绝佳的机会:为该项目提交拉取请求或报告议题。你可以做到的!
LCTT 译注:<ruby>冒充者综合症<rt>Impostor Syndrome</rt></ruby>,又称自我能力否定倾向,指个体按照客观标准评价为已经获得了成功或取得成就,但是其本人却认为这是不可能的,他们没有能力取得成功,感觉是在欺骗他人,并且害怕被他人发现此欺骗行为的一种现象。)
### 从小处开始
查看你感兴趣的项目的仓库,找到可能存在的贡献指南并遵循。然后,寻找更新 README 文档或提交修改错别字的机会。没有什么贡献是微不足道的。项目维护者可能会为这些帮助感到高兴,而你也将会因把你提交的第一个的拉取请求收录进你的技术写作作品集而感到愉悦。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/technical-writing-open-source
作者:[Ashley Hardin][a]
选题:[lujun9972][b]
译者:[yingmanwumen](https://github.com/yingmanwumen)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ashleyhardin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E (A person writing.)
[2]: https://git-scm.com/
[3]: https://about.gitlab.com/
[4]: https://github.com/
[5]: https://atom.io/
[6]: https://up-for-grabs.net/#/
[7]: https://www.firsttimersonly.com/

View File

@ -0,0 +1,63 @@
[#]: subject: "Google to Pay up to $50,337 for Exploiting Linux Kernel Bugs"
[#]: via: "https://news.itsfoss.com/google-linux-kernel-bounty/"
[#]: author: "Rishabh Moharir https://news.itsfoss.com/author/rishabh/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13956-1.html"
多达 5 万美元,谷歌将奖励利用 Linux 内核提权的安全专家
======
> 成功利用内核漏洞以实现提权的安全研究人员将获得 31,337 美元至 50,337 美元的奖金。
![](https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/google-linux-kernel-bounty-ft.jpg?w=1200&ssl=1)
谷歌的平台大量使用了 Linux尤其是在安卓及其庞大的服务器方面。多年来谷歌一直青睐开源项目和计划。
最近,这家科技巨头赞助了 100 万美元,用于资助 Linux 基金会开展的一个以安全为重点的开源项目,更多细节参见我们 [原来的报道][1]。
而现在,谷歌将在未来三个月内将赏金奖励增加两倍,以奖励那些致力于寻找有助于实现提权(即,当攻击者利用一个错误/缺陷获得管理员权限)的内核漏洞的安全研究人员。
毫无疑问,总会有某种形式的错误和缺陷困扰着内核的安全和开发。幸运的是,来自各个组织和个人的数百名安全研究人员致力于改善其安全状态,这就是为什么这些漏洞不一定会在野外被利用。
谷歌在奖励安全研究人员方面有着良好的记录,但它在接下来的三个月里加大了力度,宣布了 **31,337 美元** 的基本奖励,最高可达 **50,337 美元**。
### 计划细节和奖励
这些漏洞利用可以针对目前已修补的漏洞和未修补的新漏洞,以及采用新的技术。
**31,337 美元** 的基本奖励用于利用已公开了补丁的漏洞进行提权的技术。如果发现未修补的漏洞或新的利用技术,奖励可高达 **50,337 美元**
此外,该计划还可以与 Android VRP 和“补丁奖励”计划一起使用。这意味着,如果该漏洞在安卓系统上发挥作用,除了这个计划之外,你还可以获得高达 25 万美元的奖励。
如果你希望了解更多关于安卓系统的信息,你可以在他们的 [官方门户网站][2] 上了解。
增加的奖励将在未来三个月内开放,也就是说,直到 2022 年 1 月 31 日。
安全研究人员可以通过他们的 [官方博文][3] 来设置实验室环境,并在他们的 [GitHub 官方网页][4] 上阅读更多关于要求的内容。
### 总结
这项计划是谷歌的一项出色的举措。毫无疑问,它将吸引并惠及许多安全专家和研究人员。
不要忘记Linux 内核的安全状况将最终受益。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/google-linux-kernel-bounty/
作者:[Rishabh Moharir][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/rishabh/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/google-sos-sponsor/
[2]: https://bughunters.google.com/about/rules/6171833274204160
[3]: https://security.googleblog.com/2021/11/trick-treat-paying-leets-and-sweets-for.html
[4]: https://google.github.io/kctf/vrp

View File

@ -0,0 +1,185 @@
[#]: subject: "Turn any website into a Linux desktop app with open source tools"
[#]: via: "https://opensource.com/article/21/11/linux-apps-nativefier"
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13975-1.html"
用开源工具将任何网站变成 Linux 桌面应用
======
> 使用 Nativefier 和 Electron 从任何网站创建桌面应用。
![](https://img.linux.net.cn/data/attachment/album/202111/11/115302e25o5laz8sex5ea6.jpg)
Mastodon 是一个很好的开源、去中心化的社交网络。我每天都在使用 Mastodon通过它的网页界面使用 Mastodon 可能是最常见的方式(尽管因为开源,它有许多不同的交互方式,包括基于终端的应用和移动应用),但我更喜欢专门的应用窗口。
最近,我发现了 [Nativefier][2],现在我可以在我的 Linux 桌面上把 Mastodon 或其他任何网页应用作为桌面应用来使用。Nativefier 将一个 URL 用 Electron 框架包装起来,它将开源的 Chromium 浏览器作为后端但使用自己的可执行程序运行。Nativefier 采用 MIT 许可证,可用于 Linux、Windows 和 MacOS。
### 安装 Nativefier
Nativefier 需要 Node.js。
安装 Nativefier 只需运行:
```
$ sudo npm install -g nativefier
```
在我的 Ubuntu 桌面上,我必须先升级 Node.js所以当你安装 Nativefier 时,一定要检查需要哪个 Node 版本。
安装完毕后,你可以检查你的 Nativefier 的版本,以验证它是否已经安装:
```
$ nativefier --version
45.0.4
```
运行 `nativefier --help` 列出了应用支持的所有选项。
### 设置
我建议你在开始用 Nativefier 创建应用之前,创建一个名为 `~/NativeApps` 的新文件夹。这有助于保持你的应用有序。
```
$ mkdir ~/NativeApps
$ cd ~/NativeApps
```
### 为 Mastodon 创建一个应用程序
我将首先为 [mastodon.technology][3] 创建一个应用。
使用以下命令:
```
$ nativefier --name Mastodon \
--platform linux --arch x64 \
--width 1024 --height 768 \
--tray --disable-dev-tools \
--single-instance https://mastodon.technology
```
这个例子中的选项做了以下工作:
* `--name`:设置应用的名称为 Mastodon
* `--platform`:设置应用程序的平台为 Linux
* `--arch x64`:设置架构为 x64
* `--width 1024 --height 768`:设置应用启动时的大小
* `--tray`:为应用创建一个托盘图标
* `--disable-dev-tools`:禁用 Chrome 开发工具
* `--single-instance`:只允许应用有一个实例
运行这条命令会显示以下输出:
```
Preparing Electron app...
Converting icons...
Packaging... This will take a few seconds, maybe minutes if the requested Electron isn't cached yet...
Packaging app for platform linux x64 using electron v13.4.0 Finalizing build...
App built to /home/tux/NativeApps/Mastodon-linux-x64, move to wherever it makes sense for you and run the contained executable file (prefixing with ./ if necessary)
Menu/desktop shortcuts are up to you, because Nativefier cannot know where you're going to move the app. Search for "linux .desktop file" for help, or see https://wiki.archlinux.org/index.php/Desktop_entries
```
输出显示,文件被放置在 `/home/tux/NativeApps/Mastodon-linux-x64`。当你 `cd` 进入这个文件夹,你会看到一个名为 `Mastodon` 的文件。这是启动该应用的主要可执行文件。在你启动它之前,你必须给它适当的权限。
```
$ cd Mastodon-linux-x64
$ chmod +x Mastodon
```
现在,执行 `./Mastodon` 就可以看到你的 Linux 应用启动了!
![Mastodon app launched][4]
### 为我的博客创建一个应用
为了好玩,我也要为我的博客创建一个应用。如果没有 Linux 应用,拥有一个技术博客有什么用?
![Ayush Sharma blog][6]
命令是:
```
$ nativefier -n ayushsharma \
-p linux -a x64 \
--width 1024 --height 768 \
--tray --disable-dev-tools \
--single-instance https://ayushsharma.in
$ cd ayushsharma-linux-x64
$ chmod +x ayushsharma
```
### 为 findmymastodon.com 创建一个应用
最后,这是为我的宠物项目 [findmymastodon.com][7] 制作的应用。
![Find my mastodon website][8]
命令是:
```
$ nativefier -n findmymastodon \
-p linux -a x64 \
--width 1024 --height 768 \
--tray --disable-dev-tools \
--single-instance https://findmymastodon.com
$ cd findmymastodon-linux-x64
$ chmod +x findmymastodon
```
### 创建 Linux 桌面图标
应用已经创建并可以执行了,现在是创建桌面图标的时候了。
作为示范,以下是如何为 Mastodon 启动器创建一个桌面图标。首先,下载一个 [Mastodon][9] 的图标。将该图标放在其 Nativefier 应用目录下,名为 `icon.png`
然后创建一个名为 `Mastodon.desktop` 的文件并输入以下文本:
```
[Desktop Entry]
Type=Application
Name=Mastodon
Path=/home/tux/NativeApps/Mastodon-linux-x64
Exec=/home/tux/NativeApps/Mastodon-linux-x64/Mastodon
Icon=/home/tux/NativeApps/Mastodon-linux-x64/icon.png
```
你可以把 `.desktop` 文件移到你的 Linux 桌面上,把它作为一个桌面启动器。你也可以把它复制到 `~/.local/share/applications` 中,这样它就会出现在你的应用菜单或活动启动器中。
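例如(一个示意命令,路径以你的实际环境为准):

```
$ cp Mastodon.desktop ~/.local/share/applications/
```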
### 总结
我喜欢为我经常使用的工具配备专门的应用。我最喜欢这个 Mastodon 应用的一点是:登录过 Mastodon 之后,就不必再次登录了!因为 Nativefier 在底层运行的是 Chromium它能像其他浏览器一样记住你的会话。我想特别感谢 Nativefier 团队,他们让 Linux 桌面离完美更近了一步。
本文最初发表在 [作者的网站][10] 上,并经授权转载。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/linux-apps-nativefier
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayushsharma
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png?itok=lcf-m6N7 (Text editor on a browser, in blue)
[2]: https://github.com/nativefier/nativefier
[3]: https://mastodon.technology/
[4]: https://opensource.com/sites/default/files/uploads/2_launch-mastodon-app.png (Mastodon app launched)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/3_ayush-shama-blog.png (Ayush Sharma blog)
[7]: https://findmymastodon.com/
[8]: https://opensource.com/sites/default/files/uploads/4_find-my-mastodon-app.png (Find my mastodon website)
[9]: https://icons8.com/icons/set/mastodon
[10]: https://ayushsharma.in/2021/10/make-linux-apps-for-notion-mastodon-webapps-using-nativefier

View File

@ -0,0 +1,82 @@
[#]: subject: "After Moving From FreeBSD to Void Linux, Project Trident Finally Discontinues"
[#]: via: "https://news.itsfoss.com/project-trident-discontinues/"
[#]: author: "John Paul https://news.itsfoss.com/author/john/"
[#]: collector: "lujun9972"
[#]: translator: "zd200572"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13960-1.html"
从 FreeBSD 转到 Void Linux 后Trident 项目终于结束了
======
> Trident 项目为我们提供了与操作系统无关的 Lumina 桌面。
![](https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/project-trident-discontinues.png?w=1200&ssl=1)
令人遗憾,[Trident 项目][1] 团队宣布将结束他们的 Linux 发行版的开发。
### 那段故事
你或许没有听说过 Trident 项目,让我来讲一点关于它的回忆。那是在 2005 年Kris Moore 推出了 [PC-BSD][2],提供了一种用桌面界面来设置 FreeBSD 的简单方法。次年,它被 [iXsystems][3] 收购。十年后2016 年 9 月,这个项目被改名为 TrueOS并变成了基于 FreeBSD Current 分支的滚动发行版。两年后TrueOS [宣布][4] 他们将取消其操作系统的桌面版本,而专注于商业和服务器市场。其桌面元素被 [剥离][5] 到一个新项目Trident。
有一段时间Trident 开发团队尽力在 FreeBSD 之上打造良好的桌面体验。可是,由于 [FreeBSD 的问题][6],包括 “硬件兼容性、通信标准,或软件包的可用性一直限制着 Trident 项目的用户”,他们决定将其建立在其他基础之上。他们的解决方案是在 2019 年将其项目重新构建在 [Void Linux][7] 之上。有那么一段时间,看起来 Trident 项目似乎有了未来。然后2020 年来了。
![Trident 桌面][8]
### 项目的终止
10 月 29 号Trident 项目团队发布了以下 [公告][9]
> 我们非常悲伤地宣布Trident 项目将从 2021 年 11 月 1 号起进入“夕阳”阶段,并将于 2022 年 3 月关掉商店。项目的核心团队共同做出了这个决定。随着过去两年中,生活、工件和家庭等方面的事情和变故;我们个人的优先事项也发生了改变。
>
> 我们将保持 Trident 项目的软件包存储库和网站的运行,直到 2022 年 3 月 1 日的终止期,但是我们强烈推荐用户在即将到来的新年假期中开始寻找其他桌面系统替代。
>
> 感谢大家的支持和鼓励!过去几年中,该项目得以良好运转,我们也非常高兴在这些年里结识了你们中的许多人。
### Lumina 项目继续
贯穿 PC-BSD/TrueOS/Trident 项目传奇故事的一个永恒主题是桌面环境。2012 年,[Ken Moore][10]Kris 的弟弟)开始开发一个基于 Qt 的桌面环境 [Lumina][11]。2014 年,它成为 PC-BSD 的默认桌面环境,并一直保持到 Trident 项目出现。Lumina 不同于其他桌面环境,因为它的设计与操作系统无关。其他桌面系统像 KDE 和 GNOME 都具有 Linux 特定代码,这使得它们难以移植到 BSD。
![Lumina 桌面环境][15]
今年 6 月Ken 把 [Lumina 的领导权][12] 交给了 Trident 的开发者 [JT Pennington][13](也因 [BSDNow][14] 知名)。
[公告][12] 中说:
> 经过长达 7 年的工作,我决定是时候让其他人接手 Lumina 桌面项目的开发了。这是个难以置信的任务,推动我进入之前从未考虑过的开发领域。可是,由于工作和生活的变化,我几乎没有为 Lumina 开发新功能的时间了,特别是即将在明年或者晚些时候到来的 Qt5->Qt6 升级。通过把火炬传递给 JT GitHub 昵称是 q5sys我希望这个项目能获得更及时的更新以造福每个人。
>
> 感谢大家,我希望 Lumina 桌面项目能继续成功!!
### 总结
我一直对 Trident 项目抱有很高的期望。与我们介绍过的许多发行版相比,它很小巧,也不是只多加一两个新工具的 Arch 或 Ubuntu 翻版。不仅如此,他们还努力改进了一个与他们理念相同的发行版 Void Linux。可是,生活总会发生变故,即使是我们中最好的人也难以避免。我祝愿 Ken、JT 和其他人一切顺利,他们已经在这个项目上花费了很多时间。希望我们未来能看到他们的更多作品。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/project-trident-discontinues/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[zd200572](https://github.com/zd200572)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://project-trident.org/
[2]: https://en.wikipedia.org/wiki/TrueOS
[3]: http://ixsystems.com/
[4]: https://itsfoss.com/trueos-plan-change/
[5]: https://itsfoss.com/project-trident-interview/
[6]: https://project-trident.org/post/os_migration/
[7]: https://voidlinux.org/
[8]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/project-trident.png?w=850&ssl=1
[9]: https://project-trident.org/post/2021-10-29_sunset/
[10]: https://github.com/beanpole135
[11]: https://lumina-desktop.org/
[12]: https://lumina-desktop.org/post/2021-06-23/
[13]: https://github.com/q5sys
[14]: https://www.bsdnow.tv/
[15]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/lumina.png?w=850&ssl=1

View File

@ -0,0 +1,105 @@
[#]: subject: "How to update a Linux symlink"
[#]: via: "https://opensource.com/article/21/11/update-linux-file-system-link"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13981-1.html"
如何更新 Linux 的符号链接
======
> 链接一直是 UNIX 文件系统的一个独特的高级功能。
![](https://img.linux.net.cn/data/attachment/album/202111/13/185626ubb832pphjhlpmly.jpg)
UNIX 和 Linux 用户发现链接有很多用途,特别是符号链接。我喜欢使用符号链接的一种方式是管理各种 IT 设备的配置备份。
我有一个目录结构,用来存放与我网络上的计算机和设备有关的文档、更新和其他文件。这些设备可能包括路由器、接入点、NAS 服务器和笔记本电脑,品牌和型号通常各不相同。配置备份本身可能在目录树的深处,例如 `/home/alan/Documents/network/device/NetgearRL5000/config`。
为了简化备份过程,我在主目录中有一个名为 `Configuration` 的目录。我使用这个目录的符号链接来指向特定的设备目录:
```
:~/Configuration/ $ ls -F1
Router@
Accesspoint@
NAS@
```
**注意**`ls` 命令的 `-F` 选项在每个文件名上附加特殊字符以表示其类型。如上所示,`@` 符号表示这些是链接。
### 创建一个链接
符号链接 `Router` 指向我的 Netgear RL5000 的 `config` 目录。创建它的命令是 `ln -s`
```
$ ln -s /home/alan/Documents/network/device/NetgearRL5000/config Router
```
然后,用 `ls -l` 看一下并确认:
```
:~/Configuration/ $ ls -l
Router -> /home/alan/Documents/network/device/NetgearRL5000/config
NAS -> /home/alan/Documents/network/device/NFSBox/config
...
```
这样做的好处是,当对这个设备进行维护时,我只需进入 `~/Configuration/Router`
如果我决定用一个新的型号替换这个路由器,使用符号链接的第二个好处就很明显了。我可能会把旧的路由器改成一个接入点。因此,它的目录并没有被删除。相反,我有一个新的目录,对应于新的路由器,也许是华硕 DF-3760。我创建这个目录并确认它的存在
```
$ mkdir -p ~/Documents/network/device/ASUSDF-3760/config
```
```
:~/Documents/network/device/ $ ls
NetgearRL5000
ASUSDF-3760
NFSBox
...
```
另一个例子:假设你的办公室里有几个接入点。你可以使用符号链接在逻辑上代表每一个,用一个通用的名字,如 `ap1`、`ap2` 等,或者使用描述性的名字,如 `ap_floor2`、`ap_floor3` 等。这样,当物理设备随时间变化时,你不必持续更新任何管理它们的流程,因为这些流程处理的是链接,而不是实际的设备目录。
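下面是一个示意(其中 `NetgearEX6100` 是假设的设备目录名):

```
$ ln -s ~/Documents/network/device/NetgearEX6100/config ap_floor2
```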
### 更新一个链接
由于我的主路由器已经改变,我想让路由器的符号链接指向它的目录。我可以使用 `rm``ln` 命令来删除和创建一个新的符号链接,但是有一种方法可以只用 `ln` 命令和几个选项就可以一步完成:
```
:~/Configuration/ $ ln -vfns ~/Documents/network/device/ASUSDF-3760/config/ Router
'Router' -> '/home/alan/Documents/network/device/ASUSDF-3760/config/'
:~/Configuration/ $ ls -l
Router -> /home/alan/Documents/network/device/ASUSDF-3760/config
NAS -> /home/alan/Documents/network/device/NFSBox/config
```
根据手册页,这些选项如下:
- `-v`、`--verbose`:打印每个链接文件的名称
- `-f`、`--force`:删除目标文件(有必要,因为已经存在一个链接)
- `-n`、`--no-dereference`:如果链接名是一个目录的符号链接,就把它当作一个正常的文件
- `-s`、`--symbolic`:制作符号链接而不是硬链接
### 总结
链接是 UNIX 和 Linux 文件系统中最强大的功能之一。其他操作系统也曾试图模仿这种能力,但由于它们的文件系统缺乏基本的链接设计,这些系统从来没有工作得那么好,也没有那么可用。
上面的演示只是利用链接在生活生产环境中无缝浏览不断变化的目录结构的众多可能性中的一种。链接提供了一个永远不会长期静态的组织所需的灵活性。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/update-linux-file-system-link
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/links.png?itok=enaPOi4L (Links)

View File

@ -0,0 +1,103 @@
[#]: subject: "What is the Release Schedule for Linux Kernel? How Long a Linux Kernel is Supported?"
[#]: via: "https://itsfoss.com/linux-kernel-release-support/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13963-1.html"
Linux 内核的发布时间表是什么?它的支持时间是多久?
======
![](https://img.linux.net.cn/data/attachment/album/202111/08/104610egbqhs8lbldgd6ad.png)
Linux 内核很复杂。我说的甚至不是代码的问题。
Linux 内核的代码本身很复杂,但你不需要为这个问题而烦恼。我说的是 Linux 内核的发布时间表。
一年内多久发布一个新的内核版本?内核被支持多长时间?还有一些 LTS长期支持内核LTS Linux 内核的支持时间有多长?
问题是,虽然这些问题看起来很简单,但答案却不简单。
这些问题没有一个直接明了的答案,需要做一些解释,这就是我在这篇文章中要做的。
### Linux 内核发布时间表:有吗?
![][1]
短的回答是,每两到三个月就有一个新的内核版本发布。长的回答是,这不是一个硬性规定。
这个意思是,你经常会看到每两到三个月就有一个新的内核版本发布。这是内核维护者团队的目标,但并没有规定新版本必须在前一个版本的 8 周后准时发布的期限。
新的内核版本(通常)是由 Linus Torvalds 在它准备好的时候发布的。通常是每 2 到 3 个月发布一次。该版本被宣布为“稳定”,一般以 X.Y 的格式编号。
但这并不是 X.Y 开发的结束。稳定版会有更多的小版本以进行错误的修复。这些小版本在稳定版的内核上又增加了一个点,就像是 X.Y.Z。
虽然 X.Y 版本(通常)是由 Linux 创造者 Linus Torvalds 发布的,但维护稳定的 X.Y 内核、合并错误修复和发布 X.Y.Z 版本,则由另外的内核开发者负责。
### 一个内核版本支持多长时间?
![][2]
和发布一样,一个内核版本支持多长时间也没有固定的日期和时间表。
一个普通的稳定内核版本通常会被支持两个半月到三个月,这取决于下一个稳定内核版本的发布时间。
例如,稳定版内核 5.14 会在稳定版内核 5.15 发布后的几周内达到 [生命末期][3]。结束支持是由该稳定内核版本的维护者在 Linux 内核邮件列表中宣布的。用户和贡献者会被要求切换到新发布的稳定版本。
但这只适用于正常的稳定内核版本,还有 LTS长期支持内核版本它们的支持期要比 3 个月长得多。
### LTS 内核:它支持多长时间?
LTS 内核也没有固定的发布时间表。通常,每年都有一个 LTS 内核版本,一般是当年的最后一个版本,它至少会被支持两年。但同样,这里也没有固定的规则。
LTS 内核的维护者可以同意某个 LTS 内核的维护时间超过通常的两年。这个协议是根据必要性和参与的人员来达成的。
这种情况经常发生在 Android 项目中。由于两年时间对制造商维持其硬件和软件功能的支持来说并不够用,你经常会发现一些 LTS 内核被支持长达六年之久。
![Linux LTS 内核计划支持日期][4]
你可以 [在 Linux 内核网站上][5] 找到这个信息。
### 你的发行版可能没有跟随通常的 Linux 内核版本
如果你检查你的 Linux 内核版本,你可能会发现 [你的发行版使用了一个旧的内核][6]。也有可能该发行版提供的内核已经在内核网站上被标记为到达了生命末期。
不要惊慌。你的发行版会负责修补内核的错误和漏洞。除非你真的在使用一个不知名的 Linux 发行版,否则你可以相信你的发行版会保持它的安全和健全。
如果你有足够的理由,比如为了支持更新的硬件,你可以自由地在你使用的任何发行版或 [Ubuntu 中安装最新的 Linux 内核][7] 。
如果你想了解更多细节,我已经 [在这里解释了为什么你的发行版使用过时的 Linux 内核][6]。
![][8]
### 没有直接明了的答案
正如你所看到的,对于 Linux 内核发布时间表的问题,没有直接明了的答案。一切都是暂定的。
在我看来,好的方面是,如果你使用一个常规的 Linux 发行版,你不需要为 Linux 内核版本的发布或终止而烦恼。那是由你的发行版处理的事情。
我希望你对 Linux 内核的发布周期有了更多的了解,或者是我把你搞糊涂了。无论是哪种情况,请在评论区告诉我你的观点。
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-kernel-release-support/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/torvalds-kernel-release.webp?resize=800%2C450&ssl=1
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/kernel-release.png?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/end-of-life-ubuntu/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/linux-lts-kernel-end-of-life.png?resize=785%2C302&ssl=1
[5]: https://www.kernel.org/category/releases.html
[6]: https://itsfoss.com/why-distros-use-old-kernel/
[7]: https://itsfoss.com/upgrade-linux-kernel-ubuntu/
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/12/Keep_Calm_and_Trust_Your_Distribution.png?resize=800%2C400&ssl=1

View File

@ -0,0 +1,144 @@
[#]: subject: "How I build command-line apps in JavaScript"
[#]: via: "https://opensource.com/article/21/11/javascript-command-line-apps"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13989-1.html"
如何用 JavaScript 构建命令行应用
======
> 为你的用户提供选项是任何应用的一个重要功能,而 Commander.js 使它变得容易做到。你最喜欢的 JavaScript 命令行构建器是什么?
![](https://img.linux.net.cn/data/attachment/album/202111/16/114501u11upndpphhu2uhh.jpg)
JavaScript 是一种为 Web 开发的语言,但它的用处已经远远超出了互联网的范畴。由于 Node.js 和 Electron 这样的项目JavaScript 既是一种通用的脚本语言,也是一种浏览器组件。有专门设计的 JavaScript 库来构建命令行界面。是的,你可以在你的终端中运行 JavaScript。
现在,当你在终端中输入一个命令时,一般都有 [选项][2],也叫 _开关__标志_,你可以用来修改命令的运行方式。这是由 [POSIX 规范][3] 定义的一个有用的惯例,所以作为一个程序员,知道如何检测和解析这些选项是很有帮助的。要从 JavaScript 获得此功能,使用旨在简化构建命令行界面的库很有用。我最喜欢的是 [Commander.js][4]。它很简单,很灵活,而且很直观。
### 安装 node
要使用 Commander.js 库,你必须安装 Node.js。在 Linux 上,你可以用你的包管理器安装 Node。例如在 Fedora、CentOS、Mageia 和其他系统上:
```
$ sudo dnf install nodejs
```
在 Windows 和 macOS 上,你可以 [从 nodejs.org 网站下载安装程序][5]。
### 安装 Commander.js
要安装 Commander.js请使用 `npm` 命令:
```
$ npm install commander
```
### 在你的 JavaScript 代码中添加一个库
在 JavaScript 中,你可以使用 `require` 关键字在你的代码中包含一个库(如果你习惯了 Python可以把它理解为导入。创建一个名为 `example.js` 的文件,并在你喜欢的文本编辑器中打开它。在顶部添加这一行,以包含 Commander.js 库:
```
const { program } = require('commander');
```
### JavaScript 中的选项解析
要解析选项你必须做的第一件事是定义你的应用可以接受的有效选项。Commander.js 库可以让你定义短选项和长选项,同时还有一个有用的信息来澄清每个选项的目的。
```
program
.description('A sample application to parse options')
.option('-a, --alpha', 'Alpha')
.option('-b, --beta <VALUE>', 'Specify a VALUE', 'Foo');
```
第一个选项,我称之为 `--alpha`(简写 `-a`),是一个布尔型开关:它要么存在,要么不存在。它不需要任何参数。第二个选项,我称之为 `--beta`(简写 `-b`),接受一个参数,甚至在你没有提供任何参数的情况下指定一个默认值。
### 访问命令行数据
当你定义了有效的选项,你就可以使用长的选项名称来引用这些值:
```
program.parse();
const options = program.opts();
console.log('Options detected:');
if (options.alpha) console.log('alpha');
const beta = !options.beta ? 'no' : options.beta;
console.log('beta is: %s', beta);
```
### 运行应用
试着用 `node` 命令来运行它,首先不使用选项:
```
$ node ./example.js
Options detected:
beta is: Foo
```
在用户没有覆盖的情况下,`beta` 的默认值被使用。
再次运行它,这次使用选项:
```
$ node ./example.js --beta hello --alpha
Options detected:
alpha
beta is: hello
```
这次,测试脚本成功检测到了选项 `--alpha`,以及用户提供的 `--beta` 选项的值。
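顺便一提Commander.js 默认还会注册 `-h`/`--help` 选项,打印根据你的定义自动生成的帮助信息(具体输出格式因版本而异):

```
$ node ./example.js --help
```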
### 选项解析
下面是完整的演示代码供你参考:
```
const { program } = require('commander');
program
.description('A sample application to parse options')
.option('-a, --alpha', 'Alpha')
.option('-b, --beta <VALUE>', 'Specify a VALUE', 'Foo');
program.parse();
const options = program.opts();
console.log('Options detected:');
console.log(typeof options);
if (options.alpha) console.log(' * alpha');
const beta = !options.beta ? 'no' : options.beta;
console.log(' * beta is: %s', beta);
```
在该项目的 [Git 仓库][4] 中还有更多例子。
对任何应用来说,包括用户的选项都是一个重要的功能,而 Commander.js 使它很容易做到。除了 Commander.js还有其他库但我觉得这个库使用起来很方便快捷。你最喜欢的 JavaScript 命令行构建器是什么?
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/javascript-command-line-apps
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_javascript.jpg?itok=60evKmGl (Javascript code close-up with neon graphic overlay)
[2]: https://opensource.com/article/21/8/linux-terminal
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[4]: https://github.com/tj/commander.js
[5]: https://nodejs.org/en/download

View File

@ -0,0 +1,108 @@
[#]: subject: "openSUSE Leap vs Tumbleweed: Whats the Difference?"
[#]: via: "https://itsfoss.com/opensuse-leap-vs-tumbleweed/"
[#]: author: "John Paul https://itsfoss.com/author/john/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13966-1.html"
openSUSE Leap 与 Tumbleweed我该选择哪一个
======
![][10]
[openSUSE 是一个非常受欢迎的 Linux 发行版][1],尤其是在企业界。[SUSE][2] 从 1996 年起就以这样或那样的形式出现了。很久以来,他们只有一个分支版本。
然后,在 2015 年他们改变了现状决定提供两个分支Leap 和 Tumbleweed。
如果你是 [openSUSE][3] 的新手,很容易把 Tumbleweed 和 Leap 搞混。最近有位读者要求我们解释这两者之间的异同,这正是我们今天要做的。
### Leap 和 Tumbleweed 之间有什么区别?
两者之间最重要的区别是发布时间表。openSUSE Leap LCTT 译注leap => “飞跃”)每隔几年就会按照固定的时间表发布一个新版本,类似于 Ubuntu 和 Fedora。另一方面Tumbleweed LCTT 译注tumbleweed => “风滚草”)是一个紧密跟随 openSUSE 开发时间表的滚动发布,就像 Arch 或 Void。
![openSUSE Tumbleweed vs Leap][4]
你知道 [滚动发行版][5] 的优势吧?它为你提供了最新的软件版本,你不需要为一个重大版本发布而升级你的系统,因为你的系统会定期得到更新。
所以,在 openSUSE Tumbleweed 中,你会得到更新的桌面环境版本、内核版本等等,你会得到一个最先进的、新鲜的系统。
另一方面openSUSE Leap 坚持使用较早的、LTS 版本的桌面环境和 Linux 内核,给你一个可靠的系统。当然也会有系统和安全的补丁,并且每隔几年会有一个重大版本,为你的系统提供更新的软件和内核。
#### 快速回顾一下 openSUSE 发布模式的变化历史
![OpenSUSELeap 安装程序][6]
从提供一个发行版分支到两个,似乎是一个很大的飞跃,所以让我给你介绍一下历史背景。Tumbleweed 项目是由 [Greg Kroah-Hartman][8] 在 2010 年 11 月宣布的。其目的是创建一个“滚动更新版本的 openSUSE 存储库,包含供人们使用的最新‘稳定’版软件包”。这个项目并不是一个新的发行版,而是对现有 openSUSE 系统的附加部分。
这在 2014 年发生了变化,当时 openSUSE 背后的团队决定将下一个版本基于 SUSE Linux Enterprise ServerSLES开发。他们将这个新版本命名为 “Leap 42”解释一下“42” 这个数字来自《<ruby>银河系漫游指南<rt>Hitchhikers Guide to the Galaxy</rt></ruby>》,其中 “42” 被认为是生命、宇宙和一切的答案LCTT 译注:在瞎飙了版本号之后,它们又回到了 15.x 这种按部就班的版本号)。目前 openSUSE Leap 的版本是 15.2。
随着这一变化Tumbleweed 成为 openSUSE 的官方发行版。有趣的是,根据 openSUSE 2020 年底的 [社区调查][9],越来越多的人选择使用 Tumbleweed。
### 你应该使用 Leap 还是 Tumbleweed
下一个问题是,“如果底层技术基本相同,那么应该使用这两个中的哪一个?”让我为你分析一下。
openSUSE Leap 是稳定的,经过高度测试的。它应该用于较旧的系统和需要长期无问题运行的计算机。这是因为所提供的软件不是最新和最好的,而是最稳定的。因为新的版本每 3 年才会发布一次所以你安排的任何工作流程都是相对安全的。一定要记得备份。Leap 在其整个发布周期中坚持使用同一个 Linux 内核。
![OpenSUSE Leap 桌面][11]
使用 Leap你不会收到最新版本的软件。你也将以较慢的速度获得硬件支持。你将需要每年至少更新一次你的系统以继续获得更新。Leap 就像 Ubuntu LTS 一样。
另一方面openSUSE Tumbleweed 拥有所有软件的最新版本,包括内核、驱动程序和桌面环境。由于它是一个滚动发行版,所以你所使用的版本基本上没有寿命结束的可能。
Tumbleweed 不断接受更新,也会导致一些问题,比如工作流程或工具被破坏,整体上显得不够打磨。如果发生这种情况Tumbleweed 提供了回滚到以前状态的工具来避免这些问题。Tumbleweed 非常紧跟 Linux 内核的发布。
![openSUSE Tumbleweed 桌面][12]
让我为你总结一下,以帮助你做出决定。
如果:
* 稳定性对你来说很重要
* 你是 openSUSE 的新手
* 你的硬件较旧
* 你在运行一个生产服务器
* 如果你正在为一个不懂技术的朋友或家人建立一个系统
那么你应该使用 Leap。
如果:
* 你想尝试最新、最棒的软件
* 你的硬件较新
* 你对 Linux 比较有经验
* 你是一个软件开发者
* 你需要专有的硬件驱动,比如 Nvidia 或 Radeon 显卡,或者 Broadcom 的 Wi-Fi 适配器
* 你想要最新的内核版本
那么你应该使用 Tumbleweed。
我希望能为你解开疑惑。如果你已经在使用 Leap 或 Tumbleweed请在评论区告诉我们你的偏好和建议。
--------------------------------------------------------------------------------
via: https://itsfoss.com/opensuse-leap-vs-tumbleweed/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/why-use-opensuse/
[2]: https://en.wikipedia.org/wiki/SUSE_Linux
[3]: https://www.opensuse.org/
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/opensuse-leap-vs-tumbleweed.webp?resize=800%2C264&ssl=1
[5]: https://itsfoss.com/rolling-release/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/opensuse-leap-installer.png?resize=800%2C600&ssl=1
[7]: https://lists.opensuse.org/archives/list/project@lists.opensuse.org/message/NNRPP2KJ6TJ3QLLYJC2E62JADHT5GWMY/
[8]: https://en.wikipedia.org/wiki/Greg_Kroah-Hartman
[9]: https://en.opensuse.org/End-of-year-surveys/2020/Data#Uses_Tumbleweed_as_Desktop_on_a_regular_basis
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/opensuse-leap-vs-tumbleweed.png?resize=800%2C450&ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/opensuse-leap-deaktop.png?resize=800%2C600&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/opensuse-tumbleweed-deaktop.png?resize=800%2C603&ssl=1

View File

@ -0,0 +1,174 @@
[#]: subject: "How to package your Python code"
[#]: via: "https://opensource.com/article/21/11/packaging-python-setuptools"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13993-1.html"
如何打包你的 Python 代码
======
> 使用 setuptools 来向用户交付 Python 代码。
![](https://img.linux.net.cn/data/attachment/album/202111/17/180249s8s1cnnn18gh3fsk.jpg)
你花了几周的时间来完善你的代码。你已经对它进行了测试,并把它发送给一些亲近的开发者朋友以保证质量。你已经将所有的源代码发布在 [你的个人 Git 服务器][2] 上,并且从一些勇敢的早期使用者收到了一些有用的错误报告。现在你已经准备好将你的 Python 代码提供给全世界。
就在这时你遇到一个问题。你不知道如何交付产品。
将代码交付给它的目标用户是一件大事。这是软件开发的一个完整的分支,是 CI/CD 中的 “D”但很多人都把它忘了或者至少把它留到最后才考虑。我写过关于 [Autotools][3] 和 [Cmake][4] 的文章,但有些语言有自己的方法来帮助你将你的代码提供给用户。对于 Python 来说,向用户提供代码的一个常见方法是使用 `setuptools`。
### 安装 setuptools
安装和更新 `setuptools` 的最简单方法是使用 `pip`
```
$ sudo python -m pip install --upgrade setuptools
```
### 示例库
我创建了一个简单的 Python 库,名为 `myhellolib`,来作为需要打包的示例代码。这个库接受一个字符串,然后用大写字母打印出这个字符串。
它只有两行代码,但项目结构很重要,所以首先创建目录树:
```
$ mkdir -p myhellolib.git/myhellolib
```
为了确认这个项目是一个可导入的库(即 Python “模块”),在代码目录中创建一个空文件 `__init__.py`,同时创建一个包含代码的文件:
```
$ touch myhellolib.git/myhellolib/__init__.py
$ touch myhellolib.git/myhellolib/myhellolib.py
```
`myhellolib.py` 文件中,输入简单的 Python 代码:
```
def greeter(s):
print(s.upper())
```
这就是写好的库。
### 测试它
在打包之前,测试一下你的库。创建一个 `myhellolib.git/test.py` 文件并输入以下代码:
```
import myhellolib.myhellolib as hello
hello.greeter("Hello Opensource.com.")
```
运行该脚本:
```
$ cd myhellolib.git
$ python ./test.py
HELLO OPENSOURCE.COM
```
它可以工作,所以现在你可以把它打包了。
### Setuptools
要用 `setuptools` 打包一个项目,你必须创建一个 TOML 文件,将 `setuptools` 声明为构建系统。把这段文字放在项目目录下名为 `pyproject.toml` 的文件中(`build` 等构建工具只识别这个文件名):
```
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
```
接下来,创建一个名为 `setup.py` 的文件,包含项目的元数据:
```
from setuptools import setup
setup(
name='myhellolib',
version='0.0.1',
packages=['myhellolib'],
install_requires=[
'requests',
'importlib; python_version == "3.8"',
],
)
```
不管你信不信,这就是 `setuptools` 需要的所有设置。你的项目已经可以进行打包。
### 打包 Python
要创建你的 Python 包,你需要一个构建器。一个常见的工具是 `build`,你可以用 `pip` 安装它:
```
$ python -m pip install build --user
```
构建你的项目:
```
$ python -m build
```
过了一会儿,构建完成了,在你的项目文件夹中出现了一个新的目录,叫做 `dist`。这个文件夹包含一个 `.tar.gz` 和一个 `.whl` 文件。
这是你的第一个 Python 包! 下面是包的内容:
```
$ tar --list --file dist/myhellolib-0.0.1.tar.gz
myhellolib-0.0.1/
myhellolib-0.0.1/PKG-INFO
myhellolib-0.0.1/myhellolib/
myhellolib-0.0.1/myhellolib/__init__.py
myhellolib-0.0.1/myhellolib/myhellolib.py
myhellolib-0.0.1/myhellolib.egg-info/
myhellolib-0.0.1/myhellolib.egg-info/PKG-INFO
myhellolib-0.0.1/myhellolib.egg-info/SOURCES.txt
myhellolib-0.0.1/myhellolib.egg-info/dependency_links.txt
myhellolib-0.0.1/myhellolib.egg-info/requires.txt
myhellolib-0.0.1/myhellolib.egg-info/top_level.txt
myhellolib-0.0.1/setup.cfg
myhellolib-0.0.1/setup.py
$ unzip -l dist/myhellolib-0.0.1-py3-none-any.whl
Archive: dist/myhellolib-0.0.1-py3-none-any.whl
Name
----
myhellolib/__init__.py
myhellolib/myhellolib.py
myhellolib-0.0.1.dist-info/METADATA
myhellolib-0.0.1.dist-info/WHEEL
myhellolib-0.0.1.dist-info/top_level.txt
myhellolib-0.0.1.dist-info/RECORD
-------
6 files
```
### 让它可用
现在你知道了打包你的 Python 包是多么容易,你可以使用 Git 钩子、GitLab Web 钩子、Jenkins 或类似的自动化工具来自动完成这个过程。你甚至可以把你的项目上传到 PyPi这个流行的 Python 模块仓库。一旦它在 PyPi 上,用户就可以用 `pip` 来安装它,就像你在这篇文章中安装 `setuptools``build` 一样!
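上传到 PyPi 通常使用 `twine` 工具完成(下面是一个示意,前提是你已经注册了 PyPi 帐号并配置好凭证):

```
$ python -m pip install twine
$ python -m twine upload dist/*
```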
当你坐下来开发一个应用或库时打包并不是你首先想到的事情但它是编程的一个重要方面。Python 开发者在程序员如何向世界提供他们的工作方面花了很多心思,没有比 `setuptools` 更容易的了。试用它,使用它,并继续用 Python 编码!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/packaging-python-setuptools
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_programming_question.png?itok=cOeJW-8r (Python programming language logo with question marks)
[2]: https://opensource.com/life/16/8/how-construct-your-own-git-server-part-6
[3]: https://opensource.com/article/19/7/introduction-gnu-autotools
[4]: https://opensource.com/article/21/5/cmake

View File

@ -0,0 +1,94 @@
[#]: subject: "LibreWolf: An Open-Source Firefox Fork Without the Telemetry"
[#]: via: "https://itsfoss.com/librewolf/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14004-1.html"
自由之狼:一个没有遥测的开源火狐复刻
======
> LibreWolf 是一个火狐浏览器的复刻,它关注于隐私和安全,消除了遥测并增加了其他好处。让我们来了解一下它。
![](https://img.linux.net.cn/data/attachment/album/202111/21/121135e2mmb6ym53hxzlmj.jpg)
<ruby>火狐<rt>Firefox</rt></ruby> 是 Linux 上最好的网页浏览器之一。然而,一些用户并不喜欢其中的遥测机制。
除此之外,有些人更喜欢一个开箱即用的、为最佳隐私和安全而调整过的浏览器,即使火狐浏览器是提供了最好的定制功能的浏览器之一。
如果你不想要火狐浏览器的干扰功能想要一个无需你亲自调整的私密网络体验LibreWolf自由之狼或许就是答案。
### LibreWolf更好的火狐
![][1]
假设你想使用火狐,而不使用火狐的帐户同步的能力和其他一些火狐特有的功能,如 “添加到 Pocket” 按钮。在这种情况下LibreWolf 可以是一个不错的选择。
不同于其他火狐浏览器的复刻(例如,[Basilisk 浏览器][2]),它是定期更新的。而且,它只专注于提供私密的网页浏览体验,而不影响你在火狐中所希望得到的用户体验。
![][3]
### LibreWolf 的特点
LibreWolf 为安全的网页浏览体验提供了一套相当有用的开箱即用的功能。让我强调其中的一些特点:
* 移除遥测功能
* 不使用火狐账户进行云同步
* 私密搜索供应商,如 Searx、QwantDuckDuckGo 被设置为默认)
* 包含 uBlock Origin 以阻止脚本/广告
* 没有 “添加到 Pocket” 按钮
* 主页上默认没有赞助/推荐的内容
* 从设置中移除了火狐的<ruby>摘要消息<rt>snippets</rt></ruby>功能(用于在新标签页中显示新闻/提示)
* 没有赞助的快捷提示
* 追踪保护默认设置为“严格”模式
* Cookies 和历史记录设置为关闭浏览器时删除
* 默认启用 HTTPS-only 模式
正如你所注意到的LibreWolf 的目标是提供一个更清洁和有利于隐私的体验,而不需要调整任何东西。
![][4]
如果你不想反复登录各种网络服务,或者想通过浏览历史回顾自己的浏览活动,那么“退出时清除 Cookies/历史记录”之类的选项可能会让你觉得不方便。
所以,如果你想从火狐切换到 LibreWolf你可能想在决定之前测试一下网页浏览体验。
![][5]
### 在 Linux 中安装 LibreWolf
对于任何 Linux 发行版,你可以使用 AppImage 文件或 Flathub 的 Flatpak 包。
如果你不知道,你可以参考我们关于 [使用 AppImage][6] 的指南和 [Flatpak 的资源][7] 。
对于 Arch Linux 用户,它也可以在 [Arch 用户仓库AUR][8] 中找到。
也可以在他们的 [官方网站][9] 或 [GitLab 页面][10] 中找到其他安装说明。
你试过 LibreWolf 吗?你喜欢用什么作为你的网页浏览器?请在下面的评论中分享你的想法!
--------------------------------------------------------------------------------
via: https://itsfoss.com/librewolf/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/librewolf-about.png?resize=800%2C566&ssl=1
[2]: https://itsfoss.com/basilisk-browser/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/librewolf-firefox.png?resize=800%2C572&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/librewolf-tracking.png?resize=800%2C565&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/librewolf-addon.png?resize=800%2C340&ssl=1
[6]: https://itsfoss.com/use-appimage-linux/
[7]: https://itsfoss.com/flatpak-guide/
[8]: https://itsfoss.com/aur-arch-linux/
[9]: https://librewolf-community.gitlab.io/install/
[10]: https://gitlab.com/librewolf-community
[11]: https://librewolf-community.gitlab.io/

View File

@ -0,0 +1,77 @@
[#]: subject: "Canonical Makes it Easy to Run a Linux VM on Apple M1"
[#]: via: "https://news.itsfoss.com/canonical-multipass-linux-m1/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13978-1.html"
在苹果 M1 上运行 Linux 虚拟机变得容易了
======
![](https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/linux-apple-m1-vm.png?w=1200&ssl=1)
> Canonical 使用户可以借助 Multipass一个免费的虚拟机程序在苹果 M1 上运行 Linux 虚拟机。
自从苹果推出 M1 芯片以来,人们为在其上运行 Linux 做出了许多努力。
尽管这项工作仍在进行中,但 Canonical 似乎已经实现了在苹果 M1 上以虚拟机VM形式运行 LinuxUbuntu
### 苹果 M1 上的 Linux 虚拟机
对大多数开发者来说,启动一个 Linux 虚拟机实例,并继续在他们的系统上工作是很方便的,这样不会中断任何工作。
不幸的是,在 M1 设备上启动和运行 Linux 实例不是一项轻松的任务。
虽然你可以用像 VMware 和 VirtualBox 这样的工具来创建虚拟机,但它们并不能在基于 ARM 的苹果 M1 芯片上工作。
截至目前VMware 正在慢慢增加对其产品的支持,使其能够在苹果 M1 上工作。然而,这仍处于封闭测试阶段,对用户来说并不可行。
而 VirtualBox 还不支持 ARM 平台,也没有这方面的计划。
因此,在 macOS 上运行虚拟机的最佳选择就是 Parallels 或 [UTM][1](免费),跨平台支持的选择相当有限。
此外,要使用 Parallels你需要购买许可证这可能很昂贵。
### Canonical 的 Multipass 1.8 是一个支持 M1 的免费虚拟机程序
[Multipass][2] 是一个免费的虚拟机软件,旨在帮助你在苹果 M1 上创建 Linux 实例,而没有任何麻烦。
Canonical [宣布][3] 发布了他们最新的 Multipass 1.8,终于增加了对苹果 M1 的支持,使其成为唯一可行的选择。它作为一个跨平台的虚拟机软件,可以帮助你运行 Ubuntu Linux。
在公告中Canonical 产品经理 Nathan Hart 提到:
> “Canonical 希望让开发者能比使用市场上的其他方案更快地用上 Linux而 Multipass 团队帮助实现了这一点。”
在增加支持的同时Multipass 1.8 还带来了一些有用的功能,包括:
* 别名功能,可以将虚拟机里的命令映射到宿主机操作系统。换句话说,你可以直接从宿主机上无缝运行虚拟机中的某个程序。
* 统一的跨平台体验,支持 Windows、Linux、macOSIntel/AMD 和 ARM 平台)。
Multipass 应该可以处理好配置问题,让你轻松地在苹果 M1 上创建/维护虚拟机。因此,你不需要任何人工干预,就可以让 Linux 在搭载 M1 的 macOS 机器内工作。
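举例来说,装好 Multipass 之后,在终端里大致可以这样操作(实例名 `dev` 是假设的,`alias` 子命令的具体语法请以官方文档为准):

```
# 启动一个 Ubuntu 虚拟机,命名为 dev
multipass launch --name dev
# 进入该虚拟机的 shell
multipass shell dev
# 把虚拟机里的 python3 映射为宿主机上的 py 命令1.8 新增的别名功能)
multipass alias dev:python3 py
```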
你可以在他们的 [官方网站][2] 上了解到更多信息。
### 总结
既然现在你可以使用 Canonical 的 Multipass 在苹果 M1 上启动一个 Linux 实例,你会考虑买一台苹果 M1 系统用于你的开发工作吗?
或者,你是更喜欢使用 Parallels 在 M1 上运行 Linux 呢?请在下面的评论中告诉我你的想法。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/canonical-multipass-linux-m1/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://github.com/utmapp/UTM
[2]: https://multipass.run/
[3]: https://ubuntu.com/blog/canonical-transforms-linux-on-mac

View File

@ -0,0 +1,172 @@
[#]: subject: "Transfer files between your phone and Linux with this open source tool"
[#]: via: "https://opensource.com/article/21/11/transfer-files-phone-linux"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13999-1.html"
使用 qrcp 在你的手机和 Linux 之间传输文件
======
> qrcp 项目提供了一种快速地从你的 iPhone 或 Android 设备中复制文件到你的 Linux 电脑的方法,反之也可。
![](https://img.linux.net.cn/data/attachment/album/202111/19/114121wt40ilipix1oo1zh.jpg)
你是否在寻找一种快速复制文件的方法,从你的 iPhone 或 Android 移动设备到你的 Linux 电脑,或者从你的 Linux 电脑到你的设备?我最近发现了一个开源的应用,它很容易安装,并且传输文件只需一个二维码。
`qrcp` 项目提供了一个命令,可以在你的终端生成一个二维码,让你通过网络向你的电脑发送或接收文件。
### 在 Linux、Windows 或 Mac 上安装 qrcp
开发者 Claudio d'Angelis 以 MIT 许可证发布了 `qrcp` 应用。我发现它很容易安装,也很容易使用。它适用于 Linux、Windows 和 macOS可以作为 RPM、DEB 或 tarball 下载。它为几乎所有的平台做了构建,包括树莓派。
如果你想在 Linux 上安装它,下载 RPM 或 DEB并使用你的包管理器进行本地安装。例如在 Fedora、CentOS 或 Mageia或类似的平台上
```
$ sudo dnf install ./qrcp*rpm
```
如果你只是想试试,你可以下载 tar.gz 压缩包并在本地运行它:
```
$ tar --extract --file qrcp*tar.gz
$ ./qrcp version
qrcp 0.x.y
```
### 设置 qrcp
你可以通过使用 `--help` 选项查看所有可用的 `qrcp` 选项:
```
$ qrcp --help
Usage:
qrcp [flags]
qrcp [command]
Available Commands:
completion Generate completion script
config Configure qrcp
help Help about any command
receive Receive one or more files
send Send a file(s) or directories from this host
version Print version number and build information.
[...]
```
默认配置文件位于 `~/.config/qrcp/config.json`,你可以使用喜欢的编辑器编辑它,或从命令行调用配置向导来配置应用:
```
$ qrcp config
```
第一步是创建一个配置文件。`qrcp config` 命令将带你完成这个过程,但会问你几个问题。
第一个问题是要求你提供一个“完全限定域名”。如果你在一个不使用完全限定域名的本地网络上使用 `qrcp`(或者你不知道哪种方式),那么就把这个留空。`qrcp` 命令将使用你的本地 IP 地址代替。
下一个问题是提示你选择端口。大多数防火墙会阻止非标准的端口,但会将 8080 端口作为互联网流量的情况并不少见。如果你的防火墙屏蔽了 8080 端口,那么你还是要添加一个例外。假设你的系统使用 `firewalld`,你可以用这个命令允许 8080 端口的流量:
```
$ sudo firewall-cmd --add-port 8080/tcp --permanent
```
拒绝在“传输完成后保持网络连接”的选项,让 `qrcp` 生成一个随机路径。
假设你在一个可信的网络上,使用 HTTP而不是 HTTPS连接那么你不必配置 TLS。
配置保存在 `~/.config/qrcp/config.json` 中,并且之后可以编辑,所以如果你想改变设置,它很容易更新。
更新后的配置看起来像这样:
```
{
"fqdn": "",
"interface": "wlp0s20f3",
"port": 8080,
"keepAlive": false,
"path": "",
"secure": false,
"tls-key": "",
"tls-cert": "",
"output": "/home/don"
}
```
### 用 qrcp 传输文件
现在你已经准备好从你的 Linux 电脑向移动设备发送文件了。在这个例子中,我使用的是 iPhone它以对 Linux 的支持糟糕而臭名昭著。这个过程在安卓设备上是完全一样的。
我是这样做的。首先,我在我的电脑上创建一个示例文件:
```
$ echo "Hello world" > ~/example.txt
```
接下来,我使用 `send` 子命令将文件从我的 Linux 电脑发送到我的手机:
```
Linux~$ qrcp send example.txt
```
![example of sending a file][2]
*使用 `qrcp send example.txt` 发送文件的例子CC BY-SA 4.0*
我打开 iPhone 的相机应用(在 Android 上,我会使用一个保护隐私的专用二维码扫描器),扫描二维码,它在我的手机上启动了 Safari 浏览器。最后,我点击“下载”按钮。
![example download][3]
*下载示例 .txt 文件CC BY-SA 4.0*
### 用 qrcp 接收文件
接收文件也一样简单,只是命令略有不同:
```
$ qrcp receive
```
![example of receiving a file][4]
*使用 `qrcp receive` 命令接收一个文件CC BY-SA 4.0*
我扫描了二维码,它再次启动了我手机上的 Safari 浏览器,但这次出现了一些不同,因为我正在将文件从我的 iPhone 发送到 Linux 电脑上。
![example of selecting a file][5]
*选择一个要传输的文件CC BY-SA 4.0*
我点击“选择文件”,它让我选择想发送的文件。
![file appears in default location][6]
*文件被下载到默认位置CC BY-SA 4.0*
发送文件后,我在配置中指定的默认位置找到了该文件。
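顺带一提,如果你想让某次接收的文件保存到其他目录,而不是配置中的默认位置,据我所知 `qrcp` 还提供了一个 `--output` 选项(具体以 `qrcp --help` 的输出为准):

```
$ qrcp receive --output=/tmp/incoming
```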
### 尝试 qrcp
项目文档很简短但已足够,除了最初提出这个想法的 Claudio d'Angelis 之外,它还有开发者社区的支持。社区欢迎你加入他们,该应用将改变你对移动设备之间文件传输的看法。试试吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/transfer-files-phone-linux
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://opensource.com/sites/default/files/send-example.png
[3]: https://opensource.com/sites/default/files/download-example.png
[4]: https://opensource.com/sites/default/files/receive-file.png
[5]: https://opensource.com/sites/default/files/select-file.jpg
[6]: https://opensource.com/sites/default/files/default-location.png

View File

@ -0,0 +1,140 @@
[#]: subject: "exa: A Modern Replacement for the ls Command"
[#]: via: "https://itsfoss.com/exa/"
[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13972-1.html"
exa一个 ls 命令的现代替代品
======
![](https://img.linux.net.cn/data/attachment/album/202111/10/155648vf7iwcwsetitqfuw.jpg)
我敢打赌你使用过 [Linux 上的 ls 命令][1],它是你 [学习 Linux][2] 时首次接触到的命令之一。
这个简单的 `ls` 命令列出目录的内容十分方便,但是直到我发现 `exa` 之前从来没想过会有命令能替代它。
### exa 命令简介
[exa][3] 是一个命令行工具,可以列出指定路径(如未指定则是当前目录)的目录和文件。这也许听起来很熟悉,因为这就是 `ls` 命令所做的事情。
`exa` 被视作从 UNIX 旧时代延续至今的古老的 `ls` 命令的一个现代替代品。如其所声称的那样,它有比 `ls` 命令更多的功能、更好的默认行为。
![exa 功能][4]
以下是一些你应该使用 `exa` 替代 `ls` 的原因:
* `exa``ls` 一样可移植(在所有主流 Linux 发行版、*BSD 和 macOS 上可用)
* 默认彩色输出
* `exa` 采用不同格式的“详细”输出,也许会吸引 Linux/BSD 新手
* 文件查询是并行进行的,这使得 `exa``ls` 的性能相当
* 显示单个文件的 git 暂存或未暂存状态
`exa` 的另外一个不同的地方是它是用 Rust 编写的。顺便说一句Rust 与 C 语言的执行速度相近,但在编译时减少了内存错误,使你的软件可以快速而安全地执行。
### 在 Linux 系统上安装 exa
`exa` 最近很流行,因为许多发行版开始将其包括在官方软件库中。也就是说,你应该可以使用你的 [发行版的包管理器][5] 来安装它。
从 Ubuntu 20.10 开始,你可以使用 `apt` 命令来安装它:
```
sudo apt install exa
```
Arch Linux 已经有了它,你只需要 [使用 pacman 命令][6] 即可:
```
sudo pacman -S exa
```
如果它无法通过你的包管理器安装,请不要担心。毕竟它是一个 Rust 包,你可以很容易地用 Cargo 安装它。请确保在你使用的任何发行版 [或 Ubuntu 上安装了 Rust 和 Cargo][7]。
安装 Rust 和 Cargo 后,使用此命令安装 `exa`
```
cargo install exa
```
### 使用 exa
`exa` 有很多命令选项,主要是为了更好的格式化输出和一些提高舒适度的改进,比如文件的 git 暂存或未暂存状态等等。
下面是一些屏幕截图,展示了 `exa` 是如何在你的系统上工作的。
简单地使用 `exa` 命令将产生类似于 `ls` 但带有颜色的输出。这种彩色的东西可能没有那么吸引人,因为像 Ubuntu 这样的发行版至少在桌面版本中已经提供了彩色的 `ls` 输出。不过,`ls` 命令本身默认没有彩色输出。
```
exa
```
![exa 命令的输出截图,没有任何额外的标志][8]
请注意,`exa` 和 `ls` 命令的选项不尽相同。例如,虽然 `-l` 选项在 `exa``ls` 中都给出了长列表,但 `-h` 选项添加了一个列标题,而不是 `ls` 的人类可读选项。
```
exa -lh
```
![正如我之前提到的exa 有列标题以获得更好的“详细”输出][9]
我前面说过,`exa` 已经内置了 Git 集成。下面的屏幕截图给出了 `git` 标志的演示。请注意 `test_file``git``tracked` 列中显示 `-N` ,因为它尚未添加到存储库中。
```
exa --git -lh
```
![演示 git 标志如何与 exa 一起工作][10]
下面的例子不是我的猫键入的。它是各种选项的组合。`exa` 有可供你尝试和探索的很多选项。
```
exa -abghHliS
```
![一个非常丰富多彩和详细的输出,具有用户友好的详细输出][11]
你可以通过在终端中运行以下命令来获取完整的选项列表:
```
exa --help
```
但是,如果你想了解 `exa` 所提供的功能,可以查看其 [Git 存储库][13] 上的 [官方文档][12]。
### 值得从 ls 切换到 exa 吗?
对于类 UNIX 操作系统的新手来说,`exa` 可能更友好:它牺牲了在脚本中使用的便利性,换来了“易用性”和好看的外观。不过,显示得更清楚并不是一件坏事。
无论如何,`ls` 是到处可用的通用命令。你可以将 `exa` 用于个人用途,但在编写脚本时,请坚持使用 `ls`。当预期输出与实际输出不匹配时,`ls` 和 `exa` 之间一两个标志的差异可能会让你抓狂(个人使用的别名写法见下面的示例)。
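举例来说,如果你只想在交互式终端里享受 `exa` 的输出,而不影响脚本,可以在 shell 配置中加一个别名(这是一个假设性的写法,标志组合来自上文的示例):

```
# 写入 ~/.bashrc 或 ~/.zshrc只影响交互式会话脚本仍使用原本的 ls
alias ll='exa -lh --git'
```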
我想知道你对 `exa` 的看法。你已经尝试过了吗?你对它的体验如何?
--------------------------------------------------------------------------------
via: https://itsfoss.com/exa/
作者:[Pratham Patel][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pratham/
[b]: https://github.com/lujun9972
[1]: https://linuxhandbook.com/ls-command/
[2]: https://itsfoss.com/free-linux-training-courses/
[3]: https://the.exa.website/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/exa-features.png?resize=800%2C331&ssl=1
[5]: https://itsfoss.com/package-manager/
[6]: https://itsfoss.com/pacman-command/
[7]: https://itsfoss.com/install-rust-cargo-ubuntu-linux/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/01_exa.webp?resize=800%2C600&ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/02_exa_lh.webp?resize=800%2C600&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/03_exa_git.webp?resize=800%2C600&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/04_exa_all_flags.webp?resize=800%2C600&ssl=1
[12]: https://github.com/ogham/exa#command-line-options
[13]: https://github.com/ogham/exa

View File

@ -0,0 +1,198 @@
[#]: subject: "7 Linux commands to use just for fun"
[#]: via: "https://opensource.com/article/21/11/fun-linux-commands"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13990-1.html"
7 个好玩的 Linux 命令
======
> 这些好玩的 Linux 命令也有它的用处。
![](https://img.linux.net.cn/data/attachment/album/202111/16/164838m35s5q81t353sxq3.jpg)
Linux 的命令行可以说是资深用户和系统管理员的小窝。然而Linux 不仅仅是大量的枯燥工作。Linux 是由爱玩的人开发的,他们还创造了一系列搞笑的命令。当你想轻松一下的时候,不妨自己试试这些命令。
### 蒸汽机车
随便什么时候,你可以使用 `sl` 命令使一辆<ruby>蒸汽机车<rt>Steam Locomotive</rt></ruby>在你的终端上跑过。可以用你的软件包管理器安装这辆蒸汽机车。例如,在 Fedora 上可以这样:
```
$ sudo dnf install sl
```
![由符号和字符组成的蒸汽机车引擎图][2]
#### 实际用途
据我所知,`sl` 命令确实只是为了好玩。你对 `sl` 有什么有趣的用途吗?请在评论中告诉我。
LCTT 译注:`sl` 的实际用途就是提醒你 `ls` 打错了。🤣)
### 壁炉
点燃一座壁炉来温暖你的心和你的终端吧,`aafire` 命令会播放一段壁炉的动画。你可以用你的软件包管理器安装 `aafire`。在 Debian、Mint、 Elementary 之类的发行版上:
```
$ sudo apt install libaa-bin
```
在 Fedora、CentOS 之类的发行版:
```
$ sudo dnf install aalib
```
![由文字符号和字符组成的火的黑白图像][4]
#### 实际用途
这个动画是向你的团队或老板传达一切即将化为乌有的微妙方式。
### 是的
你可以使用 `yes` 命令打印出一串文字,直到用 `Ctrl+C` 强行停止。例如,我是一个 Buffalo Bills 的球迷,所以我选择用 `yes` 命令打印出一串无尽的 “Buffalo Bills”
```
$ yes Buffalo Bills
```
![画面上重复显示的是一行行 Buffalo Bills左侧边缘略微被切断][5]
#### 实际用途
你可以用这个命令来向脚本输送确认信息,这样,当脚本停顿下来要求确认时,它就会自动收到 `yes`。例如,想象一下,你运行的一个脚本经常停下来问你确认:
```
$ foo
Are you sure you want to do this? Y/n  Y
Are you really sure? y/N  Y
But are you really? y/N
```
你可以通过向命令传递 `yes` 来自动接受这些信息:
```
$ yes | foo
```
另外,你也可以用 `yes` 来自动拒绝信息:
```
$ yes no | foo
```
### 命运
通过安装 `fortune` 命令,你可以就可以得到命运的指点。`fortune` 会打印出一段随机的、可能有意义的话语LCTT 译注:来自命运的指点)。
用你的软件包管理器安装 `fortune`
```
$ sudo apt install fortune
```
在 Fedora 上:
```
$ sudo dnf install fortune-mod
```
`fortune` 命令有许多数据集,它可以从中提取各种话语。例如,你可以从文学作品或科幻电视节目《Firefly》中获得名人名言或者从笑话、关于 Linux 的小知识等当中选择。在你的软件仓库中搜索 `fortune`,看看你的发行版提供了哪些数据集。
```
$ fortune
Johnson's law:
  Systems resemble the organizations that create them.
```
#### 实际用途
你可以用 `fortune` 来生成一个伪随机数。它的熵不足以保证密码学上的安全,但当你需要一个出乎意料的数字时,可以统计它输出的字符数或单词数:
```
$ fortune | wc --chars
38
$ fortune | wc --words
8
$ fortune | wc --chars
169
```
### 彩虹猫
彩虹猫(`lolcat`)是一个将文件或标准输入连接到标准输出的程序(就像一般的 `cat` 命令),并在其中加入彩虹色。你可以用管道将其他命令的输出连接到 `lolcat`,这样就可以为结果加上彩虹色。
下面是 `lolcat -h` 的帮助输出的结果。
![屏幕上的文字被染成了彩虹的渐变色][6]
LCTT 译注:我知道,`cat` 其实是 `concatenate` 的缩写,和猫没什么关系。🤣)
### “FIG 来信”和横幅
“FIG 来信”FIGlet来源于 Frank、Ian 和 Glenn 信件中的签名艺术。这个命令(`figlet`)和横幅命令(`banner`)可以帮你创建简单的 ASCII 文本横幅。下面是一个 CentOS 系统的文本横幅:
```
$ figlet centos.com
```
![由符号和字符组成的阅读 “centos.com” 的文本横幅][7]
`figlet` 连接到 `lolcat`,可以得到一个彩色的横幅:
```
$ figlet centos.com | lolcat
```
![用 lolcat 将 “centos.com” 的文字横幅渲染成彩虹色][8]
而 `banner` 命令则可以用字符拼出更大的横幅:
```
$ banner Hello World
```
![用英镑符号拼出的 “Hello World” 横幅][9]
#### 实际用途
`figlet``banner` 都是提醒用户他们正在登录的系统的简单方法。就像许多系统管理员、网页设计师和云开发人员一样,当你和几十台服务器一起工作时,这很有帮助。
### 电子语音
你可以通过安装电子语音(`espeak`)来为你的命令行添加语音功能。
一旦 `espeak` 安装完毕,调高你的电脑的音量,听你的机器和你说话,会有一些乐趣。电子语音是一个软件语音合成器,有几个不同的语音库可用:
```
$ espeak "Linux is the best operating system."
```
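顺带一提,`espeak` 还提供了调整语速和音高等选项(具体请以 `man espeak` 为准),例如:

```
# -s 设置语速(每分钟词数),-p 设置音高0 到 99
$ espeak -s 130 -p 60 "Linux is the best operating system."
```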
### 有趣的命令
请查阅所有这些命令的手册,以探索所有的可能性和变化。你最喜欢哪些好玩的命令,它们在现实世界中是否也有用途?请在评论中分享你的最爱。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/fun-linux-commands
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop)
[2]: https://opensource.com/sites/default/files/uploads/locomotive_0.png (Steam locomotive)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/sites/default/files/uploads/fireside.png (fireside)
[5]: https://opensource.com/sites/default/files/uploads/bills.png (Yes command)
[6]: https://opensource.com/sites/default/files/uploads/lolcat_rainbow.png (lolcat)
[7]: https://opensource.com/sites/default/files/uploads/figlet_centos.png (figlet text banner)
[8]: https://opensource.com/sites/default/files/uploads/lolcat_figlet_centos.png (Figlet with lolcat effects)
[9]: https://opensource.com/sites/default/files/uploads/hello_world_0.png (Hello World banner)

View File

@ -0,0 +1,100 @@
[#]: subject: "Forza Horizon 5 on Linux? Theres a Good Chance That You Can Play it Already"
[#]: via: "https://news.itsfoss.com/forza-horizon-5-linux/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13980-1.html"
《极限竞速:地平线 5》登陆 Linux 了吗?有可能你已经玩起来了
======
> 《极限竞速:地平线 5》是一款非常受欢迎的赛车游戏。虽然它还没有发布 Linux 版,但看起来 Valve 的 Proton 可能就是让它跑起来的答案!
![](https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/forza-horizon-linux.jpg?w=1200&ssl=1)
《极限竞速:地平线 5》是一款新的赛车电子游戏由 Playground Games 开发Xbox 游戏工作室发行。
在正式发布之前,已经有大约 100 万玩家通过高级版提前进入了游戏。
虽然该游戏的视觉效果和对旧硬件的优化令人叹为观止,但它是又一款只在 Windows 平台上运行的游戏。
幸运的是,它也可以在 Steam 上玩。因此,有可能在 Linux 上使用 Proton 兼容层来尝试它。
### Linux 上的《极限竞速:地平线 5》似乎成为现实
最初,在 Linux 上运行的《极限竞速:地平线 4》的状态很差。根据 ProtonDB 上的报告,它是不可玩的。
随着 Proton或 [SteamPlay for Linux][1])的改进,它现在有了“银级”评价。仍然不能令人满意,但总比没有好。
而且,到了《极限竞速:地平线 5》这里情况也没有变得更好它在 [ProtonDB][2] 上是“不可玩”的评价。
不过,我注意到有几个人在尝试运行《极限竞速:地平线 5》时获得了成功。
Jeremy SollerSystem76 的工程师)分享了一条推特,可以看到他在 Pop!_OS 21.10 Beta 上玩《极限竞速:地平线 5》
> 炸裂了!
>
> 《极限竞速:地平线 5》
>
> Proton 前沿实验版
>
> Pop!_OS 21.10 Beta
>
> 正式发布后,你们今晚就能玩了
>
> ![pic.twitter.com/6UHhdShdOg][3]
>
> -- Jeremy Soller@jeremy_soller[2021 年 11 月 10 日][4]
而且并不只是他一个人Wine-staging 维护者和 Proton-GE 开发者 Tom又名 GloriousEggroll也分享了一条在 Linux 上玩《极限竞速:地平线 5》的推特
> 对于一个 5 天前发布的游戏来说Linux 的表现还不错 \o/ < 插入它运行的样子 gif>
>
> ![pic.twitter.com/aoBKEKQKsq][5]
>
> -- GloriousEggroll@GloriousEggroll[2021 年 11 月 11 日][6]
此外GloriousEggroll 在推特上提到:
> PaulWine 的开发者之一)向 Proton 实验版推送了修正,此外还有 vkd3d 中需要的修正。我已经把它们移植到了 proton-ge 上(还有 battleye 补丁和 CEG drm 补丁)。
所以,看起来你可以使用 Proton 最新的前沿实验版玩《极限竞速:地平线 5》了。
当然,这对每个人来说都不是一个理想的解决方案。但是,很高兴知道,最新的 Proton 前沿实验版使(在 Steam 上)玩《极限竞速:地平线 5》成为可能。
### 你应该在 Steam 上为 Linux 购买《极限竞速:地平线 5》吗
如果你的目标是专门在 Linux 上玩这个游戏,你或许应该等一等。
根据 Proton 上的 [GitHub 议题讨论][7],该游戏似乎存在一些问题,即使你使用 Proton 的前沿实验版。
但是,如果你想帮助提交 bug 报告并进行测试,那么请尝试一下。
当然,如果你配置了 Windows 的双启动,或者有另一个 Windows 系统,如果你喜欢,你可以购买这个游戏。
希望随着下一个 Proton 实验版或稳定版的发布,我们能得到对《极限竞速:地平线 5》的必要支持。
你对 Linux 上的《极限竞速:地平线 5》有什么看法自己测试过吗
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/forza-horizon-5-linux/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/steam-play/
[2]: https://www.protondb.com/app/1551360
[3]: https://pbs.twimg.com/media/FD3ggosUYAUSCVf?format=jpg&name=medium
[4]: https://twitter.com/jeremy_soller/status/1458568707798536200?ref_src=twsrc%5Etfw
[5]: https://pbs.twimg.com/media/FD30neKWQAUCOrd?format=jpg&name=medium
[6]: https://twitter.com/GloriousEggroll/status/1458590821075361794?ref_src=twsrc%5Etfw
[7]: https://github.com/ValveSoftware/Proton/issues/5285

View File

@ -0,0 +1,172 @@
[#]: subject: "How to Mount Bitlocker Encrypted Windows Partition in Linux"
[#]: via: "https://itsfoss.com/mount-encrypted-windows-partition-linux/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14008-1.html"
如何在 Linux 中挂载 Bitlocker 加密的 Windows 分区
======
> 情况是这样的。我的系统自带 Windows 10 Pro并且带有 BitLocker 加密功能。我 [甚至在 Windows 启用了 BitLocker 加密的情况下,以双启动模式安装了 Ubuntu][1]。
![](https://img.linux.net.cn/data/attachment/album/202111/22/144133k6n9xsnnt46t0z94.jpg)
你可以轻松地从 Linux 中访问 Windows 文件。没有什么高科技的东西。只要进入文件管理器,点击通常位于“<ruby>其他位置<rt>Other Locations</rt></ruby>”标签下的 Windows 分区。
![Mounting Windows partition through the file manager in Linux desktop][2]
对于 BitLocker 加密的 Windows 分区来说,这个过程也不是太复杂。只是当你试图挂载 Windows 分区时,它会要求你输入密码。
![Password required for encrypted Windows drive mount in Linux][3]
这是可行的。就我而言,我输入了 48 位的 BitLocker 恢复密码,它解密了 Windows 分区,并在运行 GNOME 40 的 Ubuntu 21.10 中毫无问题地挂载了它。
试试你的 BitLocker 密码。如果这不起作用,试试恢复密码。对于普通的 Windows 10 Pro 用户,恢复密码存储在你的微软账户中。
[BitLocker Recovery Password in Microsoft Account][4]
输入恢复密码,你会看到 Windows 分区和它的文件现在可以访问了。勾选“<ruby>记住密码<rt>Remember Password</rt></ruby>”框,可以为以后的使用节省时间。
![Encrypted Windows partition now mounted in Linux][5]
如果上述方法对你不起作用,或者你不熟悉命令行,还有一个替代方法。
这个方法包括使用一个叫做 [Dislocker][6] 的工具。
### 使用 Dislocker 在 Linux 中挂载 BitLocker 加密的 Windows 分区(命令行方法)
使用 Dislocker 分为两部分。第一部分是解开 BitLocker 的加密,并给出一个名为 `dislocker-file` 的文件。这基本上是一个虚拟的 NTFS 分区。第二部分是挂载你刚刚得到的虚拟 NTFS 分区。
你需要 BitLocker 密码或恢复密码来解密加密的驱动器。
让我们来看看详细的步骤。
#### 步骤 1安装 Dislocker
大多数 Linux 发行版的仓库中都有 Dislocker。请使用你的发行版的包管理器来安装它。
在基于 Ubuntu 和 Debian 的发行版上,使用这个命令:
```
sudo apt install dislocker
```
![Installing Dislocker in Ubuntu][7]
#### 步骤 2创建挂载点
你需要创建两个挂载点。一个是 Dislocker 生成 `dislocker-file` 的地方,另一个是将这个 `dislocker-file`(虚拟文件系统)作为一个回环设备挂载。
没有命名限制,你可以给这些挂载目录起任何你想要的名字。
逐一使用这些命令:
```
sudo mkdir -p /media/decrypt
sudo mkdir -p /media/windows-mount
```
![Creating mount points for dislocker][8]
#### 步骤 3获取需要解密的分区信息
你需要 Windows 分区的名称。你可以使用文件资源管理器或像 Gparted 这样的 GUI 工具。
![Get the partition name][9]
在我的例子中Windows 分区是 `/dev/nvme0n1p3`。对你的系统来说,这将是不同的。你也可以使用命令行来达到这个目的。
```
sudo lsblk
```
#### 步骤 4解密分区并挂载
你已经设置好了一切。现在是真正的部分。
**如果你有 BitLocker 密码**,以这种方式使用 `dislocker` 命令(用实际值替换 `<partition_name>` 和 `<password>`
```
sudo dislocker <partition_name> -u<password> -- /media/decrypt
```
如果你只有恢复密码,请以这种方式使用该命令(用实际值替换 `<partition_name>` 和 `<recovery_password>`
```
sudo dislocker <partition_name> -p<recovery_password> -- /media/decrypt
```
解密该分区应该不会花很长时间。之后,你会在指定的挂载点(在我们的例子中是 `/media/decrypt`)看到 `dislocker-file`。现在挂载这个 `dislocker-file`
```
sudo mount -o loop /media/decrypt/dislocker-file /media/windows-mount
```
![][10]
完成了。你的 BitLocker 加密的 Windows 分区已经被解密并挂载到 Linux 中。你也可以从文件资源管理器中访问它。
![Mounting Dislocker decrypted Windows partition with file manager][11]
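把上面的步骤串起来,以本文中的分区 `/dev/nvme0n1p3` 为例(密码是假设的示例值),完整过程大致如下:

```
# 创建两个挂载点
sudo mkdir -p /media/decrypt /media/windows-mount
# 用 BitLocker 密码解密分区(密码仅为示例)
sudo dislocker /dev/nvme0n1p3 -uMySecretPassword -- /media/decrypt
# 把得到的虚拟 NTFS 分区以回环方式挂载
sudo mount -o loop /media/decrypt/dislocker-file /media/windows-mount
# 确认文件可以访问了
ls /media/windows-mount
```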
#### 文件系统类型错误的故障排除提示
如果你遇到这样的错误:
```
mount: /media/windows-mount: wrong fs type, bad option, bad superblock on /dev/loop35, missing codepage or helper program, or other error.
```
你应该在挂载时指定文件系统。
对于 NTFS使用
```
sudo mount -t ntfs-3g -o loop /media/decrypt/dislocker-file /media/windows-mount
```
对于 exFAT使用
```
sudo mount -t exFAT-fuse -o loop /media/decrypt/dislocker-file /media/windows-mount
```
#### 解除对 Windows 分区的挂载
你可以从文件管理器中取消挂载的分区。只要**点击名为 windows-mount 的分区旁边的卸载符号**。
或者,你可以使用卸载命令:
```
sudo umount /media/decrypt
sudo umount /media/windows-mount
```
我希望这对你有帮助。如果你还有问题或建议,请在评论中告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/mount-encrypted-windows-partition-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/dual-boot-ubuntu-windows-bitlocker/
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/mount-encrypted-windows-partition-in-linux.png?resize=800%2C476&ssl=1
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/password-needed-for-encrypted-windows-drive-mount-in-Linux.png?resize=788%2C380&ssl=1
[4]: https://account.microsoft.com/devices/recoverykey?refd=support.microsoft.com
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/encrypted-windows-partition-mounted-in-Linux.png?resize=800%2C491&ssl=1
[6]: https://github.com/Aorimn/dislocker
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/install-dislocker-ubuntu.png?resize=786%2C386&ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/creating-mount-points-for-dislocker.png?resize=777%2C367&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/show-device-name-gparted.png?resize=800%2C416&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/mount-dislocker-decrypted-windows-partition.png?resize=777%2C253&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/discloker-mount-encrypted-windows-partition.png?resize=800%2C483&ssl=1

View File

@ -0,0 +1,143 @@
[#]: subject: "8 New Features & Improvements to Expect in GIMP 3.0 Release"
[#]: via: "https://news.itsfoss.com/gimp-3-0-features/"
[#]: author: "Ankush Das https://news.itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13987-1.html"
GIMP 3.0 中值得期待的 8 项新功能和改进
======
> 将带来重大改进的 GIMP 3.0 是最令人期待的版本之一。根据它最近的开发版本,这是一个预期的功能列表。
![](https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/GIMP-3-0-expected.jpg?w=1200&ssl=1)
[GIMP][1] 是可在 Linux 上使用的 [最佳免费图像编辑器][2] 之一。不仅仅适用于那些想要 Adobe 套件的免费替代品的用户,许多专业人士也使用 GIMP 进行艺术创作、设计和照片编辑。
尽管 GIMP 提供了许多必要的功能和选项,但各种平台上已经出现了许多现代替代品,它们在一些方面已经超过了 GIMP。
不过GIMP 3.0 可能是一个扭转局面的版本,它将使 GIMP 成为最好的现代产品之一,可与现有的商业对手相竞争。
本文将讨论预期出现在 GIMP 3.0 版本的功能。
### GIMP 3.0 值得期待的顶级功能
在走向 GIMP 3.0 正式发布的过程中,其开发版本已经累积了很多改进。
你或许想了解早期开发版本中的所有功能/变化,但本文只会介绍其中重要的亮点。
### 1、基于 GTK3 的用户界面
![来源GIMP 博客][3]
GIMP 3.0 将带来基于 GTK3 重新打造的用户界面,堪称一种视觉享受。除了改进的外观和感受,你还可以看到一些新的小部件。
别忘了,[爱德华•斯诺登也认为 GIMP 需要进行 UI 大修][4]。所以GIMP 3.0 即使最终没有成为一种视觉享受,也应该在某种形式上有所改善。
在以前的 GIMP 版本中它并不支持高像素密度的显示器。虽然可以使用但如果你有一个更高分辨率的屏幕GIMP 看起来就不够好。
现在,随着 GTK3 的加入,它增加了对高像素密度显示器的支持。你所要做的就是设置你的操作系统的首选缩放/分辨率GIMP 应该可以支持它了。
### 2、Wayland 支持
向 GTK3 的过渡应该能提供更好的 Wayland 支持。因此,如果你开始使用 Wayland 桌面会话GIMP 3.0 应该可以让你毫无问题地使用该应用程序。
### 3、多层选择
![来源GIMP 播客][5]
最关键的新增功能之一是可以选择多个图层。
虽然这个功能让大家等了很久,但它终于在 GIMP 3.0 版本中实现了。
有了这个功能,人们可以轻松地管理和组织他们的工程中的几个图层。
根据现有的信息,他们正式提到了这个变化:
> 可停靠的图层现在完全可以进行多选,使用通常的交互方式进行多项目选择(“`Shift` + 点击”用于选择一系列图层,“`Ctrl` + 点击”用于选择或取消选择不相邻的层)。组织操作现在可以对所有选定的图层起作用,即你可以一次性移动、重新排序、删除、复制、合并(以及更多...)所有选定的图层。
换句话说,你可以选择多个图层,并同时对它们进行批量操作。
例如,你可以裁剪、移动、使用合并的图层中的颜色选择工具,并使用这个功能执行更多操作。
在发表这篇文章时,我注意到,根据他们的开发博客,这是一个正在进行的工作,有一些限制。希望在稳定版中,多层选择可以完美地工作。
### 4、新的插件 API
插件 API 保留了所有的基本功能,但也引入了一些新的改进。
因此,新的插件 API 应该不会对开发者造成任何破坏,而应该可以让他们把插件轻松地移植到 GIMP 3.0 应用程序上。
他们的开发版本中提到了一些改进:
* 摆脱了对象 ID转而使用真实的对象。特别是在 GIMP 3 中,`GimpImage`、`GimpItem`、`GimpDrawable`、`GimpLayer`、`GimpVectors`、`GimpChannel` 和 `GimpPDB` 都是对象(其他类别的对象已经存在或以后可能加入)。
* 路径现在被当作 `GFile` 来处理,这意味着使用 GLib/GIO API。
* GIMP API 应该可以完全支持各种语言Python 3、JavaScript、Lua、Vala 等)。
### 5、绘画选择工具
![来源GIMP 博客][6]
前景选择工具不会消失。不过,他们正在开发一个新的实验性“绘画选择”工具,它可以让你用画笔逐步刷选区域。
绘画选择工具的目的还在于克服前景选择工具对大图像的限制,并解决内存/稳定性问题。
考虑到它被列为实验性工具,我们不能确定它是否能进入稳定版,但它已经有了新的图标,你可以找到这个工具。
### 6、Windows Ink 支持
![来源GIMP 播客][7]
当然,我们在这里讨论的都是 Linux。但是GIMP 是一个流行的跨平台工具。
所以,为了吸引更多的用户,最好为专业人士或设计师使用的工具提供硬件支持。
得益于引入的 GTK3GIMP 3.0 现在开箱即用地支持 Windows Ink。你可以在管理输入设备的设置中找到使用 Windows Ink API 的选项。
### 7、改进手势支持
GIMP 并不是笔记本电脑用户的最佳选择,或者更准确地说,不是触摸板/触摸屏用户的最佳选择。
然而,随着 GIMP 3.0 增加了手势支持,可以捏住画布放大/缩小,这种情况可能会有所改善。
我们还可能在最终版本中看到更多的手势支持,但截至 2.99.8 版本,也就是最新的开发版本,还没有新的手势。
### 8、改进的文件格式支持
GIMP 3.0 现在支持 JPEG-XL 文件格式,能够加载/导出带有灰度和 RGB 颜色配置文件的 .jxl 文件。
对 Adobe Photoshop 工程文件的支持也得到了改进。它现在可以处理大于 4GB 的 PSD 文件,同时加载多达 99 个通道。
除此以外,对 WebP 和 16 位 SGI 图像的支持也得到了改进。
### 总结
看看 [GIMP 的开发博客][8],看起来他们几乎已经完成了最终版本。然而,他们并没有透露任何特定的发布日期时间表。
所以,一旦他们解决了错误并完成了改进,它就会到来。
你期待 GIMP 3.0 吗?它看起来是一个有希望的下一代版本吗?
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/gimp-3-0-features/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.gimp.org/
[2]: https://itsfoss.com/image-applications-ubuntu-linux/
[3]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/gimp-2-99-2-gtk3.png?w=1200&ssl=1
[4]: https://news.itsfoss.com/gimp-ui-edward-snowden/
[5]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/gimp-2.99.2-multi-layer-selection.png?w=492&ssl=1
[6]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/gimp-paint-select-tool.png?w=800&ssl=1
[7]: https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/gimp-windows-ink.png?w=1278&ssl=1
[8]: https://www.gimp.org/news/2021/10/20/gimp-2-99-8-released/

View File

@ -0,0 +1,98 @@
[#]: subject: "How to Switch to Dark Mode in Fedora Linux [Beginners Tip]"
[#]: via: "https://itsfoss.com/fedora-dark-mode/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14002-1.html"
入门:如何在 Fedora Linux 中切换到深色模式
======
![](https://img.linux.net.cn/data/attachment/album/202111/20/114412oqqw4mdnxbwfbprb.jpg)
与 Ubuntu 不同Fedora 提供的是原汁原味的 GNOME 体验,而且体验很好:横向布局、三指滑动,一切都很顺手。
我不喜欢的一点是默认的标准主题,它是 Adwaita Light默认主题和深色的 GNOME Shell 的混合体。
因此,虽然通知和通知区是深色的,但系统和应用的其他部分是浅色主题。老实说,对我来说,这看起来很不协调。
![Fedora GNOME standard theme][1]
另一方面,深色主题让它看起来更好。
![Fedora GNOME dark theme][2]
让我告诉你如何在 Fedora 或其他任何使用 GNOME 桌面环境的发行版中开启深色模式。
### 在 Fedora 中切换到深色模式
好了!我将分享命令行的方法,因为它更快。打开一个终端,使用这个命令:
```
gsettings set org.gnome.desktop.interface gtk-theme Adwaita-dark
```
完成了。这很容易,对吗?但我也要展示一下 GUI 的方法。
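顺带一提,如果以后想换回浅色主题,按同样的思路把主题名改回默认的 Adwaita 即可(这是依据上面的命令推断的写法,仅供参考):

```
gsettings set org.gnome.desktop.interface gtk-theme Adwaita
```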
因为我主要使用 Ubuntu所以我始终参照 Ubuntu。Ubuntu 在系统设置中本身就提供了在浅色和深色主题之间切换的选项。
然而,在原生 GNOME 中却没有这样的设置。你必须先 [在 Fedora 上安装 GNOME Tweaks 工具][3],然后用它来切换主题。
你可以在软件中心搜索它并点击“<ruby>安装<rt>Install</rt></ruby>”按钮:
![Install GNOME Tweaks from the software center in Fedora][4]
或者,你在终端输入以下命令:
```
sudo dnf install gnome-tweaks
```
安装完成后,按 `Super` 键(`Windows` 键)在系统菜单中搜索它:
![Start GNOME Tweaks][5]
点击左侧边栏的“<ruby>外观<rt>Appearance</rt></ruby>”标签,点击主题部分下的应用。
![Changing theme in Fedora][6]
你会看到这里有几个可用的主题。你应该在这里选择 “Adwaita-dark”。当你选择了它应用就会切换到深色主题。
![Selecting the Adwaita-dark theme][7]
这就是你在 Fedora 中切换到深色模式所需要做的一切。由于 GNOME Shell 已经在使用深色主题,你不需要再单独为它设置深色模式。所有的通知、信息栏等都已经是深色的了。
### 总结
你可以找到各种深色 GTK 主题并安装它们来给你的 Fedora 带来不同的深色外观。然而,我注意到,只有系统自己的深色主题才能被网页浏览器识别。
所以,如果你访问一个根据你的系统主题自动启用深色模式的网站,它将与 Adwaita-dark 兼容,但可能与其他深色 GTK 主题不兼容。
这就是使用系统提供的深色主题的一个优势。
如你所见,在 Fedora 中启用深色模式并不是什么难事,只是一个了解和摸索的过程。
享受深色色彩吧!
--------------------------------------------------------------------------------
via: https://itsfoss.com/fedora-dark-mode/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/fedora-gnome-standard-theme.webp?resize=800%2C450&ssl=1
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/fedora-gnome-dark-theme.webp?resize=800%2C450&ssl=1
[3]: https://itsfoss.com/install-gnome-tweaks-fedora/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/install-gnome-tweaks-fedora.webp?resize=800%2C448&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/start-gnome-tweaks-tool-in-Fedora.png?resize=800%2C271&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/change-GTK-theme-Fedora.webp?resize=800%2C532&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/switching-dark-mode-fedora.png?resize=800%2C515&ssl=1

View File

@ -0,0 +1,161 @@
[#]: subject: "Create Windows, macOS, and Linux Virtual Machines Easily With QEMU-based Quickgui"
[#]: via: "https://itsfoss.com/quickgui/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13998-1.html"
利用基于 QEMU 的 Quickgui 轻松创建虚拟机
======
![](https://img.linux.net.cn/data/attachment/album/202111/19/104431xwh8h8hw228e77hh.jpg)
> Quickgui 旨在成为 VirtualBox 的一个更简单的替代品,帮助你快速创建虚拟机。让我们来看看它。
目前,借助 VirtualBox、VMware 和其他一些程序,创建虚拟机相当容易。
你当然可以 [在你的 Linux 系统中安装 VirtualBox][1] 来创建虚拟机。但是,在这篇文章中,我把重点放在一个令人兴奋的工具上,即 Quickgui它使用简单运行速度快能帮助你快速启动虚拟机。
### QuickguiQuickemu 的图形用户界面前端
![][2]
Quickemu 是一个基于终端的工具,可以让你创建优化过的桌面虚拟机并轻松地管理它们。该工具致力于帮你省去配置虚拟机的种种繁琐细节:它会根据可用的系统资源自动选择最佳配置,让虚拟机正常工作。
不仅限于配置,它还会下载操作系统的镜像(使用 quickget 包)。
因此,你所要做的就是像通常那样安装操作系统,然后开始工作。
Quickemu 以 [QEMU][3] 为核心,旨在用 Bash 和 QEMU 取代 VirtualBox。
QEMU 是一个开源的机器仿真器和虚拟化器。
Quickemu 是一个有趣的项目,由 [Martin Wimpress][4]Ubuntu MATE 负责人)在一些贡献者的帮助下完成。
作为对这个工具的补充Quickgui 是一个使用 [Flutter][5] 开发的前端,由另一组开发人员开发,以帮助在没有终端的情况下使用 Quickemu。
在此,我们重点介绍使用 Quickemu 创建和管理虚拟机的前端 Quickgui。
### Quickgui 的特点
![在 Zorin OS 16 上使用 Quickgui 运行虚拟机][6]
如上所述Quickgui 作为一个前端,其核心利用的是 Quickemu。因此你可以期待其具有同样的功能。
你可以用它做的一些事情包括:
* 搜索操作系统并下载它们以创建虚拟机。
* 管理你现有的虚拟机。
* 当你建立一个虚拟机时,创建默认配置。
* 提供深色模式。
* 开箱即用地创建 Windows 和 macOS 虚拟机。
* 支持各种 Linux 发行版,包括 elementaryOS、ZorinOS、Ubuntu 等。
* 支持 FreeBSD 和 OpenBSD。
* 支持 EFI 和传统的 BIOS。
* 不需要提升权限就能工作。
* 默认情况下,宿主机/访客机共享剪贴板。
* 可以选择镜像压缩方法。
* 能够禁用输入。
* 能够切换虚拟机中宿主机/访客机的可用 USB 设备。
* 包括对 [SPICE 连接][7] 的支持。
* 网络端口转发。
* Samba 文件共享。
* VirGL 加速。
* 智能卡直通。
鉴于它是如此简单和容易使用,其功能集令人印象深刻。让我给你提供一些使用的技巧。
### Quickgui 入门
用户界面非常简单,你可以选择 “<ruby>管理现有机器<rt>Manage existing machines</rt></ruby>” 和 “<ruby>创建新机器<rt>Create new machines</rt></ruby>”。
你需要点击“<ruby>创建<rt>Create</rt></ruby>”来开始制作虚拟机。
![Quickgui VM Creation][8]
选择操作系统,你应该看到一个列表。如果你找不到目标操作系统,只需搜索一下,它应该会出现。
![][9]
你会看到各种各样的操作系统。在接下来的选择中,选择所需的操作系统及其版本。然后,点击 “<ruby>下载<rt>Download</rt></ruby>”。
它应该会下载恢复镜像或 ISO这取决于你正在尝试的操作系统。下载速度取决于你的互联网连接但整个过程工作得很完美。
如果你想自己下载 ISO你将就得为它创建配置并进行设置。看看 [Quickemu 的 GitHub 页面][10],了解一下它的说明。
![][11]
你只需要在下载完成后点击“<ruby>关闭<rt>Dismiss</rt></ruby>”。
在这篇文章中,我测试了启动一个 Linux 虚拟机([elementary OS 6][12])、一个 macOS 实例,以及一个 Windows 虚拟机。
我成功地以虚拟机方式运行了 Linux 和 macOS 。然而,我在快速建立一个 Windows 虚拟机时遇到了一些问题。我在 Quickemu 的 GitHub 页面上找不到任何相关信息,所以我没有费心去排除故障。
如果你需要使用 Windows 虚拟机,可以自己试试,并在他们的 [Discord 服务器][13] 中联系他们寻求帮助。
你不一定需要改变虚拟机的配置来使其工作。因此,它变成了一个节省时间的工具。
### 在 Linux 中安装 Quickgui
要使用 Quickgui你需要先安装 Quickemu。
对于基于 Ubuntu 的发行版,你可以使用 PPA 来安装它:
```
sudo apt-add-repository ppa:flexiondotorg/quickemu
sudo apt update
sudo apt install quickemu
```
它应该安装了你需要的所有东西(连同 quickget 包),使其发挥作用。
完成后,你可以使用另一个 PPA 继续安装 Quickgui
```
sudo add-apt-repository ppa:yannick-mauray/quickgui
sudo apt update
sudo apt install quickgui
```
如果你使用的是其他 Linux 发行版,你可以参考 [Quickemu 的 GitHub 页面][10] 和 [Quickgui 的 GitHub 页面][14],以获得更多说明。
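顺带一提,如果你想跳过图形界面、直接在终端里体验 Quickemu下面是一个大致的用法示例命令和版本号仅作演示具体请以 [Quickemu 的 GitHub 页面][10] 为准):

```
# 用 quickget 下载系统镜像并生成配置文件(版本号为示例)
quickget ubuntu 21.10
# 用生成的配置文件启动虚拟机
quickemu --vm ubuntu-21.10.conf
```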
### 总结
Quickgui 使人们能够方便地利用 Quickemu 的能力,快速创建和管理多个虚拟机,而不需要进行任何配置。
更妙的是,你不需要提升权限就能让它工作。
因此,如果你正在寻找 VirtualBox 的替代品,这可能就是答案。或者,你也可以试试 [GNOME Boxes][15] 作为一个更简单的替代品。
你对 Quickgui 有什么看法?请在下面的评论中告诉我你的想法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/quickgui/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-virtualbox-ubuntu/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/quickgui-emu.png?resize=800%2C547&ssl=1
[3]: https://www.qemu.org/
[4]: https://twitter.com/m_wimpress
[5]: https://itsfoss.com/install-flutter-linux/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/quickgui-vms.png?resize=800%2C450&ssl=1
[7]: https://www.spice-space.org/index.html
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/quickgui-select.png?resize=800%2C534&ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/quickgui-quickemu-selection.png?resize=800%2C559&ssl=1
[10]: https://github.com/wimpysworld/quickemu
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/quickemu-gui-mac.png?resize=800%2C552&ssl=1
[12]: https://news.itsfoss.com/elementary-os-6-features/
[13]: https://discord.com/invite/sNmz3uw
[14]: https://github.com/quickgui/quickgui
[15]: https://help.gnome.org/users/gnome-boxes/stable/

View File

@ -0,0 +1,95 @@
[#]: subject: "A Notion like Open-Source App is in Development"
[#]: via: "https://news.itsfoss.com/appflowy-development/"
[#]: author: "Rishabh Moharir https://news.itsfoss.com/author/rishabh/"
[#]: collector: "lujun9972"
[#]: translator: "zengyi1001"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14012-1.html"
一个正在开发中的类似 Notion 的开源 APP
======
> 它被称为 “Notion 的开源替代品”AppFlowy 旨在让你完全掌控自己的数据和定制选项。
![](https://img.linux.net.cn/data/attachment/album/202111/23/162933pedqtdm73tk7tp49.jpg)
Notion 是深受团队和个人欢迎的生产力应用之一,而现在我们似乎有希望获得一个有前途的开源替代品。
这里为还没用过 Notion 的人介绍一下Notion 是一个多功能合一的生产力应用,可以用于创建/管理任务、笔记、项目、数据,以及组建维基。
换句话说,它能让你自由组织你的<ruby>工作流<rt>workflow</rt></ruby>,让你在一个地方就能集中完成所有事情。更进一步Notion 还支持连接团队或邀请他人的协作功能。
那么,如果有一个可以提供类似 UI 和功能的开源应用会怎么样?
这就是 AppFlowy。
![](https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/AppFlowy-ft.png?w=1200&ssl=1)
### 什么是 AppFlowy
![Source: AppFlowy.io][1]
AppFlowy 和 Notion 非常相像,但有一个巨大的区别,那就是它是 100% 开源的。
虽然它还处在一个积极开发中的状态,但已经吸引到了不少的关注。至少可以说,尽管它是一个全新的东西,但任何开源替代项目都是令人兴奋的!
正如他们 GitHub 页面的表述,说明了为什么开发者们想要创造一个 Notion 的替代品:
> 我们都知道 Notion 有其局限性。比如说脆弱的数据安全性和糟糕的移动设备端兼容性。同样,其它一些协作工作管理工具替代品也具有各自的局限性。
因此,他们希望他们的的用户既拥有 Notion 的功能特性,又具备良好的数据安全性和由社区驱动的良好的原生体验。
他们也明确表示不想在功能特性和设计上与 Notion 相竞争。
> 坦率的讲,我们并没有声称在功能和设计上要优于 Notion至少现在还是如此。此外我们当前的首要任务也不在于提供更多的功能。相反我们期望培养一个社区使制作复杂工作场所管理工具的知识和设施民主化同时通过为个人和企业配备一个多功能积木工具箱使他们能够自己创造美好的事物。
听起来很吸引我!
继续关于 **AppFlowy** 的更多信息:
AppFlowy 的首个 macOS 版本已经在几天前发布了。它是用 Flutter 和 Rust 构建的。
它的目标是让用户和团队能够完整控制他们的数据和定制。他们还表示,他们希望提供包括移动设备在内的跨平台的原生体验。除此之外,你还可以离线访问你的工作区,这一点是与 Notion 不同的。
别忘了,社区可以发布定制主题和模板给他人使用,你可以按照你的需求任意定制。用户对它的发展能产生直接的影响。
它还将支持插件以扩展应用的功能。因此,即使你不具有任何编程经验,你也仍然可以选择使用这些插件来增强你工作空间的功能。并且,由于它的 UI 和 Notion 本身非常相似,如果你以后想要在两者之间切换,也不会让你感觉有太大的改变。
### 是否仅支持 macOS
到目前为止只有 macOS 的用户可以尝鲜使用 AppFlowy。但 Linux 和 Windows 客户端也在开发之中。你可以持续关注它的 [GitHub 主页][2] 或订阅官网的最新通知。
开发者还希望能带来更多视觉上的改变和优化。
当然,它仍然处在开发阶段,所以也别期望它现在就能替代 Notion。一些类似于<ruby>拖放<rt>drag/drop</rt></ruby>和离线模式的功能仍然还在它的 [路线图][3] 之中。
如果你有兴趣的话,可以访问它的官方网站或 GitHub 页面获取更多的信息,以及为其发展做出贡献。
- [AppFlowy][4]
### 总而言之
有了社区的支持AppFlowy 可能成为 Notion 的可靠替代品。免费和开源这一点,足以吸引大量个人和团队一开始就来试用它。
我也开始期待它尽快增加对 Linux 的支持,我知道它已经在开发中了。
你如何看待 AppFlowy你打算试用它吗
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/appflowy-development/
作者:[Rishabh Moharir][a]
选题:[lujun9972][b]
译者:[zengyi1001](https://github.com/zengyi1001)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/rishabh/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/welcome.png?resize=1568%2C1117&ssl=1
[2]: https://github.com/AppFlowy-IO/appflowy
[3]: https://trello.com/b/NCyXCXXh/appflowy-roadmap
[4]: https://www.appflowy.io

View File

@ -0,0 +1,141 @@
[#]: subject: "Raspberry Pi 3 vs 4: Which One Should You Get?"
[#]: via: "https://itsfoss.com/raspberry-pi-3-vs-4/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-13997-1.html"
树莓派 3 还是 4你应该买哪一个
======
![](https://img.linux.net.cn/data/attachment/album/202111/18/171924sg0bk3iu43bwi3x4.jpg)
树莓派是一种物美价廉的单板计算机,在很多场景都很有用。不过,在树莓派 4 之前,它作为快速的桌面替代品并不是一个特别合适的选择。
所以,树莓派 4 以其新的功能改变了游戏规则。但是,它与树莓派 3 相比如何?
树莓派 3 仍然值得考虑吗?或者,你应该去买最新和更强大的树莓派 4
在这篇文章中,我们试图通过强调两者之间的一些关键差异来为你提供一些答案。
首先,让我们看一下两者提供的规格:
### 树莓派 3 的规格
![][1]
树莓派 3 满足了一个基本入门 DIY 项目的所有要求。如果 [树莓派 Zero 或 树莓派 Zero W][2] 不符合你的要求,那么树莓派 3 是一个物美价廉的选择:
* 四核 1.2GHz 博通 BCM2837 64 位 CPU
* 1GB 内存
* 无线局域网和低功耗蓝牙BLE
* 以太网
* 40 针扩展 GPIO
* 4 个 USB 2 端口
* 4 极立体声输出和复合视频端口
* 全尺寸的 HDMI
* CSI 摄像机端口
* DSI 显示端口
* 用于操作系统和存储数据的微型 SD 端口
* 升级后的开关式微型 USB 电源,最高可达 2.5A 电流
### 树莓派 4 的规格
![][3]
* 博通 BCM2711四核 Cortex-A72ARM v864 位 SoC @ 1.5GHz
* 2GB、4GB 或 8GB LPDDR4-3200
* 2.4 GHz 和 5.0 GHz IEEE 802.11ac 无线,蓝牙 5.0BLE
* 千兆位以太网
* 2 个 USB 3.0 端口
* 2 个 USB 2.0 端口
* 40 针 GPIO 接头(向后兼容)
* 2 个微型 HDMI 端口(最多可支持 4kp60
* 2 线 MIPI DSI 显示端口
* 2 线 MIPI CSI 摄像头端口
* 4 极立体声音频和复合视频端口
* H.2654kp60 解码H2641080p60 解码1080p30 编码)
* OpenGL ES 3.1Vulkan 1.0
* 用于操作系统和存储数据的 MicroSD 卡插槽
* 通过 USB-C 接口的 5V 直流电
* 通过 GPIO 接头的 5V 直流电
* 通过以太网供电
### 内存RAM选项
对于树莓派机型,通常情况下,你只会得到单一的产品配置,配备 1GB 或 2GB 内存。
树莓派 3B+ 就是这种情况。如果你不需要更多的内存,树莓派 3 就是一个不错的解决方案,可以满足所有常规 DIY 项目的需求。
然而,对于树莓派 4你可以选择 2GB、4GB 和 8GB 的版本。所以,如果你想完成更多的事情,或者在你的树莓派板上实现多个任务,树莓派 4 应该是一个不错的选择。
### 性能差异
尽管这两块板子都采用了博通公司的芯片,但树莓派 4 的性能明显更快。
如果你想把它作为你的迷你桌面的替代品,或者想为你的任务获得更好的计算能力,树莓派 4 将是明显的选择。
说到树莓派 3它配备了一个四核 1.2GHz 的博通 BCM2837 64 位 CPU。它是一个能够完成各种任务的芯片。
#### 连接能力
两块树莓派板都提供了一个 40 针的扩展 GPIO 接头。
然而,说到 USB 连接时,树莓派 4 提供了两个 USB 3.0 端口以及另外两个 USB 2 端口。而树莓派 3 只限于两个 USB 2 端口。
因此如果你需要更快的数据传输速度USB 3.0 端口应该会有帮助。例如,如果你要使用任何 [媒体服务器软件][4],这可以派上用场。
除此之外,树莓派 4 上还有 USB-C 的存在,如果 USB 配件需要它可以用来给电路板供电5V DC
#### 双显示器与相机支持
虽然树莓派 3 提供了一个全尺寸的 HDMI 端口、DSI 端口和 CS 端口,但它并不具有双显示器支持。
有了树莓派 4你可以得到两个微型 HDMI 端口,一个双通道 DSI 端口和一个双通道 CSI 摄像头端口。
### 你应该买哪一个?
规格 | 树莓派 3 | 树莓派 4
---|---|---
**处理器** | 四核 1.2GHz 博通 BCM2837 | 四核 1.5GHz 博通 BCM2711
**RAM** | 1 GB | 高达 8 GB
**蓝牙** | BLE | 蓝牙 5.0
**USB 端口** | 4 x USB 2.0 | 2 x USB 3.02 x USB 2.0
**无线连接** | 是 | 是(支持 2.4 GHz 和 5 GHz 频段)
**显示端口** | 1 x HDMI1 x DSI | 2 x micro-HDMI1 x DSI
**电源** | microUSB 和 GPIO最高 2.5 A | 5V 直流(通过 USB-C 和 GPIO3 A
**MicroSD 插槽** | 是 | 是
**价格** | 35 美元 | 35 美元1 GB 内存、45 美元2 GB 内存、55 美元4 GB 内存、75 美元8 GB 内存)
如果你想要更快的数据传输,支持双显示器,以及更好的性能,树莓派 4 是一个很好的选择。
考虑到 2GB 内存的树莓派 4 基本型号价格约为 35 美元,以几乎相同的价格去选 1GB 的树莓派 3实在是毫无意义。
当然,除非你能拿到更便宜的价格,并且有特定的需求,否则树莓派 4 总体上是一个明确的选择。
然而,有一些事情,如板子发热和其他潜在的问题,你可能想在决定之前探讨一下。树莓派 3 已被证明能在许多项目中发挥作用,而树莓派 4 是相当新的,可能还没有经过各种项目的测试。
一旦你确定了这一点,你就可以继续得到它们中的任何一个。
你喜欢用什么?你都试过了吗?请在下面的评论中告诉我们。
--------------------------------------------------------------------------------
via: https://itsfoss.com/raspberry-pi-3-vs-4/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/raspberry-pi-3.jpg?resize=800%2C534&ssl=1
[2]: https://itsfoss.com/raspberry-pi-zero-vs-zero-w/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/raspberry-pi-4.jpg?resize=583%2C340&ssl=1
[4]: https://itsfoss.com/best-linux-media-server/
[5]: https://itsfoss.com/raspberry-pi-projects/

View File

@ -0,0 +1,132 @@
[#]: subject: "vifm: A Terminal File Browser for Hardcore Vim Lovers"
[#]: via: "https://itsfoss.com/vifm-terminal-file-manger/"
[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14001-1.html"
vifm为铁杆 Vim 爱好者提供的终端文件浏览器
======
> 让我们探索一个基于终端的文件浏览器,可以使用 Vim 风格的键绑定。
![](https://img.linux.net.cn/data/attachment/album/202111/20/103256stau7uhetccj7uun.png)
当在命令行中浏览 [Linux 目录结构][1] 时,人们经常依赖 [cd 命令][2]。
这也没什么不好,因为你登录到任何一个 Linux 系统上都有 `cd` 命令。
然而,如果系统是由你维护的,你想更直观地看到目录,那么文件管理器比 `cd``tree` 命令要好得多。
是的,你也可以在终端中找到文件管理器。它们可能不如 Nautilus 这样的图形界面应用,但仍然比普通的老命令好。
有几个 [TUI][3] 文件浏览器,我们已经介绍了其中的几个。今天,我们来看看 `vifm`
### vifm 简介
![][4]
[vifm][5] 是一个命令行工具,它是一个文件管理器,导航和操作文件系统对象的键绑定与 Vim 类似。如果你不清楚我所说的“文件系统对象”是什么意思,它们是文件、目录、符号链接、硬链接等。
除了非常直观的 Vim 交互键绑定外,下面是 `vifm` 为你提供的一系列功能:
* 一个就在你的终端中的快速文件管理器
* 从文件管理器内编辑文本文件
* `vifm` 使用 curses 界面
* `vifm` 是跨平台的(在 Cygwin 的帮助下甚至可以在 Windows 上工作;它应该可以,但我没有测试过)
* 支持 Vim 风格的键绑定输入,如 `dd`、`j`、`k`、`h`、`l` 等
* [vifm 插件][6] 可以在 Vim 中使用,这样就可以通过 Vim 打开文件
* 支持 Vim 命令的自动补完
* 支持多个面板
* 可以使用 [或不使用] 正则表达式进行批量重命名
### 在 Linux 上安装 vifm
`vifm` 软件包并不算新,因此在默认情况下,即使是“稳定”发行版(如 Debian的软件库中也很容易找到它。
在 Debian 和基于 Debian 的发行版(如 Ubuntu、Pop!_OS、Mint 等)之上,你可以 [使用 apt 软件包管理器][7] 来安装 `vifm`
```
sudo apt install vifm
```
使用 [pacman 软件包管理器][8] 在 [基于 Arch 的 Linux 发行版][9]上安装 `vifm`
```
sudo pacman -S vifm
```
`vifm` 在 Fedora 和 RHEL 仓库中也有;用 DNF 软件包管理器安装它:
```
sudo dnf install vifm
```
安装好了 `vifm`,你可以简单地在终端输入 `vifm`,像下面这样,然后启动它:
```
vifm
```
### vifm 的用户界面
当你第一次启动 `vifm` 时,默认情况下,它会显示你当前所在目录的概览。你还会注意到,`vifm` 默认使用两个窗格。
![默认的 vifm 界面,包括一个正常的视图(隐藏的文件不可见)和两个默认打开的窗格][10]
如果你对界面感到困惑,只需尝试按 `j` 键将光标向下移动一行,按 `k` 键将光标向上移动一行。你可以通过按 `h` 键向上移动一级目录。就像 Vim 中一样!
如果你的光标目前在一个文件上,按 `l` 键将在 Vim 中打开该文件(如果没有另外说明的话)。但如果你的光标在一个目录上,按 `l` 键将导航到该目录并显示其内容。
你也可以通过按 `Ctrl + g` 键绑定,从文件管理器中获得关于文件或目录的详细信息。
![`Ctrl + g` 键绑定如何显示目录/文件信息的屏幕截图][11]
你可以按 `za` 键来显示被隐藏的文件和目录(开头有 `.` 的文件和目录默认是隐藏的)。如果这些特殊的文件和目录没有被隐藏,按 `za` 键将会隐藏它们。
你可以用 `zo` 键绑定一直显示隐藏的文件和目录,或用 `zm` 键绑定使这些项目一直不可见。
![举例说明,当你按下 `zo` 键绑定时的情况][12]
### 总结
由于它是基于 Vim 的,你可以用 vifmrc 文件来配置它。最新的默认键绑定速查表可以在 [vifm wiki][15] 的 [这个页面][16] 找到。这个项目的文档非常好。
![vifm 默认按键绑定][17]
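作为参考,下面是一段假设性的 `~/.config/vifm/vifmrc` 片段vifmrc 使用类 Vim 的语法,具体选项名请以官方文档为准):

```
" 用 vim 作为打开/编辑文件的命令
set vicmd=vim
" 用系统调用而不是外部程序执行文件操作
set syscalls
" 保留最近的目录访问等历史记录
set history=100
```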
`vifm` 是一个了不起的文件管理器,特别是对于 Vim 用户来说,因为它与 Vim 生态系统整合得非常好。它将许多 Vim 的功能和按键绑定整合到一个文件管理器中。默认的双窗格布局使其更具生产力。
不要犹豫,尝试一下 `vifm`。它真的是一个了不起的命令行工具。
--------------------------------------------------------------------------------
via: https://itsfoss.com/vifm-terminal-file-manger/
作者:[Pratham Patel][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pratham/
[b]: https://github.com/lujun9972
[1]: https://linuxhandbook.com/linux-directory-structure/
[2]: https://linuxhandbook.com/cd-command-examples/
[3]: https://itsfoss.com/gui-cli-tui/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/vifm-screenshot.png?resize=800%2C309&ssl=1
[5]: https://github.com/vifm/vifm
[6]: https://github.com/vifm/vifm.vim
[7]: https://itsfoss.com/apt-command-guide/
[8]: https://itsfoss.com/pacman-command/
[9]: https://itsfoss.com/arch-based-linux-distros/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/01_two_panes-1.webp?resize=800%2C600&ssl=1
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/02_ctrl_g_info.webp?resize=800%2C600&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/03_toggle_dotfile_visibility.webp?resize=800%2C600&ssl=1
[13]: https://itsfoss.com/nnn-file-browser-linux/
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/04/nnn-file-browser.jpg?fit=800%2C450&ssl=1
[15]: https://wiki.vifm.info/index.php/Main_Page
[16]: https://vifm.info/cheatsheets.shtml
[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/vifm-key-binding-cheatsheet.webp?resize=800%2C561&ssl=1

View File

@ -0,0 +1,98 @@
[#]: subject: "KDE Plasma 5.24 To Add a GNOME-Style Overview and Will Prevent You From Uninstalling Plasma"
[#]: via: "https://news.itsfoss.com/kde-plasma-5-24-dev/"
[#]: author: "Jacob Crume https://news.itsfoss.com/author/jacob/"
[#]: collector: "lujun9972"
[#]: translator: "wxy"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-14015-1.html"
开发中的 KDE Plasma 5.24 新变化:增加 GNOME 式概览、防删功能
======
> KDE Plasma 正在升级中,以改善类似于 GNOME shell 的概览,并为非技术用户提供更多的可用性改进。
![](https://i1.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/kde-plasma-5-24-dev-ft.png?w=1200&ssl=1)
自从 2011 年的 GNOME 3 以来,活动概览一直是 GNOME 交互方式的核心。尽管它在发布时受到了严厉的批评,但许多用户现在已经爱上了它,这也促使其他一些桌面环境考虑实现类似的功能。
而且,看起来 KDE Plasma 正在添加类似的东西,它就像是一个全新的类似 GNOME 的概览。让我们来仔细了解一下。
### 新的 Plasma 概览
![Image: Vlad Zahorodnii / KDE Developer][1]
当我偶然看到 KDE 开发者 [Nate Graham 的博文][2] 时,我注意到了一个与 GNOME 活动概览非常相似的东西。
如上面的截图所示,当你在 KDE 上按 `Windows`/`Super` 键进入概览界面时,你就会看到它。
然而,值得注意的是,它仍然处在开发版本中。而且,它已经被合并到 KDE Plasma 5.24 中去了。
但是它和 GNOME 活动概览相似吗?
它看起来确实相似,但有一些关键的区别,其中包括:
* 在概览中,你可以完全访问底部面板。
* 搜索功能是由 KRunner 提供的,用来寻找应用程序和活动窗口。
#### 可以完全访问底部面板
如果你正在使用 KDE你肯定知道底部面板。到目前为止它与 GNOME 的概览效果最显著的区别是有任务栏。这使得用户可以访问一个统一的地方来打开应用程序,访问快速设置,并查看通知。
作为一个拥有任务栏的传统桌面理念的粉丝,这对于 Plasma 这样的传统桌面环境来说感觉非常完美。
#### 使用 KRunner 进行强大的搜索(新增加的功能)
![][1]
多年来KRunner 无疑是 Linux 中最强大的应用程序启动器之一。它的一些神奇功能包括可以搜索:
* 文件
* 设置
* 应用程序
* 打开浏览器标签
* 打开现有窗口
因此KRunner 已被整合到 KWin 的概览效果中,可以让你搜索现有窗口或启动新的应用程序。
我相信很多用户会非常高兴地看到这个整合,特别是那些已经使用 KRunner 的用户。
### 防止用户卸载 KDE
为了改善非技术用户的使用体验DiscoverKDE 的软件中心)现在会阻止删除任何关键软件包(比如桌面环境)的行为。
![来自 PointiestStick 博客][3]
你可能知道,这个问题是由 Linus SebastianLinus Tech Tips强调的当他在 Pop!_OS 上安装 Steam 时,结果却删除了 GNOME、Xorg 和其他重要软件包。
所以,接下来的改进是为了解决这个问题,这对 KDE Plasma 5.24 来说是一个很好的补充。
### 其他改进
除了关键的亮点之外KDE Plasma 5.24 旨在提高性能、响应速度和用户体验。
此外,也有一些细微的用户界面调整和小程序改进。要想了解更多,你应该浏览一下 [Nate 的博文][2] 以及与上述功能相关的 [合并请求][4]。
### 总结
根据我对 KDE Plasma 的体验它是一个非常注重生产力的桌面。KDE 在改善桌面环境的概览效果和可用性的同时,将包括一个同样具有生产力和专注性的用户界面,这是很有意义的。
如果你想尝试新的改进,恐怕你必须等待 Plasma 5.24 发布。虽然这对现在来说是要等待相当长的时间2022 年 2 月),但我相信这将是值得的。
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/kde-plasma-5-24-dev/
作者:[Jacob Crume][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/jacob/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/kwin-kde-gnome-overview-effect.png?w=1200&ssl=1
[2]: https://pointieststick.com/2021/11/19/this-week-in-kde-most-of-gnome-shell-in-the-overview-effect/
[3]: https://i2.wp.com/news.itsfoss.com/wp-content/uploads/2021/11/cant-remove-plasma-1.png?w=1022&ssl=1
[4]: https://invent.kde.org/plasma/kwin/-/merge_requests/1688

View File

@ -1,65 +0,0 @@
[#]: subject: "Google to Pay up to $50,337 for Exploiting Linux Kernel Bugs"
[#]: via: "https://news.itsfoss.com/google-linux-kernel-bounty/"
[#]: author: "Rishabh Moharir https://news.itsfoss.com/author/rishabh/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Google to Pay up to $50,337 for Exploiting Linux Kernel Bugs
======
Google makes good use of Linux across its platforms, especially when it comes to Android and its massive servers. Over the years, Google has been inclining more towards open-source projects and programs.
Recently, the tech giant sponsored $1 million to fund a security-focused open-source program run by The Linux Foundation; more details are in our [original coverage][1].
And now, Google has tripled its bounty rewards for the next three months for security researchers working on finding kernel exploits that help achieve privilege escalation (i.e., when an attacker gains administrator access using a bug/flaw).
It's no surprise that there will always be some form of bugs and flaws that plague the security and development of the kernel. Fortunately, hundreds of security researchers from various organizations and individuals alike work to improve its state of security, which is why the vulnerabilities are not necessarily exploited in the wild.
Even though Google has a good track record of rewarding security researchers, it stepped up the game for the next three months by announcing rewards from a base of **$31,337** up to an upper limit of **$50,337**.
### Program Details and Rewards
The exploits can target currently patched vulnerabilities, new unpatched vulnerabilities, and new techniques.
The base reward of **$31,337** holds for exploits of publicly patched vulnerabilities that achieve privilege escalation. If the exploit identifies unpatched vulnerabilities or new exploit techniques, the reward can go up to **$50,337**.
Moreover, this program also goes along with the Android VRP and Patch Reward programs. This means if the exploit works on Android, you can be eligible for rewards up to 250,000 USD in addition to this program.
You can read more about this on their [official portal][2] if you are curious about Android.
The hike in reward will be open for the next three months, that is, until January 31, 2022.
Security researchers can go through their [official blog post][3] to set up the lab environment and read more about the requirements on their [official GitHub webpage][4].
### Wrapping Up
This program is an excellent initiative by Google. It is undoubtedly going to attract and benefit many security professionals and researchers alike.
Not to forget, the state of security for Linux Kernel should get the ultimate benefit.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/google-linux-kernel-bounty/
作者:[Rishabh Moharir][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/rishabh/
[b]: https://github.com/lujun9972
[1]: https://news.itsfoss.com/google-sos-sponsor/
[2]: https://bughunters.google.com/about/rules/6171833274204160
[3]: https://security.googleblog.com/2021/11/trick-treat-paying-leets-and-sweets-for.html
[4]: https://google.github.io/kctf/vrp

View File

@ -1,83 +0,0 @@
[#]: subject: "After Moving From FreeBSD to Void Linux, Project Trident Finally Discontinues"
[#]: via: "https://news.itsfoss.com/project-trident-discontinues/"
[#]: author: "John Paul https://news.itsfoss.com/author/john/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
After Moving From FreeBSD to Void Linux, Project Trident Finally Discontinues
======
Sadly, the [Project Trident][1] team announced that they will be ending development of their Linux distro.
### Story Time!!!
For those of you who have not heard of Project Trident, let me give you a little ride down memory lane. Back in 2005, Kris Moore introduced [PC-BSD][2] as an easy way to set up FreeBSD with a desktop interface. It was acquired the following year by [iXsystems][3]. In September of 2016, the name of the project was changed to TrueOS. The project also became a rolling release based on the Current branch of FreeBSD. Two years later, TrueOS [announced][4] that they would be doing away with the desktop version of their operating system and focusing on the enterprise and server market. The desktop elements [were spun off][5] to a new project: Project Trident.
For a time, the dev team at Project Trident tried their best to create a good desktop experience on top of FreeBSD. However, due to [issues with FreeBSD][6] including “hardware compatibility, communications standards, or package availability continue to limit Project Trident users”, they decided to base it on something else. Their solution was to rebase their project on [Void Linux][7] in 2019. For a while, it looked like the future of Project Trident was set. Then 2020 happened.
![Project Trident desktop][8]
### The End of a Project
On October 29th, the Project Trident team posted the following [announcement][9]:
> It is with great sadness that we are announcing that Project Trident will be entering its “sunset” period starting Nov 1 of 2021 and will be closing up shop in March of 2022. The core team of the project has come to this decision together. With changes and events over the past two years in life, jobs, family, etc; our individual priorities have changed as well.
>
> We will keep the Project Trident package repository and websites up and running until the EOL date of March 1, 2022, but we strongly encourage users to begin looking for alternative desktop OS solutions over the coming new year holiday.
>
> Thank you all for your support and encouragement! The project had a good run and we thoroughly enjoyed getting to know many of you over the years.
### The Lumina Project Continues
One constant throughout the PC-BSD/TrueOS/Project Trident saga is the desktop environment in use. In 2012, [Ken Moore][10] (Kris' younger brother) started working on a Qt-based desktop environment named [Lumina][11]. In 2014, it became the default desktop environment of PC-BSD and has stayed that way down to Project Trident. Lumina stands apart from other desktop environments because it was designed to be operating system agnostic. Other desktop environments like KDE and GNOME have Linux-specific code that makes it hard to port to BSD.
![Lumina desktop environment][8]
In June of this year, Ken [handed the reins of Lumina][12] to fellow Project Trident developer [JT Pennington][13] (also of [BSDNow][14] fame).
The [announcement][12] states:
> After more than 7 years of work, I have decided that it is time to let others take over the development of the Lumina Desktop project going forward. It has been an incredible task which has pushed me into areas of development that I had never previously considered. However, with work and life changes, my time for developing new functionality for Lumina has become nearly non-existent, particularly for the Qt5 -> Qt6 change that will be coming within the next year or so. By passing the torch over to JT (q5sys on GitHub), I am hoping that the project might receive more timely updates, for the benefit of everyone.
>
> Thank you all, and I hope for the continued success of the Lumina Desktop project!!
### Final Thoughts
I always had high hopes for Project Trident. They were small compared to many of the distros that we cover. They weren't just a reskin of Arch or Ubuntu with one or two new tools. Not only that, but they were working to improve a distro (Void) that shared their ideals. However, life happens, even to the best of us. I wish Ken, JT and the others well as they sunset a project that they have spent many hours working on. Hopefully, we'll be seeing more from them in the future.
--------------------------------------------------------------------------------
via: https://news.itsfoss.com/project-trident-discontinues/
作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://news.itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://project-trident.org/
[2]: https://en.wikipedia.org/wiki/TrueOS
[3]: http://ixsystems.com/
[4]: https://itsfoss.com/trueos-plan-change/
[5]: https://itsfoss.com/project-trident-interview/
[6]: https://project-trident.org/post/os_migration/
[7]: https://voidlinux.org/
[8]: data:image/svg+xml;base64,PHN2ZyBoZWlnaHQ9IjQzOSIgd2lkdGg9Ijc4MCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2ZXJzaW9uPSIxLjEiLz4=
[9]: https://project-trident.org/post/2021-10-29_sunset/
[10]: https://github.com/beanpole135
[11]: https://lumina-desktop.org/
[12]: https://lumina-desktop.org/post/2021-06-23/
[13]: https://github.com/q5sys
[14]: https://www.bsdnow.tv/

View File

@ -1,56 +0,0 @@
[#]: subject: "4 tips to becoming a technical writer with open source contributions"
[#]: via: "https://opensource.com/article/21/11/technical-writing-open-source"
[#]: author: "Ashley Hardin https://opensource.com/users/ashleyhardin"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
4 tips to becoming a technical writer with open source contributions
======
Your open source contributions show potential employers that you take the initiative and seek opportunities to learn, grow, and challenge yourself.
![A person writing.][1]
Whether you're a tech hobbyist interested in dabbling in technical writing or an established technologist looking to pivot your career to become a professional technical writer, you can build your technical writing portfolio with open source contributions. Writing for open source projects is fun, flexible, and low risk. Contribute to a project of interest to you on your own schedule, and you might be surprised at how welcoming the community can be or how fast you can make an impact.
Your open source contributions show potential employers that you take the initiative and seek opportunities to learn, grow, and challenge yourself. As with anything, you have to start somewhere. Contributing to open source projects allows you to showcase your talents while also learning new skills and technologies. In addition, writing for open source projects enables you to connect with new communities, collaborate with new people across time zones, and build your network. When you dig into open source opportunities, you enhance your resume and set yourself apart from other candidates. Here are four ways to get started with contributing to open source that can lead to a career in technical writing. 
### Learn the tools of the trade
To get started, I recommend becoming familiar with [Git][2], setting up [GitLab][3] and [GitHub][4] accounts, and finding a text editor that you like. Personally, I love working with the open source tool [Atom][5]. When it comes to Git, there is a wealth of free learning resources available online, including some excellent interactive tutorials. You don't need to be a Git master to dive into open source. I recommend learning the basics first and letting your skills develop as you contribute more.
### Find a project
The hardest part of contributing to open source can be finding a project to contribute to. You can check out [Up For Grabs][6] and search for projects that interest you. [First Timers Only][7] has more resources on getting started. Don't hesitate to contact project maintainers to learn more about the project and where they need help. Be persistent. It can take some time to find a project that's right for you.
### Say goodbye to imposter syndrome
A common misconception is that you need to be a programmer to contribute to open source projects. As a self-taught contributor with no engineering or computer science credentials, I can assure you that is not the case. Documentation is often the most valuable but most neglected part of development projects. These projects often lack the people and resources needed to create complete, quality documentation. This presents a great opportunity for you to get involved by submitting pull requests or by filing issues against the project. You can do it!
### Start small
Check the repository for the project that you are interested in for possible contribution guidelines to follow. Next, look for opportunities to update README files or to submit typo fixes. No contribution is too small. The project maintainers will likely be happy for the help, and you'll be glad to get your first pull requests submitted as you build your technical writing portfolio.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/technical-writing-open-source
作者:[Ashley Hardin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ashleyhardin
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E (A person writing.)
[2]: https://git-scm.com/
[3]: https://about.gitlab.com/
[4]: https://github.com/
[5]: https://atom.io/
[6]: https://up-for-grabs.net/#/
[7]: https://www.firsttimersonly.com/


@ -0,0 +1,72 @@
[#]: subject: "Why now is a great time to consider a career in open source hardware"
[#]: via: "https://opensource.com/article/21/11/open-source-hardware-careers"
[#]: author: "Joshua Pearce https://opensource.com/users/jmpearce"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Why now is a great time to consider a career in open source hardware
======
Open source hardware is now a field of its own and it is growing
rapidly.
![open source hardware shaking hands][1]
It has become commonplace in the software industry for programmers of all flavors to build careers writing code that releases to the commons with open source licenses. Industry headhunters often demand access to the code to vet future employees. Those that focus their career on open source development get rewarded. According to payscale.com, Linux sysadmins earn more than their Windows counterparts, indicating better pay and job security for jobs in open source software. There's also a good feeling (maybe even karma) that comes with sharing your work. You know you are creating value literally for the entire world. Historically, such opportunities did not exist for those of us that work in open hardware. 
Twenty years or so ago, almost no one even knew what open source hardware was, let alone planned a career around it. In 2000, for example, out of the more than 2 million academic papers published that year in the entire world, only seven articles even mentioned "open source hardware" at all. When I first wrote [_Open-Source Lab_][2], I'd collected every example (only a few dozen) and could easily keep up, reading every open hardware article that was published and posting it on a wiki. I am happy to report that is no longer physically possible. There have already been over 1,500 articles that discuss "open source hardware" this year, and I am sure many more will be out by year's end. Open source hardware is now a field of its own, with a few journals dedicated to it specifically (for example, [_HardwareX_][3] and the [_Journal of Open Hardware_][4]). In a wide range of fields, dozens of traditional journals now routinely cover the latest open hardware developments.
![Smart open source 3-D printing][5]
Developing smart open source 3-D printing (Joshua Pearce, [GNU-FDL][6])
Even a decade ago, stressing open source hardware development was somewhat of a risk from a career perspective. I remember downplaying it on my resume for my last job and stressing my more conventional work. Supervisors in industry and academia had difficulty figuring out how you'd capture value if designs were given away and got manufactured elsewhere. This has all been changing. Like free and open source software, open source hardware development is faster and, dare I say, superior to proprietary approaches.
![Open source recycle bot][7]
(Joshua Pearce, [GNU-FDL][6])
There are plenty of successful [open hardware business models][8] for every kind of enterprise. With the rise of digital manufacturing (largely due to open source development), the lines have blurred between open source software and open source hardware. Open source software like [FreeCAD][9] enables open designs to be made and then used in built-in CAM to get fabricated on open source laser cutters, CNC mills, or 3-D printers. [OpenSCAD][10], an open source script-based CAD package, in particular, really blurs the lines between software and hardware so much that code and physical design become synonymous. Many of us started speaking openly about open hardware. I made it a core thrust of my research program, first making my own equipment open source and then working on open hardware development for others. I was far from alone. As a community, we had gained enough critical mass that the [Open Source Hardware Association][11] (OSHWA) was founded in 2012. Today, almost a decade later, career prospects in open source hardware are totally different: Hundreds of open source hardware companies exist, the Internet is swimming with millions (millions!) of open source designs, and the interest in open source hardware in the academic literature has been rising exponentially.
![Open source production for solar photovoltaics][12]
Developing open source production for solar photovoltaics (Joshua Pearce, [GNU-FDL][6])
There are even jobs meant to push a faster transition to ubiquitous open source hardware. For example, the Internet of Production (IoP) Alliance, which is developing open data standards and growing the community of users of those standards, has [positions open now][13] for an Operations & Communications Officer, a Data Standards Community Support Manager, and a DevOps engineer. I was just hired into a tenured endowed chair at [Western University in Canada,][14] a top 1% global university, **because** of my open source hardware work, not in spite of it. The position is cross-pointed with the [Ivey Business School,][15] the #1 business school in Canada. My job is to help the University rapidly evolve to take advantage of open source technology development opportunities. To put my money where my mouth is, I am [currently hiring][16] graduate students at the master's and PhD levels, including a full-tuition scholarship and a living stipend. These [Free Appropriate Sustainability Technology (FAST) Lab][17] graduate engineering positions are specifically reserved for developing open source hardware for a range of applications covering solar photovoltaic systems, distributed recycling, and emergency food production. This type of work is increasingly financed by funders who want to maximize the [return on their investment for research][18]. Entire nations are moving in this direction. The latest good example is France, which just published its [second plan for Open Science][19]. I have noticed a marked uptick in the number of "open source" keyword grants listed on [GrantForward][20] for open source funding in the US. Many foundations have already received the open source memo loud and clear—so there is a growing deluge of opportunities in open source R&D.
So if you have not already, maybe it is time for you to consider open source as a career, even if you are an engineer who likes to develop hardware.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/open-source-hardware-careers
作者:[Joshua Pearce][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jmpearce
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open-source-hardware.png?itok=vS4MBRSh (shaking hands open source hardware)
[2]: https://www.appropedia.org/Open-source_Lab
[3]: https://www.hardware-x.com/
[4]: https://openhardware.metajnl.com/
[5]: https://opensource.com/sites/default/files/uploads/smart-open-source-3d-printing.png (Smart open source 3-D printing)
[6]: https://www.gnu.org/licenses/fdl-1.3.en.html
[7]: https://opensource.com/sites/default/files/pictures/open-source-recyclebot_0.jpg (Open source recycle bot)
[8]: https://doi.org/10.5334/joh.4
[9]: https://www.freecadweb.org/
[10]: https://openscad.org/
[11]: https://www.oshwa.org/
[12]: https://opensource.com/sites/default/files/uploads/open-source-solar-photovoltaics.png (Open source production for solar photovoltaics)
[13]: https://www.internetofproduction.org/hiring
[14]: https://www.uwo.ca/
[15]: https://www.ivey.uwo.ca/
[16]: https://www.appropedia.org/FAST_application_process
[17]: https://www.appropedia.org/Category:FAST
[18]: https://www.academia.edu/13799962/Return_on_Investment_for_Open_Source_Hardware_Development
[19]: https://www.ouvrirlascience.fr/wp-content/uploads/2021/10/Second_French_Plan-for-Open-Science_web.pdf
[20]: https://www.grantforward.com/index


@ -0,0 +1,122 @@
[#]: subject: "What is open core?"
[#]: via: "https://opensource.com/article/21/11/open-core-vs-open-source"
[#]: author: "Scott McCarty https://opensource.com/users/fatherlinux"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What is open core?
======
How does open core differ from open source? When is one more useful than
the other?
![A confusing business organization chart][1]
What is open core? Is a project open core, or is a business open core? That's debatable. Like open source, some view it as a [development model][2], others view it as a [business model][3]. As a product manager, I view it more in the context of value creation and value capture.
![market problems based on open core][4]
(Scott McCarty, CC BY-SA 4.0)
With open source, an engineering team can capture more value than it contributes. An engineer participating in an open source project can contribute $1 worth of code, yet get back $10, $20, $30, or more worth of value. This value is measured in both personal brand, as well as ability to lead and influence the project in a direction that is beneficial to their employer.
With open core, at least some of the code is proprietary. With proprietary code, a company hires engineers, solves business problems, and charges for that software. For the proprietary portion of the code base, there is no community-based engineering, so there's no process by which a community member can profit by participating. With proprietary code, a dollar invested in engineering is a dollar returned in code. Unlike open source, a proprietary development process can't return more value than the engineering team contributes (see also: [Why Red Hat is investing in CRI-O and Podman][5]).
This lack of community profit is really important when analyzing open core. There is no community for that part of the code, so there is no community participation, no community profit. If there is no community, there is no potential for a community member to gain the standard benefits of open source (personal brand, influence, right to use, learning, etc.).
There's no differential value created with open core, so in [18 ways to differentiate open source products from upstream suppliers][6], I describe it as a methodology for capturing value. Community-driven open source is about value creation, not value capture. This is a fundamental tension between open source and open core.
### Open core versus glue code
First, let's take a look at what people typically view as open core. As described [on Wikipedia][3], "the open-core model primarily involves offering a 'core' or feature-limited version of a software product as free and open-source software, while offering 'commercial' versions or add-ons as proprietary software." The drawing below shows this model graphically.
An example of this model is SugarCRM, which had a core, open source piece of software as well as a bunch of plugins, many of which were proprietary. Another example of this is [the original plan Microsoft had for the Hot Reload feature in .Net][7] (which has since been reversed).
![Open Core and Proprietary Diagram][8]
Another related model is what I'll refer to as glue code. Glue code doesn't directly provide a customer business value. Instead, it hangs a bunch of projects together. Notice, in this example, I demonstrate three open source projects, one data-driven [API service][9], and some glue code that holds it all together. Glue code can be open source or proprietary, but this is not what people typically think of when they talk about open core.
An example of open source glue code is Red Hat Satellite Server. It's made up of multiple upstream open source projects like Foreman, Candlepin, Pulp, and Katello, as well as a connection to a data service for updates (as well as [connections with tools like Red Hat Insights][10]). In the case of Satellite server, all of the glue code is open source, but one can easily imagine how other companies might make the choice to employ proprietary code for this functionality.
![An example of open source glue code][11]
### When open core conflicts with community goals
The classic problem with open core is when the upstream community wants to implement a feature that is in one of the proprietary bolt-ons. The company or product which employs the open core model has an incentive to stop this from happening in the open source project on which the proprietary code relies. This creates some serious issues for both the upstream community and for customers.
Potential customers will be incentivized to participate in the community to implement the proprietary features which are perceived as missing. Community members who try to implement these features will be shocked or annoyed when their pull requests are difficult to get accepted or are rejected outright.
I've never seen a serious solution for this problem. In this video, [How To Do Open Core Well: 6 Concrete Recommendations][12], Jono Bacon recommends being upfront with community members. He recommends telling them that pull requests which compete with proprietary product features will be rejected. While that's better than not being upfront, it's not a scalable solution. Both the upstream project and the downstream product with proprietary features are constantly changing landscapes. Often, community contributors aren't even paying attention to the downstream product and have no idea which features are implemented downstream, or worse, on the roadmap to be implemented as proprietary features. The upstream community is rarely emotionally engaged with the business problems solved by downstream products, and can easily be confused when their pull requests are rejected.
Even if the community is willing to accept the no-go zones (example: [GitLab Features by Paid Tier][13]), this makes it highly probable that the open source project will be a single-vendor endeavor (example: [GitLab contributions are primarily GitLab employees][14]). It's highly unlikely that competitors will participate, and this will intrinsically limit the value creation of the community. The open core business could still capture value through thought leadership, technology adoption, and customer loyalty, but arguably they will never truly create more code value than they invest.
Apart from these problems, if an upstream project truly adheres to open governance, there's actually nothing the open core business can do to prevent proprietary features from being implemented. Intra-project (within a single project) proprietary code just doesn't work.
### When open core might work
Glue code is a place where open core or proprietary code might work. I'm not advocating for open core, and I often think it's inefficient, but I want to be honest with my analysis. There are indeed natural boundaries between open source projects. Going back to my open source as a supply chain thesis (see also: [The Delicate Art of Product Management with Open Source][15]), a [fuel injector][16] is a fuel injector; it's not an [alternator][17]. These natural demarcation points do make good areas for differentiation of the downstream product from the upstream project (see also: [18 Ways to differentiate open source software products from their upstream projects][6]).
A classic example of proprietary glue code is the original [Red Hat Network (RHN)][18], released in the year 2000. RHN was essentially a SaaS offering which provided updates for [Red Hat Linux][19] machines, and this was before [Red Hat Enterprise Linux][20] was even a thing. For context, when RHN was released, the term open core wasn't even invented yet ([coined in 2008][3]), coincidentally the same year that the [upstream Spacewalk project][21] was released. Back then, everyone was still learning the core competencies of how to do open source well.
In retrospect, I don't think it's a coincidence that RHN existed at the nexus of the natural demarcation point between an upstream Linux distribution and pay-for offering. This naturally fits the mental model of [differentiating a product from the upstream supplier][6]. It might be tempting to conclude - "See!?!? The largest open source company in the world differentiated itself with proprietary code! Open core is the reason Red Hat survived and flourished" - but I'd be careful not to confuse correlation with causation. Red Hat eventually open sourced RHN as Spacewalk and never took a hit to revenue.
Pivoting slightly, one could also make an argument that the cloud providers often follow this model today. It's well known in the industry that most of the large cloud providers carry their own forks of the Linux kernel. These forks have proprietary extensions which make Linux usable in their environments. These features don't solve a customer's business problem directly but instead solve the cloud provider's problems. They're essentially glue code.
Cloud providers have a slightly different motivation for not getting these changes upstream. For them, carrying a fork is often cheaper and/or easier (though not easy) than contributing these features upstream, especially when the changes are often not wanted by the Linux kernel community. Cloud providers are often choosing the best bad idea out of a bunch of bad ideas.
Open core glue code might be called inter-project (between multiple projects) proprietary code. This might work, but arguably, this kind of code is already difficult to implement and doesn't need the perceived "protections" of a proprietary license. Stated another way, open source contributors aren't necessarily incentivized to work on and maintain glue code. It's a natural place where a vendor can differentiate. Often glue code is complex and requires specific integrations between specific versions of upstream projects for specific life cycle requirements. All of these specific requirements make glue code a great place for a product to differentiate itself from the upstream projects without the need for a proprietary license. The velocity (speed and direction) of enterprise integrations is quite different from the velocity needed for collaboration between multiple upstream projects. This velocity mismatch between upstream community needs and downstream customer needs is a perfect place for differentiation without the need for open core.
### Conclusion
Can open core work? Is it better than open source? Should everyone use it? It seems obvious that open core can work, but only in very specific situations with very specific types of software (e.g., glue code). It seems less obvious that there's any argument that open core is actually better for creating value. Therefore, I do not recommend open core for most businesses. Furthermore, I think the perceived protections that it offers are actually unnecessary.
Often, vendors find natural places to compete with each other. For example, SUSE runs the [OpenSUSE Build Service][22], which is based on completely open source code. Even though Red Hat could download the source code and set up a competing build service, they haven't. In fact, the upstream Podman project, which is heavily sponsored by Red Hat, uses the [OpenSUSE build service][23]. Though SUSE could easily make this code proprietary, they don't need to. Setting up, running, and maintaining a competing service is a lot of work and doesn't necessarily provide Red Hat customers with a lot of value.
I still think open core is a step in the right direction compared with fully proprietary code (example: [GitLab is open core, GitHub is closed source][24]), but I don't see a compelling reason to promote it as a better alternative to completely open source. In fact, I think it's [exceedingly difficult][12] to do open core well and likely impossible to genuinely create differentiated value with it (see also: [The community-led renaissance of open source][25] and [Fauxpen source is bad for business][26]).
This thesis on open core was developed by working with and learning from hundreds of passionate people in Open Source, including engineers, product managers, marketing managers, sales people, and some of the foremost lawyers in this space. To release new features and capabilities in Red Hat Enterprise Linux and OpenShift, like launching Red Hat Universal Base Image, I've worked closely with so many different teams at Red Hat. I've absorbed 25+ years of institutional knowledge, in my 10+ years here. Now, I'm trying to formalize this a bit in public work like [The Delicate Art of Product Management with Open Source][15] and follow-on articles like this one.
This work has contributed to my recent promotion to Senior Principal Product Manager of [RHEL for Server][27], [arguably the largest open source business in the world][28]. Even with this experience, I'm constantly listening, learning, and seeking truth. I'd love to discuss this subject further in the comments or on Twitter (@fatherlinux).
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/open-core-vs-open-source
作者:[Scott McCarty][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/fatherlinux
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89 (A confusing business organization chart)
[2]: https://opensource.org/blog/OpenCore
[3]: https://en.wikipedia.org/wiki/Open-core_model
[4]: https://opensource.com/sites/default/files/values.png
[5]: https://www.redhat.com/en/blog/why-red-hat-investing-cri-o-and-podman
[6]: https://opensource.com/article/21/2/differentiating-products-upstream-suppliers
[7]: https://dusted.codes/can-we-trust-microsoft-with-open-source
[8]: https://opensource.com/sites/default/files/uploads/open_core_diagram1.png (Open Core and Proprietary Diagram)
[9]: https://opensource.com/article/17/8/open-core-vs-open-perimeter
[10]: https://access.redhat.com/documentation/en-us/red_hat_satellite/6.2/html/server_administration_guide/ch06s06
[11]: https://opensource.com/sites/default/files/uploads/open_core_diagram2.png (An example of open source glue code)
[12]: https://www.youtube.com/watch?v=o-OOxOS8oDs
[13]: https://about.gitlab.com/features/by-paid-tier/
[14]: https://gitlab.com/gitlab-org/gitlab-foss/-/graphs/master
[15]: http://crunchtools.com/the-delicate-art-of-product-management/
[16]: https://en.wikipedia.org/wiki/Fuel_injection
[17]: https://en.wikipedia.org/wiki/Alternator
[18]: https://en.wikipedia.org/wiki/Red_Hat_Network
[19]: https://en.wikipedia.org/wiki/Red_Hat_Linux
[20]: https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux
[21]: https://spacewalkproject.github.io/
[22]: https://en.opensuse.org/openSUSE:Build_Service_FAQ
[23]: https://build.opensuse.org/package/show/openSUSE:Factory/podman
[24]: https://about.gitlab.com/blog/2016/07/20/gitlab-is-open-core-github-is-closed-source/
[25]: https://opensource.com/article/19/9/community-led-renaissance
[26]: https://opensource.com/article/19/4/fauxpen-source-bad-business
[27]: https://www.redhat.com/en/store/red-hat-enterprise-linux-server
[28]: https://last10k.com/sec-filings/rht/0001087423-19-000012.htm#link_fullReport


@ -1,185 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (FigaroCao)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Powershell to automate Linux, macOS, and Windows processes)
[#]: via: (https://opensource.com/article/20/2/devops-automation)
[#]: author: (Willy-Peter Schaub https://opensource.com/users/wpschaub)
Using Powershell to automate Linux, macOS, and Windows processes
======
Automation is pivotal to DevOps, but is everything automatable?
![CICD with gears][1]
Automation takes control of manual, laborious, and error-prone processes and replaces engineers performing manual tasks with computers running automation scripts. Everyone agrees that manual processes are a foe of a healthy DevOps mindset. Some argue that automation is not a good thing because it replaces hard-working engineers, while others realize that it boosts consistency, reliability, and efficiency, saves time, and (most importantly) enables engineers to work smart.
> "_DevOps is not just automation or infrastructure as code_" —[Donovan Brown][2].
Having used automated processes and toolchains since the early '80s, I always twitch when I hear or read the recommendation to "automate everything." While it is technically possible to automate everything, automation is complex and comes at a price in terms of development, debugging, and maintenance. If you have ever dusted off a venerable Azure Resource Manager (ARM) template or a precious maintenance script you wrote a long time ago, expecting it to execute flawlessly months or years later, you will understand that automation, like any other code, is brittle and needs continuous maintenance and nurture.
So, what and when should you automate?
* Automate processes you perform manually more than once or twice.
* Automate processes you will perform regularly and continuously.
* Automate everything automatable.
More importantly, what should you _not_ automate?
* Don't automate processes that are a one-off—it is not worth the investment unless you reuse it as reference documentation and regularly validate to ensure it remains functional.
* Don't automate highly volatile processes—it is too complex and expensive.
* Don't automate broken processes—fix them before automating.
For example, my team continuously inspects hundreds of user activities on our common collaboration and engineering system, looking for inactivity that is wasting precious dollars. If a user has been inactive for three or more months and has been assigned an expensive license, we revert the user to a less functional and free license.
As Fig. 1 shows, it is not a technically challenging process. It is a mind-numbing and error-prone process, especially when it's performed while context switching with other development and operational tasks.
![Manual process to switch user license][3]
Fig. 1 Manual process to switch user license
Incidentally, this is an example of a value stream map created in three easy steps:
1. Visualize all activities: list users, filter users, and reset licenses.
2. Identify stakeholders, namely operations and licensing teams.
3. Measure:
    * Total lead time (TLT) = 13 hours
    * Total cycle time (TCT) = 1.5 hours
    * Total efficiency percentage = TCT/TLT*100 = 11.5% (worked through below)
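As a quick check of that last number, the efficiency is the hands-on cycle time divided by the end-to-end lead time:

```
Total efficiency = TCT / TLT * 100
                 = 1.5 h / 13 h * 100
                 ≈ 11.5%
```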
If you hang a copy of these visualizations in high-traffic and high-visibility areas, such as your team's breakout area, cafeteria, or on the way to your washrooms, you will trigger lots of discussions and unsolicited feedback. For example, looking at the visual, it is evident that the manual tasks are a waste, caused primarily by long process wait times.
Let us explore a simple PowerShell script that automates the process, as shown in Figure 2, reducing the total lead time from 13 hours to 4 hours and 60 seconds, and raising the overall efficiency from 11.5% to 12.75%.
![Semi-automated PowerShell-based process to switch user license][4]
Fig. 2 Semi-automated PowerShell-based process to switch user license
[PowerShell][5] is an open source task-based scripting language. It is found [on GitHub][6], is built on .NET, and allows you to automate Linux, macOS, and Windows processes. Users with a development background, especially C#, will enjoy the full benefits of PowerShell.
The PowerShell script example below communicates with [Azure DevOps][7] via its service [REST API][8]. The script combines the manual list users and filter users tasks in Fig. 1, identifies all users in the **DEMO** organization that have not been active for two months and are using either a **Basic** or a more expensive **Basic + Test** license, and outputs the user's details to the console. Simple!
First, set up the authentication header and other variables that will be used later with this initialization script:
```
param(
  [string]   $orgName       = "DEMO",
  [int]      $months        = "-2",
  [string]   $patToken      = "<PAT>"
)
# Basic authentication header using the personal access token (PAT)
$basicAuth = ("{0}:{1}" -f "",$patToken)
$basicAuth = [System.Text.Encoding]::UTF8.GetBytes($basicAuth)
$basicAuth = [System.Convert]::ToBase64String($basicAuth)
$headers   = @{Authorization=("Basic {0}" -f $basicAuth)}
# REST API Request to get all entitlements
$request_GetEntitlements    = "https://vsaex.dev.azure.com/" + $orgName + "/_apis/userentitlements?top=10000&api-version=5.1-preview.2";
# Initialize data variables
$members              = New-Object System.Collections.ArrayList
[int] $count          = 0;
[string] $basic       = "Basic";
[string] $basicTest   = "Basic + Test Plans";
```
Next, query all the entitlements with this script to identify inactive users:
```
# Send the REST API request and initialize the members array list.
$response = Invoke-RestMethod -Uri $request_GetEntitlements -headers $headers -Method Get
$response.items | ForEach-Object { $members.add($_.id) | out-null }
# Iterate through all user entitlements
$response.items | ForEach-Object {
  $name    = [string]$_.user.displayName;
  $date    = [DateTime]$_.lastAccessedDate;
  $expired = Get-Date;
  $expired = $expired.AddMonths($months);
  $license = [string]$_.accessLevel.AccountLicenseType;
  $licenseName = [string]$_.accessLevel.LicenseDisplayName;
  $count++;
  if ( $expired -gt $date ) {
    # Ignore users who have NEVER or NOT YET ACTIVATED their license
    if ( $date.Year -eq 1 ) {
      Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
    }
    # Look for BASIC license
    elseif ( $licenseName -eq $basic ) {
      Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
    }
    # Look for BASIC + TEST license
    elseif ( $licenseName -eq $basicTest ) {
      Write-Host " **INACTIVE** " " Name: " $name " Last Access: " $date "License: " $licenseName
    }
  }
}
```
When you run the script, you get the following output, which you can forward to the licensing team to reset the user licenses:
```
**INACTIVE** Name: Demo1 Last Access: 2019/09/06 11:01:26 AM License: Basic
**INACTIVE** Name: Demo2 Last Access: 2019/06/04 08:53:15 AM License: Basic
**INACTIVE** Name: Demo3 Last Access: 2019/09/26 12:54:57 PM License: Basic
**INACTIVE** Name: Demo4 Last Access: 2019/06/07 12:03:18 PM License: Basic
**INACTIVE** Name: Demo5 Last Access: 2019/07/18 10:35:11 AM License: Basic
**INACTIVE** Name: Demo6 Last Access: 2019/10/03 09:21:20 AM License: Basic
**INACTIVE** Name: Demo7 Last Access: 2019/10/02 11:45:55 AM License: Basic
**INACTIVE** Name: Demo8 Last Access: 2019/09/20 01:36:29 PM License: Basic + Test Plans
**INACTIVE** Name: Demo9 Last Access: 2019/08/28 10:58:22 AM License: Basic
```
If you automate the final step, automatically setting the user licenses to a free stakeholder license, as in Fig. 3, you can further reduce the overall lead time to 65 seconds and raise the overall efficiency to 77%.
![Fully automated PowerShell-based process to switch user license][9]
Fig. 3 Fully automated PowerShell-based process to switch user license
The core value of this PowerShell script is not just the ability to _automate_ but also to perform the process _regularly_, _consistently_, and _quickly_. Further improvements would trigger the script weekly or daily using a scheduler such as an Azure pipeline, but I will hold the programmatic license reset and script scheduling for a future article.
Here is a graph to visualize the progress:
![Graph to visualize progress][10]
Fig. 4 Measure, measure, measure
I hope you enjoyed this brief journey through automation, PowerShell, REST APIs, and value stream mapping. Please share your thoughts and feedback in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/devops-automation
作者:[Willy-Peter Schaub][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/wpschaub
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cicd_continuous_delivery_deployment_gears.png?itok=kVlhiEkc (CICD with gears)
[2]: http://www.donovanbrown.com/post/what-is-devops
[3]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_1.png (Manual process to switch user license)
[4]: https://opensource.com/sites/default/files/uploads/the_devops_quest_to_automate_everything_automatable_using_powershell_picture_2.png (Semi-automated PowerShell-based process to switch user license)
[5]: https://opensource.com/article/19/8/variables-powershell
[6]: https://github.com/powershell/powershell
[7]: https://docs.microsoft.com/en-us/azure/devops/user-guide/what-is-azure-devops?view=azure-devops
[8]: https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-5.1
[9]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_3.png (Fully automated PowerShell-based process to switch user license)
[10]: https://opensource.com/sites/default/files/uploads/devops_quest_to_automate_4.png (Graph to visualize progress)


@ -1,336 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Scaling a GraphQL Website)
[#]: via: (https://theartofmachinery.com/2020/06/29/scaling_a_graphql_site.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Scaling a GraphQL Website
======
For obvious reasons, I normally write abstractly about work I've done for other people, but I've been given permission to write about a website, [Vocal][1], that I did some SRE work on last year. I actually gave [a presentation at GraphQL Sydney back in February][2], but for various reasons it's taken me this long to get it into a blog post.
Vocal is a GraphQL-based website that got traction and hit scaling problems that I got called in to fix. Here's what I did. Obviously, you'll find this post useful if you're scaling another GraphQL website, but most of it is representative of what you have to deal with when a site gets enough traffic to cause technical problems. If website scalability is a key interest of yours, you might want to read [my recent post about scalability][3] first.
### Vocal
![][4]
Vocal is a blogging platform publishing everything from diaries to movie reviews to opinion pieces to recipes to professional and amateur photography to beauty and lifestyle tips and poetry. Of course, there's no shortage of proud pet owners with cute cat and dog pictures.
![][5]
One thing that's a bit different about Vocal is that it lets everyday people get paid for producing works that viewers find interesting. Authors get a small amount of money per page view, and can also receive donations from other users. There are professionals using the platform to show off their work, but for most users it's just a fun hobby that happens to make some extra pocket money as a bonus.
Vocal is the product of [Jerrick Media][6], a New Jersey startup. Development started in 2015 in collaboration with [Thinkmill][7], a medium-sized Sydney software development consultancy that specialises in all things JavaScript, React and GraphQL.
### Some spoilers for the rest of this post
I was told that unfortunately I can't give hard traffic numbers for legal reasons, but publicly available information can give an idea. Alexa ranks all websites it knows of by traffic level. Here's a plot of Alexa rank I showed in my talk, showing growth from November 2019 up to getting ranked number 5,567 in the world by February.
![Vocal global Alexa rank rising from #9,574 in November 2019 to #5,567 in February 2020.][8]
It's normal for the curve to slow down because it requires more and more traffic to win each position. Vocal is now at around #4,900. Obviously there's a long way to go, but that's not shabby at all for a startup. Most startups would gladly swap their Alexa rank with Vocal.
Shortly after the site was upgraded, Jerrick Media ran a marketing campaign that doubled traffic. All we had to do on the technical side was watch numbers go up in the dashboards. In the past 9 months since launch, there have only been two platform issues needing staff intervention: [the once-in-five-years AWS RDS certificate rotation that landed in March][9], and an app rollout hitting a Terraform bug. I've been very happy with how little platform busywork is needed to keep Vocal running.
Here's an overview of the technical stuff I'll talk about in this post:
* Technical and historical background
* Database migration from MongoDB to Postgres
* Deployment infrastructure revamp
* Making the app compatible with scaling
* Making HTTP caching work
* Miscellaneous performance tweaks
### Some background
Thinkmill built a website using [Next.js][10] (a React-based web framework), talking to a GraphQL API provided by [Keystone][11] in front of MongoDB. Keystone is a GraphQL-based headless CMS library: you define a schema in JavaScript, hook it up to some data storage, and get an automatically generated GraphQL API for data access. It's a free and open-source software project that's commercially backed by Thinkmill.
#### Vocal V2
Version 1 of Vocal got traction. It found a userbase that liked the product, and it grew, and eventually Jerrick Media asked Thinkmill to help develop a version 2, which was successfully launched in September last year. The Jerrick Media folk avoided the [second system effect][12] by generally basing changes on user feedback, so they were [mostly UI and feature changes that I won't go into][13]. Instead, I'll talk about the stuff I was brought in for: making the new site more robust and scalable.
For the record, I'm thankful that I got to work with Jerrick Media and Thinkmill on Vocal, and that they let me present this story, but [I'm still an independent consultant][14]. I wasn't paid or even asked to write this post, and this is still my own personal blog.
### The database migration
Thinkmill suffered several scalability problems with using MongoDB for Vocal, and decided to upgrade Keystone to version 5 to take advantage of its new Postgres support.
If you've been in tech long enough to remember the “NoSQL” marketing from the end of the 00s, that might surprise you. The message was that relational (SQL) databases like Postgres aren't as scalable as “webscale” NoSQL databases like MongoDB. It's technically true, but the scalability of NoSQL databases comes from compromises in the variety of queries that can be efficiently handled. Simple, non-relational databases (like document and key-value databases) have their places, but when used as a general-purpose backend for an app, the app often outgrows the querying limitations of the database before it outgrows the theoretical scaling limit a relational database would have. Most of Vocal's DB queries worked just fine with MongoDB, but over time more and more queries needed hacks to work at all.
In terms of technical requirements, Vocal is very similar to Wikipedia, one of the biggest sites in the world. Wikipedia runs on MySQL (or rather, its fork, MariaDB). Sure, some significant engineering is needed to make that work, but I don't see relational databases being a serious threat to Vocal's scaling in the foreseeable future.
At one point I checked, and the managed AWS RDS Postgres instances cost less than a fifth as much as the old MongoDB instances, yet CPU usage of the Postgres instances was still under 10%, despite serving more traffic than the old site. That's mostly because of a few important queries that just never were efficient under the document database architecture.
The migration could be a blog post of its own, but basically a Thinkmill dev built an [ETL pipeline][15] using [MoSQL][16] to do the heavy lifting. Thanks to Keystone being a FOSS project, I was also able to contribute some performance improvements to its GraphQL to SQL mapping. For that kind of stuff, I always recommend Markus Winand's SQL blogs: [Use the Index, Luke][17] and [Modern SQL][18]. His writing is friendly and accessible to non-experts, yet has most of the theory you need for writing fast and effective SQL. A good, DB-specific book on performance gives you the rest.
### The platform
#### The architecture
V1 was a couple of Node.js apps running on a single virtual private server (VPS) behind Cloudflare as a CDN. I'm a fan of avoiding overengineering as a high priority, so that gets a thumbs up from me. However, by the time V2 development started, it was obvious that Vocal had outgrown that simple architecture. It didn't give Thinkmillers many options when handling big traffic spikes, and it made updates hard to deploy safely and without downtime.
Heres the new architecture for V2:
![Architecture of Vocal V2. Requests come through a CDN to a load balancer in AWS. The load balancer distributes traffic to two apps, "Platform" and "Website". "Platform" is a Keystone app storing data in Redis and Postgres.][19]
Basically, the two Node.js apps have been replicated and put behind a load balancer. Yes, that's it. In my SRE work, I often meet engineers who expect a scalable architecture to be more complicated than that, but I've worked on sites that are orders of magnitude bigger than Vocal but are still just replicated services behind load balancers, with DB backends. If you think about it, if the platform architecture needs to keep getting significantly more complicated as the site grows, it's not really very scalable. Website scalability is mostly about fixing the many little implementation details that prevent scaling.
Vocal's architecture might need a few additions if traffic grows enough, but the main reason it would get more complicated is new features. For example, if (for some reason) Vocal needed to handle real-time geospatial data in future, that would be a very different technical beast from blog posts, so I'd expect architectural changes for it. Most of the complexity in big site architecture is because of feature complexity.
If you don't know how to make your architecture scalable, I always recommend keeping it as simple as you can. Fixing an architecture that's too simple is easier and cheaper than fixing an architecture that's too complex. Also, an unnecessarily complex architecture is more likely to have mistakes, and those mistakes will be harder to debug.
By the way, Vocal happened to be split into two apps, but that's not important. A common scaling mistake is to prematurely split an app into smaller services in the name of scalability, but split the app in the wrong place and cause more scalability problems overall. Vocal could have scaled okay as a monolithic app, but the split is also in a good place.
#### The infrastructure
Thinkmill has a few people who have experience working with AWS, but it's primarily a dev shop and needed something more “hands off” than the old Vocal deployment. I ended up deploying the new Vocal on [AWS Fargate][20], which is a relatively new backend to Elastic Container Service (ECS). In the old days, many people wanted ECS to be a simple “run my Docker container as a managed service” product, and were disappointed that they still had to build and manage their own server cluster. With ECS Fargate, AWS manages the cluster. It supports running Docker containers with the basic nice things like replication, health checking, rolling updates, autoscaling and simple alerting.
A good alternative would have been a managed Platform-as-a-Service (PaaS) like App Engine or Heroku. Thinkmill was already using them for simple projects, but often needed more flexibility with other projects. There are much bigger sites running on PaaSes, but Vocal is at a scale where a custom cloud deployment can make sense economically.
Another obvious alternative would have been Kubernetes. Kubernetes has a lot more features than ECS Fargate, but it's a lot more expensive — both in resource overhead, and the staffing needed for maintenance (such as regular node upgrades). As a rule, I don't recommend Kubernetes to any place that doesn't have dedicated DevOps staff. Fargate has the features Vocal needs, and has let Thinkmill and Jerrick Media focus on website improvements, not infrastructure busywork.
Yet another option was “Serverless” function products like AWS Lambda or Google Cloud Functions. They're great for handling services with very low or highly irregular traffic, but (as I'll explain) ECS Fargate's autoscaling is enough for Vocal's backend. Another plus of these products is that they allow developers to deploy things in cloud environments without needing to learn a lot about cloud environments. The tradeoff is that the Serverless product becomes tightly coupled to the development process, and to the testing and debugging processes. Thinkmill already had enough AWS expertise in-house to manage a Fargate deployment, and any dev who knows how to make a Node.js Express Hello World app can work on Vocal without learning anything about either Serverless functions or Fargate.
An obvious downside of ECS Fargate is vendor lock-in. However, avoiding vendor lock-in is a tradeoff like avoiding downtime. If you're worried about migrating, it doesn't make sense to spend more on platform independence than you would on a migration. The total amount of Fargate-specific code in Vocal is <500 lines of [Terraform][21]. The most important thing is that the Vocal app code itself is platform agnostic. It can run on normal developer machines, and then be packaged up into a Docker container that can run practically anywhere a Docker container can, including ECS Fargate.
Another downside of Fargate is that it's not trivial to set up. Like most things in AWS, it's in a world of VPCs, subnets, IAM policies, etc. Fortunately, that kind of stuff is quite static (unlike a server cluster that requires maintenance).
### Making a scaling-ready app
There's a bunch of stuff to get right if you want to run an app painlessly at scale. You're doing well if you follow [the Twelve-Factor App design][22], so I won't repeat it here.
There's no point building a “scalable” system if staff can't operate it at scale — that's like putting a jet engine on a unicycle. An important part of making Vocal scalable was setting up stuff like CI/CD and [infrastructure as code][23]. Similarly, some deployment ideas aren't worth it because they make production too different from the development environment (see also [point #10 of the Twelve-Factor App][24]). Every difference between production and development slows app development and can be expected to lead to a bug eventually.
### Caching
Caching is a really big topic — I once gave [a presentation on just HTTP caching][25], and that still wasn't enough. I'll stick to the essentials for GraphQL here.
First, an important warning: Whenever you have performance problems, you might wonder, “Can I make this faster by putting this value into a cache for future reuse?” **Microbenchmarks will practically _always_ tell you the answer is “yes”.** However, putting caches everywhere will tend to make your overall system **slower**, thanks to problems like cache coherency. Here's my mental checklist for caching:
1. Ask if the performance problem needs to be solved with caching
2. Really ask (non-caching performance wins tend to be more robust)
3. Ask if the problem can be solved by improving existing caches
4. If all else fails, maybe add a new cache
One cache system you'll always have is the HTTP caching system, so a corollary is that it's a good idea to use HTTP caching effectively before trying to add extra caches. I'll focus on that in this post.
Another very common trap is using a hash map or something inside the app for caching. [It works great in local development but performs badly when scaled.][26] The best thing is to use an explicit caching library that supports pluggable backends like Redis or Memcached.
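As an illustrative sketch (not part of Vocal's actual stack), here's what that pattern can look like with the `keyv` library, which supports in-memory, Redis and other pluggable backends; the Redis URL, namespace and TTL here are placeholder assumptions:

```
// npm install keyv @keyv/redis
const Keyv = require('keyv');

// In local dev, new Keyv() gives an in-memory store;
// in production, point it at a shared backend like Redis.
const cache = new Keyv('redis://localhost:6379', { namespace: 'vocal' });

// A typical get-or-compute helper: check the cache first,
// fall back to the expensive computation, then store the result.
async function getOrCompute(key, ttlMs, compute) {
  const cached = await cache.get(key);
  if (cached !== undefined) return cached;
  const value = await compute();
  await cache.set(key, value, ttlMs); // TTL is in milliseconds
  return value;
}
```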
#### The basics
There are two types of caches in the HTTP spec: private and public. Private caches are caches that don't share data with multiple users — in practice, the user's browser cache. Public caches are all the rest. They include ones under your control (such as CDNs or servers like Varnish or Nginx) and ones that aren't (proxies). Proxy caches are rarer in today's HTTPS world, but some corporate networks have them.
![][27]
Caching lookup keys are normally based on URLs, so caching is less painful if you stick to a “same content, same URL; different content, different URL” rule. I.e., give each page a canonical URL, and avoid “clever” tricks returning varying content from one URL. Obviously, this has implications for GraphQL API endpoints (that I'll discuss later).
Your servers can take custom configuration, but the primary way to configure HTTP caching is through HTTP headers you set on web responses. The most important header is `cache-control`. The following says that all caches down the line may cache the page for up to 3600 seconds (one hour):
```
cache-control: max-age=3600, public
```
For user-specific pages (such as user settings pages), it's important to use `private` instead of `public` to tell public caches not to store the response and serve it to other users.
Another common header is `vary`. This tells caches that the response varies based on some things other than the URL. (Effectively it adds HTTP headers to the cache key, alongside the URL.) It's a very blunt tool, which is why I recommend using a good URL structure instead if possible, but an important use case is telling browsers that the response depends on the login cookie, so that they update pages on login/logout.
```
vary: cookie
```
If a page can vary based on login status, you need `cache-control: private` (and `vary: cookie`) even on the public, logged out version, to make sure responses don't get mixed up.
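For example, a page that renders differently for logged-in users could send this pair of headers on every response, whether the viewer is logged in or not (the max age here is just an example value):

```
cache-control: max-age=3600, private
vary: cookie
```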
Other useful headers include `etag` and `last-modified`, but I won't cover them here. You might still see some old headers like `expires` and `pragma: cache`. They were made obsolete by HTTP/1.1 back in 1997, so I only use them if I want to disable caching and I'm feeling paranoid.
#### Clientside headers
Less well known is that the HTTP spec allows `cache-control` headers to be used in client requests to reduce the cache time and get a fresher response. Unfortunately `max-age` greater than 0 doesn't seem to be widely supported by browsers, but `no-cache` can be useful if you sometimes need a fresh response after an update.
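As a small sketch of the client side (the endpoint and query are placeholders):

```
// Ask caches along the way to revalidate and fetch a fresh
// response instead of serving a stored copy.
fetch('/api/?query={user{id}}', {
  headers: { 'cache-control': 'no-cache' },
}).then((response) => response.json());
```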
#### HTTP caching and GraphQL
As above, the normal cache key is the URL. But GraphQL APIs often use just one endpoint (let's call it `/api/`). If you want a GraphQL query to be cachable, you need the query and its parameters to appear in the URL path, like `/api/?query={user{id}}&variables={"x":99}` (ignoring URL escaping). The trick is to configure your GraphQL client to use HTTP GET requests for queries (e.g., [set `useGETForQueries` for `apollo-link-http`][28]).
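Here's a minimal sketch of that client configuration, assuming the Apollo Client 2.x packages that `apollo-link-http` belongs to (the `/api/` URI is a placeholder):

```
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { createHttpLink } from 'apollo-link-http';

const client = new ApolloClient({
  // Non-mutating queries are sent as GETs, with the query
  // and variables encoded into the URL.
  link: createHttpLink({ uri: '/api/', useGETForQueries: true }),
  cache: new InMemoryCache(),
});
```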
Mutations mustn't be cached, so they still need to use HTTP POST requests. With POST requests, caches will only see `/api/` as the URL path, but caches will refuse to cache POST requests outright. Remember: GET for non-mutating queries, POST for mutations. There's a case where you might want to avoid GET for a query: if the query variables contain sensitive information. URLs have a habit of appearing in log files, browser history and chat channels, so sensitive information in URLs is usually a bad idea. Things like authentication should be done as non-cachable mutations, anyway, so this is a rare case, but one worth remembering.
Unfortunately, there's a problem: GraphQL queries tend to be much larger than REST API URLs. If you simply switch on GET-based queries, you'll get some pretty big URLs, easily bigger than the ~2000-byte limit beyond which some popular browsers and servers just won't accept them. A solution is to send some kind of query ID, instead of sending the whole query. (I.e., something like `/api/?queryId=42&variables={"x":99}`.) Apollo GraphQL server supports two ways of doing this.
One way is to [extract all the GraphQL queries from the code and build a lookup table that's shared serverside and clientside][29]. One downside is that it makes the build process more complicated. Another downside is that it couples the client project to the server project, which goes against a selling point of GraphQL. Yet another downside is that version X of your code might recognise a different set of queries from version Y of your code. This is a problem because 1) your replicated app will serve multiple versions during an update rollout, or rollback, and 2) clients might use cached JavaScript, even as you upgrade or downgrade the server.
Another way is what Apollo GraphQL calls [Automatic Persisted Queries (APQs)][30]. With APQs, the query ID is a hash of the query. The client optimistically makes a request to the server, referring to the query by hash. If the server doesn't recognise the query, the client sends the full query in a POST request. The server stores that query by hash so that it can be recognised in future.
![][31]
#### HTTP caching and Keystone 5
As above, Vocal uses Keystone 5 for generating its GraphQL API, and Keystone 5 works with Apollo GraphQL server. How do we actually set the caching headers?
Apollo supports cache hints on GraphQL schemas. The neat thing is that Apollo gathers all the hints for everything thats touched by a query, and then it automatically calculates the appropriate overall cache header values. For example, take this query:
```
query userAvatarUrl {
authenticatedUser {
name
avatar_url
}
}
```
If `name` has a max age of one day, and the `avatar_url` has a max age of one hour, the overall cache max age would be the minimum, one hour. `authenticatedUser` depends on the login cookie, so it needs a `private` hint, which overrides the `public` on the other fields, so the resulting header would be `cache-control: max-age=3600, private`.
I added [cache hint support to Keystone lists and fields][32]. Heres a simple example of adding a cache hint to a field in the to-do list demo from the docs:
```
// Keystone 5 packages (import names assumed from the Keystone 5 docs)
const { Keystone } = require('@keystonejs/keystone');
const { MongooseAdapter } = require('@keystonejs/adapter-mongoose');
const { Text } = require('@keystonejs/fields');

const keystone = new Keystone({
  name: 'Keystone To-Do List',
  adapter: new MongooseAdapter(),
});
keystone.createList('Todo', {
schemaDoc: 'A list of things which need to be done',
fields: {
name: {
type: Text,
schemaDoc: 'This is the thing you need to do',
isRequired: true,
cacheHint: {
scope: 'PUBLIC',
maxAge: 3600,
},
},
},
});
```
#### One more problem: CORS
Cross-Origin Resource Sharing (CORS) rules create a frustrating conflict with caching in an API-based website.
Before getting stuck into the problem details, let me jump to the easiest solution: putting the main site and API onto one domain. If your site and API are served from one domain, you wont have to worry about CORS rules (but you might want to consider [restricting cookies][33]). If your API is specifically for the website, this is the cleanest solution, and you can happily skip this section.
In Vocal V1, the Website (Next.js) and Platform (Keystone GraphQL) apps were on different domains (`vocal.media` and `api.vocal.media`). To protect users from malicious websites, modern browsers dont just let one website interact with another. So, before allowing `vocal.media` to make requests to `api.vocal.media`, the browser would make a “pre-flight” check to `api.vocal.media`. This is an HTTP request using the `OPTIONS` method that essentially asks if the cross-origin sharing of resources is okay. After getting the okay from the pre-flight check, the browser makes the normal request that was originally intended.
The frustrating thing about pre-flight checks is that they are per-URL. The browser makes a new `OPTIONS` request for each URL, and the server response applies to that URL. [The server cant say that `vocal.media` is a trusted origin for all `api.vocal.media` requests][34]. This wasnt a serious problem when everything was a POST request to the one api endpoint, but after giving every query its own GET-able URL, every query got delayed by a pre-flight check. For extra frustration, the HTTP spec says `OPTIONS` requests cant be cached, so you can find that all your GraphQL data is beautifully cached in a CDN right next to the user, but browsers still have to make pre-flight requests all the way to the origin server every time they use it.
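To make that concrete, here's a sketch of a single pre-flight exchange (header values are illustrative):

```
OPTIONS /api/?query={user{id}} HTTP/1.1
Origin: https://vocal.media
Access-Control-Request-Method: GET

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://vocal.media
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Max-Age: 86400
```

`Access-Control-Max-Age` lets the browser remember this answer, but only for this one URL, and browsers cap how long they'll honour it.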
There are a few solutions (if you cant just use a shared domain).
If your API is simple enough, you might be able to exploit the [exceptions to the CORS rules][35].
Some cache servers can be configured to ignore the HTTP spec and cache `OPTIONS` requests anyway (e.g., Varnish-based caches and AWS CloudFront). This isnt as efficient as avoiding the pre-flight requests completely, but its better than the default.
Another (really hacky) option is [JSONP][36]. Beware: you can create security bugs if you dont get this right.
#### Making Vocal more cachable
After making HTTP caching work at the low level, I needed to make the app take better advantage of it.
A limitation of HTTP caching is that it's all-or-nothing at the response level. Most of a response can be cachable, but if a single byte isn't, all bets are off. Vocal is a blogging platform, so most of its data is highly cachable, yet in the old site almost no _pages_ were cachable at all, because of a menu bar in the top right corner. For an anonymous user, the menu bar would show links inviting the user to log in or create an account. That bar would change to a user avatar and profile menu for signed-in users. Because the page varied based on user login status, it wasn't possible to cache any of it in CDNs.
![A typical page from Vocal. Most of the page is highly cachable content, but in the old site none of it was actually cachable because of a little menu in the top right corner.][37]
These pages are generated by Server-Side Rendering (SSR) of React components. The fix was to take all the React components that depended on the login cookie, and force them to be [lazily rendered clientside only][38]. Now the server returns completely generic pages with placeholders for personalised components like the login menu bar. When a page loads in the users browser, these placeholders are filled in clientside by making calls to the GraphQL API. The generic pages can be safely cached in CDNs.
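Here's roughly what that looks like with Next.js dynamic imports (the component name and path are hypothetical):

```
import dynamic from 'next/dynamic';

// Skip SSR for the login-aware menu so the server output stays generic.
const LoginMenu = dynamic(() => import('../components/LoginMenu'), {
  ssr: false,
  loading: () => <div className="menu-placeholder" />,
});
```

The server renders only the placeholder, and the real menu appears once the clientside GraphQL call for the user's state returns.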
Not only does this trick improve cache hit ratios, it helps improve perceived page load time thanks to human psychology. Blank screens and even spinner animations make us impatient, but once the first content appears, it distracts us for several hundred milliseconds. If people click a Vocal post link from social media and the main content appears immediately from a CDN, very few will ever notice that some components arent fully interactive until a few hundred milliseconds later.
By the way, another trick for getting the first content in front of the user faster is to [stream render the SSR response as its generated][39], instead of waiting for the whole page to be rendered before sending it. Unfortunately, [Next.js doesnt support that yet][40].
The idea of splitting responses for improved cachability also applies to GraphQL. The ability to query multiple pieces of data with one request is normally an advantage of GraphQL, but if the different parts of the response have very different cachability, it can be better overall to split them. As a simple example, Vocals pagination component needs to know the number of pages plus the content for the current page. Originally the component fetched both in one query, but because the total number of pages is a constant across all pages, I made it a separate query so it can be cached.
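As a sketch (the field names here are hypothetical, not Vocal's real schema), the split looks like this:

```
# Constant across all pages of a listing, so it caches well:
query pageCount {
  postsMeta {
    count
  }
}

# Varies with every page:
query pageContent($page: Int!) {
  posts(page: $page) {
    id
    title
  }
}
```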
#### Benefits of caching
The obvious benefit of caching is that it reduces the load on Vocal's backend servers. That's good, but it's dangerous to rely on caching for capacity, because you still need a backup plan for the day you inevitably drop the cache.
The improved responsiveness is a better reason for caching.
A couple of other benefits might be less obvious. Traffic spikes tend to be highly localised. If someone with a lot of social media followers shares a link to a page, Vocal will get a big surge of traffic, but mostly to that one page and its assets. Thats why caches are good at absorbing the worst traffic spikes, making the backend traffic patterns relatively smoother and easier for autoscaling to handle.
Another benefit is graceful degradation. Even if the backends are in serious trouble for some reason, the most popular parts of the site will still be served from the CDN cache.
### Other performance tweaks
As I always say, the secret to scaling isnt making things complicated. Its making things no more complicated than needed, and then thoroughly fixing all the things that prevent scaling. Scaling Vocal involved a lot of little things that wont fit in this post.
Heres one tip: for the difficult debugging problems in distributed systems, the hardest part is usually getting the right data to see whats going on. I can think of plenty of times that Ive got stuck and tried to just “wing it” by guessing instead of figuring out how to find the right data. Sometimes that works, but not for the hard problems.
A related tip is that you can learn a lot by getting real-time data (even just log files under [`tail -F`][41]) on each component in a system, displaying it in various windows in one monitor, and just clicking around the site in another. Im talking about things like, “Hey, why does toggling this one checkbox generate dozens of DB queries in the backend?”
Heres an example of one fix. Some pages were taking more than a couple of seconds to render, but only in the deployment environment, and only with SSR. The monitoring dashboards didnt show any CPU usage spikes, and the apps werent using disk, so it suggested that maybe the app was waiting on network requests, probably to a backend. In a dev environment I could watch how the app worked using [the sysstat tools][42] to record CPU/RAM/disk usage, along with Postgres statement logging and the usual app logs. [Node.js supports probes for tracing HTTP requests][43] using something like [bpftrace][44], but boring reasons meant they didnt work in the dev environment, so instead I found the probes in the source code and made a custom Node.js build with request logging. I used [tcpdump][45] to record network data. That let me find the problem: for every API request made by Website, a new network connection was being created to Platform. (If that hadnt worked, I guess I would have added request tracing to the apps.)
Network connections are fast on a local machine, but take non-negligible time on a real network. Setting up an encrypted connection (like in the production environment) takes even longer. If youre making lots of requests to one server (like an API), its important to keep the connection open and reuse it. Browsers do that automatically, but Node.js doesnt by default because it cant know if youre making more requests. Thats why the problem only appeared with SSR. Like many long debugging sessions, the fix was very simple: just configure SSR to [keep connections alive][46]. The rendering time of the slower pages dropped dramatically.
If you want to know more about this kind of stuff, I highly recommend reading [the High Performance Browser Networking book][47] (free to read online) and following up with [guides Brendan Gregg has published][48].
### What about your site?
Theres actually a lot more stuff we could have done to improve Vocal, but we didnt do it all. Thats a big difference between doing SRE work for a startup and doing it for a big company as a permanent employee. We had goals, a budget and a launch date, and now Vocal V2 has been running for 9 months with a healthy growth rate.
Similarly, your site will have its own requirements, and is likely quite different from Vocal. However, I hope this post and its links give you at least some useful ideas to make something better for users.
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2020/06/29/scaling_a_graphql_site.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: https://vocal.media
[2]: https://www.meetup.com/en-AU/GraphQL-Sydney/events/267681845/
[3]: https://theartofmachinery.com/2020/04/21/what_is_high_traffic.html
[4]: https://theartofmachinery.com/images/scaling_a_graphql_site/vocal1.png
[5]: https://theartofmachinery.com/images/scaling_a_graphql_site/vocal2.png
[6]: https://jerrick.media
[7]: https://www.thinkmill.com.au/
[8]: https://theartofmachinery.com/images/scaling_a_graphql_site/alexa.png
[9]: https://aws.amazon.com/blogs/database/amazon-rds-customers-update-your-ssl-tls-certificates-by-february-5-2020/
[10]: https://github.com/vercel/next.js
[11]: https://www.keystonejs.com/
[12]: https://wiki.c2.com/?SecondSystemEffect
[13]: https://vocal.media/resources/vocal-2-0
[14]: https://theartofmachinery.com/about.html
[15]: https://en.wikipedia.org/wiki/Extract,_transform,_load
[16]: https://github.com/stripe/mosql
[17]: https://use-the-index-luke.com/
[18]: https://modern-sql.com/
[19]: https://theartofmachinery.com/images/scaling_a_graphql_site/architecture.svg
[20]: https://aws.amazon.com/fargate/
[21]: https://www.terraform.io/docs/providers/aws/r/ecs_task_definition.html
[22]: https://12factor.net/
[23]: https://theartofmachinery.com/2019/02/16/talks.html
[24]: https://12factor.net/dev-prod-parity
[25]: https://www.meetup.com/en-AU/Port80-Sydney/events/lwcdjlyvjblb/
[26]: https://theartofmachinery.com/2016/07/30/server_caching_architectures.html
[27]: https://theartofmachinery.com/images/scaling_a_graphql_site/http_caches.svg
[28]: https://www.apollographql.com/docs/link/links/http/#options
[29]: https://www.apollographql.com/blog/persisted-graphql-queries-with-apollo-client-119fd7e6bba5
[30]: https://www.apollographql.com/blog/improve-graphql-performance-with-automatic-persisted-queries-c31d27b8e6ea
[31]: https://theartofmachinery.com/images/scaling_a_graphql_site/apq.png
[32]: https://www.keystonejs.com/api/create-list/#cachehint
[33]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies#Define_where_cookies_are_sent
[34]: https://lists.w3.org/Archives/Public/public-webapps/2012AprJun/0236.html
[35]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests
[36]: https://en.wikipedia.org/wiki/JSONP
[37]: https://theartofmachinery.com/images/scaling_a_graphql_site/cachablepage.png
[38]: https://nextjs.org/docs/advanced-features/dynamic-import#with-no-ssr
[39]: https://medium.com/the-thinkmill/progressive-rendering-the-key-to-faster-web-ebfbbece41a4
[40]: https://github.com/vercel/next.js/issues/1209
[41]: https://linux.die.net/man/1/tail
[42]: https://github.com/sysstat/sysstat/
[43]: http://www.brendangregg.com/blog/2016-10-12/linux-bcc-nodejs-usdt.html
[44]: https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html
[45]: https://danielmiessler.com/study/tcpdump/
[46]: https://www.npmjs.com/package/agentkeepalive
[47]: https://hpbn.co/
[48]: http://www.brendangregg.com/

View File

@ -1,98 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Run your favorite Windows applications on Linux)
[#]: via: (https://opensource.com/article/21/2/linux-wine)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Run your favorite Windows applications on Linux
======
WINE is an open source project that lets you run native Windows programs on your Linux system.
![Computer screen with files or windows open][1]
In 2021, there are more reasons than ever to love Linux. In this series, we share 21 different reasons to use Linux. Here's how WINE helps make the move from Windows to Linux seamless.

Do you have an application that only runs on Windows? Is that one application the only thing holding you back from using Linux? If so, you'll be happy to know about WINE, an open source project that has reimplemented core Windows libraries so that native Windows programs can run on your Linux system.

WINE stands for "Wine Is Not an Emulator", a nod to the code driving this technology. Since 1993, its developers have worked to translate any Windows API call an application makes into a [POSIX][2] call.

This is an astonishing feat of programming, especially given that the project works independently, without help from Microsoft, but there are limits. The farther an application strays from the core of the Windows API, the less WINE can anticipate the application's needs. A few vendors compensate for this, notably [Codeweavers][3] and [Valve Software][4]. The producers of the applications needing support don't coordinate with the people developing WINE, so there can be some lag time between, say, a software update and when the application earns "gold" support status from [WINE headquarters][5].

However, if you're looking to run a well-known Windows application on Linux, the chances are good that WINE is ready for it.
### Installing WINE

You can install WINE from your Linux distribution's software repository. On Fedora, CentOS Stream, or RHEL:
```
$ sudo dnf install wine
```
On Debian, Linux Mint, Elementary, and similar distributions:
```
$ sudo apt install wine
```
WINE isn't an application that you launch on its own; it's a backend that launches Windows applications for you. Your first real interaction with WINE is likely to happen when you launch the installer of a Windows application.

### Installing an application
[TinyCAD][6] is a nice open source application for designing circuits, but it's only available for Windows. It's a small application, but it does incorporate some .NET components, so it should give WINE a little bit of a workout.

First, download the TinyCAD installer. As is often the case with Windows installers, it's an EXE file. Once it's downloaded, double-click the file to launch it.
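If your desktop isn't yet configured to hand EXE files over to WINE, you can also launch the installer from a terminal (the installer's filename here is just an example; use whatever you downloaded):

```
$ wine tinycad_setup.exe
```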
![WINE TinyCAD installation wizard][7]
Stepping through an installation wizard with WINE is the same as stepping through it on Windows. It's usually best to accept the defaults, especially with WINE: the WINE environment is self-contained, hidden inside a **drive_c** directory on your hard drive that serves as the root of the emulated Windows file system, so Windows programs run inside it just as they would on Windows.
![WINE TinyCAD installation and destination drive][8]
The TinyCAD installation destination in WINE
After installation, the application usually offers to launch itself. If you're ready to test it, launch the application.

### Running Windows applications

Aside from that first launch right after installation, you start a WINE application the same way you start a native Linux application. Whether you use an application menu or the Activities screen, desktop Windows applications running in WINE are treated essentially the same as native applications on Linux.
![TinyCAD running with WINE][9]
TinyCAD running in WINE
### When WINE fails

Most applications I run in WINE, including TinyCAD, run as expected. There are exceptions, though. In those cases, you can either wait a few months to see whether the WINE developers (or, in the case of a game, Valve Software) catch up, or you can contact a vendor like Codeweavers to find out whether they sell support for the application you need.
### WINE is "cheating", but for a good cause

Some Linux users feel that using WINE is "cheating" on Linux. It might feel that way, but WINE is an open source project that lets users switch to Linux while still running the applications they need for their work or hobbies. If WINE solves your problem and lets you use Linux more comfortably, then use it, and embrace the flexibility it provides.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/2/linux-wine
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[hongsofwing](https://github.com/hongsofwing)
校对:[hongsofwing](https://github.com/hongsofwing)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[3]: https://www.codeweavers.com/crossover
[4]: https://github.com/ValveSoftware/Proton
[5]: http://winehq.org
[6]: https://sourceforge.net/projects/tinycad/
[7]: https://opensource.com/sites/default/files/wine-tinycad-install.jpg
[8]: https://opensource.com/sites/default/files/wine-tinycad-drive_0.jpg
[9]: https://opensource.com/sites/default/files/wine-tinycad-running.jpg

View File

@ -1,110 +0,0 @@
[#]: subject: (Set and use environment variables in FreeDOS)
[#]: via: (https://opensource.com/article/21/6/freedos-environment-variables)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Set and use environment variables in FreeDOS
======
Environment variables are helpful in almost every command-line
environment, including FreeDOS.
![Looking at a map for career journey][1]
A useful feature in almost every command-line environment is the _environment variable_. Some of these variables allow you to control the behavior or features of the command line, and other variables simply allow you to store data that you might need to reference later. Environment variables are also used in FreeDOS.
### Variables on Linux
On Linux, you may already be familiar with several of these important environment variables. In the [Bash][2] shell on Linux, the `PATH` variable identifies where the shell can find programs and commands. For example, on my Linux system, I have this `PATH` value:
```
bash$ echo $PATH
/home/jhall/bin:/usr/lib64/ccache:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
```
That means when I type a command name like `cat`, Bash will check each of the directories listed in my `PATH` variable, in order:
1. `/home/jhall/bin`
2. `/usr/lib64/ccache`
3. `/usr/local/bin`
4. `/usr/local/sbin`
5. `/usr/bin`
6. `/usr/sbin`
And in my case, the `cat` command is located in the `/usr/bin` directory, so the full path to that command is `/usr/bin/cat`.
To set an environment variable on Linux, you type the name of the variable, then an equals sign (`=`) and then the value to store in the variable. To reference that value later using Bash, you type a dollar sign (`$`) in front of the variable name.
```
bash$ var=Hello
bash$ echo $var
Hello
```
### Variables on FreeDOS
On FreeDOS, environment variables serve a similar function. Some variables control the behavior of the DOS system, and others are useful to store some temporary value.
To set an environment variable on FreeDOS, you need to use the `SET` keyword. FreeDOS is _case insensitive_, so you can type that using either uppercase or lowercase letters. Then set the variable as you might on Linux, using the variable name, an equals sign (`=`), and the value you want to store.
However, referencing or _expanding_ an environment variable's value in FreeDOS is quite different from how you do it on Linux. You can't use the dollar sign (`$`) to reference a variable in FreeDOS. Instead, you need to surround the variable's name with percent signs (`%`).
![Use % \(not $\) to reference a variable's value][3]
(Jim Hall, [CC-BY SA 4.0][4])
It's important to use the percent signs both before and after the name because that's how FreeDOS knows where the variable name begins and ends. This is very useful, as it allows you to reference a variable's value while immediately appending (or prepending) other text to the value. Let me demonstrate this by setting a new variable called `reply` with the value `yes`, then referencing that value with the text "11" before and "22" after it:
![Set and reference an environment variable][5]
(Jim Hall, [CC-BY SA 4.0][4])
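In text form, the session in that screenshot goes roughly like this:

```
C:\>SET reply=yes

C:\>ECHO 11%reply%22
11yes22
```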
Because FreeDOS is case insensitive, you can use either uppercase or lowercase letters for the variable name, as well as for the `SET` keyword. However, the variable's value will keep the letter case exactly as you typed it on the command line.
Finally, you can see a list of all the environment variables currently defined in FreeDOS. Without any arguments, the `SET` keyword will display all variables, so you can see everything at a glance:
![Show all variables at once with SET][6]
(Jim Hall, [CC-BY SA 4.0][4])
Environment variables are a useful staple in command-line environments, and the same applies to FreeDOS. You can set your own variables to serve your own needs, but be careful about changing some of the variables that FreeDOS uses. These can change the behavior of your running FreeDOS system:
* **DOSDIR**: The location of the FreeDOS installation directory, usually `C:\FDOS`
* **COMSPEC**: The current instance of the FreeDOS shell, usually `C:\COMMAND.COM` or `%DOSDIR%\BIN\COMMAND.COM`
* **LANG**: The user's preferred language
* **NLSPATH**: The location of the system's language files, usually `%DOSDIR%\NLS` 
* **TZ**: The system's time zone
* **PATH**: A list of directories where FreeDOS can find programs to run, such as `%DOSDIR%\BIN`
* **HELPPATH**: The location of the system's documentation files, usually `%DOSDIR%\HELP`
* **TEMP**: A temporary directory where FreeDOS stores output from each command as it "pipes" data between programs on the command line
* **DIRCMD**: A variable that controls how the `DIR` command displays files and directories, typically set to `/OGNE` to order (O) the contents by grouping (G) directories first, then sorting entries by name (N) then extension (E)
If you accidentally change any of the FreeDOS "internal" variables, you could prevent some parts of FreeDOS from working properly. In that case, simply reboot your computer, and FreeDOS will reset the variables from the system defaults.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/freedos-environment-variables
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/career_journey_road_gps_path_map_520.png?itok=PpL6jJgY (Looking at a map for career journey)
[2]: https://opensource.com/article/19/8/using-variables-bash
[3]: https://opensource.com/sites/default/files/uploads/env-path.png (Use % (not $) to reference a variable's value)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/env-vars.png (Set and reference an environment variable)
[6]: https://opensource.com/sites/default/files/uploads/env-set.png (Show all variables at once with SET)

View File

@ -1,221 +0,0 @@
[#]: subject: (Automate tasks with BAT files on FreeDOS)
[#]: via: (https://opensource.com/article/21/6/automate-tasks-bat-files-freedos)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Automate tasks with BAT files on FreeDOS
======
Here's a helpful guide to batch files under FreeDOS.
![Tips and gears turning][1]
Even if you haven't used DOS before, you are probably aware of its command-line shell, named simply `COMMAND.COM`. The `COMMAND.COM` shell has become synonymous with DOS, and so it's no surprise that FreeDOS also implements a similar shell called "FreeCOM"—but named `COMMAND.COM` just as on other DOS systems.
But the FreeCOM shell can do more than just provide a command-line prompt where you run commands. If you need to automate tasks on FreeDOS, you can do that using _batch files_, also called "BAT files" because these scripts use the `.BAT` extension.
Batch files are much simpler than scripts you might write on Linux. That's because when this feature was originally added to DOS, long ago, it was meant as a way for DOS users to "batch up" certain commands. There's not much flexibility for conditional branching, and batch files don't support more advanced features such as arithmetic expansion, separate redirection for standard output versus error messages, background processes, or the richer test and loop constructs that are common in Linux scripts.
Here's a helpful guide to batch files under FreeDOS. Throughout, remember to reference environment variables by wrapping the variable name in percent signs (`%`), such as `%PATH%`. Note, however, that `FOR` loops use a slightly different construct, for historical reasons.
### Printing output
Your batch file might need to print messages to the user, to let them know what's going on. Use the `ECHO` statement to print messages. For example, a batch file might indicate it is done with a task with this statement:
```
ECHO Done
```
You don't need quotes in the `ECHO` statement. The FreeCOM `ECHO` statement will not treat quotes in any special way and will print them just like regular text.
Normally, FreeDOS prints out every line in the batch file as it executes them. This is usually not a problem in a very short batch file that only defines a few environment variables for the user. But for longer batch files that do more work, this constant display of the batch lines can become bothersome. To suppress this output, use the `OFF` keyword to the `ECHO` statement, as:
```
ECHO OFF
```
To resume displaying the batch lines as FreeDOS runs them, use the `ON` keyword instead:
```
ECHO ON
```
Most batch files include an `ECHO OFF` statement on the first line, to suppress messages. But the shell will still print `ECHO OFF` to the screen as it executes that statement. To hide that message, batch files often use an at sign (`@`) in front. This special character at the start of any line in a batch file suppresses printing that line, even if `ECHO` is turned on.
```
@ECHO OFF
```
### Comments
When writing any long batch file, most programmers prefer to use _comments_ to remind themselves about what the batch file is meant to do. To enter a comment in a batch file, use the `REM` (for _remark_) keyword. Anything after `REM` gets ignored by the FreeCOM shell.
```
@ECHO OFF
REM This is a comment
```
### Executing a "secondary" batch file
Normally, FreeCOM only runs one batch file at a time. However, you might need to use another batch file to do certain things, such as set environment variables that are common across several batch files.
If you simply call the second batch file from a "running" batch file, FreeCOM switches entirely to that second batch file and stops processing the first one. To instead run the second batch file "inside" the first batch file, you need to tell the FreeCOM shell to _call_ the second batch file with the `CALL` keyword.
```
@ECHO OFF
CALL SETENV.BAT
```
### Conditional evaluation
Batch files do support a simple conditional evaluation structure with the `IF` statement. This has three basic forms:
1. Testing the return status of the previous command
2. Testing if a variable is equal to a value
3. Testing if a file exists
A common use of the `IF` statement is to test if a program returned successfully to the operating system. Most programs will return a zero value if they completed normally, or some other value in case of an error. In DOS, this is called the _error level_ and is a special case to the `IF` test.
To test if a program called `MYPROG` exited successfully, you actually want to examine if the program returned a "zero" error level. Use the `ERRORLEVEL` keyword, bearing in mind that `IF ERRORLEVEL n` is true when the error level is `n` _or higher_. So to test for success, check that the error level is not 1 or above:
```
@ECHO OFF
MYPROG
IF NOT ERRORLEVEL 1 ECHO Success
```
Testing the error level with `ERRORLEVEL` is a clunky way to examine the exit status of a program. A more useful way to examine different possible return codes for a DOS program is with a special variable FreeDOS defines for you, called `ERRORLEVEL`. This stores the error level of the most recently executed program. You can then test for different values using the `==` test.
You can test if a variable is equal to a value using the `==` test with the `IF` statement. Like some programming languages, you use `==` to directly compare two values. Usually, you will reference an environment variable on one side and a value on the other, but you could also compare the values of two variables to see if they are the same. For example, you could rewrite the above `ERRORLEVEL` code with this batch file:
```
@ECHO OFF
MYPROG
IF %ERRORLEVEL%==0 ECHO Success
```
And another common use of the `IF` statement is to test if a file exists, and take action if so. You can test for a file with the `EXIST` keyword. For example, to delete a temporary file called `TEMP.DAT`, you might use this line in your batch file:
```
@ECHO OFF
IF EXIST TEMP.DAT DEL TEMP.DAT
```
With any of the `IF` statements, you can use the `NOT` keyword to _negate_ a test. To print a message if a file _does not_ exist, you could write:
```
@ECHO OFF
IF NOT EXIST TEMP.DAT ECHO No file
```
### Branched execution
One way to leverage the `IF` test is to jump to an entirely different part of the batch file, depending on the outcome of a previous test. In the simplest case, you might want to skip to the end of the batch file if a key command fails. Or you might want to execute other statements if certain environment variables are not set up correctly.
You can skip around to different parts of a batch file using the `GOTO` instruction. This jumps to a specific line, called a _label_, in the batch file. Note that this is a strict "go-to" jump; batch file execution picks up at the new label.
Let's say a program needed an existing empty file to store temporary data. If the file did not exist, you would need to create a file before running the program. You might add these lines to a batch file, so your program always has a temporary file to work with:
```
@ECHO OFF
IF EXIST temp.dat GOTO prog
ECHO Creating temp file...
TOUCH temp.dat
:prog
ECHO Running the program...
MYPROG
```
Of course, this is a very simple example. For this one case, you might instead rewrite the batch file to create the temporary file as part of the `IF` statement:
```
@ECHO OFF
IF NOT EXIST temp.dat TOUCH temp.dat
ECHO Running the program...
MYPROG
```
### Iteration
What if you need to perform the same task over a set of files? You can _iterate_ over a set of files with the `FOR` loop. This is a one-line loop that runs a single command with a different file each time.
The `FOR` loop uses a special syntax for the iteration variable, which is used differently than other DOS environment variables. To loop through a set of text files so you can edit each one, in turn, use this statement in your batch file:
```
@ECHO OFF
FOR %F IN (*.TXT) DO EDIT %F
```
Note that the iteration variable is specified with only one percent sign (`%`) if you run this loop at the command line, without a batch file:
```
C:\> FOR %F IN (*.TXT) DO EDIT %F
```
### Command-line processing
FreeDOS provides a simple method to evaluate any command-line options the user might have provided when running batch files. FreeDOS parses the command line and stores the first nine batch file options in the special variables `%1`, `%2`, and so on, up to `%9`. Notice that the tenth option (and beyond) isn't directly accessible in this way. (The special variable `%0` stores the name of the batch file.)
If your batch file needs to process more than nine options, you can use the `SHIFT` statement to remove the first option and _shift_ every option down by one value. So the second option becomes `%1`, and the tenth option becomes `%9`.
Most batch files need to shift by one value. But if you need to shift by some other increment, you can provide that parameter to the `SHIFT` statement, such as:
```
SHIFT 2
```
Here's a simple batch file that demonstrates shifting by one:
```
@ECHO OFF
ECHO %1 %2 %3 %4 %5 %6 %7 %8 %9
ECHO Shift by one ..
SHIFT 1
ECHO %1 %2 %3 %4 %5 %6 %7 %8 %9
```
Executing this batch file with ten arguments shows how the `SHIFT` statement reorders the command line options, so the batch file can now access the tenth argument as `%9`:
```
C:\SRC>args 1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9
Shift by one ..
2 3 4 5 6 7 8 9 10
C:\SRC>
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/automate-tasks-bat-files-freedos
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/gears_devops_learn_troubleshooting_lightbulb_tips_520.png?itok=HcN38NOk (Tips and gears turning)

View File

@ -2,7 +2,7 @@
[#]: via: (https://opensource.com/article/21/6/freedos-fdconfigsys)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -2,7 +2,7 @@
[#]: via: (https://opensource.com/article/21/6/freedos-package-manager)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,158 +0,0 @@
[#]: subject: (Install FreeDOS without the installer)
[#]: via: (https://opensource.com/article/21/6/install-freedos-without-installer)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
Install FreeDOS without the installer
======
Here's how to set up your FreeDOS system manually without using the
installer.
![FreeDOS fish logo and command prompt on computer][1]
Most people should be able to install FreeDOS 1.3 RC4 very easily using the installer. The FreeDOS installer asks a few questions, then takes care of the rest—including making space for FreeDOS and making the system bootable.
But what if the installer doesn't work for you? Or what if you prefer to set up your FreeDOS system _manually_, without using the installer? With FreeDOS, you can do that too! Let's walk through the steps to install FreeDOS without using the installer. I'll do all of these steps using the QEMU virtual machine, using a blank hard drive image. I created a one hundred megabyte ("100M") hard drive image with this Linux command:
```
$ qemu-img create freedos.img 100M
```
I downloaded the FreeDOS 1.3 RC4 installation LiveCD as FD13LIVE.iso, which provides a "live" environment where I can run FreeDOS, including all the standard tools. Most users also use the LiveCD to install FreeDOS with the regular installer, but here I'll only use the LiveCD to install FreeDOS using individual commands from the command line.
I started the virtual machine with this rather long QEMU command, and selected the "Use FreeDOS 1.3 in Live Environment mode" boot menu entry:
```
$ qemu-system-x86_64 -name FreeDOS \
    -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off \
    -enable-kvm -cpu host -m 8 -overcommit mem-lock=off \
    -no-user-config -nodefaults -rtc base=utc,driftfix=slew -no-hpet \
    -boot menu=on,strict=on \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on -hda freedos.img -cdrom FD13LIVE.iso \
    -device sb16 -device adlib -soundhw pcspk -vga cirrus -display sdl \
    -usbdevice mouse
```
![manual install][2]
Select "Use FreeDOS 1.3 in Live Environment mode" to boot the LiveCD
(Jim Hall, [CC-BY SA 4.0][3])
That QEMU command line includes a bunch of options that may seem confusing at first. You configure QEMU entirely with command-line options, so there is a lot to examine here. But I'll briefly highlight a few important options:
* **`-m 8`:** Set the system memory ("RAM") to 8 megabytes
* **`-boot menu=on,strict=on`:** Use a boot menu, so I can select whether to boot from the CD-ROM image or the hard drive image
* **`-hda freedos.img`:** Use **freedos.img** as the hard drive image
* **`-cdrom FD13LIVE.iso`:** Use **FD13LIVE.iso** as the CD-ROM image
* **`-device sb16 -device adlib -soundhw pcspk`:** Define the machine with a SoundBlaster16 sound card, an AdLib digital music card, and PC speaker emulation (these are useful if you want to play DOS games)
* **`-usbdevice mouse`:** Recognize the user's mouse as a USB mouse (click in the QEMU window to use the mouse)
### Partition the hard drive
You can use FreeDOS 1.3 RC4 from the LiveCD, but if you want to install FreeDOS to your computer, you'll first need to make space on the hard drive. This requires creating a _partition_ with the FDISK program.
From the DOS command line, type `FDISK` to run the _fixed disk_ setup program. FDISK is a full-screen interactive program, and you only need to type a number to select a menu item. From the main FDISK menu, enter "1" to create a DOS partition on the drive, then enter "1" on the next screen to create a _primary_ DOS partition.
![using fdisk][4]
Select "1" to create a partition
(Jim Hall, [CC-BY SA 4.0][3])
![using fdisk][5]
Select "1" on the next menu to make a primary partition
(Jim Hall, [CC-BY SA 4.0][3])
FDISK asks you if you wish to use the full size of the hard disk to create the partition. Unless you need to share space on this hard drive with another operating system, such as Linux, you should answer "Y" to this prompt.
After FDISK creates the new partition, you'll need to reboot before DOS can recognize the new partition information. Like all DOS operating systems, FreeDOS identifies the hard drive information only when it boots up. So if you create or delete any disk partitions, you'll need to reboot so FreeDOS recognizes the changed partition information. FDISK reminds you to reboot, so you don't forget.
![using fdisk][6]
You need to reboot to recognize the new partition
(Jim Hall, [CC-BY SA 4.0][3])
You can reboot by stopping and restarting the QEMU virtual machine, but I prefer to reboot FreeDOS from the FreeDOS command line, using the FreeDOS Advanced Power Management (FDADPM) tool. To reboot, type the command `FDADPM /WARMBOOT` and FreeDOS reboots itself.
### Formatting the hard drive
After FreeDOS restarts, you can continue setting up the hard drive. Creating the disk partition was "step 1" in this process; now you need to make a DOS _filesystem_ on the partition so FreeDOS can use it.
DOS systems identify "drives" using the letters `A` through `Z`. FreeDOS recognizes the first partition on the first hard drive as the `C` drive, and so on. You often indicate the drive with the letter and a colon (`:`) so the new partition we created above is actually the `C:` drive.
You can create a DOS filesystem on the new partition with the FORMAT command. This command takes a few options, but we'll only use the `/S` option to tell FORMAT to make the new filesystem bootable—the "S" means to install the FreeDOS "System" files. Type `FORMAT /S C:` to make a new DOS filesystem on the `C:` drive.
![formatting the disk][7]
Format the partition to create the DOS filesystem
(Jim Hall, [CC-BY SA 4.0][3])
With the `/S` option, FORMAT will run the SYS program to transfer the system files. You'll see this as part of the output from FORMAT:
![formatting the disk][8]
FORMAT /S will use SYS to make the disk bootable
(Jim Hall, [CC-BY SA 4.0][3])
### Installing software
Having created a new partition with FDISK and a new filesystem with FORMAT, the new `C:` drive is basically empty. At this point, the `C:` drive only contains a copy of the kernel and the `COMMAND.COM` command-line shell. To do anything useful with the new disk, we need to install software on it. This is the last step for the manual install process.
The FreeDOS 1.3 RC4 LiveCD contains all of the software you might want to install on the new system. Each FreeDOS program is available as a separate "package," which is really just a Zip archive file. The packages that set up the standard DOS environment are stored in the `BASE` directory, under the `PACKAGES` directory on the LiveCD.
You could install the packages by "unzipping" each of them to the hard drive, one at a time. With 62 individual packages in the "Base" group, installing each package individually would take a very long time. Instead, you can run a one-line `FOR` loop so that FreeDOS "unzips" all of the packages for you.
The basic usage of the `FOR` loop indicates a single-letter variable (let's use `%F`) that FreeDOS uses to "fill in" the filename later. The `FOR` loop also requires a list of files in parentheses and the command that it should run against each of the files. The syntax to unzip a list of Zip files looks like this:
```
FOR %F IN (*.ZIP) DO UNZIP %F
```
This extracts all of the Zip files into the current directory. To extract or "unzip" the files into a different location, use the `-d` ("destination") option at the end of the `UNZIP` command line. For most FreeDOS systems, you will want to install the software packages to the `C:\FDOS` directory:
![installing the software][9]
Unzip all of the Base packages to finish installing FreeDOS
(Jim Hall, [CC-BY SA 4.0][3])
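Putting the loop together with the destination option, the command in the screenshot above is roughly this (assuming the LiveCD's `BASE` packages directory is the current directory):

```
FOR %F IN (*.ZIP) DO UNZIP %F -d C:\FDOS
```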
FreeDOS takes care of the rest, installing all 62 packages to your system. This may take several minutes because DOS can be slow when working with lots of individual files—and this command needs to extract 62 Zip files. The installation process would run a lot faster if we used a single `BASE.ZIP` archive file, but using packages provides more flexibility in what software you might want to install versus what you choose to leave out.
![installing the software][10]
After installing all the Base packages
(Jim Hall, [CC-BY SA 4.0][3])
After you've installed everything, reboot your system with `FDADPM /WARMBOOT`. Installing manually means your new FreeDOS system won't have the usual `FDCONFIG.SYS` configuration file, so FreeDOS will assume a few typical default values when it starts up. Without the `AUTOEXEC.BAT` file, FreeDOS also prompts you for the time and date.
![rebooting FreeDOS][11]
Rebooting FreeDOS after a manual install
(Jim Hall, [CC-BY SA 4.0][3])
Most users should be able to use the more user-friendly process to install FreeDOS on a new computer. But if you want to install it yourself the "old school" way, you can also run the installation steps manually. This can provide some additional flexibility and control because you install everything yourself. And now you know how to do it.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/install-freedos-without-installer
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos-fish-laptop-color.png?itok=vfv_Lpph (FreeDOS fish logo and command prompt on computer)
[2]: https://opensource.com/sites/default/files/uploads/manual-install3.png (Select "Use FreeDOS 1.3 in Live Environment mode" to boot the LiveCD)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/sites/default/files/uploads/manual-install6.png (Select "1" to create a partition)
[5]: https://opensource.com/sites/default/files/uploads/manual-install7.png (Select "1" on the next menu to make a primary partition)
[6]: https://opensource.com/sites/default/files/uploads/manual-install10.png (You need to reboot to recognize the new partition)
[7]: https://opensource.com/sites/default/files/uploads/manual-install13.png (Format the partition to create the DOS filesystem)
[8]: https://opensource.com/sites/default/files/uploads/manual-install14.png (FORMAT /S will use SYS to make the disk bootable)
[9]: https://opensource.com/sites/default/files/uploads/manual-install18.png (Unzip all of the Base packages to finish installing FreeDOS)
[10]: https://opensource.com/sites/default/files/uploads/manual-install24.png (After installing all the Base packages)
[11]: https://opensource.com/sites/default/files/uploads/manual-install28.png (Rebooting FreeDOS after a manual install)

View File

@ -1,155 +0,0 @@
[#]: subject: (How to use FreeDOS as an embedded system)
[#]: via: (https://opensource.com/article/21/6/freedos-embedded-system)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
How to use FreeDOS as an embedded system
======
Many embedded systems today run on Linux. But once upon a time, embedded
systems either ran on a custom, proprietary platform or ran on DOS.
![Computer laptop in space][1]
The [FreeDOS website][2] says that most people use FreeDOS for three main tasks:
1. Playing classic DOS games
2. Running legacy DOS software
3. Running an embedded system
But what does it mean to run an "embedded" system?
An embedded system is basically a very minimal system that is dedicated to run a specific task. You might think of embedded systems today as part of the _Internet of Things_ (IoT) including sensors, thermostats, and doorbell cameras. Many embedded systems today run on Linux.
But once upon a time, embedded systems either ran on a custom, proprietary platform or ran on DOS. Some of these DOS-based embedded systems still run today, such as cash registers or phone private branch exchange (PBX) systems. In one example as recently as 2017, trainspotters discovered a Russian electric train control system (Russian: _САВПЭ_) running FreeDOS with special software to control and monitor the route of suburban trains and to make passenger announcements.
Setting up an embedded system on DOS requires defining a minimal DOS environment that runs a single application. Fortunately, setting up a minimal FreeDOS environment is pretty easy. Technically, all you need to boot FreeDOS and run DOS applications is the kernel and a `FDCONFIG.SYS` configuration file.
### Installing a minimal system
We can simulate a dedicated, minimal FreeDOS system by using the QEMU emulator with very small allocations. To reflect an embedded system more accurately, I'll define a virtual machine with only 8 megabytes of memory and a mere 2 megabytes for a virtual hard drive.
To create the tiny virtual hard drive, I'll use this `qemu-img` command to define a 2-megabyte file:
```
$ qemu-img create tiny.img 2M
Formatting 'tiny.img', fmt=raw size=2097152
```
This command line defines a 32-bit "i386" CPU with 8 megabytes of memory, using the 2-megabyte `tiny.img` file as the hard drive image and the FreeDOS 1.3 RC4 LiveCD as the CD-ROM media. We'll also set the machine to boot from the CD-ROM drive (`-boot order=d`) although we only need that to install. We'll boot the completed embedded system from the hard disk after we've set everything up:
```
qemu-system-i386 -m 8 -hda tiny.img -cdrom FD13LIVE.iso -boot order=d
```
Boot the system using the "Live Environment mode"—this provides us with a running FreeDOS system that we can use to transfer a minimal FreeDOS to the hard disk.
![embedded setup][3]
Boot into the LiveCD environment
(Jim Hall, [CC-BY SA 4.0][4])
We'll need to create a partition on the virtual hard drive for our programs. To do that, run the FDISK program from the command line. FDISK is the standard _fixed disk_ utility on FreeDOS. Use FDISK to create a single hard drive partition that spans the entire (2-megabyte) hard drive.
![embedded setup][5]
FDISK, after creating the 2 megabyte partition
(Jim Hall, [CC-BY SA 4.0][4])
But FreeDOS won't see the new hard drive partition until you reboot—FreeDOS only reads the hard disk details at startup. Exit FDISK and reboot, and you'll be ready for the next step.
After rebooting, you need to create a DOS filesystem on the new hard drive. Since there's just the one virtual hard disk, FreeDOS will identify it as the `C:` drive. You can create a DOS filesystem on `C:` with the FORMAT command. The `/S` option transfers the operating system files (the kernel, plus a copy of the `COMMAND.COM` shell) to the new drive.
![embedded setup][6]
Format the new drive to create a DOS filesystem
(Jim Hall, [CC-BY SA 4.0][4])
 
Now that you've created the drive and formatted it, you can install the application that will run on the embedded system.
### Installing the dedicated application
An embedded system is really just a single-purpose application running on a dedicated system. Such applications are usually custom-built for the system it will control, such as a cash register, display terminal, or control environment. For this demonstration, let's use a program from the FreeDOS 1.3 RC4 installation CD-ROM. It needs to be small enough to fit in the tiny 2-megabyte hard drive we've created for it. This can be anything—so just for fun, let's make it a game.
FreeDOS 1.3 RC4 includes several fun games. One game that I like is a board game called Simple Senet. It's based on Senet, an ancient Egyptian board game. The details of the game aren't important for this demonstration, except that we'll install it and set it up as the dedicated application for the embedded system.
To install the application, go into the `\PACKAGES\GAMES` directory on the FreeDOS 1.3 RC4 LiveCD. You'll see a long list of packages there, and the one we want is `SENET.ZIP`.
![embedded setup][7]
A list of game packages from FreeDOS 1.3 RC4
(Jim Hall, [CC-BY SA 4.0][4])
To unzip the Simple Senet package onto the virtual hard drive, use the `UNZIP` command. All FreeDOS packages are Zip files, so you can use any Zip-compatible archive utility to manage them. FreeDOS 1.3 RC4 includes `ZIP` to create Zip archives, and `UNZIP` to extract Zip archives. Both are from the [Info-Zip Project][8].
```
UNZIP SENET.ZIP -d C:\FDOS
```
Normally, using `UNZIP` will extract a Zip file in the current directory. The `-d C:\FDOS` option at the end of the command line tells `UNZIP` to extract the Zip file to the `C:\FDOS` directory. (`-d` means "destination.")
![embedded setup][9]
Unzipping the Simple Senet game
(Jim Hall, [CC-BY SA 4.0][4])
To run the Simple Senet game whenever the embedded system boots, we need to tell FreeDOS to use Senet as the system "shell." The default FreeDOS shell is the `COMMAND.COM` program, but you can define a different shell program using the `SHELL=` directive in the `FDCONFIG.SYS` kernel configuration file. We can use FreeDOS Edit to create the new `C:\FDCONFIG.SYS` file.
![Embedded edit senet][10]
(Jim Hall, [CC-BY SA 4.0][4])
If you need to define other parameters to support the embedded system, you can add those to the `FDCONFIG.SYS` file. For example, you might need to set environment variables using the `SET` action, or tune the FreeDOS kernel with `FILES=` or `BUFFERS=` statements.
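For instance, a complete `FDCONFIG.SYS` for this dedicated system might look something like this sketch (the exact path to `SENET.EXE` depends on where the Zip file unpacked inside `C:\FDOS`, and the tuning values are illustrative):

```
; FDCONFIG.SYS for a dedicated Simple Senet system (sketch)
FILES=20
BUFFERS=20
SHELL=C:\FDOS\SENET.EXE
```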
### Run the embedded system
With the embedded system fully defined, we can now reboot the machine to run the embedded application. Running an embedded system usually requires only limited resources, so for this demonstration, we'll tweak the QEMU command line to only boot from the hard drive (`-boot order=c`) and not define a CD-ROM drive:
```
qemu-system-i386 -m 8 -hda tiny.img -boot order=c
```
When the FreeDOS kernel starts up, it reads the `FDCONFIG.SYS` file for its startup parameters. Then it runs the shell using the `SHELL=` line. That runs the Simple Senet game automatically.
![embedded setup][11]
Running Simple Senet as an embedded system
(Jim Hall, [CC-BY SA 4.0][4])
We've used Simple Senet to demonstrate how to set up an embedded system on FreeDOS. Depending on your needs, you can use whatever standalone application you like. Define it as the DOS shell using the `SHELL=` line in `FDCONFIG.SYS` and FreeDOS will automatically launch the application at boot-time.
However, there's one limitation here. Embedded systems do not usually need to exit back to a command prompt, so these dedicated applications don't usually allow the user to quit to DOS. If you manage to exit the embedded application, you'll likely see a "Bad or missing Command Interpreter" prompt, where you'll need to enter the full path to a new shell. For a user-focused desktop system, this would be a problem. But on an embedded system that's dedicated to doing only one job, you should never need to exit anyway.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/freedos-embedded-system
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://www.freedos.org/
[3]: https://opensource.com/sites/default/files/uploads/embedded-setup02.png (Boot into the LiveCD environment)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/embedded-setup09.png (FDISK, after creating the 2 megabyte partition)
[6]: https://opensource.com/sites/default/files/uploads/embedded-setup19.png (Format the new drive to create a DOS filesystem)
[7]: https://opensource.com/sites/default/files/uploads/games-dir.png (A list of game packages from FreeDOS 1.3 RC4)
[8]: http://infozip.sourceforge.net/
[9]: https://opensource.com/sites/default/files/uploads/senet-unzip.png (Unzipping the Simple Senet game)
[10]: https://opensource.com/sites/default/files/pictures/embedded-edit-senet.png (Embedded edit senet)
[11]: https://opensource.com/sites/default/files/uploads/senet.png (Running Simple Senet as an embedded system)

View File

@ -1,168 +0,0 @@
[#]: subject: "Parse command-line arguments with argparse in Python"
[#]: via: "https://opensource.com/article/21/8/python-argparse"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Parse command-line arguments with argparse in Python
======
Use the argparse module to enable options in your Python applications.
![Python options][1]
There are several third-party libraries for command-line argument parsing, but the standard library module `argparse` is no slouch either.
Without adding any more dependencies, you can write a nifty command-line tool with useful argument parsing.
### Argument parsing in Python
When parsing command-line arguments with `argparse`, the first step is to configure an `ArgumentParser` object. This is often done at the global module scope since merely _configuring_ the parser has no side effects.
```
import argparse
PARSER = argparse.ArgumentParser()
```
The most important method on `ArgumentParser` is `.add_argument()`. It has a few variants. By default, it adds an argument that expects a value.
```
PARSER.add_argument("--value")
```
To see it in action, call the method `.parse_args()`:
```
>>> PARSER.parse_args(["--value", "some-value"])
Namespace(value='some-value')
```
It's also possible to use the syntax with `=`:
```
>>> PARSER.parse_args(["--value=some-value"])
Namespace(value='some-value')
```
You can also specify a short "alias" for a shorter command line when typed into the prompt:
```
PARSER.add_argument("--thing", "-t")
```
It's possible to pass either the short option:
```
PARSER.parse_args("-t some-thing".split())
```
```
Namespace(value=None, thing='some-thing')
```
or the long one:
```
PARSER.parse_args("--thing some-thing".split())
```
```
Namespace(value=None, thing='some-thing')
```
### Types
There are more types of arguments available. The two most popular ones, after the default, are boolean and counting. The booleans come with a variant that defaults to true, and one that defaults to false.
```
PARSER.add_argument("--active", action="store_true")
PARSER.add_argument("--no-dry-run", action="store_false", dest="dry_run")
PARSER.add_argument("--verbose", "-v", action="count")
```
This means that `active` is `False` unless `--active` is passed, and `dry_run` is `True` unless `--no-dry-run` is passed. Short options without value can be juxtaposed.
Passing all the arguments results in a non-default state:
```
PARSER.parse_args("--active --no-dry-run -vvvv".split())
```
```
Namespace(value=None, thing=None, active=True, dry_run=False, verbose=4)
```
The default is somewhat less exciting:
```
PARSER.parse_args("".split())
```
```
Namespace(value=None, thing=None, active=False, dry_run=True, verbose=None)
```
### Subcommands
Though classic Unix commands "did one thing, and did it well," the modern tendency is to do "several closely related actions."
The examples of `git`, `podman`, and `kubectl` can show how popular the paradigm is. The `argparse` library supports that too:
```
MULTI_PARSER = argparse.ArgumentParser()
subparsers = MULTI_PARSER.add_subparsers()
get = subparsers.add_parser("get")
get.add_argument("--name")
get.set_defaults(command="get")
search = subparsers.add_parser("search")
search.add_argument("--query")
search.set_defaults(command="search")
```
```
MULTI_PARSER.parse_args("get --name awesome-name".split())
```
```
Namespace(name='awesome-name', command='get')
```
```
MULTI_PARSER.parse_args("search --query name~awesome".split())
```
```
Namespace(query='name~awesome', command='search')
```
### Anatomy of a program
One way to use `argparse` is to structure the program as follows:
```
## my_package/__main__.py
import argparse
import sys
from my_package import toplevel
parsed_arguments = toplevel.PARSER.parse_args(sys.argv[1:])
toplevel.main(parsed_arguments)
```
```
## my_package/toplevel.py
import argparse

PARSER = argparse.ArgumentParser()
## .add_argument, etc.
def main(parsed_args):
    ...
    # do stuff with parsed_args
```
In this case, running the command is done with `python -m my_package`. Alternatively, you can use the [`console_scripts`][2] entry points in the package's setup.
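To make the second approach concrete, here is a minimal `setup.py` sketch (the script name `my-tool` and the zero-argument `cli()` wrapper are illustrative assumptions, not part of the article's package):
```
## setup.py -- minimal sketch; a console_scripts entry must point at a zero-argument callable
from setuptools import setup

setup(
    name="my_package",
    packages=["my_package"],
    entry_points={
        "console_scripts": [
            # installs a "my-tool" command that calls my_package.toplevel:cli
            "my-tool = my_package.toplevel:cli",
        ],
    },
)
```
Here, `cli()` would simply run `main(PARSER.parse_args())`, so the entry point needs no arguments.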
### Summary
The `argparse` module is a powerful command-line argument parser. There are many more features that have not been covered here. The limit is your imagination.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/8/python-argparse
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/bitmap_0.png?itok=PBXU-cn0 (Python options)
[2]: https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point

View File

@ -2,7 +2,7 @@
[#]: via: "https://opensource.com/article/21/9/linux-find-command"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: translator: "MjSeven"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

View File

@ -1,109 +0,0 @@
[#]: subject: "How to change Ubuntu Terminal Font and Size [Beginners Tip]"
[#]: via: "https://itsfoss.com/change-terminal-font-ubuntu/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: "robsean"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to change Ubuntu Terminal Font and Size [Beginners Tip]
======
If you are spending a lot of time using the terminal on Ubuntu, you may want to adjust the font and size to get a good experience.
Changing the font is one of the simplest but most visual ways of [Linux terminal customization][1]. Let me show you the detailed steps for changing the terminal fonts in Ubuntu, along with some tips and suggestions on font selection.
_**Note:** The steps should be the same for most other [Linux terminal emulators][2], but the way you access the option can differ._
### Change Ubuntu Terminal font and size using GUI
**Step 1.** [Launch the terminal on your Ubuntu][3] system by using Ctrl+Alt+T keys.
**Step 2.** Head to the “**Preferences**” option that you can find when you click on the menu.
![][4]
You can also perform a right-click anywhere on the terminal to access the option as shown below.
![][5]
**Step 3.** Now, you should be able to access the settings for the terminal. By default, there will be an unnamed profile. This is the default profile. **I suggest creating a new profile** so that your changes do not impact the default settings.
![][6]
**Step 4**. To change the font, you need to enable the “**Custom font**” option and then click on “**Monospace Regular**”
![][7]
It will show a list of fonts available for selection.
![][8]
Here, you get a quick preview of the font at the bottom of the font listing and also the ability to tweak the font size of your Ubuntu terminal.
By default, it uses a size of **12** for the font and **Ubuntu mono** style.
**Step 5**. Finally, you can search for your preferred font style and hit “**Select**” to finalize it after looking at the preview and adjusting the font size.
![][9]
That's it. You have successfully changed the fonts. See the changes in the image below.
![Ubuntu terminal font change][10]
#### Want to Customize the Look of your Linux Terminal?
Check out our detailed article on some terminal customization tips and tricks.
[Linux Terminal Tweaks][1]
### Tips on getting new fonts for Ubuntu terminal
You can download fonts from the internet in TTF file format and [easily install these new fonts in Ubuntu][11] by double-clicking the TTF file.
![][12]
You should open a new terminal window to load the newly installed fonts.
However, keep in mind that **Ubuntu will not show ALL the newly installed fonts in the terminal**. Why? Because the terminal is designed to use monospaced fonts. Fonts that have letters too close to each other may look weird. Some fonts do not offer proper clarity between the alphabet O and the number 0. Similarly, you may face issues in differentiating the lowercase l and i.
This is why you'll see that the fonts available in the terminal often have "mono" in their name.
Overall, there can be plenty of readability issues that could create more confusion. Hence, it is best to select a font that does not make the terminal hard to read.
You should also check if a font looks good/weird when you increase/decrease the size of the font to ensure that you do not have a problem when customizing the look of your terminal.
### Font suggestions for terminal customization
FreeMono and Noto Mono are some of the good fonts available in the default list of font selections to apply to your terminal.
You can try [installing new fonts in Linux][11] like **JetBrains Mono**, **Roboto Mono**, Larabiefont, Share Tech Mono, and more from Google Fonts and other sources.
_What font style/size do you prefer to use with the Ubuntu terminal? Let us know in the comments below!_
--------------------------------------------------------------------------------
via: https://itsfoss.com/change-terminal-font-ubuntu/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/customize-linux-terminal/
[2]: https://itsfoss.com/linux-terminal-emulators/
[3]: https://itsfoss.com/open-terminal-ubuntu/
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/terminal-preference.png?resize=800%2C428&ssl=1
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/terminal-right-click-menu.png?resize=800%2C341&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-terminal-preference-option.png?resize=800%2C303&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/09/enable-font-change-ubuntu-terminal.png?resize=798%2C310&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/monospace-font-default.png?resize=800%2C651&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-custom-font-selection.png?resize=800%2C441&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/09/ubuntu-terminal-font-2.png?resize=723%2C353&ssl=1
[11]: https://itsfoss.com/install-fonts-ubuntu/
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/12/install-new-fonts-ubuntu.png?resize=800%2C463&ssl=1

View File

@ -1,194 +0,0 @@
[#]: subject: "7 handy tricks for using the Linux wget command"
[#]: via: "https://opensource.com/article/21/10/linux-wget-command"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: "zengyi1001"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 handy tricks for using the Linux wget command
======
Download files from the internet in your Linux terminal. Get the most
out of the wget command with our new cheat sheet.
![Computer screen with files or windows open][1]
Wget is a free utility to download files from the web. It gets data from the Internet and saves it to a file or displays it in your terminal. This is literally also what web browsers do, such as Firefox or Chromium, except by default, they _render_ the information in a graphical window and usually require a user to be actively controlling them. The `wget` utility is designed to be non-interactive, meaning you can script or schedule `wget` to download files whether you're at your computer or not.
### Download a file with wget
You can download a file with `wget` by providing a link to a specific URL. If you provide a URL that defaults to `index.html`, then the index page gets downloaded. By default, the file is downloaded into a file of the same name in your current working directory.
```
$ wget http://example.com
--2021-09-20 17:23:47-- http://example.com/
Resolving example.com... 93.184.216.34, 2606:2800:220:1:248:1893:25c8:1946
Connecting to example.com|93.184.216.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1256 (1.2K) [text/html]
Saving to: 'index.html'
```
You can make `wget` send the data to standard output (`stdout`) instead by using the `--output-document` option with a dash (`-`) as the file name:
```
$ wget http://example.com --output-document - | head -n4
<!doctype html>
<html>
<head>
    <title>Example Domain</title>
```
You can use the `--output-document` option (`-O` for short) to name your download whatever you want:
```
$ wget http://example.com --output-document foo.html
```
### Continue a partial download
If you're downloading a very large file, you might find that you have to interrupt the download. With the `--continue` (`-c` for short), `wget` can determine where the download left off and continue the file transfer. That means the next time you download a 4 GB Linux distribution ISO you don't ever have to go back to the start when something goes wrong.
```
$ wget --continue https://example.com/linux-distro.iso
```
### Download a sequence of files
If it's not one big file but several files that you need to download, `wget` can help you with that. Assuming you know the location and filename pattern of the files you want to download, you can use Bash syntax to specify the start and end points between a range of integers to represent a sequence of filenames:
```
$ wget http://example.com/file_{1..4}.webp
```
### Mirror a whole site
You can download an entire site, including its directory structure, using the `--mirror` option. This option is the same as running `--recursive --level inf --timestamping --no-remove-listing`, which means it's infinitely recursive, so you're getting everything on the domain you specify. Depending on how old the website is, that could mean you're getting a lot more content than you realize.
If you're using `wget` to archive a site, then the options `--no-cookies --page-requisites --convert-links` are also useful to ensure that every page is fresh, complete, and that the site copy is more or less self-contained.
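Putting those options together, an archive run might look like the following sketch (one reasonable combination, not the only valid one):
```
$ wget --mirror --no-cookies --page-requisites --convert-links https://example.com/
```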
### Modify HTML headers
Protocols used for data exchange have a lot of metadata embedded in the packets computers send to communicate. HTTP headers are components of the initial portion of data. When you browse a website, your browser sends HTTP request headers. Use the `--debug` option to see what header information `wget` sends with each request:
```
$ wget --debug example.com
---request begin---
GET / HTTP/1.1
User-Agent: Wget/1.19.5 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: example.com
Connection: Keep-Alive
---request end---
```
You can modify your request header with the `--header` option. For instance, it's sometimes useful to mimic a specific browser, either for testing or to account for poorly coded sites that only work correctly for specific user agents.
To identify as Microsoft Edge running on Windows:
```
$ wget --debug --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.59" http://example.com
```
You can also masquerade as a specific mobile device:
```
$ wget --debug \
--header="User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 13_5_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Mobile/15E148 Safari/604.1" \
http://example.com
```
### Viewing response headers
In the same way header information is sent with browser requests, header information is also included in responses. You can see response headers with the `--debug` option:
```
$ wget --debug example.com
[...]
---response begin---
HTTP/1.1 200 OK
Accept-Ranges: bytes
Age: 188102
Cache-Control: max-age=604800
Content-Type: text/html; charset=UTF-8
Etag: "3147526947"
Server: ECS (sab/574F)
Vary: Accept-Encoding
X-Cache: HIT
Content-Length: 1256
---response end---
200 OK
Registered socket 3 for persistent reuse.
URI content encoding = 'UTF-8'
Length: 1256 (1.2K) [text/html]
Saving to: 'index.html'
```
### Responding to a 301 response
A 200 response code means that everything has worked as expected. A 301 response, on the other hand, means that a URL has been moved permanently to a different location. It's a common way for a website admin to relocate content while leaving a "trail" so people visiting the old location can still find it. By default, `wget` follows redirects, and that's probably what you normally want it to do.
However, you can control what `wget` does when it encounters a 301 response with the `--max-redirect` option. You can set it to `0` to follow no redirects:
```
$ wget --max-redirect 0 http://iana.org
--2021-09-21 11:01:35-- http://iana.org/
Resolving iana.org... 192.0.43.8, 2001:500:88:200::8
Connecting to iana.org|192.0.43.8|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.iana.org/ [following]
0 redirections exceeded.
```
Alternatively, you can set it to some other number to control how many redirects `wget` follows.
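For instance, to allow exactly one hop before giving up:
```
$ wget --max-redirect 1 http://iana.org
```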
#### Expand a shortened URL
The `--max-redirect` option is useful for looking at shortened URLs before actually visiting them. Shortened URLs can be useful for print media, in which users can't just copy and paste a long URL, or on social networks with character limits (this isn't as much of an issue on a modern and [open source social network like Mastodon][2]). However, they can also be a little dangerous because their destination is, by nature, concealed. By setting `--max-redirect` to `0`, `wget` reports the **Location** header instead of following it, so you can peek at a shortened URL's destination without loading the full resource:
```
$ wget --max-redirect 0 "https://bit.ly/2yDyS4T"
--2021-09-21 11:32:04-- https://bit.ly/2yDyS4T
Resolving bit.ly... 67.199.248.10, 67.199.248.11
Connecting to bit.ly|67.199.248.10|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://example.com/ [following]
0 redirections exceeded.
```
The penultimate line of output, starting with **Location**, reveals the intended destination.
### Use wget
Once you practice thinking about the process of exploring the web as a single command, `wget` becomes a fast and efficient way to pull information you need from the Internet without bothering with a graphical interface. To help you build it into your usual workflow, we've created a cheat sheet with common `wget` uses and syntax, including an overview of using it to query an API. [**Download the Linux `wget` cheat sheet here.**][3]
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/10/linux-wget-command
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
[2]: https://opensource.com/article/17/4/guide-to-mastodon
[3]: https://opensource.com/downloads/linux-wget-cheat-sheet

View File

@ -1,214 +0,0 @@
[#]: subject: "What you need to know about Kubernetes NetworkPolicy"
[#]: via: "https://opensource.com/article/21/10/kubernetes-networkpolicy"
[#]: author: "Mike Calizo https://opensource.com/users/mcalizo"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What you need to know about Kubernetes NetworkPolicy
======
Understanding Kubernetes NetworkPolicy is one of the fundamental
requirements to learn before deploying an application to Kubernetes.
![Parts, modules, containers for software][1]
With a growing number of cloud-native applications going to production through Kubernetes adoption, security is an important checkpoint that you must consider early in the process. When designing a cloud-native application, it is very important to embed a security strategy up front. Failure to do so leads to lingering security issues that can cause project delays and ultimately cost you unnecessary stress and money.
For years, people left security until the end, when their deployment was about to go into production. That practice causes delays in deliverables because each organization has security standards to adhere to, and those standards get either bypassed or only partially followed, with a lot of accepted risk, just to ship the deliverables.
Understanding Kubernetes NetworkPolicy can be daunting for people just starting to learn the ins and outs of Kubernetes implementation. But this is one of the fundamental requirements that you must learn before deploying an application to your Kubernetes cluster. When learning Kubernetes and cloud-native application patterns, make your slogan "Don't leave security behind!"
### The NetworkPolicy concept
[NetworkPolicy][2] fills the role that firewall appliances play in the data center you know, in the same way that pods map to compute instances, network plugins to routers and switches, and volumes to storage area networks (SAN).
By default, the Kubernetes NetworkPolicy allows [pods][3] to receive traffic from anywhere. If you are not concerned about security for your pods, then that might be OK. But if you are running a critical workload, then you need to secure your pods. The way to control the traffic flow within the cluster (including ingress and egress traffic) is through NetworkPolicies.
To enable NetworkPolicy, you need a [network plugin][4] that supports NetworkPolicy. Otherwise, any rules you applied become useless.
There are different network plugins [listed on Kubernetes.io][4]:
  * CNI plugins: adhere to the [Container Network Interface][5] (CNI) specification, designed for interoperability.
    * Kubernetes follows the [v0.4.0][6] release of the CNI specification.
  * Kubernetes plugin: implements basic `cbr0` using the `bridge` and `host-local` CNI plugins.
### Applying a network policy
To apply a network policy, you need a working Kubernetes cluster with a network plugin that supports NetworkPolicy.
But first, you need to understand how to use NetworkPolicy in the context of Kubernetes. The Kubernetes NetworkPolicy allows [pods][3] to receive traffic from anywhere. This is not ideal. To secure the pods, you must understand the endpoints pods can communicate within the Kubernetes construct.
1. Pod-to-pod communication using `podSelector`.
```
- podSelector:
    matchLabels:
      role: frontend
```
2. Namespace-to-namespace communication and namespace-to-pod communication using `namespaceSelector` and/or a combination of `podSelector` and `namespaceSelector`.
```
- namespaceSelector:
    matchLabels:
      project: myproject
- podSelector:
    matchLabels:
      role: frontend
```
3. IP block communication for pods, using `ipBlock` to define which IP CIDR blocks dictate the source and destination.
```
- ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
```
Note the difference between pod, namespace, and IP-based policy. For pod and namespace-based NetworkPolicy, you use `selector` to control traffic, while for IP-based NetworkPolicy, controls get defined using `IP blocks` (CIDR ranges).
Putting it together, a NetworkPolicy should look like the following:
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 192.168.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```
Referencing the NetworkPolicy above, notice the `spec` section. Under this section, `podSelector` with label _app=backend_ is the target of our NetworkPolicy. In short, the NetworkPolicy protects the application called _backend_ inside a given namespace.
This section also has `policyTypes` definition. This field indicates whether or not the given policy applies to ingress traffic to the selected pod, egress traffic from selected pods, or both.
```
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
```
Now, look at the `ingress` and `egress` section. This definition dictates the control of the NetworkPolicy.
First, examine the `ingress from` section.
The NetworkPolicy in this instance allows pod connection from:
* `ipBlock`
  * Allow 172.17.0.0/16
  * Deny 192.168.1.0/24
* `namespaceSelector`
  * `myproject`: Allow all pods from this namespace with the label _project=myproject_.
* `podSelector`
  * `frontend`: Allow pods that match the label _role=frontend_
```
ingress:
- from:
  - ipBlock:
      cidr: 172.17.0.0/16
      except:
      - 192.168.1.0/24
  - namespaceSelector:
      matchLabels:
        project: myproject
  - podSelector:
      matchLabels:
        role: frontend
```
Now, examine the `egress to` section. This dictates the connection from the pod to:
* `ipBlock`
  * 10.0.0.0/24: Connection to this CIDR is allowed
  * Ports: Allowed to connect using TCP on port 5978
```
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24
  ports:
  - protocol: TCP
    port: 5978
```
### NetworkPolicy limitations
NetworkPolicy alone cannot totally secure your Kubernetes clusters. You can use either operating system components or Layer 7 technologies to overcome the known limitations. You need to remember that NetworkPolicy can only address security at the IP address and port level, that is, Open Systems Interconnection (OSI) layer 3 or 4.
To address security requirements that NetworkPolicy can't handle, you need to use other security solutions. Here are some [use cases][7] where NetworkPolicy needs augmentation by other technologies.
### Summary
Understanding Kubernetes NetworkPolicy is important because it's a way to fulfill (but not replace) the firewall role that you usually use in a datacenter setup, but for Kubernetes. Think of this as the first layer of your container security, knowing that NetworkPolicy alone is not a total security solution.
NetworkPolicy applies security on pod and namespace using selectors and labels. In addition, NetworkPolicy can also enforce security through IP ranges.
Having a sound understanding of NetworkPolicy is an important skill towards secure adoption of containerization in the Kubernetes context.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/10/kubernetes-networkpolicy
作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
[2]: https://kubernetes.io/docs/concepts/services-networking/network-policies/
[3]: https://kubernetes.io/docs/concepts/workloads/pods/
[4]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
[5]: https://github.com/containernetworking/cni
[6]: https://github.com/containernetworking/cni/blob/spec-v0.4.0/SPEC.md
[7]: https://kubernetes.io/docs/concepts/services-networking/network-policies/#what-you-can-t-do-with-network-policies-at-least-not-yet

View File

@ -1,79 +0,0 @@
[#]: subject: "Use the Linux cowsay command for a colorful holiday greeting"
[#]: via: "https://opensource.com/article/21/11/linux-cowsay"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Use the Linux cowsay command for a colorful holiday greeting
======
Celebrate the Day of the Dead using this fun Linux command-line tool.
![Pumpkins painted for Day of the Dead][1]
You may have heard of a small program that takes input, such as a message that you type, and outputs a picture of a cow quoting your message. It is called **cowsay**. It has been written about before here on [Opensource.com][2].
So, to have a little fun with it, I thought I'd use it to celebrate Día de los Muertos (Day of the Dead).
In addition to a cow, there are other images available. When you install `cowsay`, it includes several other images, which the install stores in `/usr/share/cowsay`. You can use the `-l` argument to get a list.
```
$ sudo dnf install cowsay
$ cowsay -l
```
There's actually quite a bit of development activity related to `cowsay` and similar programs. It is possible to create your own image files or download images others have made. For instance, [Charc0al's cowsay file converter][3] is located on GitHub. You can use this tool to convert your own pictures to the special ASCII format file required by `cowsay`. Depending on your Linux or FreeBSD terminal settings, you may have color support enabled. The `cowsay` utility can display color images, as well. Charc0al's converter provides many ready-to-go color files.
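If you want to try one of those files, you can fetch it straight from the repository. For example, the Beetlejuice file used below:
```
$ wget https://raw.githubusercontent.com/charc0al/cowsay-files/master/cows/beetlejuice.cow
```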
I chose to use the Beetlejuice file for my celebration. First, I saved the [beetlejuice.cow][4] file to `/usr/share/cowsay`. This directory is owned by root, so you may have to save the file to your home directory first and then copy it. I also needed to give all users read access.
```
$ sudo cp beetlejuice.cow /usr/share/cowsay
$ sudo chmod o+r /usr/share/cowsay/beetlejuice.cow
```
It is interesting to notice how the image is generated. The top sets various ASCII color control codes to variables. These variables are then used to draw the image in the traditional ASCII art style. The image is almost full-body and did not fit my terminal height without scrolling off the screen, so I edited the file and removed the last 15 lines to shorten it.
The image is also detected by the `cowsay` program and appears in the list.
```
$ cowsay -l
Cow files in /usr/share/cowsay:
beavis.zen beetlejuice blowfish bud-frogs bunny cheese cower default dragon
...
```
Now, simply run the program and specify the image using the `-f` option. Don't forget to provide a message.
```
$ cowsay -f beetlejuice "Happy Day of the Dead!"
```
![ASCII display of Beetlejuice via cowsay][5]
Beetlejuice says Happy Day of the Dead  (CC BY-SA 4.0)
The `cowsay` command is just another way to have some command-line fun with your Linux computer. Experiment with `cowsay` and ASCII art—get creative.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/linux-cowsay
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/drew-hays-unsplash.jpg?itok=uBrvJkTW (Pumpkins painted for Day of the Dead)
[2]: https://opensource.com/article/18/12/linux-toy-cowsay
[3]: https://charc0al.github.io/cowsay-files/converter/
[4]: https://raw.githubusercontent.com/charc0al/cowsay-files/master/cows/beetlejuice.cow
[5]: https://opensource.com/sites/default/files/cowsay_beetlejuice.png

View File

@ -1,87 +0,0 @@
[#]: subject: "4 ways to edit photos on the Linux command line"
[#]: via: "https://opensource.com/article/21/11/edit-photos-linux-command-line"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
4 ways to edit photos on the Linux command line
======
Here are a few of my favorite ImageMagick tricks and how to use them
without a GUI.
![Montage of Alan as a Kid][1]
Linux is useful to photographers and graphic artists. It provides many tools for editing different types of image files and formats, including photographs. This roundup shows that you do not even need a graphical interface to work with your photos. Here are four ways that you can edit images at the command line.
### Apply effects to your images
A couple of years ago, Seth Kenlon wrote the article, [4 fun (and semi-useless) Linux toys][2] which included an introduction to the ImageMagick suite of editing tools. ImageMagick is even more relevant today in 2021.
This article taught us about Fred's ImageMagick scripts, which really are useful. Fred Weinhaus maintains over 200 scripts for applying all sorts of effects to your image files. Seth shows us an example of Fred's `vintage3` script that gives an image an old-time appearance.
### Create photo collages
This year, Jim Hall showed us how to create a collage from photos with his article, [Create a photo collage from the Linux command line][3].
Collages are used a lot in pamphlets and brochures. They are a fun way to display several images within a single picture. Effects can be applied to blend them further together. As a matter of fact, I used his article as a guide to create the collage of pictures above. That is me when I was a kid! Here is the command I used:
```
$ montage Screenshot-20211021114012.png \
Screenshot-20211021114220.png \
Screenshot-20211021114257.png \
Screenshot-20211021114530.png \
Screenshot-20211021114639.png \
Screenshot-20211021120156.png \
-tile 3x2 -background black \
screenshot-montage.png
```
### Resize images
Jim delivered another article, [Resize an image from the Linux terminal][4]. This tutorial demonstrates how to change the dimensions of an image file and save it as a new file using ImageMagick. For example, the collage that resulted from the montage command above did not have the required dimensions. Learning how to resize allowed me to adjust the width and height so that it could be included. This is the command I used to resize the lead image of this article:
```
$ convert screenshot-montage.png -resize 520x292\! alanfd-kid-montage.png
```
### Automate image processing
Recently, I decided to take a look at the ImageMagick suite for myself. This time, I combined its tools into a Bash script. The article is entitled [Automate image processing with this bash script][5]. This example is a simple script that automates the production of images for my articles. It is tailored to the requirements here on Opensource.com. I provided a Git repo link in the article if you would like to use the script. It is easily modified and extensible for anyone's needs.
### Wrap up
I hope you enjoy these articles and use Linux in your artistic endeavors. If you would like to check out more Linux image software, take a look at the Fedora [Design Suite][6] Spin. It is a complete operating system installation that includes many different open source multimedia production and publishing tools, such as:
* GIMP
* Inkscape
* Blender
* Darktable
* Krita
* Scribus
* and more...
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/edit-photos-linux-command-line
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/alanfd-kid-montage.png?itok=r1kgXpLc (Montage of Alan as a Kid)
[2]: https://opensource.com/life/16/6/fun-and-semi-useless-toys-linux
[3]: https://opensource.com/article/21/9/photo-montage-imagemagick
[4]: https://opensource.com/article/21/9/resize-image-linux
[5]: https://opensource.com/article/21/10/image-processing-bash-script
[6]: https://labs.fedoraproject.org/en/design-suite/

View File

@ -1,103 +0,0 @@
[#]: subject: "Motrix: A Beautiful Cross-Platform Open-Source Download Manager"
[#]: via: "https://itsfoss.com/motrix/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Motrix: A Beautiful Cross-Platform Open-Source Download Manager
======
_**Brief:** An open-source download manager that provides a clean user interface while offering all the essential features, and works cross-platform. Explore more about it here._
There are plenty of download managers available for Linux. If you want to download something and have the ability to manage them, you can choose any of the download managers available.
However, if you want a good-looking download manager that offers a modern user experience without compromising on the feature set, I've something that you might like.
### Meet Motrix: A Feature-Rich Open Source Download Manager
![][1]
Motrix is a no-nonsense download manager that provides a clean look out of the box. It is free and open-source software.
You can choose to use it for Linux, Windows, and macOS as well.
It could be a potential replacement for some [torrent clients available for Linux][2] as well.
Let me highlight some key features along with the installation instructions.
### Features of Motrix
![][3]
You should find all the features that you would typically expect in a download manager. Here's a list of them:
* Cross-platform support
* Easy to use interface
* BitTorrent selective download
* Automatic tracker list update
* UPnP & NAT-PMP Port Mapping
* Parallel download tasks (up to 10)
* Support for up to 64 threads in a single task
* Ability to set a speed limit
* Option to change the user-agent
* System tray support
* Dark mode
* Multiple languages supported
![][4]
Overall, it worked well with torrent files and detected the download links from the clipboard as well. The advanced options can be accessed right before downloading a file, so that should come in handy.
![][5]
I did not find any issues while using it on Ubuntu as a snap package in my brief testing.
### Install Motrix in Linux
You get a variety of installation options for Motrix. So, you should be able to install it on any Linux distribution of your choice.
Primarily, it offers an AppImage for download. But you can also find it available as a [Flatpak package][6] and in the [Snap store][7].
If you are using Ubuntu, you should find it listed through the software center.
In addition to these, it is also available in the [AUR][8] for Arch Linux users. In any case, you can always get the DEB/RPM packages from their [GitHub releases section][9].
You can find the links to download and more information on installation on their [official website][10] and the [GitHub page][11].
[Motrix][10]
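If you grab the AppImage, the usual two-step routine applies. A minimal sketch (the file name below is a placeholder for whatever version you download):
```
$ chmod +x Motrix-x.y.z.AppImage   # placeholder file name
$ ./Motrix-x.y.z.AppImage
```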
### Wrapping Up
Motrix offers all the goodies that youd want in a download manager with a modern UX as a bonus.
I recommend you try this out as your download manager and see if it replaces your current tool. I'd be curious to know your active download manager on your Linux system; feel free to tell me more about it in the comments below!
--------------------------------------------------------------------------------
via: https://itsfoss.com/motrix/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-download-manager.png?resize=800%2C604&ssl=1
[2]: https://itsfoss.com/best-torrent-ubuntu/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-dm-setting.png?resize=800%2C607&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-dm-white.png?resize=800%2C613&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/motrix-dm-options.png?resize=800%2C596&ssl=1
[6]: https://itsfoss.com/what-is-flatpak/
[7]: https://itsfoss.com/enable-snap-support-linux-mint/
[8]: https://itsfoss.com/aur-arch-linux/
[9]: https://github.com/agalwood/Motrix/releases
[10]: https://motrix.app/
[11]: https://github.com/agalwood/Motrix

View File

@ -1,202 +0,0 @@
[#]: subject: "Turn any website into a Linux desktop app with open source tools"
[#]: via: "https://opensource.com/article/21/11/linux-apps-nativefier"
[#]: author: "Ayush Sharma https://opensource.com/users/ayushsharma"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Turn any website into a Linux desktop app with open source tools
======
Nativefier and Electron creates desktop apps from any website.
![Text editor on a browser, in blue][1]
Mastodon is a great open source, decentralised social network. I use Mastodon every day, and it's probably most common to use Mastodon through its web interface (although being open source, there are many different ways to interact with it, including terminal-based applications and mobile apps), but I prefer dedicated application windows.
Recently, I discovered [Nativefier][2], and I can now enjoy Mastodon, or any other web app, as a desktop application on my Linux desktop. Nativefier takes a URL and wraps it with the Electron framework, which runs the open source Chromium browser as its backend but runs as its own executable application. Nativefier is licensed under the MIT license and is available for Linux, Windows, and macOS.
### Installing Nativefier
Nativefier requires Node.js.
Installing Nativefier is as simple as running:
```
$ sudo npm install -g nativefier
```
On my Ubuntu desktop, I had to upgrade NodeJS first, so be sure to check which version of Node is required when you install Nativefier.
Once installed, you can check your version of Nativefier to verify that it's been installed:
```
$ nativefier --version
45.0.4
```
Running `nativefier --help` lists all options the app supports.
### Setup
I recommend that you create a new folder called `~/NativeApps` before you start creating apps with Nativefier. This helps keep your applications nice and organized.
```
$ mkdir ~/NativeApps
cd ~/NativeApps
```
### Creating an app for Mastodon
I'll start by creating an app for [mastodon.technology][3].
Use the command:
```
$ nativefier --name Mastodon \
--platform linux --arch x64 \
--width 1024 --height 768 \
--tray --disable-dev-tools \
--single-instance https://mastodon.technology
```
The options in this example do the following:
* `--name`: Sets the app name to Mastodon
* `--platform`: Sets the app's platform to Linux
  * `--arch x64`: Sets the architecture to x64
  * `--width 1024 --height 768`: Sets the app's dimensions on launch
* `--tray`: Creates a tray icon for the app
* `--disable-dev-tools`: Disables Chrome dev tools
* `--single-instance`: Only allows one instance of the app
Running that single command shows the following output:
```
Preparing Electron app...
Converting icons...
Packaging... This will take a few seconds, maybe minutes if the requested Electron isn't cached yet...
Packaging app for platform linux x64 using electron v13.4.0 Finalizing build...
App built to /home/tux/NativeApps/Mastodon-linux-x64, move to wherever it makes sense for you and run the contained executable file (prefixing with ./ if necessary)
Menu/desktop shortcuts are up to you, because Nativefier cannot know where you're going to move the app. Search for "linux .desktop file" for help, or see https://wiki.archlinux.org/index.php/Desktop_entries
```
The output shows that the files get placed in `/home/tux/NativeApps/Mastodon-linux-x64`. When you `cd` into this folder, you see a file named Mastodon. This is the main executable that launches the app. Before you can launch it, you must give it the appropriate permissions.
```
$ cd Mastodon-linux-x64
$ chmod +x Mastodon
```
Now, execute `./Mastodon` to see your Linux app launch!
![Mastodon app launched][4]
(Ayush Sharma, [CC BY-SA 4.0][5])
### Creating an app for my blog
For fun, I'm going to create an app for my blog website as well. What good is having a tech blog if there's no Linux app for it?
![Ayush Sharma blog][6]
(Ayush Sharma, [CC BY-SA 4.0][5])
The command:
```
$ nativefier -n ayushsharma \
-p linux -a x64 \
--width 1024 --height 768 \
--tray --disable-dev-tools \
--single-instance https://ayushsharma.in
$ cd ayushsharma-linux-x64
$ chmod +x ayushsharma
```
### Creating an app for findmymastodon.com
And finally, here's an app for my pet project, [findmymastodon.com][7].
![Find my mastodon website][8]
(Ayush Sharma, [CC BY-SA 4.0][5])
The command:
```
$ nativefier -n findmymastodon \
-p linux -a x64 \
--width 1024 --height 768 \
--tray --disable-dev-tools \
--single-instance https://findmymastodon.com
$ cd findmymastodon-linux-x64
$ chmod +x findmymastodon
```
### Creating Linux desktop icons
With the apps created and the executables ready to go, it's time to create desktop icons.
As a demonstration, here's how to create a desktop icon for the Mastodon launcher. First, download an icon for [Mastodon][9]. Place the icon in its Nativefier app directory as `icon.png`.
Then create a file called `Mastodon.desktop` and enter this text:
```
[Desktop Entry]
Type=Application
Name=Mastodon
Path=/home/tux/NativeApps/Mastodon-linux-x64
Exec=/home/tux/NativeApps/Mastodon-linux-x64/Mastodon
Icon=/home/tux/NativeApps/Mastodon-linux-x64/icon.png
```
You can move the `.desktop` file to your Linux desktop to have it as a desktop launcher. You can also place a copy of it in `~/.local/share/applications` so it shows up in your application menu or activity launcher.
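For example, assuming the `.desktop` file is in your current directory:
```
$ cp Mastodon.desktop ~/.local/share/applications/
```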
### Conclusion
I love having dedicated apps for tools I use often. My favorite feature about having an app for Mastodon is that once I log in to Mastodon, I don't have to log in again! Nativefier runs Chromium underneath. So it's able to remember your session just like any browser does. I'd like to give a special thanks to the Nativefier team for taking the Linux desktop one step closer to perfection.
* * *
_This article originally appeared on the [author's website][10] and is republished with permission._
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/linux-apps-nativefier
作者:[Ayush Sharma][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ayushsharma
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png?itok=lcf-m6N7 (Text editor on a browser, in blue)
[2]: https://github.com/nativefier/nativefier
[3]: https://mastodon.technology/
[4]: https://opensource.com/sites/default/files/uploads/2_launch-mastodon-app.png (Mastodon app launched)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/3_ayush-shama-blog.png (Ayush Sharma blog)
[7]: https://findmymastodon.com/
[8]: https://opensource.com/sites/default/files/uploads/4_find-my-mastodon-app.png (Find my mastodon website)
[9]: https://icons8.com/icons/set/mastodon
[10]: https://ayushsharma.in/2021/10/make-linux-apps-for-notion-mastodon-webapps-using-nativefier

View File

@ -1,127 +0,0 @@
[#]: subject: "How to update a Linux symlink"
[#]: via: "https://opensource.com/article/21/11/update-linux-file-system-link"
[#]: author: "Alan Formy-Duval https://opensource.com/users/alanfdoss"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to update a Linux symlink
======
Links have always been a unique advanced feature of UNIX file systems.
![Links][1]
UNIX and Linux users find many uses for links, particularly symbolic links. One way that I like to use symbolic links is to manage configuration backups of various IT equipment.
I have a directory structure to hold everything related to documentation, updates, and other files for the computers and devices on my network. Devices can include routers, access points, NAS servers, and laptops, often of different brands and versions. The configuration backups themselves might be deep within the directory tree, e.g. `/home/alan/Documents/network/device/NetgearRL5000/config`.
To simplify the backup process, I have a directory in my home called `Configuration`. I use symbolic links from this directory to point to the specific device directory:
```
:~/Configuration/ $ ls -F1
Router@
Accesspoint@
NAS@
```
**Note**: The `-F` option of the `ls` command appends special characters to each file name to represent its type. As shown above, the `@` symbol indicates that these are links.
### Creating a link
The symbolic link **Router** points to the `config` directory of my Netgear RL5000. The command to create it is `ln -s`:
```
$ ln -s /home/alan/Documents/network/device/NetgearRL5000/config Router
```
Then, take a look and confirm with `ls -l`:
```
:~/Configuration/ $ ls -l
Router -> /home/alan/Documents/network/device/NetgearRL5000/config
NAS -> /home/alan/Documents/network/device/NFSBox/config
...
```
The advantage is that when performing maintenance on this device, I simply browse to `~/Configuration/Router`.
The second advantage of using a symbolic link becomes evident if I decide to replace this router with a new model. I might re-task the old router to be an access point. Therefore, its directory does not get deleted. Instead, I have a new directory that corresponds to the new router, perhaps an ASUS DF-3760. I create the directory and confirm its existence:
```
$ mkdir -p ~/Documents/network/device/ASUSDF-3760/config
```
```
:~/Documents/network/device/ $ ls
NetgearRL5000
ASUSDF-3760
NFSBox
...
```
Another example could be if you have several access points throughout your offices. You can use symbolic links to represent each one logically with either a generic name, such as `ap1`, `ap2`, and so on, or you can use descriptive words such as `ap_floor2`, `ap_floor3`, etc. This way, as the physical devices change over time, you do not have to continuously update any processes that might be managing them as they are addressing the links rather than the actual device directories.
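As a sketch, creating one of those descriptive links would look like this (the device directory here is hypothetical):
```
$ ln -s ~/Documents/network/device/NetgearAP1000/config ap_floor2
```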
### Updating a link
Since my main router has changed, I want the router's symbolic link to point to its directory. I could use the `rm` and `ln` commands to remove and create a new symbolic link, but there is a way to do this in one step using only the `ln` command with a few options:
```
:~/Configuration/ $ ln -vfns ~/Documents/network/device/ASUSDF-3760/config/ Router
'Router' -> '/home/alan/Documents/network/device/ASUSDF-3760/config/'
:~/Configuration/ $ ls -l
Router -> /home/alan/Documents/network/device/ASUSDF-3760/config
NAS -> /home/alan/Documents/network/device/NFSBox/config
```
The options, according to the man page, are as follows:
**-v, --verbose**
print name of each linked file
**-f, --force**
remove destination file (necessary since a link already exists)
**-n, --no-dereference**
treat LINK_NAME as a normal file if it is a symbolic link to a directory
**-s, --symbolic**
make symbolic links instead of hard links
### Wrap up
Links are one of the most powerful features of UNIX and Linux file systems. Other operating systems have tried to mimic this capability, but those never worked as well or were as usable due to the lack of a fundamental link design in their file systems.
The demonstration above is only one possibility of many for taking advantage of links to seamlessly navigate an ever-changing directory structure in a living production environment. Links provide the flexibility needed in an organization that is never static for long.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/update-linux-file-system-link
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/links.png?itok=enaPOi4L (Links)

View File

@ -0,0 +1,211 @@
[#]: subject: "10 eureka moments of coding in the community"
[#]: via: "https://opensource.com/article/21/11/community-code-stories"
[#]: author: "Jen Wike Huger https://opensource.com/users/jen-wike"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
10 eureka moments of coding in the community
======
We asked our community to share about a time they sat down and wrote
code that truly made them proud.
![Woman sitting in front of her laptop][1]
If you've written code, you know it takes practice to get good at it. Whether it takes months or years, there's inevitably a moment of epiphany.
We wanted to hear about that time, so we asked our community to share about that time they sat down and wrote code that truly made them proud.
* * *
One of mine around coding goes back to college in the 70s. I learned about parsing arithmetic expressions and putting them into Reverse Polish notation. And then, I figured out that, just like multiplication is repeated addition, division is repeated subtraction. But you can make it quicker by starting with an appropriate power of 10.  And with that, I wrote a BASIC Plus program on a 16-bit PDP 11/45 running RSTS to do multi-precision arithmetic. And then, I added a bunch of subroutines for various calculations. I tested it by calculating PI to 45 digits. It ran for a half-hour but worked. I may have saved it to DECtape. —[Greg Scott][2]
* * *
In the mid-1990s, I worked as part of a small consulting team (three programmers) on production planning and scheduling for a large steel company. The app was to be delivered on Hewlett-Packard workstations (then quite the thing), and the GUI was to be done in XWindows. To our amazement, it was CommonLisp that came out with the first decent interface to Motif, which had (for the time) a very nice widget toolkit. As a result, the entire app was done in CommonLisp and performed acceptably on the workstations. It was great fun to do something commercial in Lisp.
As you might guess, the company then wanted to port from the Hewlett-Packard workstations to something cheap, and so, about four years later, we rewrote the app in C with the expected boost in performance. —[Marty Kalin][3]
* * *
This topic brought an old memory back. Though I got many moments of self-satisfaction from writing the first C program to print a triangle to writing a validating admission webhook and operators for Kubernetes from scratch.
For a long time, I saw and played games written in various languages, and so I had an irresistible itch to write a few possible games using bash shell script.
I wrote the first one, tic-tac-toe and then Minesweeper, but they never got published until a couple of years back, when I committed them to GitHub, and people started liking them.
I was glad to get an opportunity to have the [article published][4] on this website. —[Abhishek Tamrakar][5]
* * *
Although there've been other, more recent works, two rather long-in-the-tooth bits of a doggerel leap to mind, mostly because of the "Eureka!" moments when I was able to examine the output and verify that I had indeed understood the Rosetta Stones I was working with enough to decipher the cryptic data:
* **UNPAL**: A cross-disassembler written in the DECsystem-10's MACRO-10 assembly language. It would take PDP-11 binaries and convert them back to the PDP-11 MACRO-11 assembler language. Many kudos to the folks writing the documentation back then, particularly, the DEC-10's 17 or so volumes, filled with great information and a fair amount of humor. UNPAL made its way out into the world and is probably the only piece of code that was used by people outside of either my schools or my workplace. (On the other hand, some of my documentation/tutorials got spread around on lots of external mailing lists and websites.)
* **MUNSTER**: Written in a language I had not yet learned, for an operating system I had never encountered before, on a computer that I'd only heard of, for a synthesizer I knew nothing about, using cryptic documentation. The language was C, the machine, an Atari 1040-ST (? ST-1040?), the OS—I don't remember, but it had something to do with GEM? And the synthesizer, a Korg M1—hence the name "munster" (m1-ster). It was quite the learning experience, studying all of the components simultaneously. The code would dump and restore the memory of eight synthesizers in the music lab. The Korg manual failed (IMHO) to really explain the data format. The appendix was a maze of twisty little passages all alike, with lots of "Note 8: See note 14 in Table 5. Table 5, Note 14: See notes 3, 4, and 7 in Table 2." Eventually, I deciphered from a picture without any real explanation, that when dumping data, every set of seven 8-bit bytes was being converted to eight 7-bit bytes, by stripping the high-order bit from each of the seven bytes and prefixing the seven high-order bits into an extra byte preceding the seven stripped bytes. This had to be figured out from a tiny illustration in the appendix (see attached screenshot from the manual):
![Korg appendix illustration][6]
(Kevin Cole, C[C BY-SA 4.0][7])
—[Kevin Cole][8]
* * *
For me, it is definitively GSequencer's synchronization function `AgsThread::clock()`.
**Working with parallelism**
During the development of GSequencer, I have encountered many obstacles. When I started the project, I was barely familiar with multi-threaded code execution. I was aware of `pthread_create()`, `pthread_mutex_lock()`, and `pthread_mutex_unlock()`.
But what I needed was more complex synchronization functionality. There are mainly three choices available—conditional locks, barriers, and semaphores. I decided on conditional locks since they are available from the GLib-2.0 threading API.
A conditional lock usually doesn't let program flow proceed until the condition checked within a loop becomes FALSE. So in one thread, you do, for example:
```
gboolean start_wait;
gboolean start_done = FALSE;
static GCond cond;
static GMutex mutex;
/* conditional lock */
g_mutex_lock(&mutex);
if(!start_done){
  start_wait = TRUE;
  while(start_wait &&
        !start_done){
      g_cond_wait(&cond,
                  &mutex);
  }
}
g_mutex_unlock(&mutex);
```
Within another thread, you can wake up the conditional lock, and if the condition evaluates to FALSE, the program flow proceeds for the waiting thread.
```
/* signal conditional lock */
g_mutex_lock(&mutex);
start_done = TRUE;
if(start_wait){
  g_cond_signal(&cond);
}
g_mutex_unlock(&mutex);
```
Libags provides a thread wrapper built on top of GLib's threading API. The `AgsThread` object synchronizes the thread tree by the `AgsThread::clock()` event. It acts as a kind of parallelism trap.
![GSequencer threads][9]
(Joel Krahemann, [CC BY-SA 4.0][7])
All threads within the tree synchronize `AgsThread:max-precision` times per second, because all threads shall run in parallel on the very same time base. I call this tic-based parallelism: with a max-precision of 1000 Hz, each thread synchronizes 1000 times per second within the tree, giving you strong semantics to compute a deterministic result in a multi-threaded fashion.
Since we want to run tasks exclusively without any interference from competing threads, a mutex lock is acquired just after synchronization, which then invokes `ags_task_launcher_sync_run()`. Be aware that the conditional lock can evaluate to true for many threads.
How many tics pass before the flow repeats depends on the sample rate and buffer size. If you have an `AgsThread` with a max-precision of 1000, the sample rate of 44100 Hz common to audio CDs, and a buffer size of 512 frames, then the delay until it repeats is calculated as follows:
```
tic_delay = 1000.0 / 44100.0 * 512.0; // 11.609977324263039
```
Since pre-/post-synchronization needs three tics to do its work, you get eight unused tics.
Pre-synchronization is used for reading from a soundcard or MIDI device. The intermediate tic does the actual audio processing. Post-synchronization is used by outputting to the soundcard or exporting to an audio file.
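To make the arithmetic concrete, here is a quick sketch in Python (the values come from the text above; treating the fractional delay with truncation is my assumption):

```
# Tic budget for one audio buffer, using the values from the text.
max_precision = 1000.0   # tics per second
sample_rate = 44100.0    # Hz, the audio CD standard
buffer_size = 512.0      # frames per buffer

tic_delay = max_precision / sample_rate * buffer_size
print(tic_delay)         # 11.609977324263039

# Three tics go to pre-, intermediate, and post-synchronization,
# which leaves roughly eight tics unused per buffer.
unused_tics = int(tic_delay) - 3
print(unused_tics)       # 8
```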
To get this working, I went through highs and lows, especially because you can't hear or see a thread. GDB's batch debugging helped a lot; with batch debugging, you can retrieve a stack trace of a running process. —[Joël Kräheman][10]
* * *
I don't know that I've written any code to be particularly proud of—as a neurodiverse programmer, I may simply be an average programmer with specific strengths and weaknesses.
However, many years ago, I did some coding in C with basic examples on parallel virtual machines, and I was very happy when I got them working.
More than ten years ago, I had a programming course where I taught Java to adult students, and I'm happy that I was able to put that course together.
More recently, I'm happy that I managed to help college students with disabilities bug-test code as a part-time job. —[Rikard Grossman-Nielsen][11]
* * *
Like others, this made me think back a ways. I don't really consider myself a developer, but I have done some development along the way. The thing that stuck out for me is the epiphany factor, or "moment of epiphany," as you said.
When I was a student at UNCW, I worked for the OIT network group managing the network for the dormitories. The students all received their IP address registrations using Bootstrap Protocol (BOOTP)—the predecessor to DHCP. The configuration file was maintained by hand in the beginning when we only had about 30 students. This was the very first year that the campus offered Internet to students! The next year, as more dorms got wired, the numbers grew and quickly reached over 200. I wrote a small C program to maintain the config file. The epiphany and "just plain old neat part" was that my code could touch and manipulate something "real" outside itself. In this case, a file on the system. I had a similar feeling later in a Java class when I learned how to read and write to a SQL server.
Anyway, there was something cool about seeing a real result from a program. One more amazing thing is that the original binary, which was compiled on a Red Hat 5.1 Linux system, will still run on my current Fedora 34 Linux desktop!! —[Alan Formy-Duval][12]
* * *
At the age of 18, I was certainly proud when I wrote a Visual Basic application for a small company to automate printing AutoCAD files in bulk. At that time, it was the first "complex" application I wrote. Many user interactions were needed to configure the printer settings. In addition, the application was integrated with AutoCAD using COM ActiveX. It was challenging. The company kept using it until recently. The application stopped working because of an incompatibility issue with Windows 10. They used it for 18 years without issues!
I've been tasked to rewrite the application using today's technology. I've written the [new version][13] in Python. Looking back at the code I wrote was funny. It was so clumsy.
Attached is a screenshot of the first version. 
![Original VB printing app][14]
(Patrik Dufresne, [CC BY-SA 4.0][7])
—[Patrik Dufresne][15]
* * *
I once integrated GitHub with the Open Humans platform, which was part of my Outreachy project back in 2019. That was my first venture into Django, and I learned a lot about APIs and rate limits in the process.
Also, very recently, I started working with Quarkus, building REST and GraphQL APIs with it. I found it really cool. —[Manaswini Das][16]
* * *
Around 1998, I got bored and decided to write a game. Inspired by an old Mac game from the 1980s, I decided to create a "simulation" game where the user constructed a simple "program" to control a virtual robot and then explore a maze. The environment was littered with prizes and energy pellets to power your robot—but also contained enemies that could damage your robot if it ran into them. I added an energy "cost" so that every time your robot moved or took any action, it used up a little bit of its stored energy. So you had to balance "picking up prizes" with "finding energy pellets." The goal was to pick up as many prizes as possible before you ran out of energy.
I experimented with using GNU Guile (a Scheme extension language) as a programming "backend," which worked well, even though I don't really know Scheme. I figured out enough Scheme to write some interesting robot programs.
And that's how I wrote GNU Robots. It was just a quick thing to amuse myself, and it was fun to work on and fun to play. Later, other developers picked it up and ran with it, making major improvements to my simple code. It was so cool to rediscover a few years ago that you can still compile GNU Robots and play around with them. Congratulations to the new maintainers for keeping it going. —[Jim Hall][17]
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/community-code-stories
作者:[Jen Wike Huger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jen-wike
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop)
[2]: https://opensource.com/users/greg-scott
[3]: https://opensource.com/users/mkalindepauledu
[4]: https://opensource.com/article/19/9/advanced-bash-building-minesweeper
[5]: https://opensource.com/users/tamrakar
[6]: https://opensource.com/sites/default/files/uploads/kevincole_korg.png (Korg appendix illustration)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/users/kjcole
[9]: https://opensource.com/sites/default/files/uploads/joelkrahemann_ags-threading.png (BSequencer threads)
[10]: https://opensource.com/users/joel2001k
[11]: https://opensource.com/users/rikardgn
[12]: https://opensource.com/users/alanfdoss
[13]: https://gitlab.com/ikus-soft/batchcad
[14]: https://opensource.com/sites/default/files/uploads/patrikdufresne_vb-cadprinting.png (Original VB printing app)
[15]: https://opensource.com/user_articles/447861
[16]: https://opensource.com/user_articles/380116
[17]: https://opensource.com/users/jim-hall

View File

@ -0,0 +1,223 @@
[#]: subject: "Write your first CI/CD pipeline in Kubernetes with Tekton"
[#]: via: "https://opensource.com/article/21/11/cicd-pipeline-kubernetes-tekton"
[#]: author: "Savita Ashture https://opensource.com/users/savita-ashture"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Write your first CI/CD pipeline in Kubernetes with Tekton
======
Tekton is a Kubernetes-native open source framework for creating
continuous integration and continuous delivery (CI/CD) systems.
![Plumbing tubes in many directions][1]
Tekton is a Kubernetes-native open source framework for creating continuous integration and continuous delivery (CI/CD) systems. It also helps to do end-to-end (build, test, deploy) application development across multiple cloud providers or on-premises systems by abstracting away the underlying implementation details.
### Introduction to Tekton
[Tekton][2], initially known as [Knative Build][3], was later restructured as its own open source project with its own [governance organization][4] and is now a [Linux Foundation][5] project. Tekton provides an in-cluster container image build and deployment workflow—in other words, it is a continuous integration (CI) and continuous delivery (CD) service. It consists of Tekton Pipelines and several supporting components, such as Tekton CLI, Triggers, and Catalog.
Tekton is a Kubernetes native application. It installs and runs as an extension on a Kubernetes cluster and comprises a set of Kubernetes Custom Resources that define the building blocks you can create and reuse for your pipelines. Because it's a K-native technology, Tekton is remarkably easy to scale. When you need to increase your workload, you can just add nodes to your cluster. It's also easy to customize because of its extensible design and thanks to a community repository of contributed components.
Tekton is ideal for developers who need CI/CD systems to do their work and platform engineers who build CI/CD systems for developers in their organization.
### Tekton components
Building CI/CD pipelines is a far-reaching endeavor, so Tekton provides tools for every step of the way. Here are the major components you get with Tekton:
* **Pipeline:** Pipeline defines a set of Kubernetes [Custom Resources][6] that act as building blocks you use to assemble your CI/CD pipelines.
* **Triggers:** Triggers is a Kubernetes Custom Resource that allows you to create pipelines based on information extracted from event payloads. For example, you can trigger the instantiation and execution of a pipeline every time a merge request gets opened against a Git repository.
* **CLI:** CLI provides a command-line interface called `tkn` that allows you to interact with Tekton from your terminal.
* **Dashboard:** Dashboard is a web-based graphical interface for Tekton pipelines that displays information about the execution of your pipelines.
* **Catalog:** Catalog is a repository of high-quality, community-contributed Tekton building blocks (tasks, pipelines, and so on) ready for use in your own pipelines.
* **Hub:** Hub is a web-based graphical interface for accessing the Tekton catalog.
* **Operator:** Operator is a Kubernetes [Operator pattern][7] that allows you to install, update, upgrade, and remove Tekton projects on a Kubernetes cluster.
* **Chains:** Chains is a Kubernetes Custom Resource Definition (CRD) controller that allows you to manage your supply chain security in Tekton. It is currently a work in progress.
* **Results:** Results aims to help users logically group CI/CD workload history and separate long-term result storage from the pipeline controller.
### Tekton terminology
![Tekton terminology][8]
(Source: [Tekton documentation][9])
* **Step:** A step is the most basic entity in a CI/CD workflow, such as running some unit tests for a Python web app or compiling a Java program. Tekton performs each step with a provided container image.
* **Task:** A task is a collection of steps in a specific order. Tekton runs a task in the form of a [Kubernetes pod][10], where each step becomes a running container in the pod.
* **Pipelines:** A pipeline is a collection of tasks in a specific order. Tekton collects all tasks, connects them in a directed acyclic graph (DAG), and executes the graph in sequence. In other words, it creates a number of Kubernetes pods and ensures that each pod completes running successfully as desired.
![Tekton pipelines][11]
(Source: [Tekton documentation][12])
* **PipelineRun:** A PipelineRun, as its name implies, is a specific execution of a pipeline.
* **TaskRun:** A TaskRun is a specific execution of a task. TaskRuns are also available when you choose to run a task outside a pipeline, letting you view the specifics of each step's execution in a task.
### Create your own CI/CD pipeline
The easiest way to get started with Tekton is to write a simple pipeline of your own. If you use Kubernetes every day, you're probably comfortable with YAML, which is precisely how Tekton pipelines are defined. Here's an example of a simple pipeline that clones a code repository.
First, create a file called `task.yaml` and open it in your favorite text editor. This file defines the steps you want to perform. In this example, that's cloning a repository, so I've named the step `clone`. The file sets some environment variables and then provides a simple shell script to perform the clone.
Next comes the task. You can think of a step as a function that gets called by the task; the task sets the parameters and workspaces required by the steps.
```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
 name: git-clone
spec:
 workspaces:
   - name: output
     description: The git repo will be cloned onto the volume backing this Workspace.
 params:
   - name: url
     description: Repository URL to clone from.
     type: string
   - name: revision
     description: Revision to checkout. (branch, tag, sha, ref, etc...)
     type: string
     default: ""
 steps:
   - name: clone
     image: "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.21.0"
     env:
       - name: PARAM_URL
         value: $(params.url)
       - name: PARAM_REVISION
         value: $(params.revision)
       - name: WORKSPACE_OUTPUT_PATH
         value: $(workspaces.output.path)
     script: |
      #!/usr/bin/env sh
       set -eu
       CHECKOUT_DIR="${WORKSPACE_OUTPUT_PATH}"
       /ko-app/git-init \
         -url="${PARAM_URL}" \
         -revision="${PARAM_REVISION}" \
         -path="${CHECKOUT_DIR}"
       cd "${CHECKOUT_DIR}"
       EXIT_CODE="$?"
       if [ "${EXIT_CODE}" != 0 ] ; then
         exit "${EXIT_CODE}"
       fi
       # Verify clone is success by reading readme file.
       cat ${CHECKOUT_DIR}/README.md
```
Create a second file called `pipeline.yaml`, and open it in your favorite text editor. This file defines the pipeline by setting important parameters, such as a workspace where the task can be run and processed.
```
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
 name: cat-branch-readme
spec:
 params:
   - name: repo-url
     type: string
     description: The git repository URL to clone from.
   - name: branch-name
     type: string
     description: The git branch to clone.
 workspaces:
   - name: shared-data
     description: |
      This workspace will receive the cloned git repo and be passed
       to the next Task for the repo's README.md file to be read.
 tasks:
   - name: fetch-repo
     taskRef:
       name: git-clone
     workspaces:
       - name: output
         workspace: shared-data
     params:
       - name: url
         value: $(params.repo-url)
       - name: revision
         value: $(params.branch-name)
```
Finally, create a file called `pipelinerun.yaml` and open it in your favorite text editor. This file actually runs the pipeline. It supplies the parameters defined in the pipeline (which, in turn, invokes the task defined by the task file).
```
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
 name: git-clone-checking-out-a-branch
spec:
 pipelineRef:
   name: cat-branch-readme
 workspaces:
   - name: shared-data
     volumeClaimTemplate:
       spec:
         accessModes:
          - ReadWriteOnce
         resources:
           requests:
             storage: 1Gi
 params:
   - name: repo-url
      value: https://github.com/tektoncd/pipeline.git
   - name: branch-name
     value: release-v0.12.x
```
The advantage of structuring your work in separate files is that the `git-clone` task is reusable for multiple pipelines.
For example, suppose you want to do end-to-end testing for a pipeline project. You can use the `git-clone` task to ensure that you have a fresh copy of the code you need to test.
### Wrap up
As long as you're familiar with Kubernetes, getting started with Tekton is as easy as adopting any other K-native application. It has plenty of tools to help you create pipelines and to interface with your pipelines. If you love automation, try Tekton!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/cicd-pipeline-kubernetes-tekton
作者:[Savita Ashture][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/savita-ashture
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plumbing_pipes_tutorial_how_behind_scenes.png?itok=F2Z8OJV1 (Plumbing tubes in many directions)
[2]: https://github.com/tektoncd/pipeline
[3]: https://github.com/knative/build
[4]: https://cd.foundation/
[5]: https://www.linuxfoundation.org/projects/
[6]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[7]: https://operatorhub.io/what-is-an-operator
[8]: https://opensource.com/sites/default/files/uploads/tekto-terminology.png (Tekton terminology)
[9]: https://tekton.dev/docs/concepts/concept-tasks-pipelines.png
[10]: https://kubebyexample.com/en/concept/pods
[11]: https://opensource.com/sites/default/files/uploads/tekton-pipelines.png (Tekton pipelines)
[12]: https://tekton.dev/docs/concepts/concept-runs.png

View File

@ -0,0 +1,159 @@
[#]: subject: "7 Free and Open Source Plotting Tools [For Maths and Stats]"
[#]: via: "https://itsfoss.com/open-source-plotting-apps/"
[#]: author: "Marco Carmona https://itsfoss.com/author/marco/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 Free and Open Source Plotting Tools [For Maths and Stats]
======
We live in a world where almost everything we have generates data, which can be analyzed and visualized thanks to tools that create graphs showing the relations between variables.
These tools are famously called “plotting apps”. They can be used for everything from basic maths tasks in school to professional scientific projects. They can also be used for adding stats and data to presentations.
There are plenty of free and open source plotting apps available for Linux. But in this article, I am listing some of the best plotting apps I have come across.
### Best open source plotting apps
I am deliberately skipping productivity suites like LibreOffice. They let you add graphs and plots to documents and slides, but they are very basic in terms of functionality.
Please also note that this is not a ranking list. The item at number one should not be considered better than the one at number five.
#### 1\. Matplotlib
![][1]
[Matplotlib][2] is an open-source drawing library that supports many plot types like histograms, bar charts, and other kinds of diagrams. It's mainly written in Python, so if you have some knowledge of this programming language, Matplotlib can be your best option to start sketching your data.
Its advantages include simplicity, a friendly UI, and high-quality images, plus support for various plot output formats such as PNG and PDF.
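If you know a little Python, a first plot takes only a few lines. Here is a minimal sketch (the data is made up for illustration):

```
import matplotlib.pyplot as plt

# Plot a simple relation between two variables and save it to a file.
x = [0, 1, 2, 3, 4, 5]
y = [v ** 2 for v in x]

plt.plot(x, y, marker="o", label="y = x^2")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.savefig("plot.png")  # PDF, SVG, and other formats work too
```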
[Matplotlib][3]
#### 2\. GnuPlot
![][4]
[GnuPlot][5] is a command-driven plotting program that accepts commands in the form of special words or letters to perform tasks. It can be employed to manipulate functions and data points in both two and three dimensions, in many different styles and many different output formats.
A special characteristic is that Gnuplot can also be used as a scripting language to automate the generation of plots.
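As a small illustration of that automation, here is a sketch that drives Gnuplot from Python by piping a script to its standard input (it assumes the `gnuplot` binary is on your PATH and was built with the `pngcairo` terminal):

```
import subprocess

# Render a plot unattended by feeding Gnuplot commands through stdin.
script = """
set terminal pngcairo size 640,480
set output 'sine.png'
plot sin(x) title 'sin(x)'
"""
subprocess.run(["gnuplot"], input=script, text=True, check=True)
```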
You can refer to its [documentation][6] if you want to explore it more before getting started.
[GnuPlot][7]
#### 3\. Octave
![][8]
[GNU Octave][9] is more than just a plotting tool. It helps solve linear and nonlinear problems numerically and perform other numerical experiments using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language.
Some of its features are:
* A large set of built-in functionalities to solve many different problems.
* A complete programming language that enables you to extend GNU Octave.
* Plotting facilities.
So, if you are interested in Octave, don't be afraid to check out its [documentation][10].
[Octave][11]
#### 4\. Grace
![][12]
[Grace][13] is a tool to make two-dimensional plots of numerical data. Its capabilities are roughly similar to GUI-based programs like Octave plus script-based tools like Gnuplot or Genplot. In other words, it is a mix of a good user interface with the power of a scripting language.
It's important to mention that these last two characteristics let you do sophisticated calculations and perform automated tasks, which helps a lot when you're analyzing any type of data.
Another important aspect is that it also brings tools like curve fitting, analysis capabilities, and programmability, among others. So, if you want to know more about these helpful tools, go to its [official website][13] and check out its other features.
[Grace][14]
#### 5\. LabPlot
![][15]
[LabPlot][16] is a program for two- and three-dimensional graphical presentation of data sets and functions. It comes with a complete user interface, which provides you with a lot of functions like the Hilbert transform, statistics, color maps and conditional formatting, and its most recent [feature][17], Multi-Axes.
LabPlot allows you to work with multiple plots, each of which can have multiple graphs. The graphs can be produced from data or from functions, depending on what you need.
For more information, remember that the [documentation][18] and its [community][19] can be your best friend.
[LabPlot][20]
#### 6\. ROOT
![][21]
[ROOT][22] is a framework for data processing created by CERN, the famous lab at the heart of research on high-energy physics. It is used to write the petabytes of data recorded by the Large Hadron Collider experiments every year.
This project is used every day by thousands of physicists who analyze their data or perform simulations, especially in high-energy physics.
It is written in C++ for rapid and efficient prototyping, with a persistence mechanism for C++ objects. If you don't like C++, I have good news for you: it can be used with Python as well.
[This project][23] is an incredibly complete toolkit; it can help you with everything from creating a simple histogram to providing interactive graphics in web browsers. Awesome, isn't it?
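As a quick taste of the Python side, here is a minimal sketch that fills and draws a histogram (it assumes ROOT is installed with its PyROOT bindings enabled; the names and values are my own):

```
import ROOT

# Book a 1D histogram, fill it with 10,000 Gaussian samples, and save it.
hist = ROOT.TH1F("h", "Gaussian sample;x;entries", 100, -4.0, 4.0)
hist.FillRandom("gaus", 10000)

canvas = ROOT.TCanvas("c", "demo", 800, 600)
hist.Draw()
canvas.SaveAs("hist.png")
```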
[ROOT][24]
#### 7\. Plots
![][25]
This last option is dedicated more to students who are being introduced to graphs and math functions.
This open-source software, called _**Plots**_, is a basic but powerful tool if you need to quickly visualize data or a math function in the least time possible. It doesn't have a lot of extra functions, but note that this doesn't mean it has no power when it comes to plotting.
So, if you're starting out in data visualization, this last option may well be the best for you. I'd also suggest you check our article about [Plots][26] to learn how to set it up and get started.
### Conclusion
In my opinion, these open-source projects do more or less the same tasks; of course, some of them have more or fewer features. The key difference is the way each generates plots: one works with C as its programming language, while another works with Python. I suggest you get informed about each of these plotting tools and choose the one that best fits your tasks and needs.
Have you ever used one of the tools on this list? What is your favorite open-source tool for plotting? Please let us know in the comments below.
If you found this article interesting, please take a minute to share it on social media; you can make a difference!
--------------------------------------------------------------------------------
via: https://itsfoss.com/open-source-plotting-apps/
作者:[Marco Carmona][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/marco/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/matplotlib.png?w=600&ssl=1
[2]: https://matplotlib.org/
[3]: https://matplotlib.org/stable/users/installing.html
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/gnuplot-1.png?w=600&ssl=1
[5]: http://www.gnuplot.info/
[6]: http://www.gnuplot.info/docs_5.4/Gnuplot_5_4.pdf
[7]: http://www.gnuplot.info/download.html
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/octave-1.png?w=600&ssl=1
[9]: https://www.gnu.org/software/octave/index#
[10]: https://www.gnu.org/software/octave/octave.pdf
[11]: https://www.gnu.org/software/octave/download
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/grace-1.jpg?w=600&ssl=1
[13]: https://plasma-gate.weizmann.ac.il/Grace/
[14]: https://plasma-gate.weizmann.ac.il/pub/grace/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/labplot-1.png?w=600&ssl=1
[16]: https://labplot.kde.org/
[17]: https://labplot.kde.org/features/
[18]: https://labplot.kde.org/documentation/
[19]: https://labplot.kde.org/support/
[20]: https://labplot.kde.org/download/
[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/root.jpeg?w=600&ssl=1
[22]: https://root.cern/
[23]: https://root.cern/manual/
[24]: https://root.cern/install/
[25]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/meet_plots.png?w=600&ssl=1
[26]: https://itsfoss.com/plots-graph-app/

View File

@ -0,0 +1,259 @@
[#]: subject: "How Knative unleashes the power of serverless"
[#]: via: "https://opensource.com/article/21/11/knative-serving-serverless"
[#]: author: "Savita Ashture https://opensource.com/users/savita-ashture"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How Knative unleashes the power of serverless
======
An exploration of how Knative Serving works in detail, how it achieves
the quick scaling it needs, and how it implements the features of
serverless.
![Ship captain sailing the Kubernetes seas][1]
[Knative][2] is an open source project based on the [Kubernetes][3] platform for building, deploying, and managing serverless workloads that run in the cloud, on-premises, or in a third-party data center. Google originally started it with contributions from more than 50 companies.
Knative allows you to build modern applications that are container-based and source-code-oriented.
### Knative Core Projects
Knative consists of two components: Serving and Eventing. It's helpful to understand how these interact before attempting to develop Knative applications.
![Knative Serving and Eventing][4]
(Savita Ashture, [CC BY-SA 4.0][5])
### Knative Serving 
[Knative Serving][6] is responsible for features revolving around deployment and the scaling of applications you plan to deploy. This also includes network topology to provide access to an application under a given hostname. 
Knative Serving focuses on:
* Rapid deployment of serverless containers.
* Autoscaling, including scaling pods down to zero.
* Support for multiple networking layers such as Ambassador, Contour, Kourier, Gloo, and Istio for integration into existing environments.
* Point-in-time snapshots of deployed code and configurations.
### Knative Eventing
[Knative Eventing][7] covers the [event-driven][8] nature of serverless applications. An event-driven architecture is based on the concept of decoupled relationships between event producers that create events and event consumers, or [_sinks_][9], that receive events.
Knative Eventing uses standard HTTP POST requests to send and receive events between event producers and sinks.
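Since Knative Eventing events follow the CloudEvents specification, a producer is just an HTTP client. Here is a minimal Python sketch of such a POST (the sink URL and all event attributes are made-up examples):

```
import json
import urllib.request

# Send a CloudEvent in binary content mode: the payload is the HTTP body,
# and the event attributes travel as ce-* headers.
request = urllib.request.Request(
    "http://broker-ingress.example.com/default/my-broker",  # hypothetical sink
    data=json.dumps({"message": "hello"}).encode(),
    headers={
        "Content-Type": "application/json",
        "ce-specversion": "1.0",
        "ce-type": "dev.example.greeting",
        "ce-source": "example/producer",
        "ce-id": "1234-5678",
    },
    method="POST",
)
urllib.request.urlopen(request)
```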
In this article, I focus on the Serving project since it is the most central project of Knative and helps deploy applications.
### The Serving project
Knative Serving defines a set of objects as Kubernetes Custom Resource Definitions (CRDs). These objects get used to define and control how your serverless workload behaves on the cluster:
![Knative Serving objects][10]
(Savita Ashture, [CC BY-SA 4.0][5])
* **Service**: A Knative Service describes a combination of a _route_ and a _configuration_ as shown above. It is a higher-level entity that does not provide any additional functionality. It should make it easier to deploy an application quickly and make it available. You can define the service to always route traffic to the latest revision or a pinned revision.
![Knative Service][11]
(Savita Ashture, [CC BY-SA 4.0][5])
* **Route**: The Route describes how a particular application gets called and how the traffic gets distributed across the different revisions. There is a high chance that several revisions are active in the system at any given time, depending on the use case. It's the responsibility of routes to split the traffic and assign it to revisions.
* **Configuration**: The Configuration describes what the corresponding deployment of the application should look like. It provides a clean separation between code and configuration and follows the [Twelve-Factor][12] App methodology. Modifying a configuration creates a new revision.
* **Revision**: The Revision represents the state of a configuration at a specific point in time. A revision, therefore, gets created from the configuration. Revisions are immutable objects, and you can retain them for as long as useful. Several revisions per configuration may be active at any given time, and you can automatically scale up and down according to incoming traffic.
### Deploying an application using Knative Service
To write an example Knative Service, you must have a Kubernetes cluster running. If you don't have a cluster, you can run a local [single-node cluster with Minikube][13]. Your cluster must have at least two CPUs and 4GB RAM available.
You must also install Knative Serving and its required dependencies, including a networking layer with configured DNS.
Follow the [official installation instructions][14] before continuing.
Here's a simple YAML file (I call it `article.yaml`) that deploys a Knative Service:
```
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
 name: knservice
 namespace: default
spec:
 template:
   spec:
     containers:
       - image: docker.io/##DOCKERHUB_NAME##/demo
```
Where `##DOCKERHUB_NAME##` is your Docker Hub username.
For example, `docker.io/savita/demo`.
This is a minimalist YAML definition for creating a Knative application.
Users and developers can tweak YAML files by adding more attributes based on their unique requirements.
```
$ kubectl apply -f article.yaml
service.serving.knative.dev/knservice created
```
That's it! You can now observe the different resources available by using `kubectl` as you would for any other Kubernetes process.
Take a look at the **service**:
```
$ kubectl get ksvc
NAME              URL                                                      LATESTCREATED                 LATESTREADY       READY   REASON
knservice         http://knservice.default.example.com                       knservice-00001               knservice-00001   True
```
You can view the **configuration**:
```
$ kubectl get configurations
NAME         LATESTCREATED     LATESTREADY       READY   REASON
knservice    knservice-00001   knservice-00001   True
```
You can also see the **routes**:
```
$ kubectl get routes
NAME          URL                                    READY   REASON
knservice     http://knservice.default.example.com     True
```
You can view the **revision**:
```
$ kubectl get revision
NAME                       CONFIG NAME   K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS
knservice-00001            knservice                        1            True             1                 1
```
You can see the **pods** that got created:
```
$ kubectl get pods
NAME                                          READY    STATUS     RESTARTS   AGE
knservice-00001-deployment-57f695cdc6-pbtvj   2/2      Running    0          2m1s
```
### Scaling to zero
One of the properties of Knative is to scale down pods to zero if no request gets made to the application. This happens if the application does not receive any more requests for five minutes.
```
$ kubectl get pods
No resources found in default namespace.
```
The application is scaled down to zero instances and no longer needs any resources. And this is one of the core principles of serverless: If no resources are required, then none are consumed.
### Scaling up from zero
As soon as the application is used again (meaning that a request comes to the application), it immediately scales to an appropriate number of pods. You can see that by using the [curl command][15]:
```
$ curl http://knservice.default.example.com
Hello Knative!
```
Since scaling needs to occur first, and at least one pod must be created, the request usually takes a bit longer in most cases. Once it successfully finishes, the pod list looks just like it did before:
```
$ kubectl get pods
NAME                                          READY    STATUS     RESTARTS   AGE
knservice-00001-deployment-57f695cdc6-5s55q   2/2      Running    0          3s
```
### Conclusion
Knative has all the best practices that a serverless framework requires. For developers who already use Kubernetes, Knative is an extension solution that is easily accessible and understandable.
In this article, I've shown how Knative Serving works in detail, how it achieves the quick scaling it needs, and how it implements the features of serverless.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/knative-serving-serverless
作者:[Savita Ashture][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/savita-ashture
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_captain_devops_kubernetes_steer.png?itok=LAHfIpek (Ship captain sailing the Kubernetes seas)
[2]: https://knative.dev/docs/
[3]: https://opensource.com/resources/what-is-kubernetes
[4]: https://opensource.com/sites/default/files/uploads/knative_serving-eventing.png (Knative Serving and Eventing)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://github.com/knative/serving
[7]: https://github.com/knative/eventing
[8]: https://www.redhat.com/architect/event-driven-architecture-essentials
[9]: https://knative.dev/docs/developer/eventing/sinks/
[10]: https://opensource.com/sites/default/files/uploads/knative-serving.png (Knative Serving objects)
[11]: https://opensource.com/sites/default/files/uploads/knative-service.png (Knative Service)
[12]: https://12factor.net/
[13]: https://opensource.com/article/18/10/getting-started-minikube
[14]: https://knative.dev/docs/admin/install/serving/install-serving-with-yaml/#install-the-knative-serving-component
[15]: https://www.redhat.com/sysadmin/use-curl-api

View File

@ -0,0 +1,352 @@
[#]: subject: "What you need to know about cluster logging in Kubernetes"
[#]: via: "https://opensource.com/article/21/11/cluster-logging-kubernetes"
[#]: author: "Mike Calizo https://opensource.com/users/mcalizo"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
What you need to know about cluster logging in Kubernetes
======
Explore how different container logging patterns in Kubernetes work.
![Wheel of a ship][1]
Server and application logging is an important facility for developers, operators, and security teams to understand an application's state running in their production environment.
Logging allows operators to determine if the applications and the required components are running smoothly and detect if something unusual is happening so they can react to the situation.
For developers, logging gives visibility to troubleshoot the code during and after development. In a production setting, the developer usually relies on a logging facility without debugging tools. Coupled with logging from the systems, developers can work hand in hand with operators to effectively troubleshoot issues.
The most important beneficiary of logging facilities is the security team, especially in a cloud-native environment. Having the ability to collect information from application and system logs enables the security team to analyze data from authentication and application access to malware activity, and to respond if needed.
Kubernetes is the leading container platform where more and more applications get deployed in production. I believe that understanding the logging architecture of Kubernetes is a very important endeavor that every Dev, Ops, and Security team needs to take seriously.
In this article, I discuss how different container logging patterns in Kubernetes work.
### System logging and application logging
Before I dig deeper into the Kubernetes logging architecture, I'd like to explore the different logging approaches and how both functionalities are critical features of Kubernetes logging.
There are two types of system components: Those that run in a container and those that do not. For example:
* The Kubernetes scheduler and `kube-proxy` run in a container.
* The `kubelet` and container runtime do not run in containers.
Similar to container logs, system container logs get stored in the `/var/log` directory, and you should rotate them regularly.
Here I consider container logging. First, I look at cluster-level logging and why it is important for cluster operators. Cluster logs provide information about how the cluster performs, such as why pods got evicted or a node died. Cluster logging can also capture information like cluster and application access and how the application utilizes compute resources. Overall, a cluster logging facility provides cluster operators with information that is useful for cluster operation and security.
The other way to capture container logs is through the application's native logging facility. Modern application design most likely has a logging mechanism that helps developers troubleshoot application performance issues through standard output (`stdout`) and standard error (`stderr`) streams.
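For example, a twelve-factor-style app simply writes each event to `stdout` or `stderr` and leaves collection to the platform. A minimal Python sketch (the JSON field names are illustrative):

```
import logging
import sys

# Emit one JSON-ish object per line on stdout; the container runtime
# captures the stream, so the app never manages log files itself.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    '{"time": "%(asctime)s", "level": "%(levelname)s", "msg": "%(message)s"}'
))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment processed")   # ends up in the pod's container log
```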
To have an effective logging facility, Kubernetes implementation requires both app and system logging components.
### 3 types of Kubernetes container logging
There are three prominent methods of cluster-level logging that you see in most of the Kubernetes implementations these days.
1. Node-level logging agent
2. Sidecar container application for logging
3. Exposing application logs directly to logging backend
#### Node-level logging agent
I'd like to consider the node-level logging agent first. You usually implement this using a DaemonSet as a deployment strategy to deploy a pod (which acts as a logging agent) on all the Kubernetes nodes. The logging agent is then configured to read the logs from all Kubernetes nodes. You usually configure the agent to read the node's `/var/log` directory, capturing `stdout`/`stderr` streams, and to send the logs to the logging backend storage.
The figure below shows node-level logging running as an agent in all the nodes.
![Node-level logging agent][2]
(Mike Calizo, [CC BY-SA 4.0][3])
To set up node-level logging using the `fluentd` approach as an example, you need to do the following:
1. First, you need to create a ServiceAccount called `fluentd`. This service account is used by the Fluentd pods to access the Kubernetes API, and you need to create it in the logging Namespace with the label `app: fluentd`.

```
#fluentd-SA.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: logging
  labels:
    app: fluentd
```

You can view the complete example in this [repo][4].
2. You then need to create a ConfigMap called `fluentd-configmap`. This provides a config file to the Fluentd DaemonSet with all the required properties. You can view the complete example in this [repo][4].
Now, look at the code for deploying Fluentd as a DaemonSet to act as the log agent.
```
#fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
  labels:
    app: fluentd
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      app: fluentd
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        app: fluentd
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.7.3-debian-elasticsearch7-1.0
        env:
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.logging.svc.cluster.local"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_ELASTICSEARCH_USER
            value: "elastic"
          - name: FLUENT_ELASTICSEARCH_PASSWORD
            valueFrom:
              secretKeyRef:
                name: efk-pw-elastic
                key: password
          - name: FLUENT_ELASTICSEARCH_SED_DISABLE
            value: "true"
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluentconfig
          mountPath: /fluentd/etc/fluent.conf
          subPath: fluent.conf
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluentconfig
        configMap:
          name: fluentdconf
```
To put this together: 
```
kubectl apply -f fluentd-SA.yaml \
              -f fluentd-configmap.yaml \
              -f fluentd-daemonset.yaml
```
#### Sidecar container application for logging
The other approach is to use a dedicated sidecar container with a logging agent. The most common implementation of the sidecar container uses [Fluentd][5] as a log collector. In an enterprise deployment (where you won't worry about a little compute resource overhead), a sidecar container running Fluentd (or a [similar][6] implementation) offers more flexibility than cluster-level logging, because you can tune and configure the collector agent for the type of logs you need to capture, their frequency, and other possible tunings.
The figure below shows a sidecar container as a logging agent.
![Sidecar container as logging agent][7]
(Mike Calizo, [CC BY-SA 4.0][3])
For example, a pod runs a single container, and the container writes to two different log files using two different formats. Here's a configuration file for the pod:
```
#log-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```
To put this together, you can run this pod:
```
$ kubectl apply -f log-sidecar.yaml
```
To verify if the sidecar container works as a logging agent, you can do:
```
$ kubectl logs counter count-log
```
The expected output should look like this:
```
$ kubectl logs counter count-log
0: Thu 04 Nov 2021 09:23:21 NZDT
1: Thu 04 Nov 2021 09:23:22 NZDT
2: Thu 04 Nov 2021 09:23:23 NZDT
3: Thu 04 Nov 2021 09:23:24 NZDT
```
#### Exposing application logs directly to logging backend
The third approach, which (in my opinion) is the most flexible logging solution for Kubernetes container and application logs, is to push the logs directly to the logging backend solution. Although this pattern does not rely on a native Kubernetes capability, it offers flexibility that most enterprises need, like:
1. Extends support for a wider variety of network protocols and output formats.
2. Allows load-balancing capability and enhances performance.
3. Is configurable to accept complex logging requirements through upstream aggregation.
Because this third approach pushes logs directly from every application and does not rely on Kubernetes features, it is outside the Kubernetes scope.
### Conclusion
The Kubernetes logging facility is a very important component for an enterprise deployment of a Kubernetes cluster. I discussed three possible patterns that are available for use. You need to find a suitable pattern for your needs.
As shown, node-level logging using a `daemonset` is the easiest deployment pattern to use, but it also has some limitations that might not fit your organization's needs. On the other hand, the sidecar pattern offers flexibility and customization that let you choose what types of logs to capture, at the cost of some compute resource overhead. Finally, exposing application logs directly to the backend log facility is another enticing approach that allows further customization.
The choice is yours. You just need to find the approach that fits your organization's requirements.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/cluster-logging-kubernetes
作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Wheel of a ship)
[2]: https://opensource.com/sites/default/files/uploads/node-level-logging-agent.png (Node-level logging agent)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://github.com/mikecali/kubernetes-logging-example-article
[5]: https://www.fluentd.org/
[6]: https://www.g2.com/products/fluentd/competitors/alternatives
[7]: https://opensource.com/sites/default/files/uploads/sidecar-container-as-logging-agent.png (Sidecar container as logging agent)

View File

@ -0,0 +1,313 @@
[#]: subject: "5 De-Googled Android-based Operating Systems to Free Your Smartphone from Google and other Big Tech"
[#]: via: "https://itsfoss.com/android-distributions-roms/"
[#]: author: "Pratham Patel https://itsfoss.com/author/pratham/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
5 De-Googled Android-based Operating Systems to Free Your Smartphone from Google and other Big Tech
======
With the ever-growing surveillance presence of advertising giants like Google and Facebook on personal and intimate devices like phones and tablets, it is time to deal with it.
You might be wondering why you should install a different Android-based OS on your phone than what is already included. Let me give you a few reasons:
* Your [phone manufacturer partners with entities like Facebook to pre-install apps][1] on your phone, and simply uninstalling these apps won't net you any less surveillance (they tend to get reinstalled when there is a new OS update).
* [Android phone manufacturers don't have any incentive to provide you with OS and security updates][2]; using an alternative operating system helps your device get the necessary updates even after the vendor stops supporting it. Yes, your smartphone officially gets 3-4 years of support, but it doesn't need to be thrown away after that.
* Since these off-the-shelf Android ROMs don't bundle anything other than what is necessary, your phone can feel more responsive due to less bloat.
* Less pre-installed software also means fewer services running in the background, resulting in better battery life.
* A lot of customization options.
* Easy to roll back updates (because previous versions are available on the ROM's website).
WARNING
Please be careful if you decide to use any of these operating systems on an actual device. Flashing any third-party ROM on your device will void its warranty and may even render your device useless if not done correctly.
Installing a custom ROM also needs a certain level of expertise, and even then you could encounter issues, especially if the device is not supported by your operating system of choice. It's better to try with an older, disused smartphone.
We take no responsibility for any damage caused to your device.
This list specifically focuses on Android-based distributions and custom ROMs. We have a separate list of [open source mobile operating systems][3] that includes options such as Ubuntu Touch and PureOS.
### 1\. LineageOS
[LineageOS][4] is arguably the most popular Android ROM. It is a fork of the very popular ([but dead since 2016][5]) CyanogenMod Android firmware/OS. Due to the popularity of LineageOS, it [has support for the vast majority of Android phones][6].
This popularity also means that brand-new phones get included in the LineageOS project sooner than in other Android-based ROMs.
LineageOS even supports Nvidia Shield TV set-top boxes. How amazing is that?
![A few screenshots of Lineage OS User Interface][7]
#### Pros
* One of the most popular Android ROMs
* Excellent first-party and third-party documentation due to its popularity
* The LineageOS ROM is (in theory) as secure as the Android Open Source Project
* Extends your phone's life cycle by providing OS updates even after the phone vendor stops providing them
* Timely updates for officially supported devices
* LineageOS follows the AOSP tree very closely (for people who want the most stock Android experience)
* Less “preinstalled bloatware” compared to your stock factory firmware
#### Cons
* Can feel “incomplete”, since no Google apps like YouTube/Gmail/Photos etc. are included
* The LineageOS project is a community effort, so not all hardware features of your phone may work right out of the box
* LineageOS cannot make your phone more secure if the [vendor blobs][8] themselves pose a security risk
* Unlocking the bootloader is a necessary step (for all ROMs), and doing so can pose security issues
* Banking apps may be a hit or miss ([Read more here][9])
[Get LineageOS][10]
### 2\. CalyxOS
[CalyxOS][11] is a rather interesting Android OS based on the [Android Open Source Project (AOSP)][12]. Instead of not shipping the Google Mobile Services (GMS) and leaving users to figure things out by themselves (flashing gapps, etc.), CalyxOS ships with [microG][13].
CalyxOS is backed by the [Calyx Institute][14], a non-profit organization that promotes individual rights like free speech and privacy.
![The CalyxOS homepage along with a glimpse of their User Interface][15]
#### Pros
* Uses [microG][13]
* Ships with [F-Droid][16] and the [Aurora Store][17] instead of the Google Play Store
* The Datura firewall allows you to block internet access per app
* Uses [Mozilla Location Services][18] instead of Google's location services
* Monthly over-the-air security updates
* Has verified boot for increased security
* The phone dialer automatically makes a Signal call if the recipient has Signal
* CalyxOS locks the bootloader after installation, reducing security-related attack vectors
#### Cons
* Only available on Pixel phones ([but there is a good reason behind this][19])
* As with all ROMs, bootloader unlock is required (and may lead to warranty issues)
* Flashing a third-party ROM on your phone has the possibility of bricking the phone
* Installing apps that you have paid for can be harder ([“_not so privacy friendly_” workaround][20])
* Banking apps can be a hit or miss on CalyxOS
[Get CalyxOS][21]
### 3\. GrapheneOS
[GrapheneOS][22] is an Android-based ROM focused on security and privacy, although one may argue that their efforts have gone more toward increasing security, which in turn also benefits your privacy.
Neither is a bad thing; just know that GrapheneOS is oriented more toward people who especially value security.
Their team works around the clock to harden the security of many parts of the base AOSP and provide you with one of the best security-oriented Android ROMs. [GrapheneOS can even sandbox Google's Play Services][23].
![A stock photo of GrapheneOS installed on a Pixel device][24]
#### Pros
* Provides stronger and hardened app sandboxing than AOSP
* Uses its own [hardened malloc][25] (memory allocator with hardened security)
* The Linux kernel is hardened for better security
  * Provides timely security updates (within a day or three)
* Ships with Full Disk Encryption (very important for a mobile device)
  * Doesn't include any Google apps or Google services
#### Cons
  * Limited hardware support; only available for Google Pixels
  * Its hardcore approach to security (sandboxing) has led to headaches and is not recommended for new users
  * Push notifications don't work _out-of-the-box_ for most apps (due to the lack of GMS)
  * Security features like restricting mobile connectivity to LTE-only may seem a bit unnecessary for the average Joe
  * Google SafetyNet, which your banking apps require, doesn't work out of the box
[Get GrapheneOS][26]
### 4\. /e/OS
You may think that [/e/OS][27] is yet another Android operating system. You would be _partially_ right. Don't dismiss this Android ROM just yet; it packs much more than any off-the-shelf Android-based operating system.
The biggest outstanding feature is that the [eFoundation][28] (which is behind /e/OS) provides you with a free [ecloud account][29] (with 1GB of storage), instead of you needing to use your Google account.
Like any privacy respecting Android ROM, /e/OS replaces every single Google related module or app with a FOSS alternative.
_Side note_: The eFoundation also sells phones with /e/OS pre-installed. [Check it out here][30].
![A look at the app launcher in /e/OS and also an overview of the App Store ratings on /e/OS][31]
#### Pros
  * The app store on /e/OS rates apps based on how many permissions they need and how privacy-friendly each app is
* Provides an [ecloud account][29] (with a @e.email; 1GB in free tier) as a synchronization account
* Ships with [microG][13] framework
* Google DNS servers (8.8.8.8 and 8.8.4.4) are replaced with [Quad9][32] DNS servers
* DuckDuckGo is the default search engine replacing Google
  * Google NTP servers are replaced with pool.ntp.org's servers
* Uses location services provided by [Mozilla][18]
#### Cons
* Device compatibility is very limited ([list of supported devices][33])
* On top of limited device compatibility, only older phones are supported
  * No indication that SafetyNet support is being worked on; at the moment, SafetyNet is not working
* Roll-out of new features from Android takes a while
[Get /e/OS][34]
### 5\. CopperheadOS
[CopperheadOS][35] is another of the best security-oriented Android ROMs for your [Pixel] phone. It was developed by a team of just two people, as a startup that used to sell Nexus phones (RIP) and Google Pixel phones with CopperheadOS pre-installed.
Just like CyanogenMod, CopperheadOS used to be the gold standard among security-oriented Android ROMs. Unfortunately, due to an issue that I will not get into, the main developer parted ways with CopperheadOS.
![The CopperheadOS website banner regarding security and privacy on your phone][36]
#### Pros
* [Unparalleled documentation][37], compared to any other Android ROM documentation
* CopperheadOS has had many of the security oriented features before AOSP itself
  * Uses Cloudflare DNS (1.1.1.1 and 1.0.0.1) instead of Google's DNS (8.8.8.8 and 8.8.4.4)
  * Includes an internet firewall for per-app permissions
* Uses Open Source apps instead of obsolete AOSP apps (Calendar, SMS, Gallery etc)
* Includes [F-Droid][38] and the [Aurora App Store][17]
#### Cons
* [Questionable claims about the security of CopperheadOS after the main dev went different ways][39]
  * The original aim towards security feels abandoned in favor of an organization that sells phones pre-loaded with CopperheadOS
* No indication of SafetyNet working on CopperheadOS
[Get CopperheadOS][40]
### Honourable mention: LineageOS for microG
The [LineageOS for microG][41] project is a fork of the official LineageOS with [microG][13] and Google Apps (GApps) included by default. This project takes care of making sure that microG works flawlessly on your phone (which can be a complicated process for a beginner).
![A list of stock apps included in LineageOS for microG][42]
#### Pros
* Provides the microG implementation of GMS without any inconveniences
* Comes with [F-Droid][38] as the default App Store
* Provides weekly/monthly over-the-air updates
* Has option to use location service provided by either [Mozilla][18], or by [Nominatim][43]
#### Cons
  * Enabling signature spoofing for microG support can be an attack vector from a security POV
  * Even though this ROM is based on LineageOS, not all LineageOS devices are supported as of this writing
* Includes Google Apps (GApps) instead of providing Open Source alternatives
* No confirmation if Googles SafetyNet is working or not
[Get LineageOS for microG][44]
### Miscellaneous
You may be wondering why some of the interesting Android-based ROMs (CalyxOS, GrapheneOS, etc.) are restricted to supporting only Google's phones. Isn't _that_ ironic?
Well, that is because most phones support unlocking the bootloader, but only Google Pixels support locking the bootloader again. That matters when you are developing an Android-based ROM for a privacy and/or security focused crowd: if the bootloader stays unlocked, it is an attack vector that you haven't patched yet.
Another reason for this irony is that only Google makes its phones' device trees and kernel source code available to the public in a timely manner. You cannot develop a ROM for a phone without its device tree and kernel source code.
I would also recommend the following FOSS apps regardless of your ROM choice. They will prove to be a nice addition to your privacy friendly app toolkit.
* [Signal Messenger][45]
* [K-9 Mail][46]
* [DuckDuckGo Browser][47]
  * [Tor Browser][48]
* [F-Droid][16]
* [Aurora Store][17]
* [OpenKeychain][49]
### Conclusion
In my opinion, if you have a Google Pixel phone, give CalyxOS, GrapheneOS, or CopperheadOS a try. These Android ROMs have excellent features to help you keep your phone out of Google's spying eyes while also keeping your phone [arguably] more secure.
If you do not have a Google Pixel, you can still give LineageOS for microG a try. It is a good community effort to bring Google's proprietary features to the masses without invading your privacy.
If your phone isn't supported by any of the operating systems mentioned above, LineageOS is your friend. Thanks to its wide range of supported phones, yours will almost certainly be supported in some capacity, be it officially or unofficially.
--------------------------------------------------------------------------------
via: https://itsfoss.com/android-distributions-roms/
作者:[Pratham Patel][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pratham/
[b]: https://github.com/lujun9972
[1]: https://9to5google.com/2020/08/05/oneplus-phones-will-ship-with-facebook-services-that-cant-be-removed/
[2]: https://www.computerworld.com/article/3175067/android-upgrade-problem.html
[3]: https://itsfoss.com/open-source-alternatives-android/
[4]: https://lineageos.org/
[5]: https://www.androidauthority.com/cyanogenmod-lineageos-654810/
[6]: https://wiki.lineageos.org/devices/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/01_lineageos.webp?resize=800%2C400&ssl=1
[8]: https://www.reddit.com/r/Android/comments/52ovsh/comment/d7m2pnr/?utm_source=share&utm_medium=web2x&context=3
[9]: https://lineageos.org/Safetynet/
[10]: https://download.lineageos.org/
[11]: https://calyxos.org/
[12]: https://www.androidauthority.com/aosp-explained-1093505/
[13]: https://microg.org/
[14]: https://calyxinstitute.org/projects/calyx-os
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/02_calyx-1.webp?resize=800%2C449&ssl=1
[16]: https://www.f-droid.org/en/about/
[17]: https://auroraoss.com/
[18]: https://location.services.mozilla.com/
[19]: https://calyxos.org/docs/guide/device-support/#requirements-for-supporting-a-new-device
[20]: https://auroraoss.com/faq/#how-do-i-purchase-paid-apps-without-using-the-play-store-app
[21]: https://calyxos.org/install/
[22]: https://grapheneos.org/
[23]: https://twitter.com/grapheneos/status/1445572173389725709
[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/03_graphene.webp?resize=800%2C600&ssl=1
[25]: https://github.com/GrapheneOS/hardened_malloc
[26]: https://grapheneos.org/install/
[27]: https://e.foundation/e-os/
[28]: https://e.foundation/
[29]: https://e.foundation/ecloud/
[30]: https://esolutions.shop/
[31]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/06_eos-1.webp?resize=600%2C539&ssl=1
[32]: https://www.quad9.net/
[33]: https://doc.e.foundation/easy-installer#list-of-devices-supported-by-the-easy-installer
[34]: https://doc.e.foundation/devices
[35]: https://copperhead.co/
[36]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/04_copperheados.webp?resize=800%2C420&ssl=1
[37]: https://copperhead.co/android/docs/
[38]: https://f-droid.org/en/about/
[39]: https://twitter.com/DanielMicay/status/1068641901157511168
[40]: https://copperhead.co/android/docs/install/
[41]: https://lineage.microg.org/
[42]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/05_losmicrog.webp?resize=450%2C800&ssl=1
[43]: https://nominatim.org/
[44]: https://download.lineage.microg.org/
[45]: https://signal.org/
[46]: https://k9mail.app/
[47]: https://duckduckgo.com/app
[48]: https://www.torproject.org/
[49]: https://www.openkeychain.org/

View File

@ -0,0 +1,219 @@
[#]: subject: "Linux tips for using cron to schedule tasks"
[#]: via: "https://opensource.com/article/21/11/cron-linux"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Linux tips for using cron to schedule tasks
======
Schedule backups, file cleanups, and other tasks by using this simple
yet powerful Linux command-line tool. Download our new cron cheat sheet.
![Linux keys on the keyboard for a desktop computer][1]
Making things happen on a regular and predictable schedule is important on computers. It's important because, as humans, we can be bad at remembering to do things reliably: we get distracted, have too much on our minds, or we're on holiday. Computers are really good at doing things on a schedule, but a human has to program the computer before the computer takes action.
In a way, the `cron` system is an easy and rudimentary introduction to programming. You can make your computer do what you want it to do just by editing a file. You don't even have to know where the file is kept. You have only to type in a simple command, enter the "recipe" you want your computer to follow, and save your work. From then on, your computer executes your instructions at the specified time until it is told to stop.
By design, `cron` is not a complex system. Here's what you need to know about it.
### What is cron?
The `cron` command is so ubiquitous in Linux and Unix, and it's been mimicked and reinvented so often, that it's almost a generic term for _something that happens on a schedule_. It's a form of automation, and although there are different implementations of it (Dillon's cron, Vixie's cron, cronie, and others) and variations like [`anacron`][2] and [systemd timers][3], the syntax and workflow have remained essentially the same for decades.
Cron works on a "spool" system, much like printers and email. If you didn't know that printers and email use a spool, that's okay because the point of a spool file is that you aren't supposed to think about it much. On a Linux system, the directory `/var/spool` is designed as a central hub for important but low-level files that the user isn't meant to interact with directly. One of the spools managed in `/var/spool` is `cron` tables or "crontab" for short. Every user—yourself included—on a Linux system has a crontab. Users can edit, view, and remove their own crontab. In addition, users can use their crontab to schedule tasks. The `cron` system itself monitors crontabs and ensures that any job listed in a crontab is executed at its specified time.
### Edit cron settings
You can edit your crontab using the `crontab` command along with the `-e` (for _edit_) option. By default, most systems invoke the `vim` text editor. If you, like me, don't use Vim, then you can set a different editor for yourself in your `~/.bashrc` file. I set mine to Emacs, but you might also try [Nano][4], [Kate][5], or whatever your favorite editor happens to be. The **EDITOR** environment variable defines what text editor you use in your terminal, while the **VISUAL** variable defines what editor you use in a graphical mode:
```
export EDITOR=nano
export VISUAL=kate
```
Refresh your shell session with your new settings:
```
`$ source ~/.bashrc`
```
Now you can edit your crontab with your preferred editor:
```
`$ crontab -e`
```
#### Schedule a task
The `cron` system is essentially a calendaring system. You can tell `cron` how frequently you want a job to run by using five different attributes: minute, hour, date, month, weekday. The order of these attributes is strict and not necessarily intuitive, but you can think of them as filters or masks. By default, you might think of everything being set to _always_ or _every_. This entry would run `touch /tmp/hello` at the top of every minute during every hour of every day all year long:
```
`* * * * * touch /tmp/hello`
```
You can restrict this all-encompassing schedule by setting specific definitions for each attribute. To make the job run on the half-hour mark of each hour, set the minutes to **30**:
```
`30 * * * * touch /tmp/hello`
```
You can further constrain this instruction with a specific hour. This job runs at 3:30 AM every morning:
```
`30 3 * * * touch /tmp/hello`
```
You can also make the job run only on the first of each month:
```
`30 3 1 * * touch /tmp/hello`
```
You can set a month using 1 for January up to 12 for December, and you can set a day using 0 for Sunday up to 6 for Saturday. This job runs at 3:15 during the month of April, only on Mondays:
```
`15 3 * 4 1 touch /tmp/hello`
```
### Set increments
All of these settings match a value _exactly_. You can also use `cron` notation to run jobs after a set passage of time. For instance, you can run a job every 15 minutes:
```
`*/15 * * * * touch /tmp/hello`
```
You could run a job at 10 AM every three days:
```
`0 10 */3 * * touch /tmp/hello`
```
You could run a job every six hours:
```
`0 */6 * * * touch /tmp/hello`
```
### Cron shorthand
Modern `cron` implementations have added a convenient shorthand for common schedules. These are:
* `@hourly`
* `@daily`
* `@weekly`
* `@monthly`
  * `@yearly` or `@annually`
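For instance, this entry (a minimal sketch reusing the `touch` example from above) runs once a day at midnight on most implementations:
```
`@daily touch /tmp/hello`
```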
### List cron jobs
Using the `crontab` command, you can see a list of your scheduled `cron` jobs:
```
$ crontab -l
15 3 * 4 1 touch /tmp/hello
```
### Remove a crontab
When you're done with a crontab, you can remove it with the `-r` option:
```
`$ crontab -r -i`
```
The `-i` option stands for _interactive_. It prompts you for confirmation before deleting the file.
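If you want a safety net before removing anything, you can first dump your current table to an ordinary file and load it back later; a simple sketch (the backup filename is arbitrary):
```
$ crontab -l > my-crontab.backup
$ crontab my-crontab.backup
```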
### What cron can do
It's one thing to know how to use `cron`, but it's another thing to know what to use it for. The classic use case is a good backup plan. If your computer is on for most of the day or all day and all night, then you can schedule a routine backup of an important partition. I run a backup application called `rdiff-backup` on my primary data partition daily at 3 AM:
```
$ crontab -l | grep rdiff
0 3 * * * rdiff-backup /data/ /vault/
```
Another common use is system maintenance. On my Slackware desktop, I update my local repository catalog every Friday afternoon:
```
$ crontab -l | grep slack
0 14 * * 5 sudo slackpkg update
```
I could also run an Ansible script at 15:00 every three days to [tidy up my Downloads folder][6]:
```
$ crontab -l | grep ansible
0 15 */3 * * ansible-playbook /home/seth/Ansible/cleanup.yaml
```
A little investment in the health of your computing environment goes a long way. There are de-duplication scripts, file size and `/tmp` directory monitors, photo resizers, file movers, and many more menial tasks you could schedule to run in the background to help keep your system uncluttered. With `cron`, your computer can take care of itself in ways I only wish my physical apartment would.
### Remember cron settings
Besides coming up with _why_ you need `cron`, the hardest thing about `cron` in my experience has been remembering its syntax. Repeat this to yourself, over and over until you've committed it to memory:
_Minutes, hours, date, month, weekday._
_Minutes, hours, date, month, weekday._
_Minutes, hours, date, month, weekday._
Better yet, go [download our free cheatsheet][7] so you have the key close at hand when you need it the most!
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/cron-linux
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer)
[2]: https://opensource.com/article/21/2/linux-automation
[3]: https://opensource.com/article/20/7/systemd-timers
[4]: https://opensource.com/article/20/12/gnu-nano
[5]: https://opensource.com/article/20/12/kate-text-editor
[6]: https://opensource.com/article/21/9/keep-folders-tidy-ansible
[7]: https://opensource.com/downloads/linux-cron-cheat-sheet

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,229 @@
[#]: subject: "Debugging a weird 'file not found' error"
[#]: via: "https://jvns.ca/blog/2021/11/17/debugging-a-weird--file-not-found--error/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Debugging a weird 'file not found' error
======
Yesterday I ran into a weird error where I ran a program and got the error “file not found” even though the program I was running existed. It's something I've run into before, but every time I'm very surprised and confused by it (what do you MEAN file not found, the file is RIGHT THERE???!!??)
So let's talk about what happened and why!
### the error
Let's start by showing the error message I got. I had a Go program called [serve.go][1], and I was trying to bundle it into a Docker container with this Dockerfile:
```
FROM golang:1.17 AS go
ADD ./serve.go /app/serve.go
WORKDIR /app
RUN go build serve.go
FROM alpine:3.14
COPY --from=go /app/serve /app/serve
COPY ./static /app/static
WORKDIR /app/static
CMD ["/app/serve"]
```
This Dockerfile
1. Builds the Go program
2. Copies the binary into an Alpine container
Pretty simple. Seems like it should work, right?
But when I try to run `/app/serve`, this happens:
```
$ docker build .
$ docker run -it broken-container:latest /app/serve
standard_init_linux.go:228: exec user process caused: no such file or directory
```
But the file definitely does exist:
```
$ docker run -it broken-container:latest ls -l /app/serve
-rwxr-xr-x 1 root root 6220237 Nov 16 13:27 /app/serve
```
So what's going on?
### idea 1: permissions
At first I thought “hmm, maybe the permissions are wrong?”. But this can't be the problem, because:
  * permission problems don't result in a “no such file or directory” error
  * in any case, when we ran `ls -l`, we saw that the file was executable
(I'm including this even though it's “obviously” wrong just because I have a lot of wrong thoughts when debugging; it's part of the process :) )
### idea 2: strace
Then I decided to use strace, as always. Let's see what stracing `/app/serve` looks like:
```
$ docker run -it broken-container:latest /bin/sh
$ /app/static # apk add strace
(apk output omitted)
$ /app/static # strace /app/serve
execve("/app/serve", ["/app/serve"], 0x7ffdd08edd50 /* 6 vars */) = -1 ENOENT (No such file or directory)
strace: exec: No such file or directory
+++ exited with 1 +++
```
This is not that helpful; it just says “No such file or directory” again. But at least we know that the error is being thrown right away by the `execve` system call, so that's good.
Interestingly though, this is different from what happens when we try to strace a nonexistent binary:
```
$ strace /app/asdf
strace: Can't stat '/app/asdf': No such file or directory
```
### idea 3: google “enoent but file exists execve”
I vaguely remembered that there was some reason you could get an `ENOENT` error when executing a program even if the file did exist, so I googled it. This led me to [this stack overflow answer][2]
which said, very helpfully:
> When execve() returns the error ENOENT, it can mean more than one thing:
> 1\. the program doesn't exist;
> 2\. the program itself exists, but it requires an “interpreter” that doesn't exist.
>
> ELF executables can request to be loaded by another program, in a way very similar to `#!/bin/something` in shell scripts.
That answer says that we can find the interpreter with `readelf -l $PROGRAM | grep interpreter`. So let's do that!
### step 4: use `readelf`
I didn't have `readelf` installed in the container and I wasn't sure how to install it, so I ran `mount` to get the path to the container's filesystem and then ran `readelf` from the host using that overlay directory.
(as an aside: this is kind of a weird way to do this, but as a result of writing a [containers zine][3] I'm used to doing weird things with containers, and I think doing weird things is fun, so this way just seemed fastest to me at the time. That trick won't work if you're on a Mac though; it only works on Linux)
```
$ mount | grep docker
overlay on /var/lib/docker/overlay2/1ed587b302af7d3182135d02257f261fd491b7acf4648736d4c72f8382ecba0d/merged type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/326ILTM2UXMVY64V7JFPCSDSKG:/var/lib/docker/overlay2/l/MGGPR357UOZZWXH3SH2AYHJL3E:/var/lib/docker/overlay2/l/EEEKSBSQ6VHGJ77YF224TBVMNV:/var/lib/docker/overlay2/l/RVKU36SQ3PXEQAGBRKSQRZFDGY,upperdir=/var/lib/docker/overlay2/1ed587b302af7d3182135d02257f261fd491b7acf4648736d4c72f8382ecba0d/diff,workdir=/var/lib/docker/overlay2/1ed587b302af7d3182135d02257f261fd491b7acf4648736d4c72f8382ecba0d/work,index=off)
$ # (then I copy and paste the "merged" directory from the output)
$ readelf -l /var/lib/docker/overlay2/1ed587b302af7d3182135d02257f261fd491b7acf4648736d4c72f8382ecba0d/merged/app/serve | grep interp
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
01 .interp
03 .text .plt .interp .note.go.buildid
```
Okay, so the interpreter is `/lib64/ld-linux-x86-64.so.2`.
And sure enough, that file doesn't exist inside our Alpine container:
```
$ docker run -it broken-container:latest ls /lib64/ld-linux-x86-64.so.2
```
### step 5: victory!
Then I googled a little more and found out that there's a `golang:alpine` container that's meant for doing Go builds targeted to run in Alpine.
I switched to doing my build in the `golang:alpine` container and that fixed everything.
### question: why is my Go binary dynamically linked?
The problem was with the program's interpreter. But I remembered that only dynamically linked programs have interpreters, which is a bit weird: I expected my Go binary to be statically linked! What's going on with that?
First, I double-checked that the Go binary was actually dynamically linked using `file` and `ldd`. (`ldd` lists the dependencies of a dynamically linked executable! It's very useful!)
(I'm using the Docker overlay filesystem to get at the binary inside the container again.)
```
$ file /var/lib/docker/overlay2/1ed587b302af7d3182135d02257f261fd491b7acf4648736d4c72f8382ecba0d/merged/app/serve
/var/lib/docker/overlay2/1ed587b302af7d3182135d02257f261fd491b7acf4648736d4c72f8382ecba0d/merged/app/serve:
ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2, Go
BuildID=vd_DJvcyItRi4Q2RD0WL/z8P4ulttr6F6njfqx8CI/_odQWaUTR2e38bdHlD0-/ikjsOjlMbEOhj2qXv5AE,
not stripped
$ ldd /var/lib/docker/overlay2/1ed587b302af7d3182135d02257f261fd491b7acf4648736d4c72f8382ecba0d/merged/app/serve
linux-vdso.so.1 (0x00007ffe095a6000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f565a265000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f565a099000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f565a2b4000)
```
Now that I know it's dynamically linked, it's not that surprising that it didn't work on a different system than the one it was compiled on.
Some Googling tells me that I can get Go to produce a statically linked binary by setting `CGO_ENABLED=0`. Let's see if that works.
```
$ # first let's build it without that flag
$ go build serve.go
$ file ./serve
./serve: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=UGBmnMfFsuwMky4-k2Mt/RaNGsMI79eYC4-dcIiP4/J7v5rNGo3sNiJqdgNR12/eR_7mqqrsil_Lr6vt-rP, not stripped
$ ldd ./serve
linux-vdso.so.1 (0x00007fff679a6000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f659cb61000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f659c995000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f659cbb0000)
$ # and now with CGO_ENABLED=0 set
$ env CGO_ENABLED=0 go build serve.go
$ file ./serve
./serve: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=Kq392IB01ShfNVP5TugF/2q5hN74m5eLgfuzTZzR-/EatgRjlx5YYbpcroiE9q/0Fg3zUxJKY3lbsZ9Ufda, not stripped
$ ldd ./serve
not a dynamic executable
```
It works! I checked, and that's an alternative way to fix this bug: if I just set the `CGO_ENABLED=0` environment variable in my build container, then I can build a static binary and I don't need to switch to the `golang:alpine` container for my builds. I kind of like that fix better.
And statically linking in this case doesn't even produce a bigger binary (for some reason it seems to produce a slightly _smaller_ binary?? I don't know why that is)
I still don't understand _why_ it's using cgo here; I ran `env | grep CGO` and I definitely don't have `CGO_ENABLED=1` set in my environment, but I don't feel like solving that mystery right now.
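Putting that fix back into the original Dockerfile is a one-line change; here's a minimal sketch (same layout as the Dockerfile at the top of this post):
```
FROM golang:1.17 AS go
ADD ./serve.go /app/serve.go
WORKDIR /app
# build a statically linked binary so it doesn't need an ELF interpreter
RUN CGO_ENABLED=0 go build serve.go

FROM alpine:3.14
COPY --from=go /app/serve /app/serve
COPY ./static /app/static
WORKDIR /app/static
CMD ["/app/serve"]
```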
### that was a fun bug!
I thought this bug was a nice way to see how you can run into problems when compiling a dynamically linked executable on one platform and running it on another one! And to learn about the fact that ELF files have an interpreter!
I've run into this “file not found” error a couple of times, and it feels kind of mind-bending because it initially seems impossible (BUT THE FILE IS THERE!!! I SEE IT!!!). I hope this helps someone be less confused if they run into it!
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2021/11/17/debugging-a-weird--file-not-found--error/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://gist.github.com/jvns/6147bc21fbb60b0090d543bb5e240134
[2]: https://superuser.com/a/507031
[3]: https://wizardzines.com/zines/containers

View File

@ -0,0 +1,324 @@
[#]: subject: "7 Linux command-line tips for saving media file space"
[#]: via: "https://opensource.com/article/21/11/linux-commands-convert-files"
[#]: author: "Howard Fosdick https://opensource.com/users/howtech"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 Linux command-line tips for saving media file space
======
These commands make it simple to convert audio, image, and video files
quickly and maximize disk space.
![Command line prompt][1]
Have media files on your computer? You can likely reclaim significant disk space by storing that data in more space-efficient file formats.
This article demonstrates how to use Linux line commands to perform the most common space-saving conversions. I use line commands because they give you complete control over compression and format conversion features. Also, you'll need to use line commands if you want to write scripts. That allows you to develop programs that are custom-tailored to your own unique needs.
While this article covers terminal commands, there are many other ways to compress and convert files. You can install an open source conversion GUI application onto your computer, or you can even convert file formats using the `save as` and `export` functions of many common applications.
This article discusses only a handful of the most popular file formats and terminal commands among the hundreds that exist. The goal is to give you maximum benefit while keeping it simple.
### File deletion
Before you start your file-format conversions, it's helpful to identify and then delete any huge but unwanted files you have on your computer. Deleting just a handful of space hogs yields outsized benefits.
The [`du`][2], [`ncdu`][3], and [`dust`][4] commands list the biggest subdirectories under the current directory. They tell you which directories use the most disk space:
```
`$ du -a . | sort -n -r | head -n 50`
```
This command string identifies the 50 biggest files in the directory tree below the current directory and lists them sorted by size:
```
`$ find . -type f -exec du -Sh {} + | sort -rh | head -n 50`
```
With this command, you can instantly recognize when you have large files stored in more than one location. Delete the duplicates, and you can reclaim some significant space. The output also helps you identify and then delete any big files you no longer need.
### Quality or storage space
Media files that hold images, audio, and video may use hundreds of different file formats. There's often a trade-off between data quality on the one hand and the storage space consumed on the other.
Some file formats are _lossless_: They preserve all the originally captured data. Lossless file formats can either be _uncompressed_ or _compressed_. They vary in size by this and other factors.
Other file formats are _lossy_. They save storage space by cleverly eliminating some of the least useful data. They're ideal if your use of the data is such that you can tolerate some minor data loss.
For example, capturing a digital image in a lossless format like RAW, PNG, or BMP creates a big file. Converting that image to a lossy alternative like JPG or WEBP saves lots of space. Is it worth it? That depends on your intended use of the image.
If you're a professional photographer who prints a photograph in a high-quality book, you probably want to keep your original lossless file. You likely require the highest quality image for your artwork. Your lossless file also means you can perform extensive image editing without losing quality.
If you're a website developer, you might make the opposite choice. Smaller lossy JPG or WEBP files download to users' computers much faster than lossless images, making your webpages load more quickly. This conversion works because few users can tell whether the image they view on their computer or cellphone screen is lossless or lossy.
Keep in mind that after you convert from a lossless format to a lossy one, you've removed some data. You can't convert back to regain that data. You can convert back to the previous format, but you do so without the data you've already sacrificed. Only delete the original file once you're satisfied that the converted file meets all your needs! You may choose not to delete the original file at all.
Sometimes, saving space is a matter of saving _convenient_ space. If original, lossless, uncompressed files are important to you for any reason, back them up to a separate storage location. You may not need that full-quality WAV file on your working computer every day, but you might be happy to have access to it later.
### Converting image files
Several popular bit-mapped file formats present great opportunities for saving space, including RAW, BMP, GIF, and TIFF. The widely-used PNG format is also a good candidate.
One possible conversion target for images is the lossy JPG format. With its quality settings, JPG allows you to specify a smaller file size with greater data loss or a larger file size with less loss. It might give you a compression ratio of up to 10:1 over some lossless formats. Yet if you display a JPG image on a computer or phone screen, the eye can rarely tell that conversion and compression have occurred.
WEBP files look just as good on screens as JPG files, but they save even more space. This savings is why WEBP is becoming the most popular lossy image format, supported by all modern browsers and most up-to-date apps. The WEBP format offers alpha transparency, animation, and good color radiance. It's nearly always used as a lossy format, though it supports lossless as well.
I converted most of my PNG and JPG files to WEBP format and reclaimed loads of storage space. On one disk, 500 megabytes of PNG files melted down to about 120 megabytes of WEBP. If you're certain that your images are only ever going to be displayed on a screen, converting to WEBP offers clear benefits.
The open source ImageMagick utility gives you Linux terminal commands to convert images. You probably need to install it first:
```
`$ sudo apt install imagemagick`
```
ImageMagick line commands help you reduce image file sizes through three techniques:
* Changing the file format
* Changing the degree of compression
* Making the image smaller
Here's the syntax of the ImageMagick `convert` command that performs file format conversions:
```
`convert  [input options]  input_file   [output options]  output_file`
```
These examples all reduced file size, as you can see from the results of sample runs:
```
$ convert image.bmp  new_image.jpg   #  7.4MB down to 1.1MB
$ convert image.tiff new_image.jpg   #  7.4MB down to 1.1MB
$ convert image.png  new_image.webp  #  4.8MB down to 515KB
$ convert image.png  new_image.webp   #  1.5MB down to 560KB
$ convert image.jpg  new_image.webp  #  769KB down to 512KB
$ convert image.gif  new_image.jpg   #  13.2MB down to 10.9MB
$ convert image.gif  new_image.webp  #  13.2MB down to 4.1MB
```
You can convert RAW images, too. When converting a RAW image, its filename must not have an extension for the `convert` command to process it correctly:
```
`$ convert image new_image.png #  RAW 67.1MB down to 45.3MB`
```
There are some significant space savings to be gained, but only if the output is acceptable for your use case.
This example saves space by resizing a JPG image to as near to 800x600 as possible while still retaining the proper aspect ratio. In this example, I convert a 285KB input file at 1277x824 pixels to a 51KB output file at 800x600 pixels.
```
`$ convert image.jpg  -resize 800x600  new_image.jpg`
```
The `convert` command can change images however you like. For example, you can specify the trade-off between image quality and size. But you'll have to wade through its many options to understand its full capabilities. For more about ImageMagick, read Greg Pittman's [Getting started with ImageMagick][5], or visit the [ImageMagick website][6].
### Converting audio files
Like image files, audio files come in _lossless uncompressed_, _lossless compressed_, and _lossy_ formats.
As with images, the trade-off between lossless and lossy is primarily data quality versus saving space. If you require the highest quality audio, stick with lossless files. That might be the case if you edit digitized music, for example. If you want listenable music that consumes far less space, most of the world has decided that lossy formats like MP3, M4A, and OPUS are the best choice.
Here are the most popular audio formats. Note that file extensions often refer to containers that can support more than one audio encoding format and that most technologies claim more than a single file extension. This chart lists the most common scenarios you'll see:
* Lossless and uncompressed
* WAV
* PCM
* AIFF
* Lossless and compressed
* FLAC
* ALAC
* Lossy
* WEBM
* OPUS
* OGG (Vorbis)
* AAC (some implementations of this are not open formats)
* MP3
* M4A
* WMA (not an open format)
If your goal is to save disk space, try converting from a lossless format to a lossy one. Don't convert from one lossy format to another unless you have to. That will likely degrade the sound quality too much.
A very flexible Linux terminal command to convert audio files is `ffmpeg`. To install it:
```
`$ sudo apt install ffmpeg`
```
Like the ImageMagick `convert` command, `ffmpeg` supports a staggering range of file formats and codecs. View them all by entering:
```
`$ ffmpeg -encoders`
```
Using `ffmpeg` is usually pretty straightforward. Here's the standard syntax. The `-i` flag identifies the input file, and the `-vn` flag tells `ffmpeg` not to invoke any video-related code that might alter the audio output:
```
`$ ffmpeg  -i  audiofile_input.ext -vn audiofile_output.new`
```
These examples all convert lossless WAV files into lossy formats to save space. The process to convert AIFF files is the same (but replace `.wav` with `.aiff`):
```
$ ffmpeg -i audio.wav -vn audio.ogg  # 38.3MB to 3.3MB
$ ffmpeg -i audio.wav -vn audio.mp3  # 38.3MB to 3.5MB
$ ffmpeg -i audio.wav -vn audio.m4a  # 38.3MB to 3.6MB
$ ffmpeg -i audio.wav -vn audio.webm # 38.3MB to 2.9MB
```
All the commands reduced the size of the lossless input files by a factor of 10. The big question: Do the outputs sound different from the originals? Well, it depends. To most people listening on most consumer devices, the difference is negligible. That's why MP3, M4A, and other compressed formats are the world's most popular music formats. Even though it's not technically the best, the audio is quite listenable, and it consumes a fraction of the storage space (or bandwidth, when streaming).
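By default, `ffmpeg` chooses a bitrate for you. If you want to trade quality against size explicitly, you can set the audio bitrate yourself; a small sketch (192k is just an illustrative value):
```
`$ ffmpeg -i audio.wav -vn -b:a 192k audio.mp3`
```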
### Converting video files
Video conversion presents another chance to save lots of space. Your goal should be to find the video format that best balances playback quality and file size to meet your needs.
A _video format_ is the combination of a _container file format_ and a _codec_. A codec is software that encodes and decodes a data stream as it moves to and from the container file.
Containers can be paired with multiple codecs. In practice, there are often only a few popular codecs paired with a particular container. For example, WAV files can be encoded as either lossless or lossy, but lossless encoding is so dominant in that format that most people assume any WAV file is lossless.
These are some of today's most widely used open source video formats:
* The MP4 format, containing H.264 video and AAC audio, is used in BluRay and Internet streaming.
* The WEBM format, containing VP9 video and Opus audio, is remarkably flexible and is used for both archival-quality files as well as smaller files for streaming.
* The Matroska (MKV) container format can contain nearly any combination of video, audio, and even stereoscopic (3D) imagery. It's the basis for WEBM.
The main factors that determine video file size and quality are:
* Resolution (dimension of the frame)
* Bitrate
* Encoding
The `ffmpeg` command can change all three parameters. Here's a simple conversion example:
```
`$ ffmpeg -i input_video.mov output.webm`
```
This conversion resulted in a 1.8 MB output file from a 39 MB input.
Because I didn't specify any parameters, `ffmpeg` copies most of the existing attributes of the input file. In this example, my input file was a MOV file containing MJPEG video with a resolution of 1280x720, a frame rate of 23.98, and a bitrate of 40,219 kilobits per second (kbps). The resulting output file contains VP9 video with the same resolution and frame rate. However, the bitrate is only 1,893 kbps.
As with audio conversions, video compression ratios are impressive, and the potential space savings are enormous. On my PC, the converted files looked so similar to the originals that it was difficult to tell whether there was any degradation, which, for my purposes, is as good as saying there was none.
Whether the output quality is acceptable to you depends on your intended use of the video, your viewing devices, and your expectations. Never erase your original file until you've reviewed the converted file and found it satisfactory.
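The earlier example let `ffmpeg` copy most attributes from the input. To control resolution, frame rate, and bitrate explicitly, you can pass them as options; a sketch with illustrative values, assuming your `ffmpeg` build includes the VP9 encoder (`libvpx-vp9`):
```
`$ ffmpeg -i input_video.mov -c:v libvpx-vp9 -vf scale=1280:720 -r 24 -b:v 2M output.webm`
```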
### Archival storage
Archiving takes multiple input files—often of different file types—and collects them into a single output file. Compression is optional. A compressed archive is useful for sending files across the internet and for long-term data storage. It's a great way to save space. The downside is that you're limited in how you can process archived files until you extract them out of the archive (though some tools are now pretty sophisticated in their manipulation of files within archives).
Among the many archive file formats, the most popular compressed formats include GZ, BZ2, XZ, ZIP, and 7Z. The [`tar`][7] command handles many archive formats. It supports compression commands including `gzip`, `bzip2`, `xz`, and others.
```
`$ tar --xz --create --file myarchive.tar.xz bigfile.xcf bigfile.tiff`
```
This command reduced 56 MB down to a 28 MB compressed archive. How much compression occurs varies widely by the files involved. Some media files (especially those already in a compressed format) compress little or not at all.
To unarchive a TAR file, use the `--extract` option:
```
`$ tar --extract --file myarchive.tar.xz`
```
The `tar` command bundles many files into one container (sometimes called a _tarball_). If you're compressing only one file, however, there's no need for a container.
Instead, you can just compress the file with commands like `gzip`, `bzip2`, `xz`, `zip`, `7z`, and others.
```
$ xz bigfile.xcf
$ ls
bigfile.xcf.xz
```
To uncompress a compressed file, you can usually use an "un" version of the command you used to compress the file:
```
`$ unxz bigfile.xcf.xz`
```
Sometimes there's also a `--decompress` option:
```
`$ xz --decompress bigfile.xcf.xz`
```
Not all Linux distributions include all these commands, so you may have to install some of them.
### Scripting tips
To convert all the files in a directory, simply embed your conversion command within a [`for` loop][8]. Place double quotes around the filename variable to handle any filenames that contain embedded spaces. This script converts all PNG files in a directory to WEBP files:
```
#!/bin/bash
for file_name in *.png ; do
  # strip the .png suffix so the output is name.webp rather than name.png.webp
  convert "$file_name" "${file_name%.png}.webp"
done
```
To process all the files in a directory and all its subdirectories, you need to recursively traverse the directory structure. Use the [`pushd` and `popd` stack commands][9] or the [find command][10] for this.
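For instance, here is a sketch of the same PNG-to-WEBP conversion done recursively with `find`, using null-delimited names so filenames with spaces survive:
```
#!/bin/bash
# recursively convert every PNG under the current directory to WEBP
find . -type f -name '*.png' -print0 | while IFS= read -r -d '' file_name ; do
  convert "$file_name" "${file_name%.png}.webp"
done
```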
### Conclusion
Used prudently, Linux commands that compress and reformat media files can save you gigabytes of storage. I'm sure you've got some great tips of your own, so please add them in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/linux-commands-convert-files
作者:[Howard Fosdick][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/howtech
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://opensource.com/article/21/7/check-disk-space-linux-du
[3]: https://opensource.com/article/21/8/use-ncdu-check-free-disk-space-linux
[4]: https://opensource.com/article/21/6/dust-linux
[5]: https://opensource.com/article/17/8/imagemagick
[6]: https://imagemagick.org/
[7]: https://opensource.com/article/17/7/how-unzip-targz-file
[8]: https://opensource.com/article/19/6/how-write-loop-bash
[9]: https://opensource.com/article/19/8/navigating-bash-shell-pushd-popd
[10]: https://opensource.com/article/19/6/how-write-loop-bash#find

View File

@ -0,0 +1,367 @@
[#]: subject: "Dynamic scheduling of Tekton workloads using Triggers"
[#]: via: "https://opensource.com/article/21/11/kubernetes-dynamic-scheduling-tekton"
[#]: author: "Savita Ashture https://opensource.com/users/savita-ashture"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Dynamic scheduling of Tekton workloads using Triggers
======
Upgrade your CI/CD pipeline with this Kubernetes-native application.
![Parts, modules, containers for software][1]
[Tekton][2] is a Kubernetes-native continuous integration and delivery (CI/CD) framework. It allows you to create containerized, composable, and configurable workloads declaratively through Kubernetes Custom Resource Definitions (CRD).
[Tekton Triggers][3] is a Tekton component that allows you to detect and extract information from events from various sources and execute [TaskRuns][4] and [PipelineRuns][5] based on that information. It also enables passing extracted information to TaskRuns and PipelineRuns from events.
This article demonstrates how Tekton Triggers integrates with external services, such as a Git repository, using GitLab as an example.
### Prerequisites
If you want to follow the steps in this article, you must have a Kubernetes cluster running Kubernetes 1.18 or above with an ingress controller installed that can give you an external IP. You must also have [Tekton Pipelines][6] and [Tekton Triggers][7] installed.
### Triggers flow
A Trigger works because Tekton, using a special pod called an EventListener, is able to monitor your cluster for a specific event. To pick up on relevant events, you can use a ClusterInterceptor. When an event occurs that you have identified as significant, the Tekton Trigger starts an action or workflow you have defined.
![A flow chart showing the interactions among EventListener, Trigger, and ClusterInterceptor.][8]
Credits: <https://github.com/tektoncd/triggers/blob/main/images/TriggerFlow.svg>
Tekton Trigger allows you to create a special resource called an EventListener, which is a Kubernetes service that listens for incoming HTTP requests from different sources, usually a Git repository, including those hosted on GitLab, GitHub, and others. Based on those events, the EventListener pod performs actions and creates Tekton resources, such as TaskRun or PipelineRun.
All Triggers resource definitions are created in YAML, the configuration format most commonly used in Kubernetes. However, before writing YAML files to define a Trigger, it's important to understand Tekton Triggers terminology.
#### EventListener
An [EventListener][9] is a Kubernetes service that listens for incoming HTTP requests and executes a Trigger. For example, after receiving a specific incoming request, this definition executes the `gitlab-listener-trigger` Trigger:
```
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
 name: gitlab-event-listener
spec:
 serviceAccountName: gitlab-listener-sa
 triggers:
   - triggerRef: gitlab-listener-trigger
 resources:
   kubernetesResource:
     serviceType: NodePort
```
#### Trigger
A [Trigger][10] decides what to do with a received event. It also sets a TriggerBinding, TriggerTemplate, and optional interceptors to run. Triggers make use of interceptors to validate or modify incoming requests before proceeding.
```
apiVersion: triggers.tekton.dev/v1beta1
kind: Trigger
metadata:
 name: gitlab-listener-trigger
spec:
 interceptors:
   - name: "verify-gitlab-payload"
     ref:
       name: "gitlab"
       kind: ClusterInterceptor
     params:
       - name: secretRef
         value:
           secretName: "gitlab-secret"
           secretKey: "secretToken"
       - name: eventTypes
         value:
          - "Push Hook"
 bindings:
   - ref: binding
 template:
   ref: template
```
#### Interceptor
An [interceptor][11] is an event processor that runs before the TriggerBinding. It also performs payload filtering, verification (using a secret), and transformation; defines and tests trigger conditions; and implements other useful processing.
By default, four core interceptors are installed when installing Triggers: GitHub, GitLab, Bitbucket, and CEL. The installation also includes one Webhook interceptor for implementing custom business logic.
##### GitLab interceptors
GitLab interceptors help to validate and filter GitLab webhooks and filter incoming events by event type. The GitLab interceptor requires a secret token. This token is set when creating the webhook in GitLab and is validated by the GitLab interceptor when the request arrives.
```
apiVersion: v1
kind: Secret
metadata:
 name: gitlab-secret
type: Opaque
stringData:
 secretToken: "1234567"
```
#### TriggerBinding
After validating and modifying the incoming request, you need to extract values from the request and bind them to variables that you can later use in a TriggerTemplate to pass to your Pipeline.
For this example, you just need a URL and a revision:
```
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
 name: binding
spec:
 params:
   - name: gitrevision
     value: $(body.checkout_sha)
   - name: gitrepositoryurl
     value: $(body.repository.git_http_url)
```
#### TriggerTemplate
The [TriggerTemplate][12] is a blueprint that instantiates a `TaskRun` or `PipelineRun` when the EventListener detects an event.
```
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
 name: template
spec:
 params:
   - name: gitrevision
   - name: gitrepositoryurl
 resourcetemplates:
   - apiVersion: tekton.dev/v1alpha1
     kind: TaskRun
     metadata:
       generateName: gitlab-run-
     spec:
       taskSpec:
         inputs:
           resources:
             - name: source
               type: git
         steps:
           - image: ubuntu
             script: |
                #!/bin/bash
               ls -al $(inputs.resources.source.path)
       inputs:
         resources:
           - name: source
             resourceSpec:
               type: git
               params:
                 - name: revision
                   value: $(tt.params.gitrevision)
                 - name: url
                   value: $(tt.params.gitrepositoryurl)
```
Note that the pipeline resources module is, at the time of writing, being deprecated and will be replaced by the [git-clone][13] task from [tektoncd/catalog][14].
### Dynamically schedule workloads by configuring a webhook
First, create a new namespace, `demo`:
```
`$ kubectl create ns demo`
```
Next, before applying the Triggers resource, configure the required role-based access control (RBAC):
```
$ kubectl -n demo apply -f \
"<https://gist.githubusercontent.com/savitaashture/596bc4d93ff6b7606fe52aa20ba1ba14/raw/158a5ed0dc30fd1ebdac461147a4079cd6187eac/triggers-rbac.yaml>"
```
Note: RBAC configurations vary depending on the permissions.
Apply Triggers resources:
```
$ kubectl -n demo apply -f \
"<https://gist.githubusercontent.com/savitaashture/8aa013db1cb87f5dd1f2f96b0e121363/raw/f4f592d8c1332938878c5ab9641e350c6411e2b0/triggers-resource.yaml>"
```
After applying, verify the successful creation of the EventListener object and pod:
The EventListener object's READY status should be **True**:
```
$ kubectl get el -n demo
NAME                    ADDRESS                                                     AVAILABLE REASON              READY             REASON                                                                                                            
gitlab-event-listener   http://el-gitlab-event-listener.demo.svc.cluster.local:8080   True                 MinimumReplicasAvailable   True
```
The EventListener pod status should be **Running**:
```
$ kubectl get pods -n demo
NAME                                       READY          STATUS    RESTARTS              AGE
el-gitlab-event-listener-fb77ff8f7-p5wnv   1/1            Running   0                     4m22s
```
Create ingress to get the external IP to configure in the GitLab webhook:
```
$ kubectl -n demo apply -f \
"<https://gist.githubusercontent.com/savitaashture/3b3554810e391477feae21bb8a9af93a/raw/56665b0a31c7a537f9acbb731b68a519be260808/triggers-ingress.yaml>"
```
Get the ingress IP:
```
$ kubectl get ingress triggers-ingress-resource -n demo
NAME               CLASS    HOSTS     ADDRESS                             PORTS   AGE
ingress-resource   <none>   *         <address>                            80      6s
```
Configure a webhook in GitLab. In your GitLab repository, go to **Settings -> Webhooks**.
Then set the fields below:
  * URL: the external IP address from the ingress, with the `/` path
  * Secret token: 1234567, which should match the secret value created above in the `triggers-resource.yaml` file
In the Trigger section, select only **Push events**, uncheck **Enable SSL verification**, and click **Add webhook**.
![screenshot of GitLab Webhooks configurations][15]
Savita Ashture, [CC BY-SA 4.0][16]
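Before pushing real commits, you can sanity-check that the EventListener is reachable with a hand-made request; a sketch, assuming `<ingress-ip>` is the address from the ingress above and using the same headers and payload fields GitLab sends (the sha and repository URL here are placeholders):
```
$ curl -X POST "http://<ingress-ip>/" \
    -H 'Content-Type: application/json' \
    -H 'X-Gitlab-Token: 1234567' \
    -H 'X-Gitlab-Event: Push Hook' \
    -d '{"checkout_sha": "aaaa1111", "repository": {"git_http_url": "https://gitlab.com/example/repo.git"}}'
```
The GitLab interceptor may still reject a payload that doesn't look like a full push event, but any response at all confirms the listener is wired up.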
#### Testing GitLab events by pushing a commit
Clone your own GitLab repository, make changes, and push. For example:
```
$ git clone https://gitlab.com/savitaashture1/gitlabtest-triggers
$ cd gitlabtest-triggers
$ git commit -m "empty-commit" --allow-empty && git push origin main
[main 934ecba] empty-commit
Username for 'https://gitlab.com': savitaashture
Password for 'https://savitaashture@gitlab.com':
warning: redirecting to https://gitlab.com/savitaashture1/gitlabtest-triggers.git/
Enumerating objects: 1, done.
Counting objects: 100% (1/1), done.
Writing objects: 100% (1/1), 183 bytes | 183.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://gitlab.com/savitaashture1/gitlabtest-triggers
   ff1d11e..934ecba  main -> main
```
Events will be generated and sent to the EventListener pod. You can verify this by doing:
```
kubectl get pods -n demo
kubectl logs -f <pod_name> -n demo
```
Verify successful delivery of events by doing a `get` operation for `TaskRun`.
```
$ kubectl  -n demo get taskruns | grep gitlab-run-
gitlab-run-hvtll   True        Succeeded   95s         87s
```
Clean all resources created by Triggers by removing namespace `demo`:
`$ kubectl delete ns demo`
### Conclusion
Tekton Triggers is one of the most useful modules that help schedule workloads dynamically in response to a user-defined set of events. Because of this module, my team was able to achieve end-to-end CI/CD.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/kubernetes-dynamic-scheduling-tekton
作者:[Savita Ashture][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/savita-ashture
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_modules_networking_hardware_parts.png?itok=rPpVj92- (Parts, modules, containers for software)
[2]: https://opensource.com/article/21/11/cicd-pipeline-kubernetes-tekton
[3]: https://github.com/tektoncd/triggers
[4]: https://github.com/tektoncd/pipeline/blob/master/docs/taskruns.md
[5]: https://github.com/tektoncd/pipeline/blob/master/docs/pipelineruns.md
[6]: https://github.com/tektoncd/pipeline/blob/main/docs/install.md#installing-tekton-pipelines-on-kubernetes
[7]: https://github.com/tektoncd/triggers/blob/main/docs/install.md#installing-tekton-triggers-on-your-cluster
[8]: https://opensource.com/sites/default/files/uploads/tekton_event_chart.png (Trigger flow)
[9]: https://github.com/tektoncd/triggers/blob/main/docs/eventlisteners.md
[10]: https://github.com/tektoncd/triggers/blob/main/docs/triggers.md
[11]: https://github.com/tektoncd/triggers/blob/main/docs/interceptors.md
[12]: https://github.com/tektoncd/triggers/blob/main/docs/triggertemplates.md
[13]: https://github.com/tektoncd/catalog/blob/main/task/git-clone/0.4/git-clone.yaml
[14]: https://github.com/tektoncd/catalog
[15]: https://opensource.com/sites/default/files/uploads/tekton_trigger_screenshot.png (GitLab repository)
[16]: https://creativecommons.org/licenses/by-sa/4.0/

View File

@ -0,0 +1,112 @@
[#]: subject: "How to Install Brave Browser on Fedora, Red Hat & CentOS"
[#]: via: "https://itsfoss.com/install-brave-browser-fedora/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
How to Install Brave Browser on Fedora, Red Hat & CentOS
======
[Brave][1] is an increasingly [popular web browser for Linux][2] and other operating systems. Its focus on blocking ads and tracking by default, along with Chrome extension support, has made Brave a popular choice among Linux users.
In this tutorial, you'll learn to install Brave on Fedora Linux. You'll also learn about updating and removing it.
The tutorial has been tested on Fedora, but it should also be valid for other distributions in the Red Hat family, such as CentOS, AlmaLinux, and Rocky Linux.
### Install Brave browser on Fedora Linux
You'll have to use the command line to install Brave here.
As a prerequisite, please ensure that dnf-plugins-core is installed:
```
sudo dnf install dnf-plugins-core
```
The next step is to add the Brave repository to your system:
```
sudo dnf config-manager --add-repo https://brave-browser-rpm-release.s3.brave.com/x86_64/
```
You should also import and add the repository key so that your system trusts the packages coming from this newly added repository:
```
sudo rpm --import https://brave-browser-rpm-release.s3.brave.com/brave-core.asc
```
![Adding Brave Browser repository in Fedora Linux][3]
You are all set to go now. Install Brave with this command:
```
sudo dnf install brave-browser
```
**Press Y when asked to confirm** your choice. It should take a few seconds or a couple of minutes based on your internet speed. If the DNF cache was not updated recently, it may even take longer.
![Installing Brave Browser in Fedora][4]
Once the installation finishes, look for Brave in the system menu and start it from there.
![Start Brave browser in Fedora Linux][5]
### Updating Brave browser on Fedora Linux
You have added a repository for the browser and also imported its key. Your system trusts the packages coming from this repository.
So, when there is a new Brave browser release and it is made available in this repository, you'll get it through the regular system upgrades.
In other words, you don't have to do anything special. Just keep your Fedora system updated, and if there is a new version of Brave, it should be installed automatically with system updates.
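If you would rather pull a browser update immediately instead of waiting for the next full system upgrade, a standard dnf invocation for just this package should do it:
```
sudo dnf upgrade brave-browser
```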
### Removing Brave browser from Fedora Linux
![Brave Browser in Fedora Linux][6]
If you do not like Brave for some reason, you can remove it from your system. Just use the dnf remove command:
```
sudo dnf remove brave-browser
```
Press Y when asked:
![Removing Brave browser from Fedora Linux][7]
You may also choose to disable the brave-browser-rpm-release.s3.brave.com_x86_64_.repo file or delete it completely from /etc/yum.repos.d/, though that's not really mandatory.
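If you decide to delete the repo file, something like this should do it (the filename comes from the repository URL added earlier):
```
sudo rm /etc/yum.repos.d/brave-browser-rpm-release.s3.brave.com_x86_64_.repo
```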
I hope you find this quick tip helpful. If you have any questions or suggestions, please let me know.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-brave-browser-fedora/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://brave.com/
[2]: https://itsfoss.com/best-browsers-ubuntu-linux/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/adding-Brave-browser-repository-in-Fedora.png?resize=800%2C300&ssl=1
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/Installing-Brave-Browser-Fedora.png?resize=800%2C428&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/start-brave-fedora-linux.png?resize=759%2C219&ssl=1
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/Brave-Browser-in-Fedora-Linux.png?resize=800%2C530&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/removing-Brave-browser-Fedora-Linux.png?resize=800%2C399&ssl=1

View File

@ -0,0 +1,107 @@
[#]: subject: "4 open source ways to create holiday greetings"
[#]: via: "https://opensource.com/article/21/11/open-source-holiday-greetings"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
4 open source ways to create holiday greetings
======
Open source tools and resources provide creative possibilities for any
holiday.
![Painting art on a computer screen][1]
The holiday season is upon us once again, and this year I decided to celebrate in an [open source way][2]. Like a certain famous holiday busybody, I have a long list (and I do intend to check it twice) of holiday tasks: create a greeting card (with addressed envelopes) to send to family and friends, make a photo montage or video set to a suitably festive song, and decorate my virtual office. There are plenty of open source applications and resources making my job easier. Here's what I use.
### Inkscape and clip art
One of my favorite resources is [FreeSVG.org][3] (formerly Openclipart.org). It's easy to find clip art for your favorite holiday, including Hanukkah, Christmas, New Year's, and more. The clip art is all contributed by users like you and me and released under Creative Commons Zero (CC0), so you don't even need to provide attribution. When possible, I still give attribution, to ensure that FreeSVG and its artists get visibility.
Here's an example of some clip art from FreeSVG:
![A cartoon of a brown cornucopia with red apples, an orange pumpkin, and brown nuts spilling out][4]
openclipart.com, [CC0 1.0][5]
Using [Inkscape's][6] Text to Path tool, I added my own text to the image, which I used on a card. With a little more preparation, I could also use the graphic on some custom cups or placemats.
![A cartoon of a brown cornucopia with red apples, an orange pumpkin, and brown nuts spilling out, with the words "We Give Thanks" in an arch over the top][7]
Don Watkins, [CC BY-SA 4.0][8]
### Word processing
[LibreOffice][9] Writer can be used to create greeting cards and posters for use around your home or distributing to your friends and family. Create a database of your family and friends using LibreOffice Calc and then use that resource to simplify making mailing labels with the mail merge function.
### Creative Commons pictures and graphics
There's also art on [search.creativecommons.org][10]. Mind the license type: give proper credit to anything requiring attribution. This image (["Thanksgiving Dealies"][11]) came from the Creative Commons image search. It's by [Martin Cathrae][12] and is licensed under [CC BY-SA 2.0][13], so it can be adapted, reused, and shared under the same license.
![A candlelight centerpiece using pumpkin shells as flower holders for small red and yellow floral bouquets.][14]
[Martin Cathrae,][12] [CC BY-SA 2.0][13]
I took this same image and added some of my own text to it with [GIMP][15]. You can use Inkscape to do the same thing. 
![A candlelight centerpiece using pumpkin shells as flower holders for small red and yellow floral bouquets, with the words "Happy Holidays" at the top left of the image][16]
Don Watkins, [CC BY-SA 4.0][17]
Creative Commons offers plenty of image options that would make for a festive background during your next [video conference][18].
### Videos and live streaming
You can also incorporate images like these along with some of your own and create a short video clip using [OpenShot][19] video editor. You can easily add narration by recording a separate voice track using Audacity. Sound effects can be added in Audacity, saved to file, and imported into a soundtrack on OpenShot video editor. Find [legal background][20] music to add to your video.
Livestream your holiday gatherings with [Open Broadcaster Software][21] (OBS). It's easy to use OBS to present an engaging holiday show for your friends and family, or you can save the recording as a Matroska or MP4 file for later viewing.
### Reading material
Project Gutenberg is an excellent source of free holiday reading material to share. [Dickens' Christmas Carol][22] is one such resource that is easily read on the web or downloaded as an EPUB or in a format for your favorite eReader. You can also find royalty-free reading materials, like "The Feast of Lights" from [Librivox][23], in mp3 format so they can be downloaded and played in your favorite browser or media player.
### Holiday fun
The most important aspect of the holiday season is that it's a relaxing, fun time with friends and family. If you've got family members curious about computers, take a moment to share some of your favorite open source resources with them.
For even more inspiration, Anderson Silva shows how to use a Raspberry Pi and LightshowPi to create a musical light show.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/open-source-holiday-greetings
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen)
[2]: https://opensource.com/open-source-way
[3]: http://freesvg.org
[4]: https://opensource.com/sites/default/files/uploads/cornucopia.png (cornucopia)
[5]: https://creativecommons.org/publicdomain/zero/1.0/
[6]: https://opensource.com/article/18/1/inkscape-absolute-beginners
[7]: https://opensource.com/sites/default/files/uploads/cornucopia_with_text.png (We Give Thanks)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/article/21/9/libreoffice-tips
[10]: https://search.creativecommons.org/
[11]: https://www.flickr.com/photos/34067077@N00/4014605524
[12]: https://www.flickr.com/photos/34067077@N00
[13]: https://creativecommons.org/licenses/by-sa/2.0/?ref=ccsearch&atype=rich
[14]: https://opensource.com/sites/default/files/uploads/fall_table.png (Holiday table)
[15]: https://opensource.com/content/cheat-sheet-gimp
[16]: https://opensource.com/sites/default/files/uploads/fall_table_with_text.png (Happy Holidays)
[17]: https://creativecommons.org/licenses/by/4.0/
[18]: https://opensource.com/article/20/5/open-source-video-conferencing
[19]: https://opensource.com/article/17/5/using-openshot-video-editor
[20]: https://opensource.com/article/20/1/what-creative-commons
[21]: https://opensource.com/article/21/4/obs-youtube
[22]: https://www.gutenberg.org/ebooks/19337
[23]: https://librivox.org/the-feast-of-lights-by-emma-lazarus/

View File

@ -0,0 +1,162 @@
[#]: subject: "Google Chrome vs Chromium: Whats the difference?"
[#]: via: "https://itsfoss.com/chrome-vs-chromium/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Google Chrome vs Chromium: What's the difference?
======
Google Chrome is the most popular web browser. Whether or not you prefer to use it, Chrome manages to offer a good user experience.
Even though it is available for Linux, it is not an open-source web browser.
And, if you need the look and feel of Google Chrome but want to use an open-source solution, Chromium can be your answer.
But isn't Google Chrome based on Chromium? (That's a yes.) And isn't it also developed by Google? (Also yes.)
So, what are the differences between Chrome and Chromium? In this article, we shall take an in-depth look at both of them and compare them while presenting some benchmarks.
### User Interface
![Google Chrome and Chromium running side-by-side on Zorin OS 16][1]
The user interfaces for both Google Chrome and Chromium remain very similar, with minor noticeable differences.
For instance, I noticed that the system title bar and borders were disabled by default for Google Chrome out of the box. In contrast, it was enabled by default for Chromium at the time of my tests.
You can also notice a share button in the address bar of Google Chrome, which is absent on Chromium.
It isn't a big visual difference, just a set of UI tweaks matching the available features. So, yes, you can expect a similar user experience with under-the-hood tweaks. If the UI is what you care about, both browsers should suit you well.
### Open-Source & Proprietary Code
![][2]
Chromium is entirely open source, meaning anyone can use and modify the code to their heart's content. You can check out its source code on its [GitHub mirror][3].
This is why you will find many [Chromium-based browsers][4] available, such as Brave, Vivaldi, and Edge.
You end up getting so many choices, so you can choose what you like the best.
On the other hand, Google Chrome adds proprietary code to Chromium, making Chrome a proprietary browser. For example, one can fork Brave, but one cannot fork Google Chrome, restricting the usage of their Google-specific code/work.
For end users, the license does not affect the user experience. However, with an open-source project, you get more transparency, without relying on the company to communicate what they intend to change and what they're doing with the browser.
So, yes, if you're not a fan of proprietary code, Chromium is the answer.
### Feature Differences
It's no surprise that Google does not want its competitors to have similar capabilities. So, Google has been [locking up Chromium and disabling a lot of Google-specific capabilities][5].
Hence, you will find some differences in capabilities between both browsers.
Not just that: because Chromium is open source, you may notice some inconveniences. Fret not; I'll point out the crucial differences below:
**Google Chrome** | **Chromium**
---|---
Sign-in and Sync Available | No Sign-in and Sync
Media codec support to use Netflix | Manual codec installation is required
For starters, the Google-powered sign-in/sync feature is no longer available in Chromium. It supported sign-in and sync until Google decided to remove it from the open-source project.
Next, Google Chrome comes with built-in support for high-quality media codecs, so you can load up content from Netflix. But it won't work in Chromium.
![Netflix doesn't work in Chromium by default][6]
Technically, Chromium does not include the _Widevine Content Decryption module_. So, you will have to install the required codecs manually to make most of the things work.
However, you should not have any issues playing content from platforms like Apple Music and others on both browsers out of the box.
### Installation & Availability of the Latest Update
You can install Google Chrome on virtually any platform. Linux is not an exception. Just head to its official website and grab the DEB/RPM package to install it quickly. The installed application also gets updated automatically.
![][7]
Installing Chromium is not that straightforward on several platforms. There was a time when some Linux distributions included Chromium as the default browser, but those days are in the past.
Even on Windows, Chromium installation and update is not as smooth as Chrome.
On Linux, it's an entirely different story for installing Chromium. Popular distributions like Ubuntu package it as a sandboxed Snap application.
Even if you try to install it using the terminal, hoping that you would get it from the APT repositories, it's Snap again:
![][8]
With the Snap package, you may face issues with blending in with your custom desktop theme. Snap applications take longer to start as well.
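For reference, on recent Ubuntu releases both of these routes end up installing the snap (a sketch; package names vary across distributions and releases):
```
# The apt package is transitional and pulls in the snap
sudo apt install chromium-browser

# Installing the snap directly
sudo snap install chromium
```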
![][9]
And, if you proceed to build it and install Chromium manually, you will have to update it manually.
### The Privacy Angle
Google Chrome should be good enough for most users. However, if you are worried about your privacy, Google Chrome tracks usage info and some browsing-related information.
Recently, [Google introduced a new Chrome API][10] that lets sites detect when you are idle and when you are not. While this is a massive privacy concern, it isnt the only thing.
Google constantly experiments with new ways of tracking users; for instance, Googles FLoC experiment wasnt well-received, as pointed out by [EFF][11].
Technically, they claim that they want to enhance users' privacy while still providing advertising opportunities. However, that is an impossible task to achieve as of now.
In comparison, Chromium should fare way better concerning privacy. However, if you hate anything Google-related in your browser, even the slightest telemetry, you should try [UnGoogled Chromium][12] instead.
It is Chromium, but without any Google components.
### Browser Performance
There are a variety of browser benchmarks that give you an idea of how well a browser can handle tasks.
Considering the advanced web applications and resource-intensive JavaScript found on websites, if a web browser does not perform well, you will get a noticeably bad experience when you dabble with many active tabs.
[JetStream 2][13] and [Speedometer 2][14] are two popular benchmarks that give you a performance estimate of handling various tasks and responsiveness, respectively.
In addition to that, I also tried out [Basemark Web 3.0][15], which also tests a variety of things and gives you an aggregate score.
![][16]
Overall, Google Chrome wins here.
But, it is worth noting that your system resources and background processes while running a browser will affect performance differently. So, take that into account as well.
### What should you choose?
The choices for browsers exist because users prefer different things. Google Chrome offers a good feature set and user experience. If you use Google-powered services in some form, Google Chrome is an easy recommendation.
However, if you are concerned about privacy practices and proprietary code, Chromium or UnGoogled Chromium, or any other Chromium-based browser like Brave can be a good pick.
That's all I had in mind when debating Chrome versus Chromium. I am open to receiving your views now. The comment section is all yours.
--------------------------------------------------------------------------------
via: https://itsfoss.com/chrome-vs-chromium/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/chrome-chromium-ui.png?resize=800%2C627&ssl=1
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/open-source-proprietary.png?resize=800%2C450&ssl=1
[3]: https://github.com/chromium/chromium
[4]: https://news.itsfoss.com/chrome-like-browsers-2021/
[5]: https://news.itsfoss.com/is-google-locking-down-chrome/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/chromium-netflix.png?resize=800%2C541&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/google-chrome-version.png?resize=800%2C549&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/install-chromium.png?resize=800%2C440&ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/chromium-version.png?resize=800%2C539&ssl=1
[10]: https://www.forbes.com/sites/zakdoffman/2021/10/02/stop-using-google-chrome-on-windows-10-android-and-apple-iphones-ipads-and-macs/
[11]: https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea
[12]: https://github.com/Eloston/ungoogled-chromium
[13]: https://webkit.org/blog/8685/introducing-the-jetstream-2-benchmark-suite/
[14]: https://webkit.org/blog/8063/speedometer-2-0-a-benchmark-for-modern-web-app-responsiveness/
[15]: https://web.basemark.com/
[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/chrome-chromium-benchmarks-1.png?resize=800%2C450&ssl=1

View File

@ -0,0 +1,188 @@
[#]: subject: "7 key components of observability in Python"
[#]: via: "https://opensource.com/article/21/11/observability-python"
[#]: author: "Moshe Zadka https://opensource.com/users/moshez"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
7 key components of observability in Python
======
Learn why observability is important for Python and how to implement it
into your software development lifecycle.
![Searching for code][1]
The applications you write execute a lot of code, in a way that's essentially invisible. So how can you know:
* Is the code working?
* Is it working well?
* Who's using it, and how?
Observability is the ability to look at data that tells you what your code is doing. In this context, the main problem area is server code in distributed systems. It's not that observability isn't important for client applications; it's that clients tend not to be written in Python. It's not that observability does not matter for, say, data science; it's that tooling for observability in data science (mostly Jupyter and quick feedback) is different.
### Why observability matters
So why does observability matter? Observability is a vital part of the software development life cycle (SDLC).
Shipping an application is not the end; it is the beginning of a new cycle. In that cycle, the first stage is to confirm that the new version is running well. Otherwise, a rollback is probably needed. Which features are working well? Which ones have subtle bugs? You need to know what's going on to know what to work on next. Things fail in weird ways. Whether it's a natural disaster, a rollout of underlying infrastructure, or an application getting into a strange state, things can fail at any time, for any reason.
Outside of the standard SDLC, you need to know that everything is still running. If it's not running, it's essential to have a way to know how it is failing.
### Feedback
The first part of observability is getting feedback. When code gives information about what it is doing, feedback can help in many ways. In a staging or testing environment, feedback helps find problems and, more importantly, triage them in a faster way. This improves the tooling and communication around the validation step.
When doing a canary deployment or changing a feature flag, feedback is also important to let you know whether to continue, wait longer, or roll it back.
### Monitor
Sometimes you suspect that something has gone wrong. Maybe a dependent service is having issues, or maybe social media is barraging you with questions about your site. Maybe there's a complicated operation in a related system, and you want to make sure your system is handling it well. In those cases, you want to aggregate the data from your observability system into dashboards.
When writing an application, these dashboards need to be part of the design criteria. The only way they have data to display is when your application shares it with them.
### Alerts
Watching dashboards for more than 15 minutes at a time is like watching paint dry. No human should be subjected to this. For that task, we have alerting systems. Alerting systems compare the observability data to the expected data and send a notification when it doesn't match up. Fully delving into incident management is beyond the scope of this article. However, observable applications are alert-friendly in two ways:
* They produce enough data, with enough quality, that high-quality alerts can be sent.
* The alert has enough data, or the receiver can easily get the data, to help triage the source.
High-quality alerts have three properties:
* Low false alarms: If there's an alert, there's definitely a problem.
* Low missing alarms: When there's a problem, an alert is triggered.
* Timely: An alert is sent quickly to minimize time to recovery.
These three properties are in a three-way conflict. You can reduce false alarms by raising the threshold of detection at the cost of increasing missing alarms. You can reduce missing alarms by lowering the threshold of detection at the expense of increasing false alarms. You can reduce both false alarms and missing alarms by collecting more data at the cost of timeliness.
Improving all three parameters is harder. This is where the quality of observability data comes in. Higher quality data can reduce all three.
### Logging
Some people like to make fun of print-based debugging. But in a world where most software runs on not-your-local-PC, print debugging is all you can do. Logging is a formalization of print debugging. The Python logging library, for all of its faults, allows standardized logging. Most importantly, it means you can log from libraries.
The application is responsible for configuring which logs go where. Ironically, after many years where applications were literally responsible for configuration, this is less and less true. Modern applications in a modern container orchestration environment log to standard error and standard output and trust the orchestration system to manage the log properly.
However, you should not rely on it in libraries, or pretty much anywhere. If you want to let the operator know what's going on, _use logging, not print_.
#### Logging levels
One of the most important features of logging is _logging levels_. Logging levels allow you to filter and route logs appropriately. But this can only be done if logging levels are consistent. At the very least, you should make them consistent across your applications.
With a little help, libraries that choose incompatible semantics can be retroactively fixed by appropriate configuration at the application level. Do this by using the most important universal convention in Python: using `getLogger(__name__)`.
Most reasonable libraries follow this convention. Filters can modify logging objects in place before they are emitted. You can attach a filter to the handler that will modify the messages based on the name to have appropriate levels.
```
import logging
LOGGER = logging.getLogger(__name__)
```
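As a minimal sketch of that retroactive fix (the library name `noisylib` is hypothetical), a filter attached to a handler can rewrite a record's level in place:
```
import logging

class DemoteNoisyLibrary(logging.Filter):
    """Downgrade INFO records from a hypothetical chatty library to DEBUG."""

    def filter(self, record):
        if record.name.startswith("noisylib") and record.levelno == logging.INFO:
            record.levelno = logging.DEBUG
            record.levelname = "DEBUG"
        return True  # keep the record; only its level was rewritten

handler = logging.StreamHandler()
handler.addFilter(DemoteNoisyLibrary())
logging.getLogger().addHandler(handler)
```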
With this in mind, you now have to actually specify semantics for logging levels. There are a lot of options, but the following are my favorite:
* Error: This sends an immediate alert. The application is in a state that requires operator attention. (This means that Critical and Error are folded.)
* Warning: I like to call these “Business hours alerts.” Someone should look at this within one business day.
* Info: This is emitted during normal flow. It's designed to help people understand what the application is doing if they already suspect a problem.
* Debug: This is not emitted in the production environment by default. It might or might not be emitted in development or staging, and it can be turned on explicitly in production if more information is needed.
In no case should you include PII (Personally Identifiable Information) or passwords in logs. This is true regardless of levels. Levels change, debug levels are activated, and so on. Log aggregation systems are rarely PII-safe, especially with evolving PII regulation (HIPAA, GDPR, and others).
#### Log aggregation
Modern systems are almost always distributed. Redundancy, scaling, and sometimes jurisdictional needs mean horizontal distribution. Microservices mean vertical distribution. Logging into each machine to check the logs is no longer realistic. It is often a bad idea for access-control reasons: allowing developers to log into machines gives them too many privileges.
All logs should be sent to an aggregator. There are commercial offerings, you can configure an ELK stack, or you can use any other database (SQL or NoSQL). As a really low-tech solution, you can write the logs to files and ship them to object storage. There are too many solutions to describe, but the most important thing is choosing one and aggregating everything.
#### Logging queries
After logging everything to one place, there are too many logs. The specific aggregator defines how to write queries, but whether it's grepping through storage or writing NoSQL queries, queries that match on source and details are useful.
### Metric scraping
Metrics scraping is a server pull model. The metrics server connects to the application periodically and pulls the metrics.
At the very least, this means the server needs connectivity and discovery for all relevant application servers.
#### Prometheus as a standard
The [Prometheus][2] format as an endpoint is useful if your metrics aggregator is Prometheus. But it is also useful if it is not! Almost all systems contain a compatibility shim for Prometheus endpoints.
Adding a Prometheus shim to your application using the client Python library allows it to be scraped by most metrics aggregators. Prometheus expects to find, once it discovers the server, a metrics endpoint. This is often part of the application routing, often at `/metrics`. Regardless of the platform of the web application, if you can serve a custom byte stream with a custom content type at a given endpoint, you can be scraped by Prometheus.
For the most popular framework, there is also a middleware plugin or something equivalent that automatically collects some metrics, like latency and error rates. This is not usually enough. You want to collect custom application data: for example, cache hit/miss rates per endpoint, database latency, and so on.
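Here is a minimal sketch of such custom metrics with the `prometheus_client` library (the metric names, endpoint label, and port are all illustrative):
```
import time

from prometheus_client import Counter, start_http_server

CACHE_HITS = Counter("app_cache_hits_total", "Cache hits", ["endpoint"])
CACHE_MISSES = Counter("app_cache_misses_total", "Cache misses", ["endpoint"])

def cached_lookup(cache, endpoint, key):
    if key in cache:
        CACHE_HITS.labels(endpoint=endpoint).inc()
    else:
        CACHE_MISSES.labels(endpoint=endpoint).inc()
    return cache.get(key)

if __name__ == "__main__":
    start_http_server(8000)  # serves the /metrics endpoint for scraping
    cache = {"a": 1}
    while True:
        cached_lookup(cache, "/demo", "a")
        cached_lookup(cache, "/demo", "b")
        time.sleep(5)
```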
#### Using counters
Prometheus supports several data types. One important and subtle type is the counter. Counters always advance—with one caveat.
When the application resets, the counter goes back to zero. These “epochs” in counters are managed by having the counter “creation time” sent as metadata. Prometheus will know not to compare counters from two different epochs.
#### Using gauges
Gauges are much simpler: They measure instantaneous values. Use them for measurements that go up and down: for example, total allocated memory, size of cache, and so on.
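A sketch of the same idea with a gauge (again, names are illustrative):
```
from prometheus_client import Gauge

# Gauges measure instantaneous values and may go down as well as up.
CACHE_SIZE = Gauge("app_cache_entries", "Number of entries currently cached")

def on_cache_update(cache):
    CACHE_SIZE.set(len(cache))
```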
#### Using enums
Enums are useful for states of the application as a whole, although they can be collected on a more granular basis. For example, if you are using a feature-gating framework, a feature that can have several states (e.g., in use, disabled, shadowing) might be useful to have as an enum.
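A sketch with the `prometheus_client` Enum type, assuming a hypothetical feature gate:
```
from prometheus_client import Enum

FEATURE_STATE = Enum(
    "app_recommendations_feature",
    "State of the recommendations feature",
    states=["in_use", "disabled", "shadowing"],
)
FEATURE_STATE.state("shadowing")  # record the current state
```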
### Analytics
Analytics are different from metrics in that they correspond to coherent events. For example, in network servers, an event is one outside request and its resulting work. In particular, the analytics event cannot be sent until the event is finished.
An event contains specific measurements: latency, number and possibly details of resulting requests to other services, and so on.
#### Structured logging
One currently popular option is structured logging. Sending an event is just sending a log with a properly formatted payload. This data can be queried from the log aggregator, parsed, and ingested into an appropriate system to allow visibility into it.
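A minimal sketch of the idea, using only the standard library (the event fields are illustrative):
```
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
LOGGER = logging.getLogger(__name__)

def emit_event(name, **fields):
    # One structured record per finished unit of work.
    LOGGER.info(json.dumps({"event": name, "ts": time.time(), **fields}))

emit_event("request_finished", endpoint="/demo", latency_ms=12.5, upstream_calls=2)
```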
### Error tracking
You can use logs to track errors, and you can use analytics to track errors. But a dedicated error system is worthwhile. A system optimized for errors can afford to send more data since errors are rare. It can send the right data, and it can do smart things with the data. Error-tracking systems in Python usually hook into a generic exception handler, collect data, and send it to a dedicated error aggregator.
#### Using Sentry
In many cases, running Sentry yourself is the right thing to do. When an error has occurred, something has gone wrong. Reliably removing sensitive data is not possible, since these are precisely the cases where the sensitive data might have ended up somewhere it shouldn't.
It is often not a big load: exceptions are supposed to be rare. Finally, this is not a system that needs high-quality, high-reliability backups. Yesterday's errors are already fixed, hopefully, and if they are not—you'll know!
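A minimal sketch with the `sentry_sdk` client library (the DSN is a placeholder for your own instance):
```
import sentry_sdk

sentry_sdk.init(dsn="https://public@sentry.example.com/1")

try:
    1 / 0
except ZeroDivisionError:
    # The SDK's global hook reports unhandled exceptions on its own;
    # capture_exception() reports handled ones explicitly.
    sentry_sdk.capture_exception()
```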
### Fast, safe, repeatable: choose all three
Observable systems are faster to develop since they give you feedback. They are safer to run since, when they go wrong, they let you know sooner. Finally, observability lends itself to building repeatable processes around it since there is a feedback loop. Observability gives you knowledge about your application. And knowing is half the battle.
#### Upfront investment pays off
Building all the observability layers is hard work. It also often feels like wasted work, or at least like “nice to have but not urgent.”
Can you build it later? Maybe, but you shouldn't. Building it right lets you speed up the rest of development so much at all stages: testing, monitoring, and even onboarding new people. In an industry with as much churn as tech, just reducing the overhead of onboarding a new person is worth it.
The fact is, observability is important, so build it in early in the process and maintain it throughout. In turn, it will help you maintain your software.
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/observability-python
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_python_programming.png?itok=ynSL8XRV (Searching for code)
[2]: https://opensource.com/article/21/7/run-prometheus-home-container

View File

@ -0,0 +1,86 @@
[#]: subject: "Manage Flatpak Permissions Graphically With Flatseal"
[#]: via: "https://itsfoss.com/flatseal/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Manage Flatpak Permissions Graphically With Flatseal
======
The newer versions of Android give you a more granular control over the access and permission an individual app can have. This is vital because many applications were (are) abusing the system permissions. Download a weather app and it will ask to access your call logs as if that has anything to do with the weather.
Why am I talking about Android app permissions? Because that is something you can relate to how this application functions.
You probably already know [what Flatpak is][1]. These are sandboxed applications with selected access to system resources like file storage, network interface etc.
Just like Android, you can control the access to system resources by Flatpak applications. By default, that happens with [Flatpak commands][2] and not everyone can be comfortable with it.
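For comparison, here is roughly what the command-line route looks like, as a sketch (the application ID is just an example; substitute your own):
```
# Inspect the static permissions an app shipped with
flatpak info --show-permissions org.ksnip.ksnip

# Revoke network access for just that app
flatpak override --user --unshare=network org.ksnip.ksnip

# Undo all overrides for the app
flatpak override --user --reset org.ksnip.ksnip
```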
And hence, there is this tiny utility called Flatseal that allows you to manage and control the Flatpak permissions at application level.
### Flatseal
![Flatseal][3]
[Flatseal][4] is a graphical utility to review and modify the permissions your Flatpak applications have. This makes things a lot easier than going through the commands.
Flatseal lists all the installed Flatpak applications. When you select one, you can see all its permissions. The enabled permissions can be easily spotted, and you can disable any of them if you want.
For example, Ksnip is a screenshot utility and it also has networking access to share the screenshots with online services like Imgur. If you do not need it, you can disable it.
![Control permissions of individual Flatpak apps][5]
If nothing else, it is interesting to see what kind of permissions an application has. For example, you can see that ksnip has the ability to run in the background (so that you can use it for taking screenshots with keyboard shortcuts).
![][6]
### Installing Flatseal
Since it's all about Flatpak, it only makes sense that Flatseal is available as a Flatpak package.
On Fedora, if the Flathub repo is added, you can install it from the software center.
![Installing Flatseal from the software center][7]
Otherwise, the command line is always there to help you.
```
flatpak install flathub com.github.tchx84.Flatseal
```
### Do you really need to control permissions?
That's a subjective question, and it totally depends on you. Thankfully, desktop Linux apps are not as abusive as Android apps so far.
An average user usually does not bother with these things, and that's totally fine.
However, if you are overly cautious about these things or you find a good reason, Flatseal provides the easy option.
You should also be careful about what permissions you are changing. If you disable a permission crucial to the functioning of the application, it will surely cause trouble while using the application.
So, overall, this is not something an average user is going to use.
--------------------------------------------------------------------------------
via: https://itsfoss.com/flatseal/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/what-is-flatpak/
[2]: https://itsfoss.com/flatpak-guide/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/flatseal.png?resize=800%2C474&ssl=1
[4]: https://flathub.org/apps/details/com.github.tchx84.Flatseal
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/flatpak-permission-control-flatseal.png?resize=800%2C503&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/flatpak-permissions-with-flatseal.png?resize=800%2C441&ssl=1
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/install-flatseal.png?resize=800%2C467&ssl=1

View File

@ -0,0 +1,162 @@
[#]: subject: (Install FreeDOS without the installer)
[#]: via: (https://opensource.com/article/21/6/install-freedos-without-installer)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
不使用安装程序安装 FreeDOS
======
这里是如何在不使用安装程序的情况下来手动设置你的 FreeDOS 系统。
![FreeDOS fish logo and command prompt on computer][1]
大多数的人应该能够使用安装程序来非常容易地安装 FreeDOS 1.3 RC4 。FreeDOS 安装程序会先询问几个问题,然后处理剩余的工作—包括为 FreeDOS 制作安装空间和使系统可启动。
但是,如果安装程序不适合你怎么办?或者,你更喜欢 _手动_ 设置你的 FreeDOS 系统,而不喜欢使用安装程序怎么办?使用 FreeDOS ,你也可以做到这些!让我们在不使用安装程序的情况下逐步走完安装 FreeDOS 的步骤。我将使用 QEMU 虚拟机的一个空白的硬盘驱动器镜像来完成所有的步骤。我使用这个 Linux 命令来创建了一个 100 MB 的硬盘驱动器镜像:
```
$ qemu-img create freedos.img 100M
```
我下载了 FreeDOS 1.3 RC4 的 LiveCD ,并将其命名为 FD13LIVE.iso ,它提供了一个 "live" 环境,我可以在其中运行 FreeDOS ,包括所有的标准工具。大多数用户也使用 LiveCD 自带的常规安装程序来安装 FreeDOS 。但是,在这里我将仅使用 LiveCD ,并从其命令行中使用某些类型的命令来安装 FreeDOS 。
我使用这个相当长的 QEMU 命令来启动虚拟机,并选择 "Use FreeDOS 1.3 in Live Environment mode" 启动菜单项:
```
$ qemu-system-x86_64 -name FreeDOS -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off -enable-kvm -cpu host -m 8 -overcommit mem-lock=off -no-user-config -nodefaults -rtc base=utc,driftfix=slew -no-hpet -boot menu=on,strict=on -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on -hda freedos.img -cdrom FD13LIVE.iso -device sb16 -device adlib -soundhw pcspk -vga cirrus -display sdl -usbdevice mouse
```
![manual install][2]
选择 "Use FreeDOS 1.3 in Live Environment mode" 来启动 LiveCD
(Jim Hall, [CC-BY SA 4.0][3])
这个 QEMU 命令行包含大量的选项,乍看可能会让你迷糊。因为你完全使用命令行选项配置 QEMU ,所以在这里有很多东西需要审查。但是,我将简单地重点说明几个重要的选项:
* `-m 8`:设置系统内存("RAM")为 8 MB
* `-boot menu=on,strict=on`:使用一个启动菜单,这样,我可以选择从 CD-ROM 镜像或硬盘驱动器镜像启动
* `-hda freedos.img`:使用 **freedos.img** 作为硬盘驱动器镜像
* `-cdrom FD13LIVE.iso`:使用 **FD13LIVE.iso** 作为 CD-ROM 镜像
* `-device sb16 -device adlib -soundhw pcspk`:定义计算机带有一个 SoundBlaster16 声卡、AdLib 数字音乐卡和 PC 扬声器模拟器(如果你想玩 DOS 游戏的话,这些模拟器是很有用的)
* `-usbdevice mouse`:将用户的鼠标识别为一个 USB 鼠标(在 QEMU 窗口中单击以使用鼠标)
### 对硬盘驱动器进行分区
你可以从 LiveCD 使用 FreeDOS 1.3 RC4 ,但是,如果你想安装 FreeDOS 到你的计算机中,你需要先在硬盘驱动器上制作安装空间。这需要使用 FDISK 程序来创建一个 _分区_ 。
从 DOS 命令行中,输入 `FDISK` 来运行 _fixed disk_ 设置程序。FDISK 是一个全屏交互式程序,你只需要输入数字来选择菜单项。从 FDISK 的主菜单中,输入 "1" 来在驱动器上创建一个 DOS 分区,然后在接下来的屏幕上输入 "1" 来创建一个 _主_ DOS 分区。
![using fdisk][4]
选择 "1" 来创建一个分区
(Jim Hall, [CC-BY SA 4.0][3])
![using fdisk][5]
在接下来的菜单上选择 "1" 来制作一个主分区
(Jim Hall, [CC-BY SA 4.0][3])
FDISK 会询问你是否想要使用全部的硬盘空间大小来创建分区。除非你需要在这个硬盘驱动器上和另外一个操作系统 (例如 Linux ) 共享硬盘空间,否则,对于这个提示,你应该回答 "Y" 。
在 FDISK 创建新的分区后,在 DOS 能够识别新的分区信息前,你将需要重新启动 DOS。像所有的 DOS 操作系统一样FreeDOS 仅在其启动时识别硬盘驱动器信息。因此,如果你创建或删除了任何磁盘分区,你都需要重新启动 FreeDOS只有这样FreeDOS 才能识别到更改的分区信息。FDISK 会提醒你重新启动,因此,你是不会忘记的。
![using fdisk][6]
你需要重新启动以识别新的分区
(Jim Hall, [CC-BY SA 4.0][3])
你可以通过停止或重新启动 QEMU 虚拟机来重新启动 FreeDOS但是我更喜欢在 FreeDOS 命令行中使用 FreeDOS 的高级电源管理FDADPM工具来重新启动。为了重新启动输入命令 `FDADPM /WARMBOOT`FreeDOS 将自动重新启动。
### 对硬盘驱动器进行格式化
在 FreeDOS 重新启动后,你可以继续设置硬盘驱动器。创建磁盘分区是这个过程的"第一步";现在你需要在分区上创建一个 DOS _文件系统_ ,以便 FreeDOS 可以使用它。
DOS 系统使用字母 `A` 到 `Z` 来识别"驱动器"。FreeDOS 将第一个硬盘驱动器的第一个分区识别为 `C` 驱动器,依此类推。你经常使用字母和一个冒号(`:`)来表示驱动器,因此我们在上面创建的新分区实际上是 `C:` 驱动器。
你可以在新的分区上使用 FORMAT 命令来创建一个 DOS 文件系统。这个命令带有一些选项,但是,我们将仅使用 `/S` 选项来告诉 FORMAT 来使新的文件系统可启动— "S" 意味着安装 FreeDOS "系统" 文件。输入 `FORMAT /S C:` 来在 `C:` 驱动器上制作一个新的 DOS 文件系统。
![formatting the disk][7]
格式化分区来创建 DOS 文件系统
(Jim Hall, [CC-BY SA 4.0][3])
使用 `/S` 选项FORMAT 将运行 SYS 程序来传输系统文件。你将看到这是 FORMAT 输出的一部分:
![formatting the disk][8]
FORMAT /S 将使用 SYS 来使磁盘可启动
(Jim Hall, [CC-BY SA 4.0][3])
### 安装软件
在使用 FDISK 创建了一个新的分区,并使用 FORMAT 创建了一个新的文件系统后, 新的 `C:` 驱动器基本上是空的。此时,`C:` 驱动器仅包含一份内核和 `COMMAND.COM` 命令行 shell 的副本。为使新的磁盘可以执行一些有用的操作,我们需要在其上安装软件。这是手动安装过程的最后步骤。
FreeDOS 1.3 RC4 LiveCD 包含所有的你可能希望在新的系统上所要安装的软件。每个 FreeDOS 程序都是一个单独的 "软件包" ,它实际上只是一个 Zip 档案文件。建立标准 DOS 环境的软件包存储在 LiveCD 上 `PACKAGES` 目录下 `BASE` 目录之中。
你可以把其中的每一个软件包逐个"解压缩"到硬盘驱动器来完成安装。在 "Base" 组中有 62 个单独的软件包,如果每次只装一个,这会花费非常多的时间。不过,你可以运行一个只有一行的 `FOR` "循环"命令来"解压缩"每个程序,让 FreeDOS 为你完成所有软件包的安装。
`FOR` 循环的基本用法需要指定一个单字母变量(我们使用 `%F`FreeDOS 稍后会用文件名来"填充"这个变量。`FOR` 还需要一个放在括号中的文件列表,它会对列表中的每个文件运行一次命令。用来解压一系列 Zip 文件的语法看起来像这样:
```
FOR %F IN (*.ZIP) DO UNZIP %F
```
这将提取所有的 Zip 文件到当前目录之中。要将文件提取("解压")到一个不同的位置,在 `UNZIP` 命令行结尾处使用 `-d`("目的地")选项。对于大多数的 FreeDOS 系统来说,你应该安装软件包到 `C:\FDOS` 目录中:
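按照这个说明,把所有软件包解压到 `C:\FDOS` 的完整命令大致如下(示意,假设你仍位于 LiveCD 的 BASE 目录中):
```
FOR %F IN (*.ZIP) DO UNZIP %F -d C:\FDOS
```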
![installing the software][9]
解压缩所有的基本软件包来完成安装 FreeDOS
(Jim Hall, [CC-BY SA 4.0][3])
FreeDOS 会处理剩余的工作,安装所有的 62 个软件包到你的系统之中。这可能会花费几分钟的时间,因为 DOS 在处理很多单个文件时会很慢,而这个命令需要提取 62 个 Zip 文件。如果我们使用单个的 `BASE.ZIP` 档案文件的话,安装过程可能会运行得更快,但是使用软件包的话,在选择要安装哪些软件包时会有更多的灵活性。
![installing the software][10]
在安装所有的基本软件包后
(Jim Hall, [CC-BY SA 4.0][3])
在我们安装完所有的东西后,使用 `FDADPM /WARMBOOT` 来重新启动你的系统。手动安装意味着你的新 FreeDOS 系统没有常见的 `FDCONFIG.SYS` 配置文件,因此,当 FreeDOS 启动时,它将假设一些典型的默认值。即使没有 `AUTOEXEC.BAT` 文件FreeDOS 也会提示你输入时间和日期。
![rebooting FreeDOS][11]
在手动安装后,重新启动 FreeDOS
(Jim Hall, [CC-BY SA 4.0][3])
大多数的用户应该能够使用比较用户友好的过程来在一台新的计算机上安装 FreeDOS 。但是如果你想自己使用"古老的"方法来安装它,那么你可以手动运行安装步骤。这会提供一些额外的灵活性和控制权,因为是你自己安装的一切。现在你知道如何安装它了。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/6/install-freedos-without-installer
作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/freedos-fish-laptop-color.png?itok=vfv_Lpph (FreeDOS fish logo and command prompt on computer)
[2]: https://opensource.com/sites/default/files/uploads/manual-install3.png (Select "Use FreeDOS 1.3 in Live Environment mode" to boot the LiveCD)
[3]: https://creativecommons.org/licenses/by-sa/4.0/
[4]: https://opensource.com/sites/default/files/uploads/manual-install6.png (Select "1" to create a partition)
[5]: https://opensource.com/sites/default/files/uploads/manual-install7.png (Select "1" on the next menu to make a primary partition)
[6]: https://opensource.com/sites/default/files/uploads/manual-install10.png (You need to reboot to recognize the new partition)
[7]: https://opensource.com/sites/default/files/uploads/manual-install13.png (Format the partition to create the DOS filesystem)
[8]: https://opensource.com/sites/default/files/uploads/manual-install14.png (FORMAT /S will use SYS to make the disk bootable)
[9]: https://opensource.com/sites/default/files/uploads/manual-install18.png (Unzip all of the Base packages to finish installing FreeDOS)
[10]: https://opensource.com/sites/default/files/uploads/manual-install24.png (After installing all the Base packages)
[11]: https://opensource.com/sites/default/files/uploads/manual-install28.png (Rebooting FreeDOS after a manual install)

View File

@ -1,114 +0,0 @@
[#]: subject: "What is Build Essential Package in Ubuntu? How to Install it?"
[#]: via: "https://itsfoss.com/build-essential-ubuntu/"
[#]: author: "Abhishek Prakash https://itsfoss.com/author/abhishek/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
什么是 Ubuntu 中的 Build Essential 包?如何安装它?
======
_**简介:这是一篇快速提示,旨在告知 Ubuntu 的新用户关于 build-essential 软件包,它的用处和安装步骤。**_
在 Ubuntu 中安装 build-essential 包,就像在终端中输入这个命令一样简单:
```
sudo apt update && sudo apt install build-essential
```
但围绕它有几个问题,你可能想知道答案:
* 什么是 build essential 包?
* 它包含什么内容?
* 为什么要安装它(如果安装的话)?
* 如何安装它?
* 如何删除它?
### 什么是 Ubuntu 中的 build-essential 软件包?
build-essential 包实际上是属于 Debian 的。它本身并不是一个软件。它包含了创建一个 Debian 软件包deb所需的软件包列表。这些软件包包括 libc、gcc、g++、make、dpkg-dev 等。build-essential 包将这些所需的软件包作为依赖,所以当你安装 build-essential 时,你只需一个命令就能安装所有这些软件包。
请不要认为 build-essential 是一个超级软件包,它可以在一个命令中神奇地安装从 Ruby 到 Go 的所有开发工具。它有一些开发工具,但不是全部。
#### 你为什么要安装 build-essential 包?
它是用来从应用的源代码创建 DEB 包。一个普通用户不会每天都去创建 DEB 包,对吗?
然而,有些用户可能会使用他们的 Ubuntu Linux 系统进行软件开发。如果你想[在 Ubuntu 中运行 C 程序][1],你需要 gcc 编译器。如果你想[在 Ubuntu 中运行 C++ 程序][2],你需要 g++ 编译器。如果你要使用一个不常见的、只能从源代码安装的软件,你的系统可能会抛出 [make 命令未找到的错误][3],因为你需要先安装 make 工具。
当然,所有这些都可以单独安装。然而,利用 build-essential 软件包的优势,一次性安装所有这些开发工具要容易得多。这就是你得到的好处。
这就像 [ubuntu-restricted-extras 包允许你一次安装几个媒体编解码器][4]。
现在你知道了这个包的好处,让我们看看如何安装它。
### 在 Ubuntu Linux 中安装 build-essential 包
![][5]
在 Ubuntu 中按 Ctrl+Alt+T 快捷键打开终端,输入以下命令:
```
sudo apt update
```
使用 sudo 命令时,你会被要求输入你的账户密码。当你输入时,屏幕上没有任何显示。这没问题,在大多数 Linux 系统中都是这样的。直接输入你的密码,然后按回车键。
![][6]
apt update 命令刷新了本地软件包的缓存。这对于一个新安装的 Ubuntu 来说是必不可少的。
之后,运行下面的命令来安装 build-essential 工具:
```
sudo apt install build-essential
```
它应该显示所有要安装的软件包。当要求确认时,按 Y
![][7]
等待安装完成。就好了。
### 从 Ubuntu 中删除 build-essential 工具
保留这些开发工具不会损害你的系统。但如果你的磁盘空间不足,你可以考虑删除它。
在 Ubuntu 中,由于有 apt remove 命令,删除软件很容易:
```
sudo apt remove build-essential
```
运行 autoremove 命令来删除剩余的依赖包也是一个好主意:
```
sudo apt autoremove
```
你现在知道了所有关于 build-essential 包的要点(双关语)。请享受它吧 :)
--------------------------------------------------------------------------------
via: https://itsfoss.com/build-essential-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/run-c-program-linux/
[2]: https://itsfoss.com/c-plus-plus-ubuntu/
[3]: https://itsfoss.com/make-command-not-found-ubuntu/
[4]: https://itsfoss.com/install-media-codecs-ubuntu/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/10/Build-Essential-Ubuntu.png?resize=800%2C450&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/10/apt-update.png?resize=800%2C467&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/10/install-build-essential-ubuntu.png?resize=800%2C434&ssl=1

View File

@ -0,0 +1,149 @@
[#]: subject: "How the Kubernetes ReplicationController works"
[#]: via: "https://opensource.com/article/21/11/kubernetes-replicationcontroller"
[#]: author: "Mike Calizo https://opensource.com/users/mcalizo"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
Kubernetes ReplicationController 如何工作
======
ReplicationController 负责管理 pod 的生命周期并确保所需的指定数量的 pod 在任何时候都在运行。
![Ships at sea on the web][1]
你有没有想过,谁负责监督和管理 Kubernetes 集群内运行的 pod 的确切数量Kubernetes 可以通过多种方式做到这一点,但一个常见的方法是使用 ReplicationControllerrc。ReplicationController 负责管理 pod 的生命周期,并确保在任何时候都能运行所需的指定数量的 pod。另一方面它不负责高级的集群能力如自动扩展、就绪探测和存活探测以及其他高级的复制能力。Kubernetes 集群中的其他组件可以更好地执行这些功能。
简而言之ReplicationController 的职责有限,通常用于不需要复杂逻辑就能达到某些要求的具体实现(例如,确保所需的 pod 数量总是与指定的数量相符。如果超过了所需的数量ReplicationController 会删除多余的 pod并确保即使在节点故障或 pod 终止的情况下,也有相同数量的 pod 存在。
简单的事情不需要复杂的解决方案,对我来说,这就是 ReplicationController 如何被使用的一个完美的比喻。
### 如何创建一个 ReplicationController
像大多数 Kubernetes 资源一样,你可以使用 YAML 或 JSON 格式创建一个 ReplicationController然后将其发布到 Kubernetes API 端点。
```
$ kubectl create -f rcexample.yaml
replicationcontroller/rcexample created
```
现在,我将深入一下 `rcexample.yaml` 的样子。
```
apiVersion: v1
kind: ReplicationController      # rc 描述符
metadata:
  name: rcexample                # ReplicationController 的名称
spec:
  replicas: 3                    # 期望的 pod 数量
  selector:                      # 这个 rc 的 pod 选择器
    app: nginx
  template:                      # 创建新 pod 所用的模板
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```
为了进一步解释,这个文件在执行时创建了一个名为 `rcexample` 的 ReplicationController确保三个名为 `nginx` 的 pod 实例一直在运行。如果带有 `app=nginx` 标签的 pod 中的一个或全部没有运行,新的 pod 将根据定义的 pod 模板创建。
一个 ReplicationController 有三个部分:
* 副本数Replica3
* Pod 模板Pod Templateapp=nginx
* Pod 选择器Pod Selectorapp=nginx
注意Pod Template 要与 Pod Selector 相匹配,以防止 ReplicationController 无限期地创建 pod。如果你创建的 ReplicationController 的 pod selector 与 template 不匹配Kubernetes API 服务器会给你一个错误。
为了验证 ReplicationController `rcexample` 是否被创建:
```
$ kubectl get po
NAME READY STATUS RESTARTS AGE
rcexample-53thy 0/1 Running 0 10s
rcexample-k0xz6 0/1 Running 0 10s
rcexample-q3vkg 0/1 Running 0 10s
```
要删除 ReplicationController
```
$ kubectl delete rc rcexample
replicationcontroller "rcexample" deleted
```
注意,你可以对 ReplicationController 中的服务使用[滚动更新][2]策略,逐个替换 pod。
### 其他复制容器的方法
在 Kubernetes 部署中有多种方法可以实现容器的复制。Kubernetes 成为容器平台首选的主要原因之一,就是其通过复制容器来获得可靠性、负载均衡和可扩展性的原生能力。
我在上面展示了你如何轻松地创建一个 ReplicationController以确保在任何时候都有一定数量的 pod 可用。你可以通过更新副本的数量来手动扩展 pod。
另一种可能的方法是通过使用 [ReplicaSet][3] 来达到复制的目的。
```
(kind: ReplicaSet)
```
ReplicaSetrs的功能几乎与 ReplicationController 相同。主要区别在于ReplicaSet 不允许滚动更新策略。
另一种实现复制的方法是通过使用 [Deployments][4]。
```
(kind: Deployment)
```
Deployments 是一种更高级的容器复制方法。从功能上讲Deployments 提供了相同的功能,但在需要时可以推出和回滚变化。这种功能之所以能够实现,是因为 Deployments 有 StrategyType 规范来用新的 pod 替换旧的 pod。你可以定义两种类型的部署策略Recreate 和 RollingUpdate。你可以如下指定部署策略
```
StrategyType: RollingUpdate
```
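下面是一个在 Deployment 中声明滚动更新策略的简单示意(字段写法遵循 Kubernetes 官方 API镜像和名称均为示例
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```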
### 总结
容器的复制是大多数企业考虑采用 Kubernetes 的主要原因之一。复制可以让你达到大多数关键应用程序需要的可靠性和可扩展性,作为生产的最低要求。
了解在 Kubernetes 集群中使用哪些方法来实现复制对于决定哪种方法最适合你的应用架构考虑非常重要。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/kubernetes-replicationcontroller
作者:[Mike Calizo][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mcalizo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes_containers_ship_lead.png?itok=9EUnSwci (Ships at sea on the web)
[2]: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
[3]: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
[4]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

View File

@ -0,0 +1,179 @@
[#]: subject: "3 interesting ways to use the Linux cowsay command"
[#]: via: "https://opensource.com/article/21/11/linux-cowsay-command"
[#]: author: "Don Watkins https://opensource.com/users/don-watkins"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
使用 Linux cowsay 命令的 3 种有趣方式
======
想试一个只是好玩的应用吗?试试 cowsay。
![Cow on parade.][1]
大多数时候,终端是一个生产力的动力源。但是,终端的作用不止是命令和配置。在所有杰出的开源软件中,有些是[为了好玩而写的][2]。我以前写过一些[有趣的命令][3],但这篇文章只讲一个:古老的 `cowsay` 命令。
cowsay 是一只可配置的会说话(或思考)的牛。它接受一个文本字符串,并输出一个牛说话的图形。下面是一头牛在说它喜欢 Linux
```
 ______________
< I love Linux >
 --------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
要得到这个结果,我只需输入:
`$ cowsay "I love Linux"`
### 在 Linux 上安装 cowsay
你可以用你的包管理器安装 cowsay。在 Debian、Mint、Elementary 和类似的发行版上:
`$ sudo apt install cowsay`
在 Fedora 上:
`$ sudo dnf install cowsay-beefymiracle`
### Cowsay 命令选项
cowsay 是一个简单又有点傻的应用。除了为你的终端提供一些不同样式外,它并没有什么实际用途。例如,你可以不让一头普通的牛说有趣的短语,而是让一头长着古怪眼睛的牛来说。输入:
`$ cowsay -e @@ Hello`
你会得到:
```
 _______
< Hello >
 -------
        \   ^__^
         \  (@@)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
或者你可以让它伸出舌头。输入:
`$ cowsay -T U Hello`
你会看到:
```
 _______
< Hello >
 -------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
             U  ||----w |
                ||     ||
```
更好的是,你可以将 `fortune` 命令与 `cowsay` 结合起来:
`$ fortune | cowsay`
现在你会有一头特别聪明的牛:
```
 _______________________________________
/ we:                                   \
|                                       |
| The single most important word in the |
\ world.                                /
 ---------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
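cowsay 还自带了许多其他形象。你可以先用 `-l` 列出可用的"牛"文件,再用 `-f` 选项选用其中一个(例如企鹅 `tux`,它通常包含在默认安装中):
```
$ cowsay -l
$ fortune | cowsay -f tux
```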
### 结实的奇迹
在 Fedora 上,有一个额外的 cowsay 选项,它也是一个非官方的项目吉祥物。多年来Fedora 安装程序一直在展示宣传开源贡献的幻灯片。因为它们模仿了汽车影院幕间广告的风格,所以幻灯片中常见的卡通形象是一个拟人化的热狗。
为了与这个主题保持一致,你可以用 Fedora 版本的 cowsay 调用一个所谓的结实的奇迹beefy miracle
`$ cowsay -f beefymiracle Hello Fedora`
你会得到一个非常傻的输出:
```
 ______________
< Hello Fedora >
-------------- .---. __
, \ / \ \ ||||
\\\\\\\ |O___O | | \\\||||
\ // | \\_/ | | \ /
'--/----/| / | |-'
// // / -----'
// \\\ / /
// // / /
// \\\ / /
// // / /
/| ' / /
//\\___/ /
// ||\ /
\\\\_ || '---'
/' / \\\\_.-
/ / --| |
'-' | |
'-'
```
### 图形化的 cowsay
如果你发现自己需要用图形化的牛来传递信息,可以使用 `xcowsay` 命令。这是一个类似于 cowsay 的图形程序,它接受由用户输入的文本字符串,或从另一个应用(如 fortune输送过来的文本字符串。
![A cartoon cow has a speech bubble that reads "I love Linux"][4]
Don Watkins[CC BY-SA 4.0][5]
### 有趣的 Linux 命令
虽然 `cowsay` 不是一个实用的命令,但它是一个有趣的命令,相当于你终端的桌面小部件。它很适合用来消遣,也很适合做有趣的管道命令实验(试着把 `ifconfig`、`lsblk` 或 `mount` 的输出通过管道传给 cowsay或者任何命令都行。如果你想让你的终端更有趣试试 cowsay。
--------------------------------------------------------------------------------
via: https://opensource.com/article/21/11/linux-cowsay-command
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_CowParade_osdc.png?itok=6GD1Wnbm (Cow on parade.)
[2]: https://opensource.com/life/16/6/fun-and-semi-useless-toys-linux
[3]: https://opensource.com/article/21/11/fun-linux-commands
[4]: https://opensource.com/sites/default/files/uploads/graphical_cowsay.png (graphical cowsay)
[5]: https://creativecommons.org/licenses/by-sa/4.0/

View File

@ -0,0 +1,131 @@
[#]: subject: "How to Install Discord on Fedora Linux"
[#]: via: "https://itsfoss.com/install-discord-fedora/"
[#]: author: "Pranav Krishna https://itsfoss.com/author/pranav/"
[#]: collector: "lujun9972"
[#]: translator: "geekpi"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
如何在 Fedora Linux 上安装 Discord
======
[Discord][1] 是一个流行的消息收发应用,可用于文字和语音信息传递。
它是几个社区的福音,可以帮助他们扩大项目,接触更多的人,并维持一个粉丝和追随者的社区。考虑到 Discord 最初是为游戏玩家设计的,这很令人惊讶。
Discord 可用于各种平台,包括 Linux。在本教程中我将引导你完成在 Fedora 中安装 Discord 的步骤。
* 使用 DNF 和 RPM Fusion 仓库安装 Discord
* 通过 Flatpak 安装 Discord
Flatpak 软件包是沙盒化的,因此需要更多的磁盘空间,启动也更耗时。不过,它们会相当快地更新到新的版本。
无论你想使用 Flatpak 还是 DNF选择权都在你手上。我将向你展示这两种方法。
非 FOSS 警报!
Discord 并不是开源的。但由于他们提供了一个 Linux 客户端,而且许多 Linux 用户都依赖它,所以在这里介绍了。
### 方法 1通过 RPM Fusion 仓库安装 Discord
Discord 可以通过添加非自由的 RPM Fusion 仓库来安装,这是大多数 Fedora 用户的首选方法,因为更新很容易,而且应用的启动速度比 Flatpak 版本快。
打开终端,使用下面的命令来添加 RPM-fusion 非自由仓库:
```
sudo dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```
完成后,更新仓库列表(应该不需要,但以防万一):
```
sudo dnf update
```
然后通过 DNF 命令安装 Discord像这样
```
sudo dnf install discord
```
![Installing Discord using DNF][2]
如果被要求导入 GPG 密钥,只要按 “Y” 就可以授权了。
![Authorize GPG key][3]
这就完成了!现在你可以从应用菜单中启动 Discord。你的登录页面看起来像这样
![Launch Discord application][4]
#### 通过 DNF 删除 Discord
如果你不想再使用 Discord你可以从你的系统中删除它。要做到这一点在终端运行以下命令
```
sudo dnf remove discord
```
这真的很简单,不是吗?还有一种简单的安装 Discord 的方法,那就是使用 Flatpak 软件包。
### 方法 2通过 Flatpak 安装 Discord
Discord 可以使用 Flatpak 轻松安装,因为它在 Fedora 中是默认可用的。
首先,你需要在 Fedora 中添加 Flathub 仓库:
```
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
接下来,通过这个命令安装 Discord
```
flatpak install discord
```
![Install Discord via Flatpak][5]
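安装完成后,除了从应用菜单启动,你也可以在终端中直接运行它(应用 ID 以 Flathub 上实际显示的为准):
```
flatpak run com.discordapp.Discord
```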
如果你想删除 Discord那么只需运行
```
flatpak remove discord
```
这就超级简单了。如果你在 Fedora Linux 上安装 Discord 需要任何帮助,请告诉我。
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-discord-fedora/
作者:[Pranav Krishna][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/pranav/
[b]: https://github.com/lujun9972
[1]: https://discord.com/
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/11/install-discord-dnf.png?resize=800%2C525&ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/authorize-gpg-key-1.png?resize=800%2C573&ssl=1
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/11/Discord-2.png?resize=800%2C432&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2021/11/install-discord-flatpak.png?resize=800%2C545&ssl=1